
Lessons from the OneLogin Breach

Gaurav Kumar

06.08.17 14:05


Research has shown that people with the GG genotype are able to quickly learn from their mistakes. We are starting the “Cybersecurity GG Genotype” blog series where we will analyze breaches and provide prescriptive guidelines on how to avoid becoming a victim.

On May 31st, OneLogin reported an attack on their AWS cloud infrastructure. In this blog, we have analyzed the incident based on the statements provided by OneLogin and suggested best practices to avoid these issues. 

Statement #1: “Our review has shown that a threat actor obtained access to a set of AWS keys ...”

Let us begin with some background information on Amazon Web Services (AWS) keys - what are they and why do they matter? AWS keys are associated with an AWS IAM account and are generated as a pair - an AWS access key and an AWS secret key. Conceptually, the AWS access key is similar to a username and the AWS secret key is similar to a password. In this specific instance, the attacker acquired the pair, which means he or she could perform any action permitted for that account.
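To make that concrete, here is a minimal Python/boto3 sketch (using the fake example key pair from AWS documentation) showing that possession of both halves is all it takes to start issuing API calls:

```python
import boto3

# Placeholder (fake) key pair taken from AWS documentation examples; whoever
# holds both halves can call the AWS API with that IAM user's permissions.
session = boto3.Session(
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",                          # analogous to a username
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # analogous to a password
)

# Any action permitted for that IAM user now succeeds, from any machine.
sts = session.client("sts")
print(sts.get_caller_identity()["Arn"])  # shows which identity the keys map to
```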

The question that arises is how the attacker may have acquired these credentials. The OneLogin blog is light on details, but we can assume that one of the following two things happened:

  1. The attacker exploited a vulnerability that exposed the keys: In all likelihood, the vulnerability was not a zero-day vulnerability; otherwise we would have observed mass exploitation by now. Keeping up with vulnerability patching can be difficult, especially for large organizations with lots of systems. They need a way to identify systems with critical vulnerabilities and prioritize patching. For example, if there is an OpenSSL vulnerability, it is important to identify which systems are exposed to the internet and patch them on a priority basis. Unfortunately, determining which systems are exposed to the internet is not a trivial task in a dynamic cloud environment and requires advanced analytics.
  2. The attacker passively stumbled upon the keys: There are many ways in which sensitive data can be accidentally exposed, and the RedLock Cloud Security Intelligence (CSI) team found dozens of such systems on the internet that were leaking credentials. Approximately 70% of the incidents involved systems management software such as Kubernetes and Jenkins, while the remaining 30% involved poorly configured applications. For instance, we found AWS key pairs in debug headers being sent by web servers. We also discovered public EBS volumes that contained confidential data (see the snapshot-audit sketch after this list).
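As one concrete check for the second category, the boto3 sketch below (assuming the us-east-1 region; adjust as needed) audits your own EBS snapshots for the public create-volume permission that lets anyone restore them and read their contents:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Check every snapshot we own for a public create-volume permission, which
# would let anyone restore the snapshot and read the data inside it.
paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):
    for snapshot in page["Snapshots"]:
        attribute = ec2.describe_snapshot_attribute(
            SnapshotId=snapshot["SnapshotId"],
            Attribute="createVolumePermission",
        )
        permissions = attribute.get("CreateVolumePermissions", [])
        if any(p.get("Group") == "all" for p in permissions):
            print(f"PUBLIC snapshot: {snapshot['SnapshotId']}")
```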

Lessons Learned
  • Invest in a tool that can help you prioritize the vulnerabilities that need to be fixed. Specifically, the tool should be able to identify systems that are exposed to the internet and have critical vulnerabilities, especially those with a CVE ID.
  • Audit your network and limit internet access to only those systems that require it (see the security group sketch after this list).
  • Enforce strong authentication on all systems. Unfortunately, we are observing a trend where some systems do not have authentication turned on by default.
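For the network-audit item, a starting point is the boto3 sketch below, which flags security groups with inbound rules open to the entire internet. It covers security groups only, not NACLs or route tables, so treat it as one input to a broader audit:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Flag security groups with inbound rules open to the entire internet.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            ports = f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}"
            print(f"{sg['GroupId']} ({sg['GroupName']}) allows inbound from anywhere on ports {ports}")
```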

 

Statement #2: “... used [AWS keys] to access the AWS API from an intermediate host with another, smaller service provider in the US.”

The key takeaway here is that the attacker was able to use the stolen access keys from an untrusted IP address range, assuming that OneLogin did not have a trust relationship with the “smaller service provider” in the US.

Lesson Learned

Restrict the use of AWS access keys and accounts to your trusted networks. If this is not possible, monitor the geolocation of the IP addresses making administrative changes to your environment and investigate any anomalous activity.
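One common way to implement the first half of this lesson is an IAM policy that denies requests originating outside trusted address ranges. The sketch below (boto3, with a placeholder user name and placeholder CIDR blocks) attaches such an inline policy; note that aws:SourceIp conditions can also block calls that AWS services make on your behalf, so test before enforcing broadly.

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical inline policy: deny every API action unless the request
# originates from trusted CIDR ranges (placeholder documentation addresses).
deny_outside_trusted_networks = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24", "198.51.100.0/24"]}
        },
    }],
}

iam.put_user_policy(
    UserName="automation-user",                    # placeholder user name
    PolicyName="restrict-to-trusted-networks",
    PolicyDocument=json.dumps(deny_outside_trusted_networks),
)
```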

 

Statement #3: “Through the AWS API, the actor created several instances in our infrastructure to do reconnaissance.”

This is a bit perplexing - we are not sure why the attacker had to create instances to do reconnaissance. However, the key observation here is that the stolen access keys were associated with a user account or role with elevated privileges that allowed the attacker to launch instances.
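A practical way to find out whether a given key pair carries this kind of power, before an attacker does, is the IAM policy simulator. The sketch below (boto3, with a placeholder user ARN) checks whether that user is allowed to launch instances:

```python
import boto3

iam = boto3.client("iam")

# Ask the IAM policy simulator whether this user (placeholder ARN) is
# allowed to launch EC2 instances, the privilege abused in this step.
results = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/automation-user",
    ActionNames=["ec2:RunInstances"],
)
for result in results["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])  # "allowed" or a deny
```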

 

Statement #4: “The threat actor was able to access database tables that contain information about users, apps, and various types of keys.”

Most likely, either the attacker obtained access to the database credentials or he or she used the master password reset functionality provided by AWS.
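If the database in question was an RDS instance (our assumption, not something OneLogin has confirmed), a master password reset would surface in CloudTrail as a ModifyDBInstance event. A minimal boto3 sketch for pulling those recent events:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region

# List recent RDS instance modifications, which include master password
# resets, along with who made each call.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ModifyDBInstance"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventId"])
```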

Lesson Learned

Review permissions assigned to access keys, security groups, NACLs, etc. You will most likely find cases where elevated permissions were assigned for a specific purpose but were never revoked afterward.
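As a starting point for that review, the boto3 sketch below walks every IAM user's access keys and reports when each was last used, which makes stale or forgotten keys easy to spot:

```python
import boto3

iam = boto3.client("iam")

# Walk every IAM user's access keys and report when each key was last used;
# keys that are unused or forgotten are candidates for removal.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            when = last_used["AccessKeyLastUsed"].get("LastUsedDate", "never used")
            print(user["UserName"], key["AccessKeyId"], key["Status"], when)
```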

  

Statement #5: “OneLogin staff was alerted of unusual database activity around 9 am PST …”

It appears that it took OneLogin seven hours to detect the attack. We are not sure if this is due to the unavailability of staff during non-business hours, or if it took seven hours for their monitoring systems to detect the issue. Either way, the attacker had a free pass to the environment for seven hours, which is plenty of time to perform malicious activity.

Lesson Learned

Invest in a tool that provides detailed context on each issue so that you can detect and triage problems faster. Many tools generate alerts with very little context. For example, an alert that notifies you when a large number of instances are created is not sufficient. It could be a false positive caused by an auto-scaling system that spins up instances on demand.

A more helpful alert would be the following:

User “prod-db-back-up” just launched 3 instances (i-123, i-456, i-789) in the us-east-1 region from a previously unknown location (Wichita, Kansas, USA), and these instances are connected to the following internet IPs: 8.8.8.8:53, 8.8.8.8:53

For a tool to be able to generate a contextual alert such as the one above, it must correlate configuration, user activity, and network traffic data.
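To make that correlation concrete, here is a deliberately small sketch (boto3, with a hypothetical hard-coded set of “known” source IPs) that flags RunInstances calls made from unfamiliar addresses. A real tool would learn this baseline automatically and join it with geolocation data and VPC flow logs, as described above.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical hard-coded baseline of "known" source IPs for this account;
# a real tool would learn this baseline from historical activity.
KNOWN_SOURCE_IPS = {"203.0.113.10", "203.0.113.11"}

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region

# Flag RunInstances calls made from unfamiliar source addresses, one small
# slice of the configuration/user-activity correlation described above.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
)
for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    source_ip = detail.get("sourceIPAddress", "unknown")
    if source_ip not in KNOWN_SOURCE_IPS:
        print(f"RunInstances by {event.get('Username')} from unfamiliar IP {source_ip}")
```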

    
