Tag Archives: cloud security

Introducing InsightCloudSec

Post Syndicated from Brian Johnson original https://blog.rapid7.com/2021/07/07/introducing-insightcloudsec/


A little over a year ago, when DivvyCloud first combined forces with Rapid7, I wrote that we did it because of culture, technology alignment, and the opportunity to drive even greater innovation in cloud security.

Those core values remain true, and so does something else. As more organizations adopt and invest in cloud and container technology, the issues facing cloud security programs continue to put enterprises at significant risk. The need for fully integrated solutions that enable continuous security and compliance for cloud environments also continues to increase exponentially. Organizations need simple yet comprehensive visibility into their cloud and container environments to help mitigate risk, potential threats, and misconfigurations. With this growing need for automation and security, I’m proud to announce our next step in helping to drive cloud security forward: InsightCloudSec.

InsightCloudSec is our combined solution of all the things our customers love about DivvyCloud, now with cluster-level Kubernetes security capabilities from Alcide, aligned and integrated with Rapid7’s Insight Platform. Our team and customer-driven, innovation-focused roadmap will stay the same. Combining all of our efforts and energy into InsightCloudSec allows us to move forward as a more powerful cloud security solution.

Combining forces to become a leader in cloud security

Rapid7 acquired both DivvyCloud and Alcide in the past 15 months to more effectively solve the complex security challenges that we’ve seen arise from the scale and speed of cloud adoption.

Now more than ever, it’s clear that solving these challenges requires a single solution that brings the core capabilities of cloud security together, including posture management, identity and access management, infrastructure as code analysis, and Kubernetes protection. Furthermore, we want to bring all of this cloud security functionality together on a platform that also features additional security capabilities, from vulnerability management to detection and response, in order to provide the full context necessary to accurately assess risk within the cloud environment.

By bringing all of these features together, we aim to help our customers achieve 3 ultimate goals:

  • Shift Left—We want to help customers prevent problems before they happen by providing a single, consistent set of security checks throughout the CI/CD pipeline to uncover misconfigurations and policy violations without delaying deployment.
  • Reduce Noise—We want to simplify and speed up risk assessment by combining unified visibility and shared terminology with rich contextual insights gathered from across each layer of your cloud environment.
  • Automate Workflows—Finally, we want to help security teams enable developers to take full advantage of the speed and scale of the cloud safely with precise automation that speeds up remediation, reduces busywork, and allows the security team to focus on the bigger picture.

Behind this announcement, our team is as energized and dedicated as ever to our goal of delivering innovative capabilities and the industry-leading support that has helped our customers develop some of the most mature cloud security programs in the world over the last eight years.

If you’re interested in learning more about what we have in store, join us for an upcoming webinar on July 27 at 11:00 a.m. EDT.

Automated remediation level 3: Governance and hygiene

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/06/28/automated-remediation-level-3-governance-and-hygiene/

Mold it, make it, just don’t fake it


At a quick glance, it seems like the title of this blog is “government hygiene.” Most likely, that wouldn’t be a particularly exciting read, but we’re hoping you might be engaged enough to gain a few takeaways from this fourth piece in our series on automating remediation and how it can benefit your team and cost center.

The best way to mold a solution that makes sense for your company and cloud security is by adding actions that cause the fewest deviations in your day-to-day operations. Of course, there are several best-practice use cases that can make sense for your organization. Let’s take a look at a few so you can decide which one(s) work(s) for you.

Environment enforcement

Sandboxes are designed to be safe spaces, so they should also be clean spaces. As software development life cycles (SDLCs) accelerate and security posture moves increasingly left into the hands of the developers spinning things up, it’s important to not only isolate and lock down your sandbox space, but also to create a repeat cleaning schedule. Your software release cycle can also act as regularly scheduled sandbox maintenance.
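A scheduled cleanup job is one way to enforce that repeat cleaning cadence. The sketch below is a minimal, hypothetical example using Python and boto3: it assumes sandbox instances carry an `env=sandbox` tag and that one release cycle is 14 days, and it terminates anything older than that. The tag key, age limit, and Region are all placeholders for your own conventions, not a prescribed standard.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE_DAYS = 14  # hypothetical: one release cycle

def is_expired(launch_time, now=None, max_age_days=MAX_AGE_DAYS):
    """Return True if a sandbox resource has outlived the release cycle."""
    now = now or datetime.now(timezone.utc)
    return now - launch_time > timedelta(days=max_age_days)

def clean_sandbox(region="us-east-1"):
    # boto3 is imported here so is_expired() stays usable without AWS installed
    import boto3
    ec2 = boto3.resource("ec2", region_name=region)
    # Only touch instances explicitly tagged as sandbox resources
    sandbox = ec2.instances.filter(
        Filters=[{"Name": "tag:env", "Values": ["sandbox"]}]
    )
    for instance in sandbox:
        if is_expired(instance.launch_time):
            print(f"Terminating expired sandbox instance {instance.id}")
            instance.terminate()
```

Running `clean_sandbox()` on an EventBridge schedule that matches your release cycle keeps the sandbox maintenance hands-off.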

No exemptions for expensive instances

Spinning up instances that pull resources away from other critical applications can cost you. Sometimes they’re necessary; often they’re not. Whether it’s by cost, family type, or hardware specs, continuous monitoring is key so that even when unnecessarily resource-intensive processes aren’t automatically killed, you still have a good idea of what’s costing too much time and too much money. Amazon CloudWatch, for instance, can help you monitor EC2 instances and stop and start them at scheduled intervals.
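As a rough illustration of that kind of continuous monitoring, the following Python sketch flags running EC2 instances whose families are assumed, for this example only, to be expensive, and stops them. The family list is purely illustrative rather than a recommendation, and the handler is written so it could be wired to a scheduled CloudWatch Events/EventBridge rule.

```python
def pick_costly(instance_types, expensive_families=("p3", "x1", "r5")):
    """Flag instance types from families treated as 'expensive' in this sketch."""
    return [t for t in instance_types if t.split(".")[0] in expensive_families]

def handler(event, context):
    # Invoked on a CloudWatch Events / EventBridge schedule (e.g. nightly)
    import boto3  # imported lazily so pick_costly() is testable without AWS
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            if pick_costly([inst["InstanceType"]]):
                print(f"Stopping costly instance {inst['InstanceId']}")
                ec2.stop_instances(InstanceIds=[inst["InstanceId"]])
```

In practice you would likely start by only logging matches, then enable the `stop_instances` call once you have confirmed nothing critical would be caught.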

Cleanliness ≠ costliness

Properly automating anything in cloud security is ultimately going to save money for the organization. But, as we’ve discussed to some extent above and throughout this series, you’ll want to make sure automation isn’t creating unnecessary instances, orphaning outdated resources, or letting old snapshots and unused databases stagnate. Yep, there are a lot of things that can start to add up and begin puffing out a budget. Creating more efficient data pipelines and discovering which parts of the remediation process are the most labor-intensive can help identify where you should focus effort and resources. In this way, you can begin to target those areas that will require the most regular hygiene and cleanup.

Put a cork in the port (exposure)

Since everything on the internet is communicated and transferred via ports, it’s probably a good idea to think about locking down exposed ports that may be running protocols like Secure Shell (SSH) or Remote Desktop Protocol (RDP). Automating this type of cleanup will require knowing, similar to the above section, which ports do most of the heavy lifting in the daily rhythms of your cloud security operations. If a port isn’t being used in a meaningful way, or you simply don’t have any idea what its use is, it’s best to shut it down.
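One way to automate that cleanup, sketched below under the assumption that "risky" means SSH (22) or RDP (3389) open to 0.0.0.0/0, is to scan security group ingress rules and revoke the offending ones. Treat this as a starting point to adapt, not a drop-in remediation; the port list and the revoke-on-sight behavior are assumptions of this example.

```python
RISKY_PORTS = {22, 3389}  # SSH and RDP

def rule_is_exposed(rule, risky_ports=RISKY_PORTS):
    """True if an ingress rule opens a risky port to the whole internet."""
    open_to_world = any(
        r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
    )
    from_port = rule.get("FromPort")
    to_port = rule.get("ToPort")
    if from_port is None or to_port is None:  # e.g. all-traffic rules
        return open_to_world
    return open_to_world and any(from_port <= p <= to_port for p in risky_ports)

def revoke_exposed_rules(region="us-east-1"):
    import boto3  # lazy import keeps rule_is_exposed() testable offline
    ec2 = boto3.client("ec2", region_name=region)
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        exposed = [r for r in sg["IpPermissions"] if rule_is_exposed(r)]
        if exposed:
            print(f"Revoking {len(exposed)} exposed rule(s) on {sg['GroupId']}")
            ec2.revoke_security_group_ingress(
                GroupId=sg["GroupId"], IpPermissions=exposed
            )
```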

Stay vigilant while basking in benefits

Ensuring your automation framework is as organized as possible takes work, but it’s considerably less work than having no framework at all. Automating good governance and hygiene practices adds time saved to the overall benefits gained from this work. But you must monitor these processes and put checks in place to ensure your automation framework actually works for you and continues to save time and effort for years to come.

With that, we’re ready for a deep-dive into the final of 4 Levels of Automated Remediation. You can also read the previous entry in this series here.  

Level 4 coming soon!

CVE-2021-20025: SonicWall Email Security Appliance Backdoor Credential

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/06/23/cve-2021-20025-sonicwall-email-security-appliance-backdoor-credential/


The virtual, on-premises version of the SonicWall Email Security Appliance ships with an undocumented, static credential, which can be used by an attacker to gain root privileges on the device. This is an instance of CWE-798: Use of Hard-coded Credentials, and has an estimated CVSSv3 score of 9.1. This issue was fixed by the vendor in version 10.0.10, according to the vendor’s advisory, SNWLID-2021-0012.

Product Description

The SonicWall Email Security Virtual Appliance is a solution which “defends against advanced email-borne threats such as ransomware, zero-day threats, spear phishing and business email compromise (BEC).” It is in use in many industries around the world as a primary means of preventing several varieties of email attacks. More about SonicWall’s solutions can be found at the vendor’s website.


Credit

This issue was discovered by William Vu of Rapid7, and is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.


Exploitation

The session capture detailed below illustrates using the built-in SSH management interface to connect to the device as the root user with the password “sonicall”.

This attack was tested on the latest builds available at the time of testing.

[email protected]:~$ ssh -o stricthostkeychecking=no -o userknownhostsfile=/dev/null [email protected]
Warning: Permanently added '' (ECDSA) to the list of known hosts.
For CLI access you must login as snwlcli user.
[email protected]'s password: sonicall
[[email protected] ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
[[email protected] ~]# uname -a
Linux snwl.example.com 4.19.58-snwl-VMWare64 #1 SMP Wed Apr 14 20:59:36 PDT 2021 x86_64 GNU/Linux
[[email protected] ~]# grep root /etc/shadow
[[email protected] ~]# snwlcli
Login: admin
SNWLCLI> version


Impact

Given remote root access to what is usually a perimeter-homed device, an attacker can further extend their reach into a targeted network to launch additional attacks against internal infrastructure from across the internet. More locally, an attacker could likely use this access to update or disable email scanning rules and logic to allow for malicious email to pass undetected to further exploit internal hosts.


Remediation

As detailed in the vendor’s advisory, users are urged to use at least version 10.0.10 on fresh installs of the affected virtual host. Note that updating to version 10.0.10 or later does not appear to resolve the issue directly; users who update existing versions should ensure they are also connected to the MySonicWall Licensing services in order to completely resolve the issue.

For sites that are not able to update or replace affected devices immediately, access to remote management services should be limited to only trusted, internal networks — most notably, not the internet.

Disclosure Timeline

Rapid7’s coordinated vulnerability disclosure team is deeply impressed with this vendor’s rapid turnaround time from report to fix. This vulnerability was both reported and confirmed on the same day, and thanks to SonicWall’s excellent PSIRT team, a fix was developed, tested, and released almost exactly three weeks after the initial report.

  • April 2021: Issue discovered by William Vu of Rapid7
  • Thu, Apr 22, 2021: Details provided to SonicWall via their CVD portal
  • Thu, Apr 22, 2021: Confirmed reproduction of the issue by the vendor
  • Thu, May 13, 2021: Update released by the vendor, removing the static account
  • Thu, May 13, 2021: CVE-2021-20025 published in SNWLID-2021-0012
  • Wed, Jun 23, 2021: Rapid7 advisory published with further details

AWS Verified episode 5: A conversation with Eric Rosenbach of Harvard University’s Belfer Center

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-verified-episode-5-a-conversation-with-eric-rosenbach-of-harvard-universitys-belfer-center/

I am pleased to share the latest episode of AWS Verified, where we bring you conversations with global cybersecurity leaders about important issues, such as how to create a culture of security, cyber resiliency, Zero Trust, and other emerging security trends.

Recently, I got the opportunity to experience distance learning when I took the AWS Verified series back to school. I got a taste of life as a Harvard grad student, meeting (virtually) with Eric Rosenbach, Co-Director of the Belfer Center of Science and International Affairs at Harvard University’s John F. Kennedy School of Government. I call it, “Verified meets Veritas.” Harvard’s motto may never be the same again.

In this video, Eric shared with me the Belfer Center’s focus as the hub of the Harvard Kennedy School’s research, teaching, and training at the intersection of cutting edge and interdisciplinary topics, such as international security, environmental and resource issues, and science and technology policy. In recognition of the Belfer Center’s consistently stellar work and its six consecutive years ranked as the world’s #1 university-affiliated think tank, in 2021 it was named a center of excellence by the University of Pennsylvania’s Think Tanks and Civil Societies Program.

Eric’s deep connection to the students reflects the Belfer Center’s mission to prepare future generations of leaders to address critical areas in practical ways. Eric says, “I’m a graduate of the school, and now that I’ve been out in the real world as a policy practitioner, I love going into the classroom, teaching students about the way things work, both with cyber policy and with cybersecurity/cyber risk mitigation.”

In the interview, I talked with Eric about his varied professional background. Prior to the Belfer Center, he was the Chief of Staff to US Secretary of Defense, Ash Carter. Eric was also the Assistant Secretary of Defense for Homeland Defense and Global Security, where he was known around the US government as the Pentagon’s cyber czar. He has served as an officer in the US Army, written two books, been the Chief Security Officer for the European ISP Tiscali, and was a professional committee staff member in the US Senate.

I asked Eric to share his opinion on what the private sector and government can learn from each other. I’m excited to share Eric’s answer to this with you as well as his thoughts on other topics, because the work that Eric and his Belfer Center colleagues are doing is important for technology leaders.

Watch my interview with Eric Rosenbach, and visit the AWS Verified webpage for previous episodes, including interviews with security leaders from Netflix, Vodafone, Comcast, and Lockheed Martin. If you have an idea or a topic you’d like covered in this series, please leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

How to replicate secrets in AWS Secrets Manager to multiple Regions

Post Syndicated from Fatima Ahmed original https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/

On March 3, 2021, we launched a new feature for AWS Secrets Manager that makes it possible for you to replicate secrets across multiple AWS Regions. You can give your multi-Region applications access to replicated secrets in the required Regions and rely on Secrets Manager to keep the replicas in sync with the primary secret. In scenarios such as disaster recovery, you can read replicated secrets from your recovery Regions, even if your primary Region is unavailable. In this blog post, I show you how to automatically replicate a secret and access it from the recovery Region to support a disaster recovery plan.

With Secrets Manager, you can store, retrieve, manage, and rotate your secrets, including database credentials, API keys, and other secrets. When you create a secret using Secrets Manager, it’s created and managed in a Region of your choosing. Although scoping secrets to a Region is a security best practice, there are scenarios such as disaster recovery and cross-Regional redundancy that require replication of secrets across Regions. Secrets Manager now makes it possible for you to easily replicate your secrets to one or more Regions to support these scenarios.

With this new feature, you can create Regional read replicas for your secrets. When you create a new secret or edit an existing secret, you can specify the Regions where your secrets need to be replicated. Secrets Manager will securely create the read replicas for each secret and its associated metadata, eliminating the need to maintain a complex solution for this functionality. Any update made to the primary secret, such as a secret value updated through automatic rotation, will be automatically propagated by Secrets Manager to the replica secrets, making it easier to manage the life cycle of multi-Region secrets.

Note: Each replica secret is billed as a separate secret. For more details on pricing, see the AWS Secrets Manager pricing page.

Architecture overview

Suppose that your organization has a requirement to set up a disaster recovery plan. In this example, us-east-1 is the designated primary Region, where you have an application running on a simple AWS Lambda function (for the example in this blog post, I’m using Python 3). You also have an Amazon Relational Database Service (Amazon RDS) – MySQL DB instance running in the us-east-1 Region, and you’re using Secrets Manager to store the database credentials as a secret. Your application retrieves the secret from Secrets Manager to access the database. As part of the disaster recovery strategy, you set up us-west-2 as the designated recovery Region, where you’ve replicated your application, the DB instance, and the database secret.

To elaborate, the solution architecture consists of:

  • A primary Region for creating the secret, in this case us-east-1 (N. Virginia).
  • A replica Region for replicating the secret, in this case us-west-2 (Oregon).
  • An Amazon RDS – MySQL DB instance that is running in the primary Region and configured for replication to the replica Region. To set up read replicas or cross-Region replicas for Amazon RDS, see Working with read replicas.
  • A secret created in Secrets Manager and configured for replication for the replica Region.
  • AWS Lambda functions (running on Python 3) deployed in the primary and replica Regions acting as clients to the MySQL DBs.

This architecture is illustrated in Figure 1.

Figure 1: Architecture overview for a multi-Region secret replication with the primary Region active


In the primary Region, us-east-1, the Lambda function uses the credentials stored in the secret to access the database, as indicated by the following steps in Figure 1:

  1. The Lambda function sends a request to Secrets Manager to retrieve the secret value by using the GetSecretValue API call. Secrets Manager retrieves the secret value for the Lambda function.
  2. The Lambda function uses the secret value to connect to the database in order to read/write data.

The replicated secret in us-west-2 points to the primary DB instance in us-east-1. This is because when Secrets Manager replicates the secret, it replicates the secret value and all the associated metadata, such as the database endpoint. The database endpoint details are stored within the secret because Secrets Manager uses this information to connect to the database and rotate the secret if it is configured for automatic rotation. The Lambda function can also use the database endpoint details in the secret to connect to the database.

To simplify database failover during disaster recovery, as I’ll cover later in the post, you can configure an Amazon Route 53 CNAME record for the database endpoint in the primary Region. The database host associated with the secret is configured with the database CNAME record. When the primary Region is operating normally, the CNAME record points to the database endpoint in the primary Region. The requests to the database CNAME are routed to the DB instance in the primary Region, as shown in Figure 1.

During disaster recovery, you can failover to the replica Region, us-west-2, to make it possible for your application running in this Region to access the Amazon RDS read replica in us-west-2 by using the secret stored in the same Region. As part of your failover script, the database CNAME record should also be updated to point to the database endpoint in us-west-2. Because the database CNAME is used to point to the database endpoint within the secret, your application in us-west-2 can now use the replicated secret to access the database read replica in this Region. Figure 2 illustrates this disaster recovery scenario.

Figure 2: Architecture overview for a multi-Region secret replication with the replica Region active

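The CNAME update described above can be scripted with the Route 53 API. The following Python sketch builds the UPSERT change for the database record; the hosted zone ID, record name, and replica endpoint are placeholder values you would substitute with your own, and the 60-second TTL is an assumption chosen to speed up failover.

```python
def build_change_batch(record_name, replica_endpoint, ttl=60):
    """Construct the Route 53 UPSERT payload for the database CNAME."""
    return {
        "Comment": "DR failover: point DB CNAME at the replica Region",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,            # e.g. db.example.internal
                "Type": "CNAME",
                "TTL": ttl,                     # short TTL speeds failover
                "ResourceRecords": [{"Value": replica_endpoint}],
            },
        }],
    }

def failover_db_cname(hosted_zone_id, record_name, replica_endpoint):
    """Repoint the database CNAME at the replica Region's DB endpoint."""
    import boto3  # lazy import so build_change_batch() is testable offline
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch=build_change_batch(record_name, replica_endpoint),
    )
```

A call such as `failover_db_cname("Z123EXAMPLE", "db.example.internal", "mydb.us-west-2.rds.amazonaws.com")` would be one step in the failover script.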


Prerequisites

The procedure described in this blog post requires that you complete the following steps first:

  1. Configure an Amazon RDS DB instance in the primary Region, with replication configured in the replica Region.
  2. Configure a Route 53 CNAME record for the database endpoint in the primary Region.
  3. Configure the Lambda function to connect with the Amazon RDS database and Secrets Manager by following the procedure in this blog post.
  4. Sign in to the AWS Management Console using a role that has SecretsManagerReadWrite permissions in the primary and replica Regions.

Enable replication for secrets stored in Secrets Manager

In this section, I walk you through the process of enabling replication in Secrets Manager for:

  1. A new secret that is created for your Amazon RDS database credentials
  2. An existing secret that is not configured for replication

For the first scenario, I show you the steps to create a secret in Secrets Manager in the primary Region (us-east-1) and enable replication for the replica Region (us-west-2).

To create a secret with replication enabled

  1. In the AWS Management Console, navigate to the Secrets Manager console in the primary Region (N. Virginia).
  2. Choose Store a new secret.
  3. On the Store a new secret screen, enter the Amazon RDS database credentials that will be used to connect with the Amazon RDS DB instance. Select the encryption key and the Amazon RDS DB instance, and then choose Next.
  4. Enter the secret name of your choice, and then enter a description. You can also optionally add tags and resource permissions to the secret.
  5. Under Replicate Secret – optional, choose Replicate secret to other regions.

    Figure 3: Replicate a secret to other Regions


  6. For AWS Region, choose the replica Region, US West (Oregon) us-west-2. For Encryption Key, choose Default to store your secret in the replica Region. Then choose Next.

    Figure 4: Configure secret replication


  7. In the Configure Rotation section, you can choose whether to enable rotation. For this example, I chose not to enable rotation, so I selected Disable automatic rotation. However, if you want to enable rotation, you can do so by following the steps in Enabling rotation for an Amazon RDS database secret in the Secrets Manager User Guide. When you enable rotation in the primary Region, any changes to the secret from the rotation process are also replicated to the replica Region. After you’ve configured the rotation settings, choose Next.
  8. On the Review screen, you can see the summary of the secret configuration, including the secret replication configuration.

    Figure 5: Review the secret before storing


  9. At the bottom of the screen, choose Store.
  10. At the top of the next screen, you’ll see two banners that provide status on:
    • The creation of the secret in the primary Region
    • The replication of the secret in the replica Region

    After the creation and replication of the secret is successful, the banners will provide you with confirmation details.

At this point, you’ve created a secret in the primary Region (us-east-1) and enabled replication in a replica Region (us-west-2). You can now use this secret in the replica Region as well as the primary Region.
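If you prefer automation to the console, the same outcome can be achieved with a single SDK call. The following Python sketch creates a secret in the primary Region and replicates it to us-west-2 through the AddReplicaRegions parameter; the secret name and credential fields here are purely illustrative.

```python
import json

def replica_regions(*regions, kms_key_id=None):
    """Build the AddReplicaRegions parameter; optionally pin a KMS key."""
    return [
        {"Region": r, "KmsKeyId": kms_key_id} if kms_key_id else {"Region": r}
        for r in regions
    ]

def create_replicated_secret(name, credentials, replica="us-west-2"):
    """Create a secret in the primary Region and replicate it to another."""
    import boto3  # imported here so replica_regions() is testable without AWS
    client = boto3.client("secretsmanager", region_name="us-east-1")
    return client.create_secret(
        Name=name,
        SecretString=json.dumps(credentials),
        AddReplicaRegions=replica_regions(replica),
    )
```

For example, `create_replicated_secret("prod/db-creds", {"username": "admin", "password": "..."})` stores the secret in us-east-1 and replicates it to us-west-2; omitting `kms_key_id` uses the default encryption key in the replica Region, matching the console walkthrough above.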

Now suppose that you have a secret created in the primary Region (us-east-1) that hasn’t been configured for replication. You can also configure replication for this existing secret by using the following procedure.

To enable multi-Region replication for existing secrets

  1. In the Secrets Manager console, choose the secret name. At the top of the screen, choose Replicate secret to other regions.
    Figure 6: Enable replication for existing secrets


    This opens a pop-up screen where you can configure the replica Region and the encryption key for encrypting the secret in the replica Region.

  2. Choose the AWS Region and encryption key for the replica Region, and then choose Complete adding region(s).
    Figure 7: Configure replication for existing secrets


    This starts the process of replicating the secret from the primary Region to the replica Region.

  3. Scroll down to the Replicate Secret section. You can see that the replication to the us-west-2 Region is in progress.
    Figure 8: Review progress for secret replication


    After the replication is successful, you can look under Replication status to review the replication details that you’ve configured for your secret. You can also choose to replicate your secret to more Regions by choosing Add more regions.

    Figure 9: Successful secret replication to a replica Region

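The console steps above also have an SDK equivalent. The sketch below uses the ReplicateSecretToRegions API via boto3 to enable replication on an existing secret; the secret ID and Region list are placeholders for your own values.

```python
def as_replica_params(regions):
    """Shape a list of Region names into the AddReplicaRegions parameter."""
    return [{"Region": r} for r in regions]

def replicate_existing_secret(secret_id, regions):
    """Enable replication on a secret that was created without it."""
    import boto3  # imported here so as_replica_params() is testable without AWS
    client = boto3.client("secretsmanager", region_name="us-east-1")
    resp = client.replicate_secret_to_regions(
        SecretId=secret_id,
        AddReplicaRegions=as_replica_params(regions),
    )
    # Each entry reports a per-Region status such as InProgress or InSync
    return [(s["Region"], s["Status"]) for s in resp["ReplicationStatus"]]
```

A call like `replicate_existing_secret("prod/db-creds", ["us-west-2"])` mirrors the "Replicate secret to other regions" console action, and the returned status pairs correspond to what the console shows under Replication status.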

Update the secret with the CNAME record

Next, you can update the host value in your secret to the CNAME record of the DB instance endpoint. This will make it possible for you to use the secret in the replica Region without making changes to the replica secret. In the event of a failover to the replica Region, you can simply update the CNAME record to the DB instance endpoint in the replica Region as a part of your failover script.

To update the secret with the CNAME record

  1. Navigate to the Secrets Manager console, and choose the secret that you have set up for replication.
  2. In the Secret value section, choose Retrieve secret value, and then choose Edit.
  3. Update the secret value for the host with the CNAME record, and then choose Save.

    Figure 10: Edit the secret value


  4. After you choose Save, you’ll see a banner at the top of the screen with a message that indicates that the secret was successfully edited.

    Because the secret is set up for replication, you can also review the status of the synchronization of your secret to the replica Region after you updated the secret. To do so, scroll down to the Replicate Secret section and look under Region Replication Status.


    Figure 11: Successful secret replication for a modified secret

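Scripting the same edit is straightforward with boto3. The sketch below fetches the secret's JSON, swaps the host field for the CNAME, and writes a new version with PutSecretValue. It assumes the secret stores credentials as JSON with a `host` key, which matches the RDS secret format used in this post; the secret ID and CNAME are placeholders.

```python
import json

def update_host(secret_dict, cname):
    """Pure helper: swap the host, leave every other field untouched."""
    updated = dict(secret_dict)
    updated["host"] = cname
    return updated

def point_secret_at_cname(secret_id, cname):
    """Rewrite the 'host' field of a database secret to the CNAME record."""
    import boto3  # lazy import keeps update_host() testable offline
    client = boto3.client("secretsmanager")
    current = json.loads(
        client.get_secret_value(SecretId=secret_id)["SecretString"]
    )
    client.put_secret_value(
        SecretId=secret_id,
        SecretString=json.dumps(update_host(current, cname)),
    )
```

Because the secret is configured for replication, the new version written by `put_secret_value` is propagated to the replica Region automatically, just like the console edit above.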

Access replicated secrets from the replica Region

Now that you’ve configured the secret for replication in the primary Region, you can access the secret from the replica Region. Here I demonstrate how to access a replicated secret from a simple Lambda function that is deployed in the replica Region (us-west-2).

To access the secret from the replica Region

  1. From the AWS Management Console, navigate to the Secrets Manager console in the replica Region (Oregon) and view the secret that you configured for replication in the primary Region (N. Virginia).

    Figure 12: View secrets that are configured for replication in the replica Region


  2. Choose the secret name and review the details that were replicated from the primary Region. A secret that is configured for replication will display a banner at the top of the screen stating the replication details.

    Figure 13: The replication status banner


  3. Under Secret Details, you can see the secret’s ARN. You can use the secret’s ARN to retrieve the secret value from the Lambda function or application that is deployed in your replica Region (Oregon). Make a note of the ARN.

    Figure 14: View secret details


During a disaster recovery scenario when the primary Region isn’t available, you can update the CNAME record to point to the DB instance endpoint in us-west-2 as part of your failover script. For this example, my application that is deployed in the replica Region is configured to use the replicated secret’s ARN.

Let’s suppose your sample Lambda function defines the secret name and the Region in the environment variables. The REGION_NAME environment variable contains the name of the replica Region; in this example, us-west-2. The SECRET_NAME environment variable is the ARN of your replicated secret in the replica Region, which you noted earlier.

Figure 15: Environment variables for the Lambda function


In the replica Region, you can now refer to the secret’s ARN and Region in your Lambda function code to retrieve the secret value for connecting to the database. The following sample Lambda function code snippet uses the secret_name and region_name variables to retrieve the secret’s ARN and the replica Region values stored in the environment variables.

import os
import boto3
from botocore.exceptions import ClientError

secret_name = os.environ['SECRET_NAME']
region_name = os.environ['REGION_NAME']

def openConnection():
    # Create a Secrets Manager client in the replica Region
    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager', region_name=region_name)
    try:
        get_secret_value_response = client.get_secret_value(SecretId=secret_name)
    except ClientError as e:
        if e.response['Error']['Code'] == 'ResourceNotFoundException':
            print("The requested secret " + secret_name + " was not found")
        raise
    # The SecretString holds the database credentials used to open the connection
    return get_secret_value_response['SecretString']

Alternatively, you can simply use the Python 3 sample code for the replicated secret to retrieve the secret value from the Lambda function in the replica Region. You can review the provided sample code by navigating to the secret details in the console, as shown in Figure 16.

Figure 16: Python 3 sample code for the replicated secret



Summary

When you plan for disaster recovery, you can configure replication of your secrets in Secrets Manager to provide redundancy for your secrets. This feature reduces the overhead of deploying and maintaining additional configuration for secret replication and retrieval across AWS Regions. In this post, I showed you how to create a secret and configure it for multi-Region replication. I also demonstrated how you can configure replication for existing secrets across multiple Regions.

I showed you how to use secrets from the replica Region and configure a sample Lambda function to retrieve a secret value. When replication is configured for secrets, you can use this technique to retrieve the secrets in the replica Region in a similar way as you would in the primary Region.

You can start using this feature through the AWS Secrets Manager console, AWS Command Line Interface (AWS CLI), AWS SDK, or AWS CloudFormation. To learn more about this feature, see the AWS Secrets Manager documentation. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Fatima Ahmed

Fatima is a Security Global Practice Lead at AWS. She is passionate about cybersecurity and helping customers build secure solutions in the AWS Cloud. When she is not working, she enjoys time with her cat or solving cryptic puzzles.

IAM Never Gonna Give You Up, Never Gonna Breach Your Cloud

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/03/03/iam-never-gonna-give-you-up-never-gonna-breach-your-cloud/

IAM Never Gonna Give You Up, Never Gonna Breach Your Cloud

This blog is part of an ongoing series sharing key takeaways from Rapid7’s 2020 Cloud Security Executive Summit. Interested in participating in the next summit on Tuesday, March 9? Register here!

Identity and access management (IAM) credentials have solved myriad security issues, but the recent cloud-based IAM movement has left many scratching their heads as to why it can be so complex.

IAM on-premises vs. IAM off-premises

On-premises IAM has become a whole lot simpler. In many organizations, it is LDAP-based, so most things are tied back into it, such as database credentials and system accounts, and there are well-known processes and ways to deal with those. When it comes to the cloud, however, organizations now have to deal with inheritance and other aspects that may not correlate back to the on-premises world. These new concepts, new constructs, and different ways to interact can be overwhelming.

Complexity can really become an issue with something like assume-role in AWS. Going to a least-privileged model can frustrate people, so they may just want access to everything on a given surface, promising to scale the permissions back later. The worry there is that you can end up with over-permissioned identities that never get fixed. With assume-role in particular, credentials are no longer stored inside a physical operating system, but rather in a metadata layer associated to a piece of infrastructure. This applies to a number of different services specifically within the cloud provider—everything from compute instances, to database instances, to storage assets, and more. These aspects can all be very complex to secure, but there’s no question it makes operations safer. Speaking of safe…
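To make the assume-role mechanics concrete, here is a minimal sketch of exchanging the caller's long-lived credentials for temporary, role-scoped ones. The role ARN and session name are hypothetical, and boto3 is imported inside the function so the snippet loads without the AWS SDK installed:

```python
def scoped_session(role_arn: str):
    """Exchange the caller's credentials for short-lived, role-scoped ones."""
    import boto3  # deferred import; AWS credentials are needed at call time

    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn,
        RoleSessionName="least-privilege-demo",
        DurationSeconds=900,  # keep temporary credentials short-lived
    )["Credentials"]
    return boto3.session.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```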

Going fast vs. going safe

This may be stating the obvious, but a general sentiment of feature developers is that they aren't all that excited about security. Often, this is because their jobs may depend on speed—they're told to go fast. A main value proposition of the cloud is that teams buy into the sentiment of "I can be as agile as I need to be, I can be as quick as I need to be." So there's a sort of relief at that, but it's very likely those teams are also told, "Make sure we don't have incidents, data breaches, or data exposures." A tension can grow from increased use of the cloud and operational offloading of IAM protocols. In this case, keeping things as simple as possible is a way to maintain efficient processes and keep them moving. What are some tools organizations can use to lessen friction between developer teams and security?

Service-control policies and session-control policies

If there’s an umbrella structure sitting over your cloud accounts, or over apps like Google projects, policies can be pushed from a top level down with service-control policies. A session policy is generated either from the user side, from an application, or from an assume-role. Going even further, there’s also an identity policy associated with each individual in an active session. With all of this potential complexity, how would organizations go about simplifying, especially as they shift authentication to the cloud?

The above process does provide an abundance of granularity as well as freedom to explicitly allow or deny at many, many levels. The flipside of that, of course, is that there are many, many levels.  

  • Leverage all aspects: Some companies may restrict access to specific services or regions. Plus, they may not want to use just any old cloud service, due to various IAM implications, based on the level of security in the organization. So it's about tailoring a set of policies to specific goals.
  • Pre-canned policies: Leveraging automation, it might be faster to deploy these types of policies while also leaving room for a certain amount of autonomy. In this way, teams can tailor some resource-level access standards.  

Again, based on a company’s specific goals, these types of processes can help ease friction between teams looking to ship a project fast and those trying their best to keep them secure.
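As a thought experiment (this is a toy model, not AWS's actual policy evaluation logic), the layering of service-control, session, and identity policies behaves like an intersection of allows in which any explicit deny wins:

```python
def is_allowed(action: str, *policy_layers: dict) -> bool:
    """Toy evaluation: every layer must allow the action, and no layer may deny it."""
    for layer in policy_layers:
        if action in layer.get("deny", set()):
            return False  # an explicit deny at any layer always wins
    return all(
        action in layer.get("allow", set()) or "*" in layer.get("allow", set())
        for layer in policy_layers
    )

# Hypothetical layers: an org-wide SCP, a session policy, and an identity policy
scp = {"allow": {"*"}, "deny": {"s3:DeleteBucket"}}
session_policy = {"allow": {"s3:GetObject", "s3:PutObject"}}
identity_policy = {"allow": {"s3:GetObject"}}

print(is_allowed("s3:GetObject", scp, session_policy, identity_policy))     # True
print(is_allowed("s3:DeleteBucket", scp, session_policy, identity_policy))  # False
```

The over-permissioning worry described above shows up here too: granting `"*"` at the identity layer makes every action pass, which is exactly the "fix it later" state that tends never to get fixed.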

Seeing it and protecting it

At the end of the day, the feeling from much of the security world is that visibility has to be there for SecOps to be able to tell you that something is secure. They may not need to go inside and read the data, but if there is no visibility, they can’t comment on any aspect of configuration. An old idea around securing infrastructure is that everything in a private network is implicitly authorized to access everything else. So, once a security checkpoint is crossed, anything could be accessed.

Of course, this whole process aims to improve incident-response efficiency by enforcing security controls at every step of the kill chain. This means not assuming identities are trustworthy and not trusting anybody by default, also known as zero trust. Therefore, if the information on your website is something that's super-secret, it might make sense to put it behind a VPN just for that extra layer of security—even if you have HTTPS authentication and authorization—against someone who might not be part of the team.

A zero-trust initiative opens the discussion of workload identity in the cloud. Teams could use cloud-native services (from providers such as Google Cloud and AWS) to ensure apps are talking to each other and authenticating properly, and that connections originate from the right machines.

Identity is everything

At the end of the day, it’s never about invasion of any sort of privacy. It’s all about securing as much as you can and authenticating connections to protect against threats. A combination of technical processes and open communication is key in mitigating the challenges of protecting against those threats in a cloud-based IAM solution.

Want to learn more? Register for the upcoming Cloud Security Executive Summit on March 9 to hear industry-leading experts discuss the critical issues affecting cloud security today.

How to Achieve and Maintain Continuous Cloud Compliance

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/03/01/how-to-achieve-and-maintain-continuous-cloud-compliance/

How to Achieve and Maintain Continuous Cloud Compliance

This blog is part of an ongoing series sharing key takeaways from Rapid7’s 2020 Cloud Security Executive Summit. Interested in participating in the next summit on Tuesday, March 9? Register here!

There are two things that make data a hot topic. First, keeping track of your organization’s sheer volume of data is extremely difficult. Second, the evolving nature of the threat and threat-vector landscape can make data management and protection astonishingly challenging. We should be focused on staying compliant in the present, but also paying attention to and evolving for what’s coming in the near future.

Why is it so hard to achieve continuous compliance in the cloud?

Getting it all right and continuously achieving compliance can be taxing on any security organization, and the cloud adds another layer of mobility to this. Pushing more operations into the cloud has so many "shiny" benefits that losing direct visibility into your physical environment can be an easily overlooked drawback, one whose true impact may not be known until much later.

People can also be a big x-factor when it comes to cloud compliance. For years, your workforce has been trained in one area of compliance, and now you might have to take such measures as hiring new talent or retraining existing employees in proper cloud-compliance methodologies. So, it’s certainly worth it to regularly take a hard look at your existing policies, because the last thing anyone wants is for their compliance to be out-of-date and no longer addressing the right issues.

Taking these considerations into account, and given the ephemeral nature of the cloud in general, is continuous cloud compliance even achievable?

The answer is yes, but of course it depends on what you're trying to be compliant with. You might meet all of the laws, rules, and regulations in your industry, but at the same time not be mitigating all the risks. This means ensuring you have preventative controls in place, as well as continuous monitoring and scanning, so that you can identify threats faster and mitigate more overall risk.

Therefore, the answer will be different for each organization as goals are defined and you go about building a unique and continuous cloud-compliant solution. A more specific goal would come back to people and ensuring they work within the guardrails you set for them in your monitoring and preventative measures.

Changing the culture of blame

What happens when unencrypted data ends up in the cloud? Is the engineer to blame? The security team? Is your organization properly calibrated to share accountability with other teams, or does responsibility end with you? There is no question that continuous compliance takes constant teamwork that revolves around being solution-oriented and learning to not “point the finger.” It’s important that all players are comfortable asking for help and, in a worst-case scenario, don’t feel scared for their jobs.

Innovation comes from experimentation, and personnel should feel a certain amount of freedom to do that as well as identify when things go wrong and own it. Sourcing the right talent plays a big role in changing the culture of blame. On-premises skills are different from a cloud-based set. For instance, an engineer who is great at automation on-premises could be struggling in the cloud. It simply comes down to a new spectrum of vocabularies they have to learn. It’s key to source talent that’s eager to learn, so that the whole team can ultimately become successful in this new environment.

Constant tuning

It would be amazing if there were a big red easy button for automating compliance and calling it done. Even as automation becomes easier, it will still take vigilance to optimize processes as budgets are freed up and stakeholders decide where money should be spent. Consider having the conversation about spending money on compliance at the beginning of a project; otherwise, you might delay it for a long period of time and then spend $100 million more to become compliant. Getting everyone on the same page early can save lots of red-tape cutting and, of course, money.

Want to learn more? Register for the upcoming Cloud Security Executive Summit on March 9 to hear industry-leading experts discuss the critical issues affecting cloud security today.

How to set up a recurring Security Hub summary email

Post Syndicated from Justin Criswell original https://aws.amazon.com/blogs/security/how-to-set-up-a-recurring-security-hub-summary-email/

AWS Security Hub provides a comprehensive view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and best practices. In this post, we’ll show you how to set up weekly email notifications using Security Hub to provide account owners with a summary of the existing security findings to prioritize, new findings, and links to the Security Hub console for more information.

When you enable Security Hub, it collects and consolidates findings from AWS security services that you’re using, such as intrusion detection findings from Amazon GuardDuty, vulnerability scans from Amazon Inspector, Amazon Simple Storage Service (Amazon S3) bucket policy findings from Amazon Macie, publicly accessible and cross-account resources from IAM Access Analyzer, and resources lacking AWS WAF coverage from AWS Firewall Manager. Security Hub also consolidates findings from integrated AWS Partner Network (APN) security solutions.

Cloud security processes can differ from traditional on-premises security in that security is often decentralized in the cloud. With traditional on-premises security operations, security alerts are typically routed to centralized security teams operating out of security operations centers (SOCs). With cloud security operations, it’s often the application builders or DevOps engineers who are best situated to triage, investigate, and remediate the security alerts. This integration of security into DevOps processes is referred to as DevSecOps, and as part of this approach, centralized security teams look for additional ways to proactively engage application account owners in improving the security posture of AWS accounts.

This solution uses Security Hub custom insights, AWS Lambda, and the Security Hub API. A custom insight is a collection of findings that are aggregated by a grouping attribute, such as severity or status. Insights help you identify common security issues that might require remediation action. Security Hub includes several managed insights, or you can create your own custom insights. Amazon SNS topic subscribers will receive an email, similar to the one shown in Figure 1, that summarizes the results of the Security Hub custom insights.

Figure 1: Example email with a summary of security findings for an account


Solution overview

This solution assumes that Security Hub is enabled in your AWS account. If it isn’t enabled, set up the service so that you can start seeing a comprehensive view of security findings across your AWS accounts.

A recurring Security Hub summary email provides recipients with a proactive communication that summarizes the security posture and any recent improvements within their AWS accounts.

Here’s how the solution works:

  1. Seven Security Hub custom insights are created when you first deploy the solution.
  2. An Amazon CloudWatch time-based event invokes a Lambda function for processing.
  3. The Lambda function gets the results of the custom insights from Security Hub, formats the results for email, and sends a message to Amazon SNS.
  4. Amazon SNS sends the email notification to the address you provided during deployment.
  5. The email includes the summary and links to the Security Hub UI so that the recipient can follow the remediation workflow.
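The formatting work in step 3 might look like the following sketch. The nested field names follow the shape of the Security Hub GetInsightResults response; the outer "Name" wrapper is hypothetical:

```python
def format_summary(insights: list) -> str:
    """Render insight result counts into a plain-text email body."""
    lines = []
    for insight in insights:
        lines.append(insight["Name"] + ":")
        for value in insight["InsightResults"]["ResultValues"]:
            lines.append(f"  {value['GroupByAttributeValue']}: {value['Count']}")
    return "\n".join(lines)

sample = [{
    "Name": "Failed AWS Foundational Security Best Practices",
    "InsightResults": {
        "GroupByAttribute": "SeverityLabel",
        "ResultValues": [
            {"GroupByAttributeValue": "CRITICAL", "Count": 3},
            {"GroupByAttributeValue": "HIGH", "Count": 12},
        ],
    },
}]
print(format_summary(sample))
```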

Figure 2 shows the solution workflow.

Figure 2: Solution overview, deployed through AWS CloudFormation


Security Hub custom insight

The finding results presented in the email are summarized by Security Hub custom insights. A Security Hub insight is a collection of related findings. Each insight is defined by a group by statement and optional filters. The group by statement indicates how to group the matching findings, and identifies the type of item that the insight applies to. For example, if an insight is grouped by resource identifier, then the insight produces a list of resource identifiers. The optional filters narrow down the matching findings for the insight. For example, you might want to see only the findings from specific providers or findings associated with specific types of resources. Figure 3 shows the seven custom insights that are created as part of deploying this solution.

Figure 3: Custom insights created by the solution


Sample custom insight

Security Hub offers several built-in managed (default) insights. You can’t modify or delete managed insights. You can view the custom insights created as part of this solution in the Security Hub console under Insights, by selecting the Custom Insights filter. From the email, follow the link for “Summary Email – 02 – Failed AWS Foundational Security Best Practices” to see the summarized finding counts, as well as graphs with related data, as shown in Figure 4.

Figure 4: Detail view of the email titled “Summary Email – 02 – Failed AWS Foundational Security Best Practices”


Let’s evaluate the filters that create this custom insight:

  • Type is "Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices": Captures all current and future findings created by the security standard AWS Foundational Security Best Practices.
  • Status is FAILED: Captures findings where the compliance status of the resource doesn't pass the assessment.
  • Workflow Status is not SUPPRESSED: Captures findings where Security Hub users haven't updated the finding to the SUPPRESSED status.
  • Record State is ACTIVE: Captures findings that represent the latest assessment of the resource. Security Hub automatically archives control-based findings if the associated resource is deleted, the resource does not exist, or the control is disabled.
  • Group by SeverityLabel: Creates the insight and populates the counts.
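Expressed as parameters for the Security Hub CreateInsight API, the same definition might look like this sketch (the filter keys follow the AwsSecurityFindingFilters shape, and the comparison values are my reading of the filters above):

```python
insight_request = {
    "Name": "Summary Email - 02 - Failed AWS Foundational Security Best Practices",
    "GroupByAttribute": "SeverityLabel",
    "Filters": {
        "Type": [{
            "Value": ("Software and Configuration Checks/"
                      "Industry and Regulatory Standards/"
                      "AWS-Foundational-Security-Best-Practices"),
            "Comparison": "EQUALS",
        }],
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
        "WorkflowStatus": [{"Value": "SUPPRESSED", "Comparison": "NOT_EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
}
# A Security Hub client would then create it with:
# boto3.client("securityhub").create_insight(**insight_request)
```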

Solution artifacts

The solution provided with this blog post consists of two files:

  1. An AWS CloudFormation template named security-hub-email-summary-cf-template.json.
  2. A zip file named sec-hub-email.zip for the Lambda function that generates the Security Hub summary email.

In addition to the Security Hub custom insights as discussed in the previous section, the solution also deploys the following artifacts:

  1. An Amazon Simple Notification Service (Amazon SNS) topic named SecurityHubRecurringSummary and an email subscription to the topic.
    Figure 5: SNS topic created by the solution


    The email address that subscribes to the topic is captured through a CloudFormation template input parameter. The subscriber is notified by email to confirm the subscription, and after confirmation, the subscription to the SNS topic is created.

    Figure 6: SNS email subscription


  2. Two Lambda functions:
    1. A Lambda function named *-CustomInsightsFunction-* is used only by the CloudFormation template to create the custom Insights.
    2. A Lambda function named SendSecurityHubSummaryEmail queries the custom insights from the Security Hub API and uses the insights’ data to create the summary email message. The function then sends the email message to the SNS topic.

      Figure 7: Example of Lambda functions created by the solution


  3. Two IAM roles for the Lambda functions provide the following rights, respectively:
    1. The minimum rights required to create insights and to create CloudWatch log groups and logs.
          "Version": "2012-10-17",
          "Statement": [
                  "Action": [
                  "Resource": "arn:aws:logs:*:*:*",
                  "Effect": "Allow"
                  "Action": [
                  "Resource": "*",
                  "Effect": "Allow"

    2. The minimum rights required to query Security Hub insights and to send email messages to the SNS topic named SecurityHubRecurringSummary.
          "Version": "2012-10-17",
          "Statement": [
                  "Action": "sns:Publish",
                  "Resource": "arn:aws:sns:[REGION]:[ACCOUNT-ID]:SecurityHubRecurringSummary",
                  "Effect": "Allow"
      } ,
          "Version": "2012-10-17",
          "Statement": [
                  "Effect": "Allow",
                  "Action": [
                  "Resource": "*"

  4. A CloudWatch scheduled event named SecurityHubSummaryEmailSchedule for invoking the Lambda function that generates the summary email. The default schedule is every Monday at 8:00 AM GMT. This schedule can be overwritten by using a CloudFormation input parameter. Learn more about creating Cron expressions.

    Figure 8: Example of CloudWatch schedule created by the solution

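For reference, the default "every Monday at 8:00 AM GMT" schedule corresponds to a CloudWatch Events cron expression of this form (fields are minute, hour, day-of-month, month, day-of-week, year):

```
cron(0 8 ? * MON *)
```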

Deploy the solution

The following steps demonstrate the deployment of this solution in a single AWS account and Region. Repeat these steps in each of the AWS accounts that are active with Security Hub, so that the respective application owners can receive the relevant data from their accounts.

To deploy the solution

  1. Download the CloudFormation template security-hub-email-summary-cf-template.json and the .zip file sec-hub-email.zip from https://github.com/aws-samples/aws-security-hub-summary-email.
  2. Copy security-hub-email-summary-cf-template.json and sec-hub-email.zip to an S3 bucket within your target AWS account and Region. Copy the object URL for the CloudFormation template .json file.
  3. On the AWS Management Console, open the service CloudFormation. Choose Create Stack with new resources.

    Figure 9: Create stack with new resources


  4. Under Specify template, in the Amazon S3 URL textbox, enter the S3 object URL for the file security-hub-email-summary-cf-template.json that you uploaded in step 2.

    Figure 10: Specify S3 URL for CloudFormation template


  5. Choose Next. On the next page, under Stack name, enter a name for the stack.

    Figure 11: Enter stack name


  6. On the same page, enter values for the input parameters. These are the input parameters that are required for this CloudFormation template:
    1. S3 Bucket Name: The S3 bucket where the .zip file for the Lambda function (sec-hub-email.zip) is stored.
    2. S3 key name (with prefixes): The S3 key name (with prefixes) for the .zip file for the Lambda function.
    3. Email address: The email address of the subscriber to the Security Hub summary email.
    4. CloudWatch Cron Expression: The Cron expression for scheduling the Security Hub summary email. The default is every Monday 8:00 AM GMT. Learn more about creating Cron expressions.
    5. Additional Footer Text: Text that will appear at the bottom of the email message. This can be useful to guide the recipient on next steps or provide internal resource links. This is an optional parameter; leave it blank for no text.
    Figure 12: Enter CloudFormation parameters


  7. Choose Next.
  8. Keep all defaults in the screens that follow, and choose Next.
  9. Select the check box I acknowledge that AWS CloudFormation might create IAM resources, and then choose Create stack.

Test the solution

You can send a test email after the deployment is complete. To do this, navigate to the Lambda console and locate the Lambda function named SendSecurityHubSummaryEmail. Perform a manual invocation with any event payload to receive an email within a few minutes. You can repeat this procedure as many times as you wish.
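The manual invocation can be scripted as well; this sketch calls the Lambda API directly (the function name comes from the solution, and boto3 is imported lazily so the snippet loads without the AWS SDK installed):

```python
def send_test_summary(function_name: str = "SendSecurityHubSummaryEmail") -> None:
    """Invoke the summary Lambda once to trigger a test email."""
    import boto3  # deferred import; AWS credentials are needed at call time

    boto3.client("lambda").invoke(
        FunctionName=function_name,
        Payload=b"{}",  # any event payload works; the function ignores its contents
    )
```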


We’ve outlined an approach for rapidly building a solution for sending a weekly summary of the security posture of your AWS account as evaluated by Security Hub. This solution makes it easier for you to be diligent in reviewing any outstanding findings and to remediate findings in a timely way based on their severity. You can extend the solution in many ways, including:

  1. Add links in the footer text to the remediation workflows, such as creating a ticket for ServiceNow or any Security Information and Event Management (SIEM) that you use.
  2. Add links to internal wikis for workflows like organizational exceptions to vulnerabilities or other internal processes.
  3. Extend the solution by modifying the custom insights content, email content, and delivery frequency.

To learn more about how to set up and customize Security Hub, see these additional blog posts.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Justin Criswell

Justin is a Senior Security Specialist Solutions Architect at AWS. His background is in cloud security and customer success. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS Security Services to increase visibility and reduce risk.


Kavita Mahajan

Kavita is a Senior Partner Solutions Architect at AWS. She has a background in building software systems for the financial sector, specifically insurance and banking. Currently, she is focused on helping AWS partners develop capabilities, grow their practices, and innovate on the AWS platform to deliver the best possible customer experience.

Why More Teams are Shifting Security Analytics to the Cloud This Year

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/02/17/why-more-teams-are-shifting-security-analytics-to-the-cloud-this-year/

Why More Teams are Shifting Security Analytics to the Cloud This Year

As the threat landscape continues to evolve in size and complexity, so does the security skills and resource gap, leaving organizations both understaffed and overwhelmed. An ESG study found that 63% of organizations say security is more difficult than it was two years ago. Teams cite the growing attack surface, increasing alerts, and bandwidth as key reasons.

For their research, ESG surveyed hundreds of IT and cybersecurity professionals to gain more insights into strategies for driving successful security analytics and operations. Read the highlights of their study below, and check out the full ebook, “The Rise of Cloud-Based Security Analytics and Operations Technologies,” here.

The attack surface continues to grow as cloud adoption soars

Many organizations have been adopting cloud solutions, giving teams more visibility across their environments, while at the same time expanding their attack surface. The trend toward the cloud is only continuing to increase—ESG’s research found that 82% of organizations are dedicated to moving a significant amount of their workload and applications to the public cloud. The surge in remote work over the past year has arguably only amplified this, making it even more critical for teams to have detection and response programs that are more effective and efficient than ever before.

Organizations are looking toward consolidation to streamline incident detection and response

ESG found that 70% of organizations are using a SIEM tool today, as well as an assortment of other point solutions, such as an EDR or Network Traffic Analysis solution. While this fixes the visibility issue plaguing security teams today, it doesn't help with streamlining detection and response, which is likely why 36% of cybersecurity professionals say integrating disparate security analytics and operations tools is one of their organization's highest priorities. Consolidating solutions drastically cuts down on false-positive alerts, eliminating the noise and confusion of managing multiple tools.

Combat complexity and drive efficiency with the right cloud solution

A detection and response solution that can correlate all of your valuable security data in one place is key for accelerated detection and response across the sprawling attack surface. Rapid7's InsightIDR provides advanced visibility by automatically ingesting data from across your environment—including logs, endpoints, network traffic, cloud, and user activity—into a single solution, eliminating the need to jump in and out of multiple tools and giving hours back to your team. And with pre-built automation workflows, you can take action directly from within InsightIDR.

See ESG’s findings on cloud-based solutions, automation/orchestration, machine learning, and more by accessing “The Rise of Cloud-Based Security Analytics and Operations Technologies” ebook.



Rapid7 Acquires Leading Kubernetes Security Provider, Alcide

Post Syndicated from Brian Johnson original https://blog.rapid7.com/2021/02/01/rapid7-acquires-leading-kubernetes-security-provider-alcide/

Rapid7 Acquires Leading Kubernetes Security Provider, Alcide

Organizations around the globe continue to embrace the flexibility, speed, and agility of the cloud. Those that have adopted it are able to accelerate innovation and deliver real value to their customers faster than ever before. However, while the cloud can bring a tremendous amount of benefits to a company, it is not without its risks. Organizations need comprehensive visibility into their cloud and container environments to help mitigate risk, potential threats, and misconfigurations.

At Rapid7, we strive to help our customers establish and implement strategies that enable them to rapidly adopt and secure cloud environments. Looking only at cloud infrastructure or containers in a silo provides limited ability to understand the impact of a possible vulnerability or breach.

To help our customers gain a more comprehensive view of their cloud environments, I am happy to announce that we have acquired Alcide, a leader in Kubernetes security based in Tel Aviv, Israel. Alcide provides seamless Kubernetes security fully integrated into the DevOps lifecycle and processes so that business applications can be rapidly deployed while also protecting cloud environments from malicious attacks.

Alcide’s industry-leading cloud workload protection platform (CWPP) provides broad, real-time visibility and governance, container runtime and network monitoring, as well as the ability to detect, audit, and investigate known and unknown security threats. By bringing together Alcide’s CWPP capabilities with our existing posture management (CSPM) and infrastructure entitlements (CIEM) capabilities, we will be able to provide our customers with a cloud-native security platform that enables them to manage risk and compliance across their entire cloud environment.

This is an exciting time in cloud security, as we’re witnessing a shift in perception. Cloud security teams are no longer viewed as a cost center or operational roadblock and have earned their seat at the table as a critical investment essential to driving business forward. With Alcide, we’re excited to further increase that competitive advantage for our customers.

We look forward to joining forces with Alcide’s talented team as we work together to provide our customers comprehensive, unified visibility across their entire cloud infrastructure and cloud-native applications.

Welcome to the herd, Alcide!

Use Macie to discover sensitive data as part of automated data pipelines

Post Syndicated from Brandon Wu original https://aws.amazon.com/blogs/security/use-macie-to-discover-sensitive-data-as-part-of-automated-data-pipelines/

Data is a crucial part of every business and is used for strategic decision making at all levels of an organization. To extract value from their data more quickly, Amazon Web Services (AWS) customers are building automated data pipelines—from data ingestion to transformation and analytics. As part of this process, my customers often ask how to prevent sensitive data, such as personally identifiable information, from being ingested into data lakes when it’s not needed. They highlight that this challenge is compounded when ingesting unstructured data—such as files from process reporting, text files from chat transcripts, and emails. They also mention that identifying sensitive data inadvertently stored in structured data fields—such as in a comment field stored in a database—is also a challenge.

In this post, I show you how to integrate Amazon Macie as part of the data ingestion step in your data pipeline. This solution provides an additional checkpoint that sensitive data has been appropriately redacted or tokenized prior to ingestion. Macie is a fully managed data security and privacy service that uses machine learning and pattern matching to discover sensitive data in AWS.

When Macie discovers sensitive data, the solution notifies an administrator to review the data and decide whether to allow the data pipeline to continue ingesting the objects. If allowed, the objects will be tagged with an Amazon Simple Storage Service (Amazon S3) object tag to identify that sensitive data was found in the object before progressing to the next stage of the pipeline.
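The tagging step can be sketched as follows. This is not the application's actual code: the function name is invented, and the boto3-style client is passed in so the logic can be exercised without AWS credentials. The `SensitiveDataFound` tag key matches the object tag described later in the post.

```python
# Sketch: tag an S3 object when Macie reports sensitive data in it.
# The client is injected so the logic can be tested with a fake.

def tag_sensitive_object(s3_client, bucket, key, sensitive):
    """Apply the SensitiveDataFound object tag used later in the pipeline."""
    tag_value = "true" if sensitive else "false"
    s3_client.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={"TagSet": [{"Key": "SensitiveDataFound", "Value": tag_value}]},
    )
    return tag_value

class FakeS3:
    def __init__(self):
        self.calls = []
    def put_object_tagging(self, **kwargs):
        self.calls.append(kwargs)

s3 = FakeS3()
result = tag_sensitive_object(s3, "data-pipeline-raw", "chat-log.txt", sensitive=True)
```

Downstream stages can then filter on this tag to decide how to handle objects that passed through manual review.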

This combination of automation and manual review helps reduce the risk that sensitive data—such as personally identifiable information—will be ingested into a data lake. This solution can be extended to fit your use case and workflows. For example, you can define custom data identifiers as part of your scans, add additional validation steps, create Macie suppression rules to archive findings automatically, or only request manual approvals for findings that meet certain criteria (such as high severity findings).

Solution overview

Many of my customers are building serverless data lakes with Amazon S3 as the primary data store. Their data pipelines commonly use different S3 buckets at each stage of the pipeline. I refer to the S3 bucket for the first stage of ingestion as the raw data bucket. A typical pipeline might have separate buckets for raw, curated, and processed data representing different stages as part of their data analytics pipeline.

Typically, customers will perform validation and clean their data before moving it to a raw data zone. This solution adds validation steps to that pipeline after preliminary quality checks and data cleaning are performed, shown in blue in layer 3 of Figure 1. The layers outlined in the pipeline are:

  1. Ingestion – Brings data into the data lake.
  2. Storage – Provides durable, scalable, and secure components to store the data—typically using S3 buckets.
  3. Processing – Transforms data into a consumable state through data validation, cleanup, normalization, transformation, and enrichment. This processing layer is where the additional validation steps are added to identify instances of sensitive data that haven’t been appropriately redacted or tokenized prior to consumption.
  4. Consumption – Provides tools to gain insights from the data in the data lake.


Figure 1: Data pipeline with sensitive data scan


The application runs on a scheduled basis (four times a day, every 6 hours by default) to process data that is added to the raw data S3 bucket. You can customize the application to perform a sensitive data discovery scan during any stage of the pipeline. Because most customers do their extract, transform, and load (ETL) daily, the application scans for sensitive data on a scheduled basis before any crawler jobs run to catalog the data and after typical validation and data redaction or tokenization processes complete.
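The default 6-hour schedule is the kind of thing EventBridge expresses as a rate-based rule. A minimal sketch follows; the rule name is illustrative, and the client is faked so the call runs offline.

```python
# Sketch: the default every-6-hours schedule as an EventBridge rate rule.
# The rule name is invented; the client is injected so it can be faked.

def schedule_scan(events_client, rule_name, hours=6):
    events_client.put_rule(
        Name=rule_name,
        ScheduleExpression=f"rate({hours} hours)",
        State="ENABLED",
    )

class FakeEvents:
    def __init__(self):
        self.rules = {}
    def put_rule(self, **kwargs):
        self.rules[kwargs["Name"]] = kwargs

events = FakeEvents()
schedule_scan(events, "sensitive-data-scan-schedule")
```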

You can expect that this additional validation will add 5–10 minutes to your pipeline execution at a minimum. The validation processing time will scale linearly based on object size, but there is a start-up time per job that is constant.
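That scaling behavior can be made concrete with a toy model: a constant per-job start-up cost plus a component linear in data volume. The numbers below are invented for illustration, not measured.

```python
# Illustrative back-of-the-envelope model of the validation overhead:
# a constant start-up time per job plus time linear in object size.
# The coefficients are made-up examples, not benchmarks.

def estimated_minutes(total_gb, startup_min=5.0, min_per_gb=0.5):
    return startup_min + total_gb * min_per_gb

small = estimated_minutes(2)    # 6.0 minutes
large = estimated_minutes(20)   # 15.0 minutes
```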

If sensitive data is found in the objects, an email is sent to the designated administrator requesting an approval decision, which they indicate by selecting the link corresponding to their decision to approve or deny the next step. In most cases, the reviewer will choose to adjust the sensitive data cleanup processes to remove the sensitive data, deny the progression of the files, and re-ingest the files in the pipeline.

Additional considerations for deploying this application for regular use are discussed at the end of the blog post.

Application components

The application creates the following resources: four S3 buckets (raw data, scan stage, manual review, and scanned data), the Lambda functions and Step Functions state machine that make up the workflow, a scheduled EventBridge rule, an Amazon SNS topic for approval notifications, and an Amazon API Gateway that receives approval decisions.

Note: the application uses various AWS services, and there are costs associated with these resources after the Free Tier usage. See AWS Pricing for details. The primary drivers of the solution cost will be the amount of data ingested through the pipeline, both for Amazon S3 storage and data processed for sensitive data discovery with Macie.

The architecture of the application is shown in Figure 2 and described in the text that follows.

Figure 2: Application architecture and logic


Application logic

  1. Objects are uploaded to the raw data S3 bucket as part of the data ingestion process.
  2. A scheduled EventBridge rule runs the sensitive data scan Step Functions workflow.
  3. triggerMacieScan Lambda function moves objects from the raw data S3 bucket to the scan stage S3 bucket.
  4. triggerMacieScan Lambda function creates a Macie sensitive data discovery job on the scan stage S3 bucket.
  5. checkMacieStatus Lambda function checks the status of the Macie sensitive data discovery job.
  6. isMacieStatusCompleteChoice Step Functions Choice state checks whether the Macie sensitive data discovery job is complete.
    1. If yes, the getMacieFindingsCount Lambda function runs.
    2. If no, the Step Functions Wait state waits 60 seconds and then restarts Step 5.
  7. getMacieFindingsCount Lambda function counts all of the findings from the Macie sensitive data discovery job.
  8. isSensitiveDataFound Step Functions Choice state checks whether sensitive data was found in the Macie sensitive data discovery job.
    1. If there was sensitive data discovered, run the triggerManualApproval Lambda function.
    2. If there was no sensitive data discovered, run the moveAllScanStageS3Files Lambda function.
  9. moveAllScanStageS3Files Lambda function moves all of the objects from the scan stage S3 bucket to the scanned data S3 bucket.
  10. triggerManualApproval Lambda function tags and moves objects with sensitive data discovered to the manual review S3 bucket, and moves objects with no sensitive data discovered to the scanned data S3 bucket. The function then sends a notification to the ApprovalRequestNotification Amazon SNS topic as a notification that manual review is required.
  11. Email is sent to the email address that’s subscribed to the ApprovalRequestNotification Amazon SNS topic (from the application deployment template) for the manual review user with the option to Approve or Deny pipeline ingestion for these objects.
  12. Manual review user assesses the objects with sensitive data in the manual review S3 bucket and selects the Approve or Deny links in the email.
  13. The decision request is sent from the Amazon API Gateway to the receiveApprovalDecision Lambda function.
  14. manualApprovalChoice Step Functions Choice state checks the decision from the manual review user.
    1. If denied, run the deleteManualReviewS3Files Lambda function.
    2. If approved, run the moveToScannedDataS3Files Lambda function.
  15. deleteManualReviewS3Files Lambda function deletes the objects from the manual review S3 bucket.
  16. moveToScannedDataS3Files Lambda function moves the objects from the manual review S3 bucket to the scanned data S3 bucket.
  17. The next step of the automated data pipeline will begin with the objects in the scanned data S3 bucket.
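Steps 5 through 8 above can be sketched as a single function: poll the Macie classification job until it completes, then branch on whether any findings were produced. In the real application these steps are separate Lambda functions orchestrated by Step Functions; here the client is injected (and faked below) so the control flow can be exercised offline, and the filter shape in `list_findings` is an assumption for illustration.

```python
# Sketch of the poll-then-branch logic (steps 5-8). The real workflow splits
# this across Lambda functions with a Step Functions Wait state between polls.

def wait_and_count_findings(macie_client, job_id, wait=lambda: None):
    while True:
        status = macie_client.describe_classification_job(jobId=job_id)["jobStatus"]
        if status == "COMPLETE":
            break
        wait()  # the real workflow waits 60 seconds here (pollForCompletionWait)
    finding_ids = macie_client.list_findings(
        findingCriteria={"criterion": {"classificationDetails.jobId": {"eq": [job_id]}}}
    )["findingIds"]
    return len(finding_ids)

class FakeMacie:
    def __init__(self, polls_until_done, finding_ids):
        self.polls_remaining = polls_until_done
        self.finding_ids = finding_ids
    def describe_classification_job(self, jobId):
        self.polls_remaining -= 1
        return {"jobStatus": "COMPLETE" if self.polls_remaining <= 0 else "RUNNING"}
    def list_findings(self, findingCriteria):
        return {"findingIds": self.finding_ids}

macie = FakeMacie(polls_until_done=3, finding_ids=["f-1", "f-2"])
count = wait_and_count_findings(macie, "job-123")
needs_manual_review = count > 0  # drives the isSensitiveDataFound choice
```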


For this application, you need an AWS account, the AWS CLI, and the AWS SAM CLI installed.

You can use AWS Cloud9 to deploy the application. AWS Cloud9 includes the AWS CLI and AWS SAM CLI to simplify setting up your development environment.

Deploy the application with AWS SAM CLI

You can deploy this application using the AWS SAM CLI. AWS SAM uses AWS CloudFormation as the underlying deployment mechanism. AWS SAM is an open-source framework that you can use to build serverless applications on AWS.

To deploy the application

  1. Initialize the serverless application using the AWS SAM CLI from the GitHub project in the aws-samples repository. This will clone the project locally which includes the source code for the Lambda functions, Step Functions state machine definition file, and the AWS SAM template. On the command line, run the following:
    sam init --location gh:aws-samples/amazonmacie-datapipeline-scan

    Alternatively, you can clone the GitHub project directly.

  2. Deploy your application to your AWS account. On the command line, run the following:
    sam deploy --guided

    Complete the prompts during the guided interactive deployment. The first deployment prompt is shown in the following example.

    Configuring SAM deploy
            Looking for config file [samconfig.toml] :  Found
            Reading default arguments  :  Success
            Setting default arguments for 'sam deploy'
            Stack Name [maciepipelinescan]:

  3. Settings:
    • Stack Name – Name of the CloudFormation stack to be created.
    • AWS Region – The Region to deploy the application to—for example, us-west-2, eu-west-1, ap-southeast-1. This application was tested in the us-west-2 and ap-southeast-1 Regions. Before selecting a Region, verify that the services you need (for example, Macie and Step Functions) are available in that Region.
    • Parameter StepFunctionName – Name of the Step Functions state machine to be created—for example, maciepipelinescanstatemachine.
    • Parameter BucketNamePrefix – Prefix to apply to the S3 buckets to be created (S3 bucket names are globally unique, so choosing a random prefix helps ensure uniqueness).
    • Parameter ApprovalEmailDestination – Email address to receive the manual review notification.
    • Parameter EnableMacie – Whether Macie needs to be enabled in your account and Region. Select yes if you need this template to enable Macie for you; select no if Macie is already enabled.
  4. Confirm changes and provide approval for AWS SAM CLI to deploy the resources to your AWS account by responding y to prompts, as shown in the following example. You can accept the defaults for the SAM configuration file and SAM configuration environment prompts.
    #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
    Confirm changes before deploy [y/N]: y
    #SAM needs permission to be able to create roles to connect to the resources in your template
    Allow SAM CLI IAM role creation [Y/n]: y
    ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
    ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
    Save arguments to configuration file [Y/n]: y
    SAM configuration file [samconfig.toml]: 
    SAM configuration environment [default]:

    Note: This application deploys an Amazon API Gateway with two REST API resources without authorization defined to receive the decision from the manual review step. You will be prompted to accept each resource without authorization. A token (Step Functions taskToken) is used to authenticate the requests.

  5. This creates an AWS CloudFormation changeset. When changeset creation is complete, you must provide a final confirmation of y at the Deploy this changeset? [y/N] prompt, as shown in the following example.
    Changeset created successfully. arn:aws:cloudformation:ap-southeast-1:XXXXXXXXXXXX:changeSet/samcli-deploy1605213119/db681961-3635-4305-b1c7-dcc754c7XXXX
    Previewing CloudFormation changeset before deployment
    Deploy this changeset? [y/N]:

Your application is deployed to your account using AWS CloudFormation. You can track the deployment events in the command prompt or via the AWS CloudFormation console.
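The note in step 4 mentions that a Step Functions taskToken authenticates the approval requests. A sketch of that callback pattern follows: the Approve/Deny links carry the token, and the receiveApprovalDecision Lambda resumes the workflow with it. The function name matches the application's Lambda, but the parameter shapes here are illustrative, not the application's actual code, and the client is injected so it can be faked.

```python
# Sketch of the task-token callback: API Gateway passes the decision and the
# Step Functions taskToken to a Lambda, which resumes the paused workflow.
import json

def receive_approval_decision(sfn_client, task_token, decision):
    if decision not in ("approve", "deny"):
        raise ValueError(f"unexpected decision: {decision}")
    sfn_client.send_task_success(
        taskToken=task_token,
        output=json.dumps({"decision": decision}),
    )

class FakeStepFunctions:
    def __init__(self):
        self.resumed = []
    def send_task_success(self, **kwargs):
        self.resumed.append(kwargs)

sfn = FakeStepFunctions()
receive_approval_decision(sfn, "token-abc", "approve")
```

Because the token is unguessable and single-use, it doubles as the authorization for the otherwise-open API resources.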

After the application deployment is complete, you must confirm the subscription to the Amazon SNS topic. An email will be sent to the email address entered in Step 3 with a link that you need to select to confirm the subscription. This confirmation provides opt-in consent for AWS to send emails to you via the specified Amazon SNS topic. The emails will be notifications of potentially sensitive data that need to be approved. If you don’t see the verification email, be sure to check your spam folder.

Test the application

The application uses an EventBridge scheduled rule to start the sensitive data scan workflow, which runs every 6 hours. You can manually start an execution of the workflow to verify that it’s working. To test the function, you will need a file that contains data that matches your rules for sensitive data. For example, it is easy to create a spreadsheet, document, or text file that contains names, addresses, and numbers formatted like credit card numbers. You can also use this generated sample data to test Macie.

We will test by uploading a file to our S3 bucket via the AWS web console. If you know how to copy objects from the command line, that also works.

Upload test objects to the S3 bucket

  1. Navigate to the Amazon S3 console and upload one or more test objects to the <BucketNamePrefix>-data-pipeline-raw bucket. <BucketNamePrefix> is the prefix you entered when deploying the application in the AWS SAM CLI prompts. You can use any objects as long as they’re a supported file type for Amazon Macie. I suggest uploading multiple objects, some with and some without sensitive data, in order to see how the workflow processes each.
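One way to generate such a test object is to write a small CSV containing data formatted like sensitive data. The card number below is a well-known test number, not real data, and the upload line in the comment uses a placeholder bucket prefix.

```python
# Build a CSV test object containing data formatted like sensitive data.
# 4111-1111-1111-1111 is a standard test card number, not a real one.
import csv, io

def make_test_csv():
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "comment"])
    writer.writerow(["Jane Doe", "card 4111-1111-1111-1111, call me at 555-0100"])
    return buf.getvalue()

body = make_test_csv()
# Upload with, e.g.:
# s3.put_object(Bucket=f"{prefix}-data-pipeline-raw", Key="test.csv", Body=body)
```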

Start the Scan State Machine

  1. Navigate to the Step Functions state machines console. If you don’t see your state machine, make sure you’re in the same Region that you deployed your application to.
  2. Choose the state machine you created using the AWS SAM CLI as seen in Figure 3. The example state machine is maciepipelinescanstatemachine, but you might have used a different name in your deployment.
    Figure 3: AWS Step Functions state machines console


  3. Select the Start execution button and copy the value from the Enter an execution name – optional box. Change the Input – optional value, replacing <execution id> with the value you just copied, as follows:
        {"id": "<execution id>"}

    In my example, the <execution id> is fa985a4f-866b-b58b-d91b-8a47d068aa0c from the Enter an execution name – optional box as shown in Figure 4. You can choose a different ID value if you prefer. This ID is used by the workflow to tag the objects being processed to ensure that only objects that are scanned continue through the pipeline. When the EventBridge scheduled event starts the workflow as scheduled, an ID is included in the input to the Step Functions workflow. Then select Start execution again.

    Figure 4: New execution dialog box


  4. You can see the status of your workflow execution in the Graph inspector as shown in Figure 5. In the figure, the workflow is at the pollForCompletionWait step.
    Figure 5: AWS Step Functions graph inspector


The sensitive data discovery job should run for about five to ten minutes. The jobs scale linearly based on object size, but there is a start-up time per job that is constant. If sensitive data is found in the objects uploaded to the <BucketNamePrefix>-data-pipeline-raw S3 bucket, an email is sent to the address provided during the AWS SAM deployment step, notifying the recipient of the need for an approval decision, which they indicate by selecting the link corresponding to their decision to approve or deny the next step, as shown in Figure 6.
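The console steps above can also be driven programmatically by starting the state machine with the `{"id": ...}` input the workflow expects. The ARN below is a placeholder and the client is faked so the sketch runs offline.

```python
# Sketch: start the scan state machine with the {"id": ...} input the
# workflow expects. The ARN is a placeholder; the client is faked.
import json

def start_scan_execution(sfn_client, state_machine_arn, execution_id):
    return sfn_client.start_execution(
        stateMachineArn=state_machine_arn,
        input=json.dumps({"id": execution_id}),
    )

class FakeStepFunctions:
    def __init__(self):
        self.executions = []
    def start_execution(self, **kwargs):
        self.executions.append(kwargs)
        return {"executionArn": kwargs["stateMachineArn"] + ":exec-1"}

sfn = FakeStepFunctions()
start_scan_execution(
    sfn,
    "arn:aws:states:us-west-2:111111111111:stateMachine:maciepipelinescanstatemachine",
    "fa985a4f-866b-b58b-d91b-8a47d068aa0c",
)
```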

Figure 6: Sensitive data identified email


When you receive this notification, you can investigate the findings by reviewing the objects in the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket. Based on your review, you can either apply remediation steps to remove any sensitive data or allow the data to proceed to the next step of the data ingestion pipeline. You should define a standard response process to address discovery of sensitive data in the data pipeline. Common remediation steps include review of the files for sensitive data, deleting the files that you do not want to progress, and updating the ETL process to redact or tokenize sensitive data when re-ingesting into the pipeline. When you re-ingest the files into the pipeline without sensitive data, the files will not be flagged by Macie.

The workflow performs the following:

  • If you select Approve, the files are moved to the <BucketNamePrefix>-data-pipeline-scanned-data S3 bucket with an Amazon S3 SensitiveDataFound object tag with a value of true.
  • If you select Deny, the files are deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket.
  • If no action is taken, the Step Functions workflow execution times out after five days and the file will automatically be deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket after 10 days.
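The post doesn't show how the 10-day automatic deletion is implemented; an S3 lifecycle rule is the usual mechanism, so the sketch below illustrates the idea under that assumption. The application's template already defines this for you; the client here is injected and faked.

```python
# Sketch (assumption): a 10-day expiration on the manual review bucket is the
# kind of behavior an S3 lifecycle rule provides. Illustrative only.

def add_expiration_rule(s3_client, bucket, days=10):
    s3_client.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [{
                "ID": f"expire-after-{days}-days",
                "Status": "Enabled",
                "Filter": {},
                "Expiration": {"Days": days},
            }]
        },
    )

class FakeS3:
    def __init__(self):
        self.configs = {}
    def put_bucket_lifecycle_configuration(self, **kwargs):
        self.configs[kwargs["Bucket"]] = kwargs["LifecycleConfiguration"]

s3 = FakeS3()
add_expiration_rule(s3, "example-data-pipeline-manual-review")
```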

Clean up the application

You’ve successfully deployed and tested the sensitive data pipeline scan workflow. To avoid ongoing charges for resources you created, you should delete all associated resources by deleting the CloudFormation stack. In order to delete the CloudFormation stack, you must first delete all objects that are stored in the S3 buckets that you created for the application.

To delete the application

  1. Empty the S3 buckets created in this application (<BucketNamePrefix>-data-pipeline-raw S3 bucket, <BucketNamePrefix>-data-pipeline-scan-stage, <BucketNamePrefix>-data-pipeline-manual-review, and <BucketNamePrefix>-data-pipeline-scanned-data).
  2. Delete the CloudFormation stack used to deploy the application.

Considerations for regular use

Before using this application in a production data pipeline, you will need to stop and consider some practical matters. First, the notification mechanism used when sensitive data is identified in the objects is email. Email doesn’t scale: you should expand this solution to integrate with your ticketing or workflow management system. If you choose to use email, subscribe a mailing list so that the work of reviewing and responding to alerts is shared across a team.

Second, the application runs on a scheduled basis (every 6 hours by default). You should consider starting the application when your preliminary validations have completed and the data is ready for a sensitive data scan as part of your pipeline. You can modify the EventBridge rule to run the workflow in response to an Amazon EventBridge event instead of on a schedule.

Third, the application currently uses a 60-second Step Functions Wait state when polling for completion of the Macie discovery job. In real-world scenarios, the discovery scan will take at least 10 minutes, and likely much longer. You should evaluate the typical execution times for your application and tune the polling period accordingly. This will help reduce costs related to running Lambda functions and log storage within CloudWatch Logs. The polling period is defined in the Step Functions state machine definition file (macie_pipeline_scan.asl.json) under the pollForCompletionWait state.

Fourth, the application currently doesn’t account for false positives in the sensitive data discovery job results. Also, the application will progress or delete all objects identified based on the decision by the reviewer. You should consider expanding the application to handle false positives through automation rather than manual review or intervention (such as deleting the files from the manual review bucket or removing the sensitive data tags applied).

Last, the solution will stop the ingestion of a subset of objects into your pipeline. This behavior is similar to other validation and data quality checks that most customers perform as part of the data pipeline. However, you should test to ensure that this will not cause unexpected outcomes and address them in your downstream application logic accordingly.


In this post, I showed you how to integrate sensitive data discovery using Macie as an additional validation step in an automated data pipeline. You’ve reviewed the components of the application, deployed it using the AWS SAM CLI, tested to validate that the application functions as expected, and cleaned up by removing deployed resources.

You now know how to integrate sensitive data scanning into your ETL pipeline. You can use automation and—where required—manual review to help reduce the risk of sensitive data, such as personally identifiable information, being inadvertently ingested into a data lake. You can take this application and customize it to fit your use case and workflows, such as using custom data identifiers as part of your scans, adding additional validation steps, creating Macie suppression rules to archive findings automatically, or requesting manual approval only for findings that meet certain criteria (such as high severity findings).
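As an example of one of those customizations, a Macie custom data identifier lets you scan for formats specific to your organization. The name and regex below are invented for illustration, and the client is injected so the call can be exercised offline.

```python
# Sketch: a Macie custom data identifier for a hypothetical internal ID
# format. Name and regex are illustrative; the client is faked.

def create_employee_id_identifier(macie_client):
    return macie_client.create_custom_data_identifier(
        name="internal-employee-id",
        regex=r"EMP-\d{6}",
        description="Matches hypothetical internal employee IDs such as EMP-123456",
    )

class FakeMacie:
    def __init__(self):
        self.identifiers = []
    def create_custom_data_identifier(self, **kwargs):
        self.identifiers.append(kwargs)
        return {"customDataIdentifierId": "cdi-1"}

macie = FakeMacie()
resp = create_employee_id_identifier(macie)
```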

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Macie forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Brandon Wu

Brandon is a security solutions architect helping financial services organizations secure their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and experimenting in the kitchen.

Verified, episode 2 – A Conversation with Emma Smith, Director of Global Cyber Security at Vodafone

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/verified-episode-2-conversation-with-emma-smith-director-of-global-cyber-security-at-vodafone/

Over the past 8 months, it’s become more important for us all to stay in contact with peers around the globe. Today, I’m proud to bring you the second episode of our new video series, Verified: Presented by AWS re:Inforce. Even though we couldn’t be together this year at re:Inforce, our annual security conference, we still wanted to share some of the conversations with security leaders that would have taken place at the conference. The series showcases conversations with security leaders around the globe. In episode two, I’m talking to Emma Smith, Vodafone’s Global Cyber Security Director.

Vodafone is a global technology communications company with an optimistic culture. Their focus is connecting people and building the digital future for society. During our conversation, Emma detailed how the core values of the Global Cyber Security team were inspired by the company. “We’ve got a team of people who are ultimately passionate about protecting customers, protecting society, protecting Vodafone, protecting all of our services and our employees.” Emma shared experiences about the evolution of the security organization during her past 5 years with the company.

We were also able to touch on one of Emma’s passions, diversity and inclusion. Emma has worked to implement diversity and drive a policy of inclusion at Vodafone. In June, she was named Diversity Champion in the SC Awards Europe. In her own words: “It makes me realize that my job is to smooth the way for everybody else and to try and remove some of those obstacles or barriers that were put in their way… it means that I’m really passionate about trying to get a very diverse team in security, but also in Vodafone, so that we reflect our customer base, so that we’ve got diversity of thinking, of backgrounds, of experience, and people who genuinely feel comfortable being themselves at work—which is easy to say but really hard to create that culture of safety and belonging.”

Stay tuned for future episodes of Verified: Presented by AWS re:Inforce here on the AWS Security Blog. You can watch episode one, an interview with Jason Chan, Vice President of Information Security at Netflix on YouTube. If you have an idea or a topic you’d like covered in this series, please drop us a comment below.



Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Introducing the first video in our new series, Verified, featuring Netflix’s Jason Chan

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/introducing-first-video-new-series-verified-featuring-netflix-jason-chan/

The year has been a profoundly different one for us all, and like many of you, I’ve been adjusting, both professionally and personally, to this “new normal.” Here at AWS we’ve seen an increase in customers looking for secure solutions to maintain productivity in an increased work-from-home world. We’ve also seen an uptick in requests for training; it’s clear, a sense of community and learning are critically important as workforces physically distance.

For these reasons, I’m happy to announce the launch of Verified: Presented by AWS re:Inforce. I’m hosting this series, but I’ll be joined by leaders in cloud security across a variety of industries. The goal is to have an open conversation about the common issues we face in securing our systems and tools. Topics will include how the pandemic is impacting cloud security, tips for creating an effective security program from the ground up, how to create a culture of security, emerging security trends, and more. Learn more by following me on Twitter (@StephenSchmidt), and get regular updates from @AWSSecurityInfo. Verified is just one of the many ways we will continue sharing best practices with our customers during this time. You can find more by reading the AWS Security Blog, reviewing our documentation, visiting the AWS Security and Compliance webpages, watching re:Invent and re:Inforce playlists, and/or reviewing the Security Pillar of Well Architected.

Our first conversation is with Jason Chan, Vice President of Information Security at Netflix. Jason spoke to us about the security program at Netflix, his approach to hiring security talent, and how Zero Trust enables a remote workforce. Jason also has solid insights to share about how he started and grew the security program at Netflix.

“In the early days, what we were really trying to figure out is how do we build a large-scale consumer video-streaming service in the public cloud, and how do you do that in a secure way? There wasn’t a ton of expertise in that, so when I was building the security team at Netflix, I thought, ‘how do we bring in folks from a variety of backgrounds, generalists … to tackle this problem?’”

He also gave his view on how a growing security team can measure ROI. “I think it’s difficult to have a pure equation around that. So what we try to spend our time doing is really making sure that we, as a team, are aligned on what is the most important—what are the most important assets to protect, what are the most critical risks that we’re trying to prevent—and then make sure that leadership is aligned with that, because, as we all know, there’s not unlimited resources, right? You can’t hire an unlimited number of folks or spend an unlimited amount of money, so you’re always trying to figure out how do you prioritize, and how do you find where is going to be the biggest impact for your value?”

Check out Jason’s full interview, and stay tuned for further videos in this series. If you have an idea or a topic you’d like covered in this series, please drop us a comment below. Thanks!




How Security Operation Centers can use Amazon GuardDuty to detect malicious behavior

Post Syndicated from Darren House original https://aws.amazon.com/blogs/security/how-security-operation-centers-can-use-amazon-guardduty-to-detect-malicious-behavior/

The Security Operations Center (SOC) has a tough job. As customers modernize and shift to cloud architectures, monitoring, detecting, and responding to risk poses new challenges.

In this post, we look at how Amazon GuardDuty can address some common SOC concerns: the number of security tools and the overhead of integrating and managing them. We describe the GuardDuty service; how the SOC can use GuardDuty threat lists, filtering, and suppression rules to tune detections and reduce noise; and the intentional model used to define and categorize GuardDuty finding types, which quickly gives analysts detailed information about detections.

Today, the typical SOC has between 10 and 60 tools for managing security. Some larger SOCs can have more than 100 tools, which are mostly point solutions that don’t integrate with each other.

The security market is flush with niche security tools you can deploy to monitor, detect, and respond to events. Each tool has technical and operational overhead in the form of designing system uptime, sensor deployment, data aggregation, tool integration, deployment plans, server and software maintenance, and licensing.

Tuning your detection systems is an example of a process with both technical and operational overhead. To improve your signal-to-noise ratio (S/N), the security systems you deploy have to be tuned to your environment and to emerging risks that are relevant to your environment. Improving the S/N matters for SOC teams because it reduces time and effort spent on activities that don’t bring value to an organization. Spending time tuning detection systems reduces the exhaustion factors that impact your SOC analysts. Tuning is highly technical, yet it’s also operational because it’s a process that continues to evolve, which means you need to manage the operations and maintenance lifecycle of the infrastructure and tools that you use in tuning your detections.

Amazon GuardDuty

GuardDuty is a core part of the modern FedRAMP-authorized cloud SOC, because it provides SOC analysts with a broad range of cloud-specific detective capabilities without requiring the overhead associated with a large number of security tools.

GuardDuty is a continuous security monitoring service that analyzes and processes data from Amazon Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail event logs that record Amazon Web Services (AWS) API calls, and DNS logs to provide analysis and detection using threat intelligence feeds, signatures, anomaly detection, and machine learning in the AWS Cloud. GuardDuty also helps you to protect your data stored in S3. GuardDuty continuously monitors and profiles S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities. Detections include unusual geo-location, disabling of preventative controls such as S3 block public access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. For a full list of GuardDuty S3 threat detections, see GuardDuty S3 finding types. GuardDuty integrates threat intelligence feeds from CrowdStrike, Proofpoint, and AWS Security to detect network and API activity from known malicious IP addresses and domains. It uses machine learning to identify unknown and potentially unauthorized and malicious activity within your AWS environment.

The GuardDuty team continually monitors and manages the tuning of detections for threats related to modern cloud deployments, but your SOC can use trusted IP and threat lists and suppression rules to implement your own custom tuning to fit your unique environment.

GuardDuty is integrated with AWS Organizations, and customers can use AWS Organizations to associate member accounts with a GuardDuty management account. AWS Organizations helps automate the process of enabling and disabling GuardDuty simultaneously across a group of AWS accounts that are in your control. Note that, as of this writing, you can have one management account and up to 5,000 member accounts.

Operational overhead is near zero. There are no agents or sensors to deploy or manage, no servers to build, deploy, or patch, and no highly available architectures to design. You don’t have to buy a subscription to a threat intelligence provider or manage the influx of threat data, and, most importantly, you don’t have to invest in normalizing all of the datasets to facilitate correlation. Your SOC can enable GuardDuty with a single click or API call. It is a multi-account service in which you can create a management account, typically in the security account, that can read all findings information from the member accounts for easy centralization of detections. When deployed in a management/member design, GuardDuty provides a flexible model for centralizing your enterprise threat detection capability. The management account can control certain member settings, like update intervals for Amazon CloudWatch Events, use of threat and trusted lists, creation of suppression rules, opening tickets, and automating remediations.
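Under the hood, "a single click or API call" maps to ordinary GuardDuty API requests. The Python sketch below only builds the request bodies for CreateDetector and CreateMembers (the account IDs and email addresses shown are illustrative placeholders, not real values); with boto3 you would pass each dictionary to the corresponding `guardduty` client method, for example `boto3.client("guardduty").create_detector(**create_detector_request())`.

```python
def create_detector_request():
    """Request body for CreateDetector: one call enables GuardDuty in a region."""
    return {"Enable": True, "FindingPublishingFrequency": "FIFTEEN_MINUTES"}

def create_members_request(detector_id, account_ids):
    """Request body for CreateMembers, which associates member accounts
    with the management account's detector. Emails are placeholders."""
    return {
        "DetectorId": detector_id,
        "AccountDetails": [
            {"AccountId": a, "Email": f"security+{a}@example.com"}  # illustrative only
            for a in account_ids
        ],
    }
```

Building the payloads separately from the client call makes the multi-account wiring easy to review and unit test before any account is touched.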

Filters and suppression rules

When GuardDuty detects potential malicious activity, it uses a standardized finding format to communicate the details about the specific finding. The details in a finding can be queried in filters, displayed as saved rules, or used for suppression to improve visibility and reduce analyst fatigue.

Suppress findings from vulnerability scanners

A common example of tuning your GuardDuty deployment is to use suppression rules to automatically archive new Recon:EC2/Portscan findings from vulnerability assessment tools in your accounts. This is a best practice designed to reduce S/N and analyst fatigue.

The first criterion in the suppression rule should use the Finding type attribute with a value of Recon:EC2/Portscan. The second criterion should match the instance or instances that host these vulnerability assessment tools, using the Instance image ID attribute, the Network connection remote IPv4 address, or the Tag value attribute, depending on what criteria are identifiable for the instances that host these tools. In the example shown in Figure 1, we used the remote IPv4 address.
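The same suppression rule can be created programmatically through the CreateFilter API. As a rough sketch, the Python below builds only the request body (the attribute paths follow GuardDuty's documented filter-criteria names; the detector ID, rule name, and scanner IP are placeholders), which you would pass to `boto3.client("guardduty").create_filter(**...)`.

```python
def suppression_criteria(scanner_ips):
    """FindingCriteria that matches Recon:EC2/Portscan findings whose
    remote IPv4 address is one of the known vulnerability scanners."""
    return {
        "Criterion": {
            "type": {"Eq": ["Recon:EC2/Portscan"]},
            "service.action.networkConnectionAction.remoteIpDetails.ipAddressV4": {
                "Eq": scanner_ips
            },
        }
    }

def create_filter_request(detector_id, scanner_ips):
    """Request body for CreateFilter; Action=ARCHIVE makes this a
    suppression rule (findings are auto-archived) rather than a saved filter."""
    return {
        "DetectorId": detector_id,
        "Name": "suppress-vuln-scanner-portscans",  # placeholder name
        "Action": "ARCHIVE",
        "FindingCriteria": suppression_criteria(scanner_ips),
    }
```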

Figure 1: GuardDuty filter for vulnerability scanners


Filter on activity that was not blocked by security groups or NACL

If you want visibility into the GuardDuty detections that weren’t blocked by preventative measures such as a network ACL (NACL) or security group, you can filter by the attribute Network connection blocked = False, as shown in Figure 2. This can provide visibility into potential changes in your filtering strategy to reduce your risk.

Figure 2: GuardDuty filter for activity not blocked by security groups or NACLs


Filter on specific malicious IP addresses

Some customers want to track specific malicious IP addresses to see whether they are generating findings. If you want to see whether a single source IP address is responsible for CloudTrail-based findings, you can filter by the API caller IPv4 address attribute.

Figure 3: GuardDuty filter for specific malicious IP address


Filter on specific threat provider

Maybe you want to know how many findings are generated from a threat intelligence provider or your own threat lists. You can filter by the attribute Threat list name to see whether the potential attacker is on a threat list from CrowdStrike, Proofpoint, AWS, or one of the threat lists that you uploaded to GuardDuty.

Figure 4: GuardDuty filter for specific threat intelligence list provider


Finding types and formats

Now that you know a little more about GuardDuty and tuning findings by using filters and suppression rules, let’s dive into the finding types that are generated by a GuardDuty detection. The first thing to know is that all GuardDuty findings use the following model:

ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.DetectionMechanism!Artifact
This model is intended to communicate core information to security teams on the nature of the potential risk, the AWS resource types that are potentially impacted, and the threat family name, variants, and artifacts of the detected activity in your account. The Threat Purpose field describes the primary purpose of a threat or a potential attempt on your environment.

Let’s take the Backdoor:EC2/C&CActivity.B!DNS finding as an example.

Backdoor    :EC2    /C&CActivity    .B    !DNS

The Backdoor threat purpose indicates an attempt to bypass normal security controls on a specific Amazon Elastic Compute Cloud (EC2) instance. In this example, the EC2 instance in your AWS environment is querying a domain name (DNS) associated with a known command and control (C&CActivity) server. This is a high-severity finding, because there are enough indicators that malware is on your host and acting with malicious intent.
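Because the format is mechanical, it's easy to split a finding type back into its named parts, which is handy when routing findings in automation. The small helper below is an illustration, not an official parser; the field names are taken from the model above (the mechanism and artifact segments are optional in real finding types).

```python
import re

def parse_finding_type(finding_type):
    """Split a GuardDuty finding type of the form
    ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.DetectionMechanism!Artifact
    into its parts. DetectionMechanism and Artifact may be absent."""
    m = re.fullmatch(
        r"(?P<purpose>[^:]+):(?P<resource>[^/]+)/(?P<family>[^.!]+)"
        r"(?:\.(?P<mechanism>[^!]+))?(?:!(?P<artifact>.+))?",
        finding_type,
    )
    return m.groupdict() if m else None
```

For example, `parse_finding_type("Backdoor:EC2/C&CActivity.B!DNS")` yields purpose `Backdoor`, resource `EC2`, family `C&CActivity`, mechanism `B`, and artifact `DNS`.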

GuardDuty, at the time of this writing, supports the following finding types:

  • Backdoor finding types
  • Behavior finding types
  • CryptoCurrency finding types
  • PenTest finding types
  • Persistence finding types
  • Policy finding types
  • PrivilegeEscalation finding types
  • Recon finding types
  • ResourceConsumption finding types
  • Stealth finding types
  • Trojan finding types
  • Unauthorized finding types

OK, now you know about the model for GuardDuty findings, but how does GuardDuty work?

When you enable GuardDuty, the service evaluates events in both a stateless and stateful manner, which allows us to use machine learning and anomaly detection in addition to signatures and threat intelligence. Some stateless examples include the Backdoor:EC2/C&CActivity.B!DNS or the CryptoCurrency:EC2/BitcoinTool.B finding types, where GuardDuty only needs to see a single DNS query, VPC Flow Log entry, or CloudTrail record to detect potentially malicious activity.

Stateful detections are driven by anomaly detection and machine learning models that identify behaviors that deviate from a baseline. The machine learning detections typically require more time to train the models and potentially use multiple events for triggering the detection.

The typical triggers for behavioral detections vary based on the log source and the detection in question. The behavioral detections learn about typical network or user activity to set a baseline, after which the anomaly detections change from a learning mode to an active mode. In active mode, you only see findings generated from these detections if the service observes behavior that suggests a deviation. Some examples of machine learning–based detections include the Backdoor:EC2/DenialOfService.Dns, UnauthorizedAccess:IAMUser/ConsoleLogin, and Behavior:EC2/NetworkPortUnusual finding types.
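As a toy illustration of this learn-then-detect pattern (this is not GuardDuty's actual model, just a sketch of the idea behind findings like Behavior:EC2/NetworkPortUnusual), imagine baselining the ports an instance normally communicates on during a learning period, then flagging deviations once the detector goes active:

```python
class PortBaseline:
    """Toy stateful detector: learn normal ports, then flag deviations."""

    def __init__(self, training_events=100):
        self.training_events = training_events
        self.seen = 0
        self.normal_ports = set()

    def observe(self, port):
        """Return a finding string for an anomalous port after training
        completes; return None during learning or for normal activity."""
        self.seen += 1
        if self.seen <= self.training_events:      # learning mode: build baseline
            self.normal_ports.add(port)
            return None
        if port not in self.normal_ports:          # active mode: flag deviation
            return f"Behavior:EC2/NetworkPortUnusual port={port}"
        return None
```

Real models are statistical rather than set-based, but the lifecycle is the same: no findings during training, and findings only on deviation afterward.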

Why does this matter?

We know the SOC has the tough job of keeping organizations secure with limited resources, and with a high degree of technical and operational overhead due to a large portfolio of tools. This can impact the ability to detect and respond to security events. For example, CrowdStrike tracks the concept of breakout time: the window from when an outside party first gains unauthorized access to an endpoint machine to when they begin moving laterally across your network. They found that average breakout times range from 19 minutes to 10 hours. If the SOC is overburdened with undifferentiated technical and operational overhead, it can struggle to improve monitoring, detection, and response. To act quickly, we have to decrease both detection time and the overhead that the SOC's many tools impose. The best way to decrease detection time is with threat intelligence and machine learning. Threat intelligence provides context to alerts and a broader perspective on cyber risk. Machine learning uses baselines to learn what normal looks like, enabling detection of anomalies in user or resource behavior and of heuristic threats that change over time. The best way to reduce SOC overhead is to share the load, so that AWS services manage the undifferentiated heavy lifting while the SOC focuses on more specific tasks that add value to the organization.

GuardDuty is a cost-optimized service that is in scope for the FedRAMP and DoD compliance programs in the US commercial and GovCloud Regions. It leverages threat intelligence and machine learning to provide detection capabilities without you having to manage, maintain, or patch any infrastructure or manage yet another security tool. With a 30-day trial period, there is no risk to evaluate the service and discover how it can support your cyber risk strategy.

If you want to receive automated updates about GuardDuty, you can subscribe to an SNS notification that will email you whenever new features and detections are released.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Darren House

Darren brings over 20 years’ experience building secure technology architectures and technical strategies to support customer outcomes. He has held several roles including CTO, Director of Technology Solutions, Technologist, Principal Solutions Architect, and Senior Network Engineer for USMC. Today, he is focused on enabling AWS customers to adopt security services and automations that increase visibility and reduce risk.


Trish Cagliostro

Trish is a leader in the security industry where she has spent 10 years advising public and private sector customers like DISA, DHS, and US Senate and commercial entities like Bank of America and United Airlines. Trish is a subject matter expert on a variety of topics, including integrating threat intelligence and has testified before the House Homeland Security Committee about information sharing.

AWS Security Profile: Byron Cook, Director of the AWS Automated Reasoning Group

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-byron-cook-director-aws-automated-reasoning-group/


Byron Cook leads the AWS Automated Reasoning Group, which automates proof search in mathematical logic and builds tools that provide AWS customers with provable security. Byron has pushed boundaries in this field, delivered real-world applications in the cloud, and fostered a sense of community amongst its practitioners. In recognition of Byron’s contributions to cloud security and automated reasoning, the UK’s Royal Academy of Engineering elected him as one of 7 new Fellows in computing this year.

I recently sat down with Byron to discuss his new Fellowship, the work that it celebrates, and how he and his team continue to use automated reasoning in new ways to provide higher security assurance for customers in the AWS cloud.

Congratulations, Byron! Can you tell us a little bit about the Royal Academy of Engineering, and the significance of being a Fellow?

Thank you. I feel very honored! The Royal Academy of Engineering is focused on engineering in the broad sense; for example, aeronautical, biomedical, materials, etc. I’m one of only 7 Fellows elected this year who specialize in computing or logic, making the announcement really unique.

As for what the Royal Academy of Engineering is: the UK has Royal Academies for key disciplines such as music, drama, etc. The Royal Academies focus financial support and recognition on these fields, and give a location and common meeting place. The Royal Academy of Music, for example, is near Regent’s Park in West London. The Royal Academy of Engineering’s building is in Carlton Place, one of the most exclusive locations in central London near Pall Mall and St. James’ Park. I’ve been to a number of lectures and events in that space. For example, it’s where I spoke ten years ago when I was the recipient of the Roger Needham prize. Some examples of previously elected Fellows include Sir Frank Whittle, who invented the jet engine; radar pioneer Sir George MacFarlane; and Sir Tim Berners-Lee, who developed the world-wide web.

Can you tell us a little bit about why you were selected for the award?

The letter I received from the Royal Academy says it better than I could say myself:

“Byron Cook is a world-renowned leader in the field of formal verification. For over 20 years Byron has worked to bring this field from academic hypothesis to mechanised industrial reality. Byron has made major research contributions, built influential tools, led teams that operationalised formal verification activities, and helped establish connections between others that have dramatically accelerated growth of the area. Byron’s tools have been applied to a wide array of topics, e.g. biological systems, computer operating systems, programming languages, and security. Byron’s Automated Reasoning Group at Amazon is leading the field to even greater success”.

Formal verification is the one term here that may be foreign to you, so perhaps I should explain. Formal verification is the use of mathematical logic to prove properties of systems. Euclid, for example, used formal verification in ~300 BC to prove that the Pythagorean theorem holds for all possible right-angled triangles. Today we are using formal verification to prove things about all of the possible configurations that a computer program might reach. When I founded Amazon’s Automated Reasoning Group, I named it that because my ambition was to automate all of the reasoning performed during formal verification.

Can you give us a bit of detail about some of the “research contributions and tools” mentioned in the text from Royal Academy of Engineering?

Probably my best-known work before joining Amazon was on the Terminator tool. Terminator was designed to reason at compile-time about what a given computer program would eventually do when running in production. For example, “Will the program eventually halt?” This is the famous “Halting problem,” proved undecidable in the 1930s. The Terminator tool piloted a new approach to the problem which is popular now, based on the idea of incrementally improving the best guess for a proof based on failed proof attempts. This was the first known approach capable of scaling termination proving to industrial problems. My colleagues and I used Terminator to find bugs in device drivers that could cause operating systems to become unresponsive. We found many bugs in device drivers that ran keyboards, mice, network devices, and video cards. The Terminator tool was also the basis of BioModelAnalyzer. It turns out that there’s a connection between diseases like Leukemia and the Halting problem: Leukemia is a termination bug in the genetic-regulatory pathways in your blood. You can think of it in the same way you think of a device driver that’s stuck in an infinite loop, causing your computer to freeze. My tools helped answer fundamental questions that no tool could solve before. Several pharmaceutical companies use BioModelAnalyzer today to understand disease and find new treatment options. And these days, there is an annual international competition with many termination provers that are much better than the Terminator. I think that this is what the Royal Academy is talking about when they say I moved the area from “academic hypothesis to mechanized industrial reality.”

I have also worked on problems related to the question of P=NP, the most famous open problem in computing theory. From 2000-2006, I built tools that made NP feel equal to P in certain limited circumstances to try and understand the problem better. Then I focused on circumstances that aligned with important industrial problems, like proving the absence of bugs in microprocessors, flight control software, telecommunications systems, and railway control systems. These days the tools in this space are incredibly powerful. You should check out the software tools CVC4 or Z3.
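To make the flavor of this concrete for readers new to the field, here is a deliberately tiny Python sketch of verification by exhaustion over a bounded domain: the brute-force analogue of what solvers like Z3 or CVC4 do symbolically over unbounded domains. The example (our own illustration, not from the interview) proves that a midpoint formula stays in range for every pair of 8-bit inputs, rather than merely testing a few cases.

```python
def avg(a, b):
    """Overflow-safe midpoint of a and b (assumes a <= b)."""
    return a + (b - a) // 2

def verify_avg_in_range(lo=0, hi=255):
    """'Prove' by exhaustion that avg(a, b) stays within [lo, hi] and
    between its arguments, for every a <= b in the bounded domain.
    Returns a counterexample pair, or None if the property holds."""
    for a in range(lo, hi + 1):
        for b in range(a, hi + 1):
            m = avg(a, b)
            if not (lo <= m <= hi and a <= m <= b):
                return (a, b)   # counterexample found
    return None                 # property holds for the whole domain
```

A symbolic prover reaches the same conclusion without enumerating the domain at all, which is what makes the approach scale to microprocessors and flight control software.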

And, of course, there’s my work with the Automated Reasoning Group, where I’ve built a team of domain experts that develop and apply formal verification tools to a wide variety of problems, helping make the cloud more secure. We have built tools that automatically reason about the semantics of policies, networks, cryptography, virtualization, etc. We reason about the implementation of Amazon Web Services (AWS) itself, and we’ve built tools that help customers prove the correctness of their AWS-based implementations.

Could you go into a bit more detail about how this work connects to Amazon and its customers?

AWS provides cloud services globally. Cloud is shorthand for on-demand access to IT resources such as compute, storage, and analytics via the Internet with pay-as-you-go pricing. AWS has a wide variety of customers, ranging from individuals to the largest enterprises, and practically all industries. My group develops mathematical proof tools that help make AWS more secure, and helps AWS customers understand how to build in the cloud more securely.

I first became an AWS customer myself when building BioModelAnalyzer. AWS allowed those of us working on this project to solve major scientific challenges (see this Nature Scientific Report for an example) using very large datacenters, but without having to buy the machines, maintain the machines, maintain the rooms that the machines would sit in, the A/C system that would keep them cool, etc. I was also able to easily provide our customers with access to the tool via the cloud, because it’s all over the internet. I just pointed people to the end-point on the internet and, presto, they were using the tool. About 5 years before developing BioModelAnalyzer, I was developing proof tools for device drivers and I gave a demo of the tool to my executive leadership. At the end of the demo, I was asked if 5,000 machines would help us do more proofs. Computationally, the answer was an obvious “yes,” but then I thought a minute about the amount of overhead required to manage a fleet of 5,000 machines and reluctantly replied “No, but thank you very much for the offer!” With AWS, it’s not even a question. Anyone with an Amazon account can provision 5,000 machines for practically nothing. In less than 5 minutes, you can be up and running and computing with thousands of machines.

What I love about working at AWS is that I can focus a very small team on proving the correctness of some aspect of AWS (for example, the cryptography) and, because of the size and importance of the customer base, we make much of the world meaningfully more secure. Just to name a few examples: s2n (the Amazon TLS implementation); the AWS Key Management Service (KMS), which allows customers to securely store crypto keys; and networking extensions to the IoT operating system Amazon FreeRTOS, which customers use to link cloud to IoT devices, such as robots in factories. We also focus on delivering service features that help customers prove the correctness of their AWS-based implementations. One example is Tiros, which powers a network reachability feature in Amazon Inspector. Another example is Zelkova, which powers features in services such as Amazon S3, AWS Config, and AWS IoT Device Defender.

When I think of mathematical logic I think of obscure theory and messy blackboards, not practical application. But it sounds like you’ve managed to balance the tension between theory and practical industrial problems?

I think that this is a common theme that great scientists don’t often talk about. Alan Turing, for example, did his best work during the war. John Snow, who made fundamental contributions to our understanding of germs and epidemics, did his greatest work while trying to figure out why people were dying in the streets of London. Christopher Strachey, one of the founders of our field, wrote:

“It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles in their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing.”

Throughout my career, I’ve been at the intersection of practical and theoretical. In the early days, this was driven by necessity: I had two children during my PhD and, frankly, I needed the money. But I soon realized that my deep connection to real engineering problems was an advantage and not a disadvantage, and I’ve tried through the rest of my career to stay in that hot spot of commercially applicable problems while tackling abstract mathematical topics.

What’s next for you? For the Automated Reasoning Group? For your scientific field?

The Royal Academy of Engineering kindly said that I’ve brought “this field from academic hypothesis to mechanized industrial reality.” That’s perhaps true, but we are very far from done: it’s not yet an industrial standard. The full power of automated reasoning is not yet available to everyone because today’s tools are either difficult to use or weak. The engineering challenge is to make them both powerful and easy to use. With that I believe that they’ll become a key part of every software engineer’s daily routine. What excites me is that I believe that Amazon has a lot to teach me about how to operationalize the impossible. That’s what Amazon has done over and over again. That’s why I’m at Amazon today. I want to see these proof techniques operating automatically at Amazon scale.

  • Provable security webpage
  • Lecture: Fundamentals for Provable Security at AWS
  • Lecture: The evolution of Provable Security at AWS
  • Lecture: Automating compliance verification using provable security
  • Lecture: Byron speaks about Terminator at University of Colorado

If you have feedback about this post, let us know in the Comments section below.


New whitepaper: Achieving Operational Resilience in the Financial Sector and Beyond

Post Syndicated from Rahul Prabhakar original https://aws.amazon.com/blogs/security/new-whitepaper-achieving-operational-resilience-in-the-financial-sector-and-beyond/

AWS has released a new whitepaper, Amazon Web Services’ Approach to Operational Resilience in the Financial Sector and Beyond, in which we discuss how AWS and customers build for resiliency on the AWS cloud. We’re constantly amazed at the applications our customers build using AWS services — including what our financial services customers have built, from credit risk simulations to mobile banking applications. Depending on their internal and regulatory requirements, financial services companies may need to meet specific resiliency objectives and withstand low-probability events that could otherwise disrupt their businesses. We know that financial regulators are also interested in understanding how the AWS cloud allows customers to meet those objectives. This new whitepaper addresses these topics.

The paper walks through the AWS global infrastructure and how we build to withstand failures. Reflecting how AWS and customers share responsibility for resilience, the paper also outlines how a financial institution can build mission-critical applications to leverage, for example, multiple Availability Zones to improve their resiliency compared to a traditional, on-premises environment.

Security and resiliency remain our highest priority. We encourage you to check out the paper and provide feedback. We’d love to hear from you, so don’t hesitate to get in touch with us by reaching out to your account executive or contacting AWS Support.


A simpler way to assess the network exposure of EC2 instances: AWS releases new network reachability assessments in Amazon Inspector

Post Syndicated from Catherine Dodge original https://aws.amazon.com/blogs/security/amazon-inspector-assess-network-exposure-ec2-instances-aws-network-reachability-assessments/

Performing network security assessments allows you to understand your cloud infrastructure and identify risks, but this process traditionally takes a lot of time and effort. You might need to run network port-scanning tools to test routing and firewall configurations, then validate what processes are listening on your instance network ports, before finally mapping the IPs identified in the port scan back to the host’s owner. To make this process simpler for our customers, AWS recently released the Network Reachability rules package in Amazon Inspector, our automated security assessment service that enables you to understand and improve the security and compliance of applications deployed on AWS. The existing Amazon Inspector host assessment rules packages check the software and configurations on your Amazon Elastic Compute Cloud (Amazon EC2) instances for vulnerabilities and deviations from best practices.

The new Network Reachability rules package analyzes your Amazon Virtual Private Cloud (Amazon VPC) network configuration to determine whether your EC2 instances can be reached from external networks such as the Internet, a virtual private gateway, AWS Direct Connect, or from a peered VPC. In other words, it informs you of potential external access to your hosts. It does this by analyzing all of your network configurations—like security groups, network access control lists (ACLs), route tables, and internet gateways (IGWs)—together to infer reachability. No packets need to be sent across the VPC network, nor must attempts be made to connect to EC2 instance network ports—it’s like packet-less network mapping and reconnaissance!

This new rules package is the first Amazon Inspector rules package that doesn’t require an Amazon Inspector agent to be installed on your Amazon EC2 instances. If you do optionally install the Amazon Inspector agent on your EC2 instances, the network reachability assessment will also report on the processes listening on those ports. In addition, installing the agent allows you to use Amazon Inspector host rules packages to check for vulnerabilities and security exposures in your EC2 instances.

To determine what is reachable, the Network Reachability rules package uses the latest technology from the AWS Provable Security initiative, referring to a suite of AWS technology powered by automated reasoning. It analyzes your AWS network configurations such as Amazon Virtual Private Clouds (VPCs), security groups, network access control lists (ACLs), and route tables to prove reachability of ports. What is automated reasoning, you ask? It’s fancy math that proves things are working as expected. In more technical terms, it’s a method of formal verification that automatically generates and checks mathematical proofs, which help to prove systems are functioning correctly. Note that Network Reachability only analyzes network configurations, so any other network controls, like on-instance IP filters or external firewalls, are not accounted for in the assessment. See documentation for more details.
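As a highly simplified illustration of configuration-only analysis (this is our sketch of the idea, not the actual Tiros algorithm), reachability to a port can be inferred by combining the relevant controls without sending a single packet: the subnet must route to an internet gateway, the security group must allow the port, and the network ACL must not block it.

```python
def port_reachable_from_internet(port, config):
    """Decide reachability of a port purely from configuration.
    config is a simplified snapshot:
      {'igw_route': bool,            # subnet's route table points at an IGW
       'sg_open_ports': set,         # ports the security group permits
       'nacl_denied_ports': set}     # ports the network ACL denies
    """
    return (
        config["igw_route"]
        and port in config["sg_open_ports"]
        and port not in config["nacl_denied_ports"]
    )
```

The real analysis reasons over all rules, CIDR ranges, and routing paths symbolically, but the principle is the same: reachability is a logical consequence of the configuration, so it can be proved rather than probed.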

Tim Kropp, Technology & Security Lead at Bridgewater Associates, talked about how Bridgewater benefitted from the Network Reachability rules package. “AWS provides tools for organizations to know if all their compliance, security, and availability requirements are being met. Technology created by the AWS Automated Reasoning Group, such as the Network Reachability Rules, allow us to continuously evaluate our live networks against these requirements. This grants us peace of mind that our most sensitive workloads exist on a network that we deeply understand.”

Network reachability assessments are priced per instance per assessment (instance-assessment). The free trial offers the first 250 instance-assessments for free within your first 90 days of usage. After the free trial, pricing is tiered based on your monthly volume. You can see pricing details here.

Using the Network Reachability rules package

Amazon Inspector makes it easy for you to run agentless network reachability assessments on all of your EC2 instances. You can do this with just a couple of clicks on the Welcome page of the Amazon Inspector console. First, use the check box to Enable network assessments, then select Run Once to run a single assessment or Run Weekly to run a weekly recurring assessment.

Figure 1: Assessment setup


Customizing the Network Reachability rules package

If you want to target a subset of your instances or modify the recurrence of assessments, you can select Advanced setup for guided steps to set up and run a custom assessment. For full customization options including getting notifications for findings, select Cancel and use the following steps.

  1. Navigate to the Assessment targets page of the Amazon Inspector console to create an assessment target. You can select the option to include all instances within your account and AWS region, or you can assess a subset of your instances by adding tags to them in the EC2 console and inputting those tags when you create the assessment target. Give your target a name and select Save.
    Figure 2: Assessment target


    Optional agent installation: To get information about the processes listening on reachable ports, you’ll need to install the Amazon Inspector agent on your EC2 instances. If your instances allow the Systems Manager Run command, you can select the Install Agents option while creating your assessment target. Otherwise, you can follow the instructions here to install the Amazon Inspector agent on your instances before setting up and running the Amazon Inspector assessments using the steps above. In addition, installing the agent allows you to use Amazon Inspector host rules packages to check for vulnerabilities and security exposures in your EC2 instances.

  2. Go to the Assessment templates page of the Amazon Inspector console. In the Target name field, select the assessment target that you created in step 1. From the Rules packages drop-down, select the Network Reachability-1.1 rules package. You can also set up a recurring schedule and notifications to your Amazon Simple Notification Service topic. (Learn more about Amazon SNS topics here). Now, select Create and Run. That’s it!

    Alternately, you can run the assessment by selecting the template you just created from the Assessment templates page and then selecting Run, or by using the Amazon Inspector API.
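For the API route, the sketch below builds the parameters for `create_assessment_template` (Amazon Inspector classic). The rules-package ARN shown is only an example placeholder: the ARNs are region-specific, and you would look up the real one with `boto3.client("inspector").list_rules_packages()` before calling `create_assessment_template(**...)` and then `start_assessment_run`.

```python
# Example/placeholder ARN; fetch the real region-specific value with
# boto3.client("inspector").list_rules_packages().
NETWORK_REACHABILITY_ARN = (
    "arn:aws:inspector:us-east-1:123456789012:rulespackage/0-EXAMPLE"
)

def assessment_template_request(target_arn, duration_secs=3600):
    """Parameters for inspector.create_assessment_template, scoped to
    the Network Reachability rules package only."""
    return {
        "assessmentTargetArn": target_arn,
        "assessmentTemplateName": "network-reachability-weekly",  # placeholder name
        "durationInSeconds": duration_secs,
        "rulesPackageArns": [NETWORK_REACHABILITY_ARN],
    }
```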

You can view your findings on the Findings page in the Amazon Inspector console. You can also download a CSV of the findings from Amazon Inspector by using the Download button on the page, or you can use the AWS application programming interface (API) to retrieve findings in another application.

Note: You can create any CloudWatch Events rule and add your Amazon Inspector assessment as the target using the assessment template’s Amazon Resource Name (ARN), which is available in the console. You can use CloudWatch Events rules to automatically trigger assessment runs on a schedule or based on any other event. For example, you can trigger a network reachability assessment whenever there is a change to a security group or another VPC configuration, allowing you to automatically be alerted about insecure network exposure.
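A sketch of the event pattern for that security-group-change trigger is below. The event names are the standard EC2 CloudTrail API names; with boto3 you would register it via `events.put_rule(Name=..., EventPattern=json.dumps(sg_change_event_pattern()))` and then `put_targets` with the assessment template's ARN (the rule name and wiring details here are illustrative).

```python
def sg_change_event_pattern():
    """CloudWatch Events pattern matching security group rule changes
    recorded by CloudTrail, suitable for triggering a reachability run."""
    return {
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": [
                "AuthorizeSecurityGroupIngress",
                "AuthorizeSecurityGroupEgress",
                "RevokeSecurityGroupIngress",
                "RevokeSecurityGroupEgress",
            ],
        },
    }
```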

Understanding your EC2 instance network exposure

You can use this new rules package to analyze the accessibility of critical ports, as well as all other network ports. For critical ports, Amazon Inspector will show the exposure of each and will offer findings per port. When critical, well-known ports (based on Amazon’s standard guidance) are reachable, findings will be created with higher severities. When the Amazon Inspector agent is installed on the instance, each reachable port with a listener will also be reported. The following examples show network exposure from the Internet. There are analogous findings for network exposure via VPN, Direct Connect, or VPC peering. Read more about the finding types here.

Example finding for a well-known port open to the Internet, without installation of the Amazon Inspector Agent:

Figure 3: Finding for a well-known port open to the Internet


Example finding for a well-known port open to the Internet, with the Amazon Inspector Agent installed and a listening process (SSH):

Figure 4: Finding for a well-known port open to the Internet, with the Amazon Inspector Agent installed and a listening process (SSH)


Note that the findings provide the details on exactly how network access is allowed, including which VPC and subnet the instance is in. This makes tracking down the root cause of the network access straightforward. The recommendation includes information about exactly which Security Group you can edit to remove the access. And like all Amazon Inspector findings, these can be published to an SNS topic for additional processing, whether that’s to a ticketing system or to a custom AWS Lambda function. (See our blog post on automatic remediation of findings for guidance on how to do this.) For example, you could use Lambda to automatically remove ingress rules in the Security Group to address a network reachability finding.
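A remediation Lambda function subscribed to that SNS topic might look like the sketch below. The finding attribute keys (`PORT`, `SECURITY_GROUP_ID`) are assumptions for illustration; inspect your own findings for the exact names, and swap the `print` for a real `revoke_security_group_ingress` call once you’ve verified the payload shape.

```python
import json

def finding_port_and_group(finding):
    """Pull the flagged port and security group out of an Inspector
    finding's attribute list. The attribute keys used here are
    illustrative -- check your own findings for the exact names."""
    attrs = {a["key"]: a["value"] for a in finding.get("attributes", [])}
    return attrs.get("PORT"), attrs.get("SECURITY_GROUP_ID")

def handler(event, context):
    """Lambda entry point: the Inspector finding arrives as the body
    of an SNS message. In a real deployment you would call
    boto3.client("ec2").revoke_security_group_ingress(...) here."""
    finding = json.loads(event["Records"][0]["Sns"]["Message"])
    port, group = finding_port_and_group(finding)
    if port and group:
        print(f"Would revoke TCP port {port} ingress on {group}")
```
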


With this new functionality from Amazon Inspector, you now have an easy way of assessing the network exposure of your EC2 instances and identifying and resolving unwanted exposure. We’ll continue to tailor findings to align with customer feedback. We encourage you to try out the Network Reachability Rules Package yourself and post any questions in the Amazon Inspector forum.

Want more AWS Security news? Follow us on Twitter.


Catherine Dodge

Catherine is a Senior Technical Program Manager in AWS Security. She helps teams use cutting edge AI technology to build security products to delight customers. She has over 15 years of experience in the cybersecurity field, mostly spent at the assembly level, either pulling apart malware or piecing together shellcode. In her spare time, she’s always tearing something down around the house, preferably ivy or drywall.


Stephen Quigg

Stephen — known as “Squigg,” internally — is a Principal Security Solutions Architect at AWS. His job is helping customers understand AWS security and how they can meet their most demanding security requirements when using the AWS platform. It’s not all about solving hard problems though, he gets just as much delight when an AWS customer creates their first VPC! When he’s not with his customers, you can find him up in his loft making bleeping noises on a bunch of old synthesizers.

Daniel Schwartz-Narbonne shares how automated reasoning is helping achieve the provable security of AWS boot code

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/automated-reasoning-provable-security-of-boot-code/

I recently sat down with Daniel Schwartz-Narbonne, a software development engineer in the Automated Reasoning Group (ARG) at AWS, to learn more about the groundbreaking work his team is doing in cloud security. The team uses automated reasoning, a technology based on mathematical logic, to prove that key components of the cloud are operating as intended. ARG recently hit a milestone by leveraging this technology to prove the memory safety of boot code components. Boot code is the foundation of the cloud. Proving the memory safety of boot code is akin to verifying that the foundation of your house is secure—it lets you build upon it without worry. Daniel shared details with the AWS Security Blog team about the project’s accomplishments and how it will help solve cloud security challenges.


Daniel Schwartz-Narbonne discusses how automated reasoning, a branch of AI tech, can help prove the security of boot code

Tell me about yourself: what made you decide to become a software engineer with the Automated Reasoning Group?

I wanted to become an engineer because I like building things and understanding how they work. I get satisfaction out of producing something tangible. I went into cloud security because I believe it’s a major area of opportunity in the computing industry right now. As the cloud continues to scale in response to customer demand, we’re going to need more and more automation around security to meet this demand.

I was offered the opportunity to work with ARG after I finished up my post-doc at NYU. Byron Cook, the director of ARG, was starting up a team with the mission of using formal reasoning methods to solve real-world problems in the cloud. Joining ARG was an opportunity for me to help pioneer the use of automated reasoning for cloud security.

How would you describe automated reasoning?

Automated reasoning uses mathematical analysis to understand what’s happening in a complex computer system. The technique takes a system and a question you might have about the system—like “is the system memory safe?”—and reformulates the question as a set of mathematical properties. Then it uses automated reasoning tools called “constraint solvers” to analyze these properties and provide an answer. We’re using this technology to provide higher levels of cloud security assurance for AWS customers via features that protect key components of the cloud, including IAM permissions, networking controls, verification for security protocols and source code of foundational software packages in use at AWS. Links to this work can be found at the bottom of this post.

What is the Boot Code Verification Project?

The Boot Code Verification Project is one of several ARG projects that apply automated reasoning techniques to the foundational elements of cloud security. In this case, we’re looking at boot code. Boot code is the first code that starts when you turn on a computer. It’s the foundation for all computer code, which makes its security critical. This is joint work with my ARG colleagues Michael Tautschnig and Mark Tuttle and with infrastructure engineers.

Why is boot code so difficult to secure?

Ensuring boot code security by using traditional techniques, such as penetration testing and unit testing, is hard. You can only achieve visibility into code execution via debug ports, which means you have almost no ability to single-step the boot code for debugging. You often can’t instrument the boot code, either, because instrumentation can break the build: the instrumented code may be larger than the ROM targeted by the build process. Extracting the data collected by instrumentation is also difficult because the boot code has no access to a file system in which to record the data, and the memory available for storing the data may be limited.

Our aim is to gain increased confidence in the correctness of the boot code by using automated reasoning, instead. Applying automated reasoning to boot code has challenges, however. A big one is that boot code directly interfaces with hardware. Hardware can, for example, modify the value of memory locations through the use of memory-mapped input/output (IO). We developed techniques for modeling the effect that hardware can have on executing boot code. One technique we successfully tried is using model checking to symbolically represent all the effects that hardware could have on the memory state of the boot code. This required close collaboration with our product teams to understand AWS data center hardware and then design and validate a model based on these specifications. To ensure future code revisions maintain the properties we have validated, our analysis is embedded into the continuous integration flow. In such a workflow, each change by the developers triggers automated verification.

We published the full technical details, including the process by which we were able to prove the memory safety of boot code, in Model Checking Boot Code from AWS Data Centers, a peer-reviewed scientific publication at the Computer-Aided Verification Conference, the leading academic conference on automated reasoning.

You mention model checking. Can you explain what that is?

A software model checker is a tool that examines every path through a computer program from every possible input. There are different kinds of model checkers, but our model checker is based on a constraint solver (called a SAT solver, or a Satisfiability solver) that can test whether a given set of constraints is satisfiable. To understand how it works, first remember that each line of a computer program describes a particular change in the state of the computer (for example, turning on the device). Our model checker describes each change as an equation that shows how the computer’s state has changed. If you describe each line of code in a program this way, the result is a set of equations that describes a set of constraints upon all the ways that the program can change the state of the computer. We hand these constraints and a question (“Is there a bug?”) to a constraint solver, which then determines if the computer can ever reach a state in which the question (“Is there a bug?”) is true.
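As a toy illustration of that idea (not CBMC itself, which scales far beyond brute force), here each line of a tiny 8-bit program becomes an equation over the machine state, and a naive “solver” enumerates states to ask whether any input reaches the bug:

```python
from itertools import product

# Program under analysis (8-bit arithmetic):
#   y = x + 1        <- line 1, encoded as the equation below
#   assert y != 0    <- the question: can the assertion ever fail?

def constraints_hold(x, y):
    return y == (x + 1) % 256   # equation for line 1 (8-bit overflow wraps)

def assertion_fails(y):
    return y == 0               # the "is there a bug?" question

def find_counterexample():
    # Brute-force "constraint solver": try every reachable state.
    for x, y in product(range(256), repeat=2):
        if constraints_hold(x, y) and assertion_fails(y):
            return x            # a concrete input that triggers the bug
    return None

print(find_counterexample())    # 255: the increment overflows to y == 0
```

A real SAT-based model checker answers the same satisfiability question symbolically, without enumerating states one by one.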

What is memory safety? Why is it so crucial to prove the memory safety of boot code?

A proof of memory safety gives you assurance that certain security issues cannot arise. Memory safety states that every operation in a program can only write to the variables it has access to, within the bounds of those variables. A classic example is a buffer that stores data coming in from a message on the network. If the message is larger than the buffer in which it’s stored, then you’ll overwrite the buffer, as well as whatever comes after the buffer. If trusted data stored after the buffer is overwritten, then the value of this trusted data is under the control of the adversary inducing the buffer overflow—and your system’s security is now at risk.
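The buffer scenario above can be simulated in a few lines. This is a Python sketch of the concept, not C: memory is a flat array, an 8-byte buffer sits directly before a piece of trusted data, and an unchecked copy of an oversized message clobbers it.

```python
# An 8-byte buffer at the start of memory, with trusted data (say, an
# is_admin flag) stored in the byte immediately after it.
BUF_START, BUF_SIZE = 0, 8
IS_ADMIN = BUF_START + BUF_SIZE

def receive_unchecked(memory, message):
    for i, byte in enumerate(message):             # no bounds check: unsafe
        memory[BUF_START + i] = byte

def receive_checked(memory, message):
    for i, byte in enumerate(message[:BUF_SIZE]):  # clamp to buffer bounds
        memory[BUF_START + i] = byte

memory = [0] * 16
receive_unchecked(memory, [0x41] * 9)  # 9-byte message, 8-byte buffer
print(memory[IS_ADMIN])                # 65 (0x41): flag is attacker-controlled
```

A memory-safety proof rules out the unchecked variant for every possible input, rather than for the handful of inputs a test suite happens to exercise.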

Boot code is written in C, a language that does not have the dynamic run-time support for memory safety found in other programming languages. The Boot Code Verification Project uses automated reasoning technology to prove memory safety of the boot code for every possible input.

What has the Boot Code Verification Project accomplished?

We’ve achieved two major accomplishments. The first is the concrete proof we’ve delivered. We have demonstrated that for every boot configuration, device configuration, possible boot source, and second stage binary, AWS boot code is memory safe.

The second accomplishment is more forward-looking. We haven’t just validated a piece of code—we’ve validated a methodology for testing security-critical C code at AWS. As we describe in our paper, completing this proof required us to make significant advances in program analysis tooling, ranging from the way we handle memory-mapped IO, to a more efficient symbolic implementation of memcpy, to new tooling that can analyze the unusual linker configurations used in boot code. We made the tooling easier to use, with AWS Batch scripts that allow automatic proof re-runs, and HTML-based reports that make it easy to dive in and understand code. We expect to build on these improvements as we continue to apply automated reasoning to the AWS cloud.

Is your work open source?

We use the model checker CBMC (C Bounded Model Checker), which is available on GitHub under the open source Berkeley Software Distribution license. AWS is committed to the open source community, and we have a number of other projects that you can also find on GitHub.

Overall, how does the Boot Code Verification Project benefit customers?

Customers ask how AWS secures their data. This project is a part of the answer, providing assurance for how AWS protects low-level code in customer data centers running on AWS. Given that all systems and processes run on top of this code, customers need to know that measures are in place to keep it continuously safe.

We also believe that technology powered by automated reasoning has wider applicability. Our team has created tools like Zelkova, which we’ve embedded in a variety of AWS services to help customers validate their security-critical code. Because the Boot Code Verification Project is based on an existing open source project, wider applications of our methodology have also been documented in a variety of scientific publications that you can find on the AWS Provable Security page under “Insight papers.” We encourage customers to check out our resources and comment below!



Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

Podcast: We developed Amazon GuardDuty to meet scaling demands, now it could assist with compliance considerations such as GDPR

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/podcast-we-developed-amazon-guardduty-to-meet-scaling-demands-now-it-could-assist-with-compliance-considerations-such-as-gdpr/

It isn’t simple to meet the scaling requirements of AWS when creating a threat detection monitoring service. Our service teams have to maintain the ability to deliver at a rapid pace. That led us to ask: What can be done to make a security service as frictionless as possible to business demands?

Core parts of our internal solution can now be found in Amazon GuardDuty, which doesn’t require deployment of software or security infrastructure. Instead, GuardDuty uses machine learning to monitor metadata for access activity such as unusual API calls. This method turned out to be highly effective. Because it worked well for us, we thought it would work well for our customers, too. Additionally, when we externalized the service, we enabled it to be turned on with a single click. The customer response to Amazon GuardDuty has been positive, with rapid adoption since its launch in late 2017.

The service’s monitoring capabilities and threat detections could become increasingly helpful to customers concerned with data privacy or facing regulations such as the EU’s General Data Protection Regulation (GDPR). Listen to the podcast with Senior Product Manager Michael Fuller to learn how Amazon GuardDuty could be leveraged to meet your compliance considerations.

How AWS Meets a Physical Separation Requirement with a Logical Separation Approach

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/how-aws-meets-a-physical-separation-requirement-with-a-logical-separation-approach/

We have a new resource available to help you meet a requirement for physically separated infrastructure using logical separation in the AWS cloud. Our latest guide, Logical Separation: An Evaluation of the U.S. Department of Defense Cloud Security Requirements for Sensitive Workloads, outlines how AWS meets the U.S. Department of Defense’s (DoD) stringent physical separation requirement by pioneering a three-pronged logical separation approach that leverages virtualization, encryption, and deploying compute to dedicated hardware.

This guide will help you understand logical separation in the cloud and demonstrates its advantages over a traditional physical separation model. Embracing this approach can help organizations confidently meet or exceed security requirements found in traditional on-premises environments, while also providing increased security control and flexibility.

Logical Separation is the second guide in the AWS Government Handbook Series, which examines cybersecurity policy initiatives and identifies best practices.

If you have questions or want to learn more, contact your account executive or AWS Support.