Tag Archives: Amazon Inspector

Top 2021 AWS Security service launches security professionals should review – Part 1

Post Syndicated from Ryan Holland original https://aws.amazon.com/blogs/security/top-2021-aws-security-service-launches-part-1/

Given the speed of Amazon Web Services (AWS) innovation, it can sometimes be challenging to keep up with AWS Security service and feature launches. To help you stay current, here’s an overview of some of the most important 2021 AWS Security launches that security professionals should be aware of. This is the first of two related posts; Part 2 will highlight some of the important 2021 launches that security professionals should be aware of across all AWS services.

Amazon GuardDuty

In 2021, the threat detection service Amazon GuardDuty expanded the AWS security intelligence it consumes, drawing on more of the intelligence that internal AWS threat detection teams collect, including additional nation-state threat intelligence. Sharing more of this important internal intel lets you improve your protection more quickly. GuardDuty also launched domain reputation modeling. These machine learning models take the domain requests from across all of AWS and feed them into a model that categorizes previously unseen domains as highly likely to be malicious or benign based on their behavioral characteristics. In practice, AWS is seeing that these models often deliver high-fidelity threat detections, identifying malicious domains 7–14 days before they are identified and available on commercial threat feeds.

AWS also launched second-generation anomaly detection for GuardDuty. Shortly after the original GuardDuty launch in 2017, AWS added anomaly detection for user behavior analytics and monitoring for unusual activity by AWS Identity and Access Management (IAM) users. After receiving customer feedback that the original feature was a little too noisy, and that it was difficult to understand why some findings were generated, the GuardDuty analytics team rebuilt this functionality on an entirely new machine learning model, considerably reducing the number of detections while improving their accuracy. The new model also adds context that security professionals (such as analysts) can use to understand why the model flags findings as suspicious or unusual.

Since its introduction, GuardDuty has detected when Amazon EC2 instance role credentials are used to call AWS APIs from IP addresses outside of AWS. Beginning in early 2022, GuardDuty also detects when credentials are used from other AWS accounts, inside the AWS network. This is a complex problem for customers to solve on their own, which is why the GuardDuty team added this enhancement. The solution accounts for legitimate reasons why a source IP address communicating with AWS service APIs might differ from the Amazon Elastic Compute Cloud (Amazon EC2) instance IP address, or from a NAT gateway associated with the instance’s VPC. The enhancement also considers complex network topologies that route traffic to one or multiple VPCs—for example, AWS Transit Gateway or AWS Direct Connect.

Our customers are increasingly running container workloads in production; helping to raise the security posture of these workloads became an AWS development priority in 2021. GuardDuty for EKS Protection is one recent feature that has resulted from this investment. This new GuardDuty feature monitors Amazon Elastic Kubernetes Service (Amazon EKS) cluster control plane activity by analyzing Kubernetes audit logs. GuardDuty is integrated with Amazon EKS, giving it direct access to the Kubernetes audit logs without requiring you to turn on or store these logs. Once a threat is detected, GuardDuty generates a security finding that includes container details such as pod ID, container image ID, and associated tags. See below for details on how the new Amazon Inspector is also helping to protect containers.
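
If you manage GuardDuty programmatically, the following is a minimal sketch of enabling Kubernetes audit log monitoring on an existing detector with the AWS CLI; the JSON data-source shape is an assumption that may vary by CLI version.

# Look up the detector for the current Region, then enable
# Kubernetes audit log monitoring on it.
DETECTOR_ID=$(aws guardduty list-detectors \
  --query 'DetectorIds[0]' --output text)

aws guardduty update-detector \
  --detector-id "$DETECTOR_ID" \
  --data-sources '{"Kubernetes":{"AuditLogs":{"Enable":true}}}'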

Amazon Inspector

At AWS re:Invent 2021, we launched the new Amazon Inspector, a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. The original Amazon Inspector was completely re-architected in this release to automate vulnerability management and to deliver near real-time findings to minimize the time needed to discover new vulnerabilities. This new Amazon Inspector has simple one-click enablement and multi-account support using AWS Organizations, similar to our other AWS Security services. This launch also introduces a more accurate vulnerability risk score, called the Inspector score. The Inspector score is a highly contextualized risk score that is generated for each finding by correlating Common Vulnerability and Exposures (CVE) metadata with environmental factors for resources such as network accessibility. This makes it easier for you to identify and prioritize your most critical vulnerabilities for immediate remediation. One of the most important new capabilities is that Amazon Inspector automatically discovers running EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR), at any scale, and immediately starts assessing them for known vulnerabilities. Now you can consolidate your vulnerability management solutions for both Amazon EC2 and Amazon ECR into one fully managed service.
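
As a sketch of the one-click enablement described above, turning on the new Amazon Inspector for both supported resource types in the current account and Region can be done with a single CLI call:

# Enable Amazon Inspector scanning for EC2 instances and ECR container images.
aws inspector2 enable --resource-types EC2 ECR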

AWS Security Hub

In addition to a significant number of smaller enhancements throughout 2021, in October AWS Security Hub, an AWS cloud security posture management service, addressed a top customer enhancement request by adding support for cross-Region finding aggregation. You can now view all your findings from all accounts and all selected Regions in a single console view, and act on them from an Amazon EventBridge feed in a single account and Region. Looking back at 2021, Security Hub added 72 additional best practice checks, four new AWS service integrations, and 13 new external partner integrations. A few of these integrations are Atlassian Jira Service Management, Forcepoint Cloud Security Gateway (CSG), and Amazon Macie. Security Hub also achieved FedRAMP High authorization to enable security posture management for high-impact workloads.
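
A minimal sketch of enabling cross-Region finding aggregation with the CLI, run from the Region you want to act as the aggregation Region:

# Link all current and future Regions to the current aggregation Region.
aws securityhub create-finding-aggregator \
  --region-linking-mode ALL_REGIONS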

Amazon Macie

Based on customer feedback, data discovery tool Amazon Macie launched a number of enhancements in 2021. One new feature, which made it easier to manage Amazon Simple Storage Service (Amazon S3) buckets for sensitive data, was criteria-based bucket selection. This Macie feature allows you to define runtime criteria to determine which S3 buckets should be included in a sensitive data-discovery job. When a job runs, Macie identifies the S3 buckets that match your criteria, and automatically adds or removes them from the job’s scope. Before this feature, once a job was configured, it was immutable. Now, for example, you can create a policy where if a bucket becomes public in the future, it’s automatically added to the scan, and similarly, if a bucket is no longer public, it will no longer be included in the daily scan.

Originally, Macie included all managed data identifiers in every scan. However, customers wanted more surgical search criteria; for example, they didn’t want alerts for data types that are expected in a particular environment. In September 2021, Macie launched the ability to enable or disable managed data identifiers. This allows you to customize which data types you deem sensitive and want Macie to alert on, in accordance with your organization’s data governance and privacy needs.
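
The following sketch combines both enhancements in one classification job: the bucket criteria scope the daily job to buckets that are effectively public, and the selector excludes one managed data identifier. The job name and the excluded identifier ID are illustrative placeholders:

# Daily discovery job scoped by bucket criteria rather than a fixed
# bucket list, excluding one managed data identifier.
aws macie2 create-classification-job \
  --job-type SCHEDULED \
  --schedule-frequency '{"dailySchedule":{}}' \
  --name public-bucket-discovery \
  --managed-data-identifier-selector EXCLUDE \
  --managed-data-identifier-ids "CREDIT_CARD_NUMBER" \
  --s3-job-definition '{"bucketCriteria":{"includes":{"and":[{"simpleCriterion":{"comparator":"EQ","key":"S3_BUCKET_EFFECTIVE_PERMISSION","values":["PUBLIC"]}}]}}}'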

Amazon Detective

Amazon Detective is a service to analyze and visualize security findings and related data to rapidly get to the root cause of potential security issues. In January 2021, Amazon Detective added a convenient, time-saving integration that allows you to start security incident investigation workflows directly from the GuardDuty console. This new hyperlink pivot in the GuardDuty console takes findings directly from the GuardDuty console into the Detective console. Another time-saving capability added was the IP address drill down functionality. This new capability can be useful to security forensic teams performing incident investigations, because it helps quickly determine the communications that took place from an EC2 instance under investigation before, during, and after an event.

In December 2021, Detective added support for AWS Organizations to simplify management for security operations and investigations across all existing and future accounts in an organization. This launch allows new and existing Detective customers to onboard and centrally manage the Detective graph database for up to 1,200 AWS accounts.

AWS Key Management Service

In June 2021, AWS Key Management Service (AWS KMS) introduced multi-Region keys, a capability that lets you replicate keys from one AWS Region into another. With multi-Region keys, you can more easily move encrypted data between Regions without having to decrypt and re-encrypt with different keys for each Region. Multi-Region keys are supported for client-side encryption using direct AWS KMS API calls, or in a simplified manner with the AWS Encryption SDK and Amazon DynamoDB Encryption Client.
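
A minimal sketch of creating a multi-Region primary key and replicating it into a second Region; the Regions and description are placeholders:

# Create a multi-Region primary key in us-east-1.
KEY_ID=$(aws kms create-key \
  --multi-region \
  --description "Multi-Region demo key" \
  --region us-east-1 \
  --query 'KeyMetadata.KeyId' --output text)

# Replicate the primary key into us-west-2.
aws kms replicate-key \
  --key-id "$KEY_ID" \
  --replica-region us-west-2 \
  --region us-east-1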

AWS Secrets Manager

Last year was a busy year for AWS Secrets Manager, with four feature launches that make it easier to manage secrets at scale, not just for client applications, but also for platforms. In March 2021, Secrets Manager launched multi-Region secrets to automatically replicate secrets for multi-Region workloads. Also in March, Secrets Manager added three new rules to AWS Config to help administrators verify that secrets in Secrets Manager are configured according to organizational requirements. Then in April 2021, Secrets Manager added a CSI driver plug-in to make it easy to consume secrets from Amazon EKS by using the standard Kubernetes Secrets Store interface. In November, Secrets Manager introduced a higher secret limit of 500,000 per account to simplify secrets management for independent software vendors (ISVs) that rely on unique secrets for a large number of end customers. Although launched in January 2022, it’s also worth mentioning Secrets Manager’s release of rotation windows to align automatic rotation of secrets with application maintenance windows.
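
A sketch of creating a secret that replicates automatically to a second Region; the secret name, value, and Regions are placeholders:

# Create a secret in us-east-1 and replicate it to us-west-2.
aws secretsmanager create-secret \
  --name prod/app/db-credentials \
  --secret-string '{"username":"admin","password":"example-password"}' \
  --add-replica-regions Region=us-west-2 \
  --region us-east-1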

Amazon CodeGuru and Secrets Manager

In November 2021, AWS announced a new secrets detector feature in Amazon CodeGuru that searches your codebase for hardcoded secrets. Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations to detect security vulnerabilities, improve code quality, and identify an application’s most expensive lines of code.

This new feature can pinpoint locations in your code containing usernames and passwords; database connection strings, tokens, and API keys from AWS; and credentials from other service providers. When a secret is found in your code, CodeGuru Reviewer provides an actionable recommendation that links to AWS Secrets Manager, where developers can secure the secret with a point-and-click experience.

Looking ahead for 2022

AWS will continue to deliver experiences in 2022 that meet administrators where they govern, developers where they code, and applications where they run. A lot of customers are moving to container and serverless workloads; you can expect to see more work on this in 2022. You can also expect to see more work around integrations, like CodeGuru Secrets Detector identifying plaintext secrets in code (as noted previously).

To stay up-to-date in the year ahead on the latest product and feature launches and security use cases, be sure to read the Security service launch announcements. Additionally, stay tuned to the AWS Security Blog for Part 2 of this blog series, which will provide an overview of some of the important 2021 launches that security professionals should be aware of across all AWS services.

If you’re looking for more opportunities to learn about AWS security services, check out AWS re:Inforce, the AWS conference focused on cloud security, identity, privacy, and compliance, which will take place June 28-29 in Houston, Texas.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Ryan Holland

Ryan is a Senior Manager with GuardDuty Security Response. His team is responsible for ensuring GuardDuty provides the best security value to customers, including threat intelligence, behavioral analytics, and finding quality.

Author

Marta Taggart

Marta is a Seattle-native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

Using AWS security services to protect against, detect, and respond to the Log4j vulnerability

Post Syndicated from Marshall Jones original https://aws.amazon.com/blogs/security/using-aws-security-services-to-protect-against-detect-and-respond-to-the-log4j-vulnerability/

January 7, 2022: The blog post has been updated to include using Network ACL rules to block potential log4j-related outbound traffic.

January 4, 2022: The blog post has been updated to suggest using WAF rules when correct HTTP Host Header FQDN value is not provided in the request.

December 31, 2021: We made a minor update to the second paragraph in the Amazon Route 53 Resolver DNS Firewall section.

December 29, 2021: A paragraph under the Detect section has been added to provide guidance on validating if log4j exists in an environment.

December 23, 2021: The GuardDuty section has been updated to describe new threat labels added to specific finding to give log4j context.

December 21, 2021: The post includes more info about Route 53 Resolver DNS query logging.

December 20, 2021: The post has been updated to include Amazon Route 53 Resolver DNS Firewall info.

December 17, 2021: The post has been updated to include using Athena to query VPC flow logs.

December 16, 2021: The Respond section of the post has been updated to include IMDSv2 and container mitigation info.

This blog post was first published on December 15, 2021.


Overview

In this post we will provide guidance to help customers who are responding to the recently disclosed log4j vulnerability. This covers what you can do to limit the risk of the vulnerability, how you can try to identify if you are susceptible to the issue, and then what you can do to update your infrastructure with the appropriate patches.

The log4j vulnerability (CVE-2021-44228, CVE-2021-45046) is a critical vulnerability (CVSS 3.1 base score of 10.0) in the ubiquitous logging platform Apache Log4j. This vulnerability allows an attacker to perform a remote code execution on the vulnerable platform. Version 2 of log4j, between versions 2.0-beta-9 and 2.15.0, is affected.

The vulnerability uses the Java Naming and Directory Interface (JNDI), which a Java program uses to find data, typically through a directory, commonly an LDAP directory in the case of this vulnerability.

Figure 1, below, highlights the log4j JNDI attack flow.

Figure 1. Log4j attack progression. Source: GovCERT.ch, the Computer Emergency Response Team (GovCERT) of the Swiss government

As an immediate response, follow this blog and use the tool designed to hotpatch a running JVM that uses any Log4j version 2.0 or later. Steve Schmidt, Chief Information Security Officer for AWS, also discussed this hotpatch.

Protect

You can use multiple AWS services to help limit your risk/exposure from the log4j vulnerability. You can build a layered control approach, and/or pick and choose the controls identified below to help limit your exposure.

AWS WAF

Use AWS WAF, with AWS Managed Rules for AWS WAF, to help protect your Amazon CloudFront distribution, Amazon API Gateway REST API, Application Load Balancer, or AWS AppSync GraphQL API resources (an example of attaching one of these rule groups follows the list below).

  • AWSManagedRulesKnownBadInputsRuleSet, especially the Log4JRCE rule, which helps inspect requests for the presence of the Log4j vulnerability. Example patterns include ${jndi:ldap://example.com/}.
  • AWSManagedRulesAnonymousIpList, especially the AnonymousIPList rule, which helps inspect IP addresses of sources known to anonymize client information.
  • AWSManagedRulesCommonRuleSet, especially the SizeRestrictions_BODY rule, which verifies that the request body size is at most 8 KB (8,192 bytes).
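
As an example of attaching one of these rule groups, the following sketch creates a regional web ACL that evaluates AWSManagedRulesKnownBadInputsRuleSet; the ACL name and metric names are placeholders, and you would typically add the other rule groups alongside it:

# Regional web ACL evaluating the Known Bad Inputs managed rule group,
# which includes the Log4JRCE rule.
aws wafv2 create-web-acl \
  --name log4j-protection \
  --scope REGIONAL \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=log4j-protection \
  --rules '[{"Name":"known-bad-inputs","Priority":0,"Statement":{"ManagedRuleGroupStatement":{"VendorName":"AWS","Name":"AWSManagedRulesKnownBadInputsRuleSet"}},"OverrideAction":{"None":{}},"VisibilityConfig":{"SampledRequestsEnabled":true,"CloudWatchMetricsEnabled":true,"MetricName":"known-bad-inputs"}}]'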

You should also consider implementing WAF rules that deny access, if the correct HTTP Host Header FQDN value is not provided in the request. This can help reduce the likelihood of scanners that are scanning the internet IP address space from reaching your resources protected by WAF via a request with an incorrect Host Header, like an IP address instead of an FQDN. It’s also possible to use custom Application Load Balancer listener rules to achieve this.

If you’re using AWS WAF Classic, you will need to migrate to AWS WAF or create custom regex match conditions.

Have multiple accounts? Follow these instructions to use AWS Firewall Manager to deploy AWS WAF rules centrally across your AWS organization.

Amazon Route 53 Resolver DNS Firewall

You can use Route 53 Resolver DNS Firewall, following AWS Managed Domain Lists, to help proactively protect resources with outbound public DNS resolution. We recommend associating Route 53 Resolver DNS Firewall with a rule configured to block domains on the AWSManagedDomainsMalwareDomainList, which has been updated in all supported AWS regions with domains identified as hosting malware used in conjunction with the log4j vulnerability. AWS will continue to deliver domain updates for Route 53 Resolver DNS Firewall through this list.
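
A sketch of wiring the managed domain list into a DNS Firewall rule and associating it with a VPC; the rule group name, request IDs, priorities, and VPC ID are placeholders:

# Find the ID of the AWS-managed malware domain list.
LIST_ID=$(aws route53resolver list-firewall-domain-lists \
  --query "FirewallDomainLists[?Name=='AWSManagedDomainsMalwareDomainList'].Id" \
  --output text)

# Create a rule group with a BLOCK rule for that list.
RG_ID=$(aws route53resolver create-firewall-rule-group \
  --creator-request-id log4j-rg-1 \
  --name log4j-dns-firewall \
  --query 'FirewallRuleGroup.Id' --output text)

aws route53resolver create-firewall-rule \
  --creator-request-id log4j-rule-1 \
  --firewall-rule-group-id "$RG_ID" \
  --firewall-domain-list-id "$LIST_ID" \
  --priority 1 \
  --action BLOCK \
  --block-response NODATA \
  --name block-malware-domains

# Associate the rule group with the VPC to protect.
aws route53resolver associate-firewall-rule-group \
  --creator-request-id log4j-assoc-1 \
  --firewall-rule-group-id "$RG_ID" \
  --vpc-id vpc-0123456789abcdef0 \
  --priority 101 \
  --name log4j-dns-firewall-assoc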

Also, you should consider blocking outbound port 53 to prevent the use of external untrusted DNS servers. This helps force all DNS queries through DNS Firewall and ensures DNS traffic is visible for GuardDuty inspection. It may also help to use DNS Firewall to block DNS resolution of certain country-code top-level domains (ccTLDs) that your VPC resources have no legitimate reason to connect to. Examples of ccTLDs you may want to block may be included in the known log4j callback domain IOCs.

We also recommend that you enable DNS query logging, which allows you to identify and audit potentially impacted resources within your VPC, by inspecting the DNS logs for the presence of blocked outbound queries due to the log4j vulnerability, or to other known malicious destinations. DNS query logging is also useful in helping identify EC2 instances vulnerable to log4j that are responding to active log4j scans, which may be originating from malicious actors or from legitimate security researchers. In either case, instances responding to these scans potentially have the log4j vulnerability and should be addressed. GreyNoise is monitoring for log4j scans and sharing the callback domains here. Some notable domains customers may want to examine log activity for, but not necessarily block, are: *interact.sh, *leakix.net, *canarytokens.com, *dnslog.cn, *.dnsbin.net, and *cyberwar.nl. It is very likely that instances resolving these domains are vulnerable to log4j.
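
If your Resolver query logs are delivered to CloudWatch Logs, a sketch of surfacing blocked lookups with CloudWatch Logs Insights follows; the log group name is a placeholder, and the field names assume the Resolver query log format:

# Query the last 24 hours of Resolver logs for lookups blocked by
# DNS Firewall (the date flags assume GNU date).
aws logs start-query \
  --log-group-name /aws/route53resolver/example-query-logs \
  --start-time $(date -d '24 hours ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, query_name, srcaddr | filter firewall_rule_action = "BLOCK" | sort @timestamp desc | limit 100'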

AWS Network Firewall

Customers can use Suricata-compatible IDS/IPS rules in AWS Network Firewall to deploy network-based detection and protection. While Suricata doesn’t have a protocol detector for LDAP, it is possible to detect these LDAP calls with Suricata. Open-source Suricata rules addressing Log4j are available from Corelight, NCC Group, ET Labs, and CrowdStrike. These rules can help identify scanning, as well as post-exploitation of the log4j vulnerability. Because there is a large amount of benign scanning happening now, we recommend customers focus their time first on potential post-exploitation activities, such as outbound LDAP traffic from their VPC to untrusted internet destinations.

We also recommend customers consider implementing outbound port/protocol enforcement rules that monitor or prevent instances of protocols like LDAP from using non-standard LDAP ports such as 53, 80, 123, and 443. Monitoring or preventing usage of port 1389 outbound may be particularly helpful in identifying systems that have been triggered by internet scanners to make command and control calls outbound. We also recommend that systems without a legitimate business need to initiate network calls out to the internet not be given that ability by default. Outbound network traffic filtering and monitoring is not only very helpful with log4j, but with identifying other classes of vulnerabilities too.
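
As a sketch, one such enforcement rule can be expressed as a Suricata-compatible stateful rule group; the rule group name, capacity, and SID are placeholders, and you would add similar rules for the other ports:

# Stateful rule group that drops outbound TCP to port 1389, a port
# commonly seen in Log4j LDAP callbacks.
aws network-firewall create-rule-group \
  --rule-group-name drop-log4j-callback-ports \
  --type STATEFUL \
  --capacity 10 \
  --rules 'drop tcp $HOME_NET any -> $EXTERNAL_NET 1389 (msg:"Possible Log4j LDAP callback"; sid:1000001; rev:1;)'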

Network Access Control Lists

Customers may be able to use Network Access Control List rules (NACLs) to block some of the known log4j-related outbound ports to help limit further compromise of successfully exploited systems. We recommend customers consider blocking ports 1389, 1388, 1234, 12344, 9999, 8085, 1343 outbound. As NACLs block traffic at the subnet level, careful consideration should be given to ensure any new rules do not block legitimate communications using these outbound ports across internal subnets. Blocking ports 389 and 88 outbound can also be helpful in mitigating log4j, but those ports are commonly used for legitimate applications, especially in a Windows Active Directory environment. See the VPC flow logs section below to get details on how you can validate any ports being considered.
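
A sketch of adding one such outbound deny entry; the NACL ID and rule number are placeholders, and you would repeat the call for each port after validating it against your flow logs:

# Deny outbound TCP to port 1389 at the subnet level.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --egress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=1389,To=1389 \
  --cidr-block 0.0.0.0/0 \
  --rule-action deny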

Use IMDSv2

Through the early days of the log4j vulnerability, researchers have noted that, once a host has been compromised with the initial JNDI vulnerability, attackers sometimes try to harvest credentials from the host and send them out via some mechanism such as LDAP, HTTP, or DNS lookups. We recommend customers use IAM roles instead of long-term access keys, and not store sensitive information such as credentials in environment variables. Customers can also leverage AWS Secrets Manager to store and automatically rotate database credentials instead of storing long-term database credentials in a host’s environment variables. See prescriptive guidance here and here on how to implement Secrets Manager in your environment.

To help guard against such attacks in AWS when EC2 Roles may be in use — and to help keep all IMDS data private for that matter — customers should consider requiring the use of Instance MetaData Service version 2 (IMDSv2). Since IMDSv2 is enabled by default, you can require its use by disabling IMDSv1 (which is also enabled by default). With IMDSv2, requests are protected by an initial interaction in which the calling process must first obtain a session token with an HTTP PUT, and subsequent requests must contain the token in an HTTP header. This makes it much more difficult for attackers to harvest credentials or any other data from the IMDS. For more information about using IMDSv2, please refer to this blog and documentation. While all recent AWS SDKs and tools support IMDSv2, as with any potentially non-backwards compatible change, test this change on representative systems before deploying it broadly.
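
A sketch of requiring IMDSv2 on an existing instance, followed by the token-based call pattern IMDSv2 uses; the instance ID is a placeholder:

# Make token-backed (IMDSv2) requests mandatory for this instance.
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-endpoint enabled

# IMDSv2 pattern: obtain a session token with an HTTP PUT, then pass
# it as a header on subsequent metadata requests.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/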

Detect

This post has covered how to potentially limit the ability to exploit this vulnerability. Next, we’ll shift our focus to which AWS services can help to detect whether this vulnerability exists in your environment.

Figure 2. Log4j finding in the Inspector console

Amazon Inspector

As shown in Figure 2, the Amazon Inspector team has created coverage for identifying the existence of this vulnerability in your Amazon EC2 instances and your container images in Amazon Elastic Container Registry (Amazon ECR). With the new Amazon Inspector, scanning is automated and continual. Continual scanning is driven by events such as new software packages, new instances, and new Common Vulnerabilities and Exposures (CVE) entries being published.

For example, once the Inspector team added support for the log4j vulnerability (CVE-2021-44228 and CVE-2021-45046), Inspector immediately began looking for this vulnerability on all supported AWS Systems Manager managed instances where Log4j was installed via OS package managers, and in Maven-compatible Amazon ECR container images where the package was present. If this vulnerability is present, findings will begin appearing without any manual action. If you are using Inspector Classic, you will need to ensure you are running an assessment against all of your Amazon EC2 instances. You can follow this documentation to ensure you are creating an assessment target for all of your Amazon EC2 instances. Here are further details on container scanning updates in Amazon ECR private registries.

GuardDuty

In addition to finding the presence of this vulnerability through Inspector, the Amazon GuardDuty team has also begun adding indicators of compromise associated with exploiting the Log4j vulnerability, and will continue to do so. GuardDuty will monitor for attempts to reach known-bad IP addresses or DNS entries, and can also find post-exploit activity through anomaly-based behavioral findings. For example, if an Amazon EC2 instance starts communicating on unusual ports, GuardDuty would detect this activity and create the finding Behavior:EC2/NetworkPortUnusual. This activity is not limited to the NetworkPortUnusual finding, though. GuardDuty has a number of different findings associated with post exploit activity, such as credential compromise, that might be seen in response to a compromised AWS resource. For a list of GuardDuty findings, please refer to this GuardDuty documentation.

To further help you identify and prioritize issues related to CVE-2021-44228 and CVE-2021-45046, the GuardDuty team has added threat labels to the finding detail for the following finding types:

Backdoor:EC2/C&CActivity.B
If the IP queried is Log4j-related, then fields of the associated finding will include the following values:

  • service.additionalInfo.threatListName = Amazon
  • service.additionalInfo.threatName = Log4j Related

Backdoor:EC2/C&CActivity.B!DNS
If the domain name queried is Log4j-related, then the fields of the associated finding will include the following values:

  • service.additionalInfo.threatListName = Amazon
  • service.additionalInfo.threatName = Log4j Related

Behavior:EC2/NetworkPortUnusual
If the EC2 instance communicated on port 389 or port 1389, then the associated finding severity will be modified to High, and the finding fields will include the following value:

  • service.additionalInfo.context = Possible Log4j callback

Figure 3. GuardDuty finding with log4j threat labels

Security Hub

Many customers today also use AWS Security Hub with Inspector and GuardDuty to aggregate alerts and enable automatic remediation and response. In the short term, we recommend that you use Security Hub to set up alerting through AWS Chatbot, Amazon Simple Notification Service, or a ticketing system for visibility when Inspector finds this vulnerability in your environment. In the long term, we recommend you use Security Hub to enable automatic remediation and response for security alerts when appropriate. Here are ideas on how to set up automatic remediation and response with Security Hub.

VPC flow logs

Customers can use Athena or CloudWatch Logs Insights queries against their VPC flow logs to help identify VPC resources associated with log4j post exploitation outbound network activity. Version 5 of VPC flow logs is particularly useful, because it includes the “flow-direction” field. We recommend customers start by paying special attention to outbound network calls using destination port 1389 since outbound usage of that port is less common in legitimate applications. Customers should also investigate outbound network calls using destination ports 1388, 1234, 12344, 9999, 8085, 1343, 389, and 88 to untrusted internet destination IP addresses. Free-tier IP reputation services, such as VirusTotal, GreyNoise, NOC.org, and ipinfo.io, can provide helpful insights related to public IP addresses found in the logged activity.

Note: If you have a Microsoft Active Directory environment in the captured VPC flow logs being queried, you might see false positives due to its use of port 389.
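
If your flow logs are already queryable in Athena, a starting-point query might look like the following sketch; the database, table, and output location are placeholders, and the flow_direction column assumes a version 5 flow log table:

# Outbound flows to ports associated with log4j callbacks, grouped to
# highlight the busiest source/destination pairs.
aws athena start-query-execution \
  --query-execution-context Database=default \
  --result-configuration OutputLocation=s3://example-athena-results/ \
  --query-string "SELECT srcaddr, dstaddr, dstport, COUNT(*) AS flows FROM vpc_flow_logs WHERE flow_direction = 'egress' AND dstport IN (1389, 1388, 1234, 12344, 9999, 8085, 1343) GROUP BY srcaddr, dstaddr, dstport ORDER BY flows DESC LIMIT 50"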

Validation with open-source tools

With the evolving nature of the different log4j vulnerabilities, it’s important to validate that upgrades, patches, and mitigations in your environment are indeed working to mitigate potential exploitation of the log4j vulnerability. You can use open-source tools, such as aws_public_ips, to get a list of all your current public IP addresses for an AWS Account, and then actively scan those IPs with log4j-scan using a DNS Canary Token to get notification of which systems still have the log4j vulnerability and can be exploited. We recommend that you run this scan periodically over the next few weeks to validate that any mitigations are still in place, and no new systems are vulnerable to the log4j issue.
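
A sketch of that validation loop; both tools are third-party, their flags may differ between versions, and the Canary Token hostname is a placeholder you generate yourself:

# Enumerate the account's public IPs, then scan them with log4j-scan
# using a DNS Canary Token to flag successful JNDI callbacks.
gem install aws_public_ips
aws_public_ips --format text > public-ips.txt

git clone https://github.com/fullhunt/log4j-scan.git
python3 log4j-scan/log4j-scan.py \
  --url-list public-ips.txt \
  --custom-dns-callback-host xyz123.canarytokens.com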

Respond

The first two sections have discussed ways to help prevent potential exploitation attempts, and how to detect the presence of the vulnerability and potential exploitation attempts. In this section, we will focus on steps that you can take to mitigate this vulnerability. As we noted in the overview, the immediate response recommended is to follow this blog and use the tool designed to hotpatch a running JVM using any log4j 2.0+. Steve Schmidt, Chief Information Security Officer for AWS, also discussed this hotpatch.

Figure 4. Systems Manager Patch Manager patch baseline approving critical patches immediately

AWS Patch Manager

If you use AWS Systems Manager Patch Manager, and you have critical patches set to install immediately in your patch baseline, your EC2 instances will already have the patch. It is important to note that you’re not done at this point. Next, you will need to update the class path wherever the library is used in your application code, to ensure you are using the most up-to-date version. You can use AWS Patch Manager to patch managed nodes in a hybrid environment. See here for further implementation details.
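
To confirm that an instance actually picked up the patch, you can check its patch state through Systems Manager; a quick sketch, with a placeholder instance ID:

# Summarize patch compliance for a managed instance.
aws ssm describe-instance-patch-states \
  --instance-ids i-0123456789abcdef0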

Container mitigation

To install the hotpatch noted in the overview onto EKS cluster worker nodes, AWS has developed an RPM that performs a JVM-level hotpatch to disable JNDI lookups from the Log4j2 library. The Apache Log4j2 node agent is an open-source project built by the Kubernetes team at AWS. To learn more about how to install this node agent, please visit this GitHub page.

Once identified, ECR container images will need to be updated to use the patched log4j version. Downstream, you will need to ensure that any containers built with a vulnerable ECR container image are updated to use the new image as soon as possible. This can vary depending on the service you are using to deploy these images. For example, if you are using Amazon Elastic Container Service (Amazon ECS), you might want to update the service to force a new deployment, which will pull down the image using the new log4j version. Check the documentation that supports the method you use to deploy containers.
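
For the Amazon ECS case mentioned above, forcing a new deployment so that tasks pull the rebuilt image is a single call; the cluster and service names are placeholders:

# Restart tasks so they pull the image built with the patched log4j.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --force-new-deployment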

If you’re running Java-based applications on Windows containers, follow Microsoft’s guidance here.

We recommend you vend new application credentials and revoke existing credentials immediately after patching.

Mitigation strategies if you can’t upgrade

In case you either can’t upgrade to a patched version, which disables access to JNDI by default, or you are still determining your strategy for how you are going to patch your environment, you can mitigate this vulnerability by changing your log4j configuration. To implement this mitigation in releases >=2.10, remove the JndiLookup class from the classpath:

zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class

For a more comprehensive list about mitigation steps for specific versions, refer to the Apache website.

Conclusion

In this blog post, we outlined key AWS security services that enable you to adopt a layered approach to help protect against, detect, and respond to your risk from the log4j vulnerability. We urge you to continue to monitor our security bulletins; we will continue updating our bulletins with our remediation efforts for our side of the shared-responsibility model.

Given the criticality of this vulnerability, we urge you to pay close attention to the vulnerability, and appropriately prioritize implementing the controls highlighted in this blog.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Syed Shareef

Syed is a Senior Security Solutions Architect at AWS. He works with large financial institutions to help them achieve their business goals with AWS, whilst being compliant with regulatory and security requirements.

Improved, Automated Vulnerability Management for Cloud Workloads with a New Amazon Inspector

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/improved-automated-vulnerability-management-for-cloud-workloads-with-a-new-amazon-inspector/

Amazon Inspector is a service used by organizations of all sizes to automate security assessment and management at scale. Amazon Inspector helps organizations meet security and compliance requirements for workloads deployed to AWS, scanning for unintended network exposure, software vulnerabilities, and deviations from application security best practices.

Since the original launch of Amazon Inspector in 2015, vulnerability management for cloud customers has changed considerably. Over the last six years, the team delivered several new customer-requested features, including assessment reporting, support for proxy environments, and integration with Amazon CloudWatch Metrics. However, the team also recognized that there were new requirements to meet – enabling frictionless deployment at scale, support for an expanded set of resource types needing assessment, and a critical need to detect and remediate at speed. Today I’m happy to announce a new Amazon Inspector, able to meet these requirements with the following features:

  • Continual, automated assessment scans—replace periodic, manual scanning.
  • Automated resource discovery – once enabled, the new Amazon Inspector automatically discovers all running Amazon Elastic Compute Cloud (Amazon EC2) instances and Amazon Elastic Container Registry repositories.
  • New support for container-based workloads—workloads are now assessed on both EC2 and container infrastructure.
  • Integration with AWS Organizations—allowing security and compliance teams to enable and take advantage of Amazon Inspector across all accounts in an organization.
  • Removal of the stand-alone Amazon Inspector scanning agent—assessment scanning now uses the widely deployed AWS Systems Manager agent, eliminating the need for a separate agent installation.
  • Improved risk scoring—a highly contextualized risk score is now generated for each finding by correlating Common Vulnerability and Exposures (CVE) metadata with environmental factors for resources, such as network accessibility. This makes it easier to identify the most critical vulnerabilities to address as a priority.
  • Integration with Amazon EventBridge—integrate with event management and workflow systems such as Splunk and Jira. And, you can trigger automated remediation, for example, system patching using Systems Manager or virtual machine image rebuilds using EC2 Image Builder.
  • Integration with AWS Security Hub—helping your teams to more easily identify those resources with critical vulnerabilities or deviations from security best practices.

Automatically Assessing your Workloads with Amazon Inspector
Tens of thousands of vulnerabilities exist, with new ones being discovered and made public on a regular basis. With this continually growing threat, manual assessment can leave customers unaware of an exposure, and thus potentially vulnerable, between assessments. Additionally, customers with manual processes for managing their inventories of application resources, the deployment of stand-alone security agents on those resources, and the scheduling of periodic assessments may find the whole process to be a costly and time-consuming exercise. That’s before they have to sift through the mass of assessment findings to determine the most critical issues to address.

With the new Amazon Inspector, all you need to do is enable the service. It will auto-discover and start continual assessment of your EC2 and your Amazon Elastic Container Registry-based container workloads to evaluate your security posture, even as the underlying resources change.

EC2 instances are discovered and assessed for unintended exposure to external networks and software vulnerabilities using the Systems Manager agent, already included by default in images provided by AWS for instance management, automated patching, and more. Container-based workloads are assessed as the images are pushed to Amazon Elastic Container Registry. Without needing additional software or agents, container images and EC2 instances are assessed in near real time when an event occurs.

Automated assessment is driven by changes in workload configuration and newly published vulnerabilities, to ensure resources are only assessed when needed. The new Amazon Inspector collects events from over 50 vulnerability intelligence sources, including CVE, the National Vulnerability Database (NVD), and MITRE. Images that may be affected by a newly identified entry, for example a new CVE notification, are automatically rescanned. Image rescanning is enabled for 30 days from the date an image is pushed to the registry. You can also enable an option to only scan on image push and not perform subsequent rescans.
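
This scanning behavior is configurable at the registry level; the following sketch opts the whole registry in to enhanced scanning with continual rescans:

# Use Amazon Inspector enhanced scanning with continual rescans for
# every repository in the registry.
aws ecr put-registry-scanning-configuration \
  --scan-type ENHANCED \
  --rules '[{"scanFrequency":"CONTINUOUS_SCAN","repositoryFilters":[{"filter":"*","filterType":"WILDCARD"}]}]'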

Summary page for Amazon Inspector

Selecting either Accounts, Instances, or Repositories from your Dashboard page takes you to a detail summary for the selected resource. Below, I’m viewing summary data for EC2 instances across a couple of accounts.

Viewing instances scanned by Amazon Inspector across accounts

If vulnerabilities are found, you receive actionable assessment findings in a report. Starting today, these findings are summarized with enhanced risk scoring and improved resource detail to help you prioritize the most at-risk resources needing to be addressed. Also new today, the Amazon Inspector console has been redesigned to surface all findings and recommendations for remediation.

Vulnerabilities in container images are also sent to Amazon Elastic Container Registry to be summarized for the owner. And, as I noted earlier, new integrations with AWS Security Hub and Amazon EventBridge allow findings to be sent downstream for additional visibility and remediation by automated workflows. For example, automation can be created to isolate instances, trigger system patching, software image rebuilds, and more. The availability of multiple integration points makes it easier for security and application teams to collaborate to manage remediation. Below, I’m viewing findings from Amazon Inspector in the AWS Security Hub console.

Viewing findings from Amazon Inspector in the Security Hub console

Assessments can result in hundreds of thousands of findings, or more, that need to be filtered and sifted to determine the most critical to act on. Also available today, organizations can determine which of the findings they consider to be acceptable and mark those findings for temporary or permanent suppression. This helps reduce the volume of alerts, further assisting with prioritization and automated remediation. Suppression filters can be set from several screens. Rules specify one or more filters, such as Severity, that will cause findings that match the filters to be removed from display. When defining rules, a list is shown of the findings that will be suppressed, helping you fine-tune the filter values to match your specific needs.
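
Suppression rules can also be created programmatically; a sketch that suppresses informational-severity findings, with a placeholder rule name:

# Suppress findings with INFORMATIONAL severity.
aws inspector2 create-filter \
  --name suppress-informational \
  --action SUPPRESS \
  --filter-criteria '{"severity":[{"comparison":"EQUALS","value":"INFORMATIONAL"}]}'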

Setting up suppression rules for Amazon Inspector findings

I mentioned earlier that the new Amazon Inspector implements a contextualized risk assessment score for findings. The screenshot below shows an example of Amazon Inspector’s risk assessment score compared to a generic Common Vulnerability Scoring System (CVSS) score. Contextual risk assessment takes into account additional factors, such as accessibility from the internet and ease of exploitability, to make the score more meaningful. In the image below, Amazon Inspector’s risk assessment score is lower than the CVSS score because the attack vector requires network access. Amazon Inspector knows that the vulnerability identified in GNOME GLib will be difficult to exploit because this resource has no network access, so it lowered the risk score.

Risk assessment score

Start a Free Trial with Amazon Inspector Today
The new Amazon Inspector is available now in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo) Regions.

Amazon Inspector offers a free 15-day trial, so you can put it to work to see how Amazon Inspector can help your security and compliance teams reduce operational complexity and cost associated with managing resource inventories, stand-alone security agents, and repetitive manual assessments.

— Steve

Deploying CIS Level 1 hardened AMIs with Amazon EC2 Image Builder

Post Syndicated from Joseph Keating original https://aws.amazon.com/blogs/devops/deploying-cis-level-1-hardened-amis-with-amazon-ec2-image-builder/

The NFL is collaborating with AWS Professional Services and the NFL’s Player Health and Safety team to build the Digital Athlete Program. The Digital Athlete Program is working to drive progress in the prevention, diagnosis, and treatment of injuries; enhance medical protocols; and further improve the way football is taught and played. The NFL, in conjunction with AWS Professional Services, delivered an Amazon EC2 Image Builder pipeline for automating the production of Amazon Machine Images (AMIs). Following similar practices from the Digital Athlete Program, this post demonstrates how to deploy an automated Image Builder pipeline.

“AWS Professional Services faced unique environment constraints, but was able to deliver a modular pipeline solution leveraging EC2 Image Builder. The framework serves as a foundation to create hardened images for future use cases. The team also provided documentation and knowledge transfer sessions to ensure our team was set up to successfully manage the solution.”

—Joseph Steinke, Director, Data Solutions Architect, National Football League

A common scenario AWS customers face is how to build processes that configure secure AWS resources that can be leveraged throughout the organization. You need to move fast in the cloud without compromising security best practices. Amazon Elastic Compute Cloud (Amazon EC2) allows you to deploy virtual machines in the AWS Cloud. EC2 AMIs provide the configuration utilized to launch an EC2 instance. You can use AMIs for several use cases, such as configuring applications, applying security policies, and configuring development environments. Developers and system administrators can deploy configuration AMIs to bring up EC2 resources that require little-to-no setup. Oftentimes, multiple patterns are adopted for building and deploying AMIs. Because of this, you need the ability to create a centralized, automated pattern that can output secure, customizable AMIs.

In this post, we demonstrate how to create an automated process that builds and deploys Center for Internet Security (CIS) Level 1 hardened AMIs. The pattern that we deploy includes Image Builder, a CIS Level 1 hardened AMI, an application running on EC2 instances, and Amazon Inspector for security analysis. You deploy the AMI configured with the Image Builder pipeline to an application stack. The application stack consists of EC2 instances running Nginx. Lastly, we show you how to re-hydrate your application stack with a new AMI utilizing AWS CloudFormation and Amazon EC2 launch templates. You use Amazon Inspector to scan the EC2 instances launched from the Image Builder-generated AMI against the CIS Level 1 Benchmark.

After going through this exercise, you should understand how to build, manage, and deploy AMIs to an application stack. The infrastructure deployed with this pipeline includes a basic web application, but you can use this pattern to fit many needs. After running through this post, you should feel comfortable using this pattern to configure an AMI pipeline for your organization.

The project we create in this post addresses the following use case: you need a process for building and deploying CIS Level 1 hardened AMIs to an application stack running on Amazon EC2. In addition to demonstrating how to deploy the AMI pipeline, we also illustrate how to refresh a running application stack with a new AMI. You learn how to deploy this configuration with the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

AWS services used
Image Builder allows you to develop an automated workflow for creating AMIs to fit your organization’s needs. You can streamline the creation and distribution of secure images, automate your patching process, and define security and application configuration into custom AWS AMIs. In this post, you use the following AWS services to implement this solution:

  • AWS CloudFormation – AWS CloudFormation allows you to use domain-specific languages or simple text files to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. You can deploy AWS resources in a safe, repeatable manner, and automate the provisioning of infrastructure.
  • AWS KMS – AWS Key Management Service (AWS KMS) is a fully managed service for creating and managing cryptographic keys. These keys are natively integrated with most AWS services. You use a KMS key in this post to encrypt resources.
  • Amazon S3 – Amazon Simple Storage Service (Amazon S3) is an object storage service utilized for storing and encrypting data. We use Amazon S3 to store our configuration files.
  • AWS Auto Scaling – AWS Auto Scaling allows you to build scaling plans that automate how groups of different resources respond to changes in demand. You can optimize availability, costs, or a balance of both. We use Auto Scaling to manage Nginx on Amazon EC2.
  • Launch templates – Launch templates contain configurations such as AMI ID, instance type, and security group. Launch templates enable you to store launch parameters so that they don’t have to be specified every time instances are launched.
  • Amazon Inspector – This automated security assessment service improves the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposures, vulnerabilities, and deviations from best practices.

Architecture overview
We use Ansible as a configuration management component alongside Image Builder. The CIS Ansible Playbook applies a Level 1 set of rules to the local host on which the AMI is provisioned. For more information about the Ansible Playbook, see the GitHub repo. Image Builder also offers Security Technical Implementation Guide (STIG) components, at levels from low to high, as part of its pipeline builds.

The following diagram depicts the phases of the Image Builder pipeline for building a Nginx web server. The numbers 1–6 represent the order in which each phase runs in the build process:

  1. Source
  2. Build components
  3. Validate
  4. Test
  5. Distribute
  6. AMI

Figure: Shows the EC2 Image Builder steps

The workflow includes the following steps:

  1. Deploy the CloudFormation templates.
  2. The template creates an Image Builder pipeline.
  3. AWS Systems Manager completes the AMI build process.
  4. Amazon EC2 starts an instance to build the AMI.
  5. Systems Manager starts a test instance build after the first build is successful.
  6. The AMI starts provisioning.
  7. The Amazon Inspector CIS benchmark starts.

CloudFormation templates
You deploy the following CloudFormation templates, each of which contains a great deal of configuration:

  • vpc.yml – Contains all the core networking configuration. It deploys the VPC, two private subnets, two public subnets, and the route tables. The private subnets use a NAT gateway to communicate with the internet. The public subnets have full outbound access through the internet gateway (IGW).
  • kms.yml – Contains the AWS KMS configuration that we use for encrypting resources. The KMS key policy is also configured in this template.
  • s3-iam-config.yml – Contains the Amazon S3 bucket and IAM role configuration. The S3 bucket stores the Ansible playbook and component files that the pipeline downloads, and the IAM roles are used by Amazon EC2 and Image Builder.
  • infrastructure-ssm-params.yml – Contains the Systems Manager parameter store configuration. The parameters are populated by using outputs from other CloudFormation templates.
  • nginx-config.yml – Contains the configuration for Nginx. Additionally, this template contains the network load balancer, target groups, security groups, and EC2 instance AWS Identity and Access Management (IAM) roles.
  • nginx-image-builder.yml – Contains the configuration for the Image Builder pipeline that we use to build AMIs.

Prerequisites
To follow the steps to provision the pipeline deployment, you must have the following prerequisites:

Deploying the CloudFormation templates
To deploy your templates, complete the following steps:

1. Clone the source code repository found in the following location:

git clone https://github.com/aws-samples/deploy-cis-level-1-hardened-ami-with-ec2-image-builder-pipeline.git

You now use the AWS CLI to deploy the CloudFormation templates. Make sure to leave the CloudFormation template names as we have written in this post.

2. Deploy the VPC CloudFormation template:

aws cloudformation create-stack \
--stack-name vpc-config \
--template-body file://Templates/vpc.yml \
--parameters file://Parameters/vpc-params.json  \
--capabilities CAPABILITY_IAM \
--region us-east-1

The output should look like the following code:

{
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/vpc-config/7faaab30-247f-11eb-8712-0e65b6fb18f9"
}

3. Open the Parameters/kms-params.json file and update the UserARN parameter with your account ID:

[
  {
      "ParameterKey": "KeyName",
      "ParameterValue": "DemoKey"
  },
  {
    "ParameterKey": "UserARN",
    "ParameterValue": "arn:aws:iam::<input_your_account_id>:root"
  }
]

4. Deploy the KMS key CloudFormation template:

aws cloudformation create-stack \
--stack-name kms-config \
--template-body file://Templates/kms.yml \
--parameters file://Parameters/kms-params.json \
--capabilities CAPABILITY_IAM \
--region us-east-1

The output should look like the following:

{
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/kms-config/f65aca80-08ff-11eb-8795-12275bc6e1ef"
}

5. Open the Parameters/s3-iam-config.json file and update the DemoConfigS3BucketName parameter to a unique name of your choosing:

[
  {
    "ParameterKey" : "Environment",
    "ParameterValue" : "dev"
  },
  {
    "ParameterKey": "NetworkStackName",
    "ParameterValue" : "vpc-config"
  },
  {
    "ParameterKey" : "KMSStackName",
    "ParameterValue" : "kms-config"
  },
  {
    "ParameterKey": "DemoConfigS3BucketName",
    "ParameterValue" : "<input_your_unique_bucket_name>"
  },
  {
    "ParameterKey" : "EC2InstanceRoleName",
    "ParameterValue" : "EC2InstanceRole"
  }
]

6. Deploy the IAM role configuration template:

aws cloudformation create-stack \
--stack-name s3-iam-config \
--template-body file://Templates/s3-iam-config.yml \
--parameters file://Parameters/s3-iam-config.json \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-1

The output should look like the following:

{
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/s3-iam-config/9be9f990-0909-11eb-811c-0a78092beb51"
}

Configuring IAM roles and policies

This solution uses a couple of service-linked roles. Let’s generate these roles using the AWS CLI.

1. Run the following commands:

aws iam create-service-linked-role --aws-service-name autoscaling.amazonaws.com
aws iam create-service-linked-role --aws-service-name imagebuilder.amazonaws.com

If you see a message similar to the following code, it means that you already have the service-linked role created in your account and you can move on to the next step:

An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation: Service role name AWSServiceRoleForImageBuilder has been taken in this account, please try a different suffix.

Now that you have generated the IAM roles used in this post, you add them to the KMS key policy. This allows the roles to encrypt and decrypt the KMS key.

2. Open the Parameters/kms-params.json file:

[
  {
      "ParameterKey": "KeyName",
      "ParameterValue": "DemoKey"
  },
  {
    "ParameterKey": "UserARN",
    "ParameterValue": "arn:aws:iam::12345678910:root"
  }
]

3. Add the following values as a comma-separated list to the UserARN parameter key:

arn:aws:iam::<input_your_aws_account_id>:role/EC2InstanceRole
arn:aws:iam::<input_your_aws_account_id>:role/EC2ImageBuilderRole
arn:aws:iam::<input_your_aws_account_id>:role/NginxS3PutLambdaRole
arn:aws:iam::<input_your_aws_account_id>:role/aws-service-role/imagebuilder.amazonaws.com/AWSServiceRoleForImageBuilder
arn:aws:iam::<input_your_aws_account_id>:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling

When finished, the file should look similar to the following:

[
  {
      "ParameterKey": "KeyName",
      "ParameterValue": "DemoKey"
  },
  {
    "ParameterKey": "UserARN",
    "ParameterValue": "arn:aws:iam::123456789012:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling,arn:aws:iam::<input_your_aws_account_id>:role/NginxS3PutLambdaRole,arn:aws:iam::123456789012:role/aws-service-role/imagebuilder.amazonaws.com/AWSServiceRoleForImageBuilder,arn:aws:iam::12345678910:role/EC2InstanceRole,arn:aws:iam::12345678910:role/EC2ImageBuilderRole,arn:aws:iam::12345678910:root"
  }
]

Updating the CloudFormation stack

Now that the AWS KMS parameter file has been updated, you update the AWS KMS CloudFormation stack.

1. Run the following command to update the kms-config stack:

aws cloudformation update-stack \
--stack-name kms-config \
--template-body file://Templates/kms.yml \
--parameters file://Parameters/kms-params.json \
--capabilities CAPABILITY_IAM \
--region us-east-1

The output should look like the following:

{
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/kms-config/6e84b750-0905-11eb-b543-0e4dccb471bf"
}

2. Open the AnsibleConfig/component-nginx.yml file and update the <input_s3_bucket_name> value with the bucket name you generated from the s3-iam-config stack:

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
name: 'Ansible Playbook Execution on Amazon Linux 2'
description: 'This is a sample component that demonstrates how to download and execute an Ansible playbook against Amazon Linux 2.'
schemaVersion: 1.0
constants:
  - s3bucket:
      type: string
      value: <input_s3_bucket_name>
phases:
  - name: build
    steps:
      - name: InstallAnsible
        action: ExecuteBash
        inputs:
          commands:
           - sudo amazon-linux-extras install -y ansible2
      - name: CreateDirectory
        action: ExecuteBash
        inputs:
          commands:
            - sudo mkdir -p /ansibleloc/roles
      - name: DownloadLinuxCis
        action: S3Download
        inputs:
          - source: 's3://{{ s3bucket }}/components/linux-cis.zip'
            destination: '/ansibleloc/linux-cis.zip'
      - name: UnzipLinuxCis
        action: ExecuteBash
        inputs:
          commands:
            - unzip /ansibleloc/linux-cis.zip -d /ansibleloc/roles
            - echo "unzip linux-cis file"
      - name: DownloadCisPlaybook
        action: S3Download
        inputs:
          - source: 's3://{{ s3bucket }}/components/cis_playbook.yml'
            destination: '/ansibleloc/cis_playbook.yml'
      - name: InvokeCisAnsible
        action: ExecuteBinary
        inputs:
          path: ansible-playbook
          arguments:
            - '{{ build.DownloadCisPlaybook.inputs[0].destination }}'
            - '--tags=level1'
      - name: DeleteCisPlaybook
        action: ExecuteBash
        inputs:
          commands:
            - rm '{{ build.DownloadCisPlaybook.inputs[0].destination }}'
      - name: DownloadNginx
        action: S3Download
        inputs:
          - source: 's3://{{ s3bucket }}/components/nginx.zip'
            destination: '/ansibleloc/nginx.zip'
      - name: UnzipNginx
        action: ExecuteBash
        inputs:
          commands:
            - unzip /ansibleloc/nginx.zip -d /ansibleloc/roles
            - echo "unzip Nginx file"
      - name: DownloadNginxPlaybook
        action: S3Download
        inputs:
          - source: 's3://{{ s3bucket }}/components/nginx_playbook.yml'
            destination: '/ansibleloc/nginx_playbook.yml'
      - name: InvokeNginxAnsible
        action: ExecuteBinary
        inputs:
          path: ansible-playbook
          arguments:
            - '{{ build.DownloadNginxPlaybook.inputs[0].destination }}'
      - name: DeleteNginxPlaybook
        action: ExecuteBash
        inputs:
          commands:
            - rm '{{ build.DownloadNginxPlaybook.inputs[0].destination }}'

  - name: validate
    steps:
      - name: ValidateDebug
        action: ExecuteBash
        inputs:
          commands:
            - sudo echo "ValidateDebug section"

  - name: test
    steps:
      - name: TestDebug
        action: ExecuteBash
        inputs:
          commands:
            - sudo echo "TestDebug section"
      - name: Download_Inspector_Test
        action: S3Download
        inputs:
          - source: 's3://ec2imagebuilder-managed-resources-us-east-1-prod/components/inspector-test-linux/1.0.1/InspectorTest'
            destination: '/workdir/InspectorTest'
      - name: Set_Executable_Permissions
        action: ExecuteBash
        inputs:
          commands:
            - sudo chmod +x /workdir/InspectorTest
      - name: ExecuteInspectorAssessment
        action: ExecuteBinary
        inputs:
          path: '/workdir/InspectorTest'

 

Adding files to your S3 buckets

Now you assume a role you generated from one of the previous CloudFormation stacks. This allows you to add files to the encrypted S3 bucket.

1. Run the following command and make sure to update the role ARN with your AWS account ID:

aws sts assume-role --role-arn "arn:aws:iam::<input_your_aws_account_id>:role/EC2ImageBuilderRole" --role-session-name AWSCLI-Session

You see an output similar to the following:

{
    "Credentials": {
        "AccessKeyId": "<AWS_ACCESS_KEY_ID>",
        "SecretAccessKey": "<AWS_SECRET_ACCESS_KEY>",
        "SessionToken": "<AWS_SESSION_TOKEN>",
        "Expiration": "2020-11-20T02:54:17Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROATGCCLSNJCNSJCEWZ:AWSCLI-Session",
        "Arn": "arn:aws:sts::123456789012:assumed-role/EC2ImageBuilderRole/AWSCLI-Session"
    }
}

You now assume the EC2ImageBuilderRole IAM role from the command line. This role allows you to create objects in the S3 bucket generated from the s3-iam-config stack. Because this bucket is encrypted with AWS KMS, any user or IAM role requires specific permissions on the KMS key in order to decrypt objects. You have already accounted for this in a previous step by adding the EC2ImageBuilderRole IAM role to the KMS key policy.

 

2. Create the following environment variables to use the EC2ImageBuilderRole role. Update the values with the output from the previous step:

export AWS_ACCESS_KEY_ID=AccessKeyId
export AWS_SECRET_ACCESS_KEY=SecretAccessKey
export AWS_SESSION_TOKEN=SessionToken
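
If you prefer to avoid copying and pasting the credentials, the following one-step sketch exports them directly. This assumes a Unix-like shell with awk available; replace the account ID placeholder as before:

eval "$(aws sts assume-role \
--role-arn "arn:aws:iam::<input_your_aws_account_id>:role/EC2ImageBuilderRole" \
--role-session-name AWSCLI-Session \
--query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
--output text | awk '{print "export AWS_ACCESS_KEY_ID="$1; print "export AWS_SECRET_ACCESS_KEY="$2; print "export AWS_SESSION_TOKEN="$3}')"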

 

3. Confirm that you have assumed the EC2ImageBuilderRole role:

aws sts get-caller-identity

You should see an output similar to the following:

{
    "UserId": "AROATG5CKLSWENUYOF6A4:AWSCLI-Session",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/EC2ImageBuilderRole/AWSCLI-Session"
}

 

4. Create a folder inside the encrypted S3 bucket generated in the s3-iam-config stack:

aws s3api put-object --bucket <input_your_bucket_name> --key components/

 

5. Zip the configuration files that you use in the Image Builder pipeline process:

zip -r linux-cis.zip LinuxCis/
zip -r nginx.zip Nginx/

 

6. Upload the configuration files to the S3 bucket. Update the bucket name with the S3 bucket name you generated in the s3-iam-config stack:

aws s3 cp linux-cis.zip s3://<input_your_bucket_name>/components/

aws s3 cp nginx.zip s3://<input_your_bucket_name>/components/

aws s3 cp AnsibleConfig/cis_playbook.yml s3://<input_your_bucket_name>/components/

aws s3 cp AnsibleConfig/nginx_playbook.yml s3://<input_your_bucket_name>/components/

aws s3 cp AnsibleConfig/component-nginx.yml s3://<input_your_bucket_name>/components/

Deploying your pipeline

You’re now ready to deploy your pipeline.

1. Switch back to the original IAM user profile you used before assuming the EC2ImageBuilderRole. For instructions, see How do I assume an IAM role using the AWS CLI?

 

2. Open the Parameters/nginx-image-builder-params.json file and update the ImageBuilderBucketName parameter with the S3 bucket name generated in the s3-iam-config stack:

[
  {
    "ParameterKey": "Environment",
    "ParameterValue": "dev"
  },
  {
    "ParameterKey": "ImageBuilderBucketName",
    "ParameterValue": "<input_your_bucket_name>"
  },
  {
    "ParameterKey": "NetworkStackName",
    "ParameterValue": "vpc-config"
  },
  {
    "ParameterKey": "KMSStackName",
    "ParameterValue": "kms-config"
  },
  {
    "ParameterKey": "S3ConfigStackName",
    "ParameterValue": "s3-iam-config"
  }
]

 

3. Deploy the nginx-image-builder.yml template:

aws cloudformation create-stack \
--stack-name cis-image-builder \
--template-body file://Templates/nginx-image-builder.yml \
--parameters file://Parameters/nginx-image-builder-params.json \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-1

The template takes around 35 minutes to complete. Deploying this template starts the Image Builder pipeline.
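
If you want the CLI to block until the deployment finishes, you can optionally run the following wait command, which returns when the stack reaches CREATE_COMPLETE:

aws cloudformation wait stack-create-complete \
--stack-name cis-image-builder \
--region us-east-1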

 

Monitoring the pipeline

You can get more details about the pipeline in the AWS Management Console.

1. On the Image Builder console, choose Image pipelines to see the status of the pipeline.

Figure: Shows the EC2 Image Builder Pipeline status

 

2. Choose the pipeline (for this post, cis-image-builder-LinuxCis-Pipeline).

On the pipeline details page, you can view more information and make updates to its configuration.

Figure: Shows the Image Builder Pipeline metadata

At this point, the Image Builder pipeline has started running the automation document in Systems Manager. Here you can monitor the progress of the AMI build.

 

3. On the Systems Manager console, choose Automation.

 

4. Choose the execution ID of the arn:aws:ssm:us-east-1:123456789012:document/ImageBuilderBuildImageDocument document.

Figure: Shows the Image Builder Pipeline Systems Manager Automation steps

 

5. Choose the step ID to see what is happening in each step.

At this point, the Image Builder pipeline launches an Amazon Linux 2 EC2 instance. From there, Ansible playbooks run to configure the security and application settings. The automation pulls its configuration from the S3 bucket you populated in a previous step. When the Ansible run is complete, the instance is stopped and an AMI is generated from it. A cleanup step then terminates the EC2 instance. The final result is a CIS Level 1 hardened Amazon Linux 2 AMI running Nginx.
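
You can also monitor the build from the AWS CLI. The following sketch first lists your pipelines so you can find the pipeline ARN, then lists the images the pipeline has produced; the ARN value is a placeholder:

aws imagebuilder list-image-pipelines --region us-east-1

aws imagebuilder list-image-pipeline-images \
--image-pipeline-arn <input_your_image_pipeline_arn> \
--region us-east-1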

 

Updating parameters

When the stack is complete, you retrieve some new parameter values.

1. On the Systems Manager console, choose Automation.

 

2. Choose the execution ID of the arn:aws:ssm:us-east-1:123456789012:document/ImageBuilderBuildImageDocument document.

 

3. Choose step 21.

The following screenshot shows the output of this step.

Figure: Shows step of EC2 Image Builder Pipeline
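
As an alternative to reading the AMI ID from the console output above, a query like the following should return the most recently created AMI owned by your account (this sketch assumes the pipeline's AMI is the newest image in the account):

aws ec2 describe-images \
--owners self \
--query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name,CreationDate]' \
--output table \
--region us-east-1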

 

4. Open the Parameters/nginx-config.json file and update the AmiId parameter with the AMI ID generated from the previous step:

[
  {
    "ParameterKey" : "Environment",
    "ParameterValue" : "dev"
  },
  {
    "ParameterKey": "NetworkStackName",
    "ParameterValue" : "vpc-config"
  },
  {
    "ParameterKey" : "S3ConfigStackName",
    "ParameterValue" : "s3-iam-config"
  },
  {
    "ParameterKey": "AmiId",
    "ParameterValue" : "<input_the_cis_hardened_ami_id>"
  },
  {
    "ParameterKey": "ApplicationName",
    "ParameterValue" : "Nginx"
  },
  {
    "ParameterKey": "NLBName",
    "ParameterValue" : "DemoALB"
  },
  {
    "ParameterKey": "TargetGroupName",
    "ParameterValue" : "DemoTG"
  }
]

 

5. Deploy the nginx-config.yml template:

aws cloudformation create-stack \
--stack-name nginx-config \
--template-body file://Templates/nginx-config.yml \
--parameters file://Parameters/nginx-config.json \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-1

The output should look like the following:

{
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/nginx-config/fb2b0f30-24f6-11eb-ad7c-0a3238f55eb3"
}

 

6. Deploy the infrastructure-ssm-params.yml template:

aws cloudformation create-stack \
--stack-name ssm-params-config \
--template-body file://Templates/infrastructure-ssm-params.yml \
--parameters file://Parameters/infrastructure-ssm-parameters.json \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-1

 

Verifying Nginx is running

Let’s verify that our Nginx service is up and running properly. You use Session Manager to connect to a testing instance.

1. On the Amazon EC2 console, choose Instances.

You should see three instances, as in the following screenshot.

Figure: Shows the Nginx EC2 instances

You can connect to either one of the Nginx instances.

 

2. Select the testing instance.

 

3. On the Actions menu, choose Connect.

 

4. Choose Session Manager.

 

5. Choose Connect.

A terminal on the EC2 instance opens, similar to the following screenshot.

Figure: Shows the Session Manager terminal
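
If you prefer the command line, Session Manager can also open a session directly from the AWS CLI. This assumes the Session Manager plugin for the AWS CLI is installed; the instance ID is a placeholder:

aws ssm start-session --target <input_your_instance_id> --region us-east-1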

6. Run the following command to ensure that Nginx is running properly:

curl localhost:8080

You should see an output similar to the following screenshot.

Figure: Shows Nginx output from terminal

Reviewing resources and configurations

Now that you have deployed the core services for the solution, take some time to review the services that you have just deployed.

 

IAM roles

This project creates several IAM roles that are used to manage AWS resources. For example, EC2ImageBuilderRole is used to configure new AMIs with the Image Builder pipeline. This role contains only the permissions required to manage the Image Builder process. Adopting this pattern enforces the practice of least privilege. Additionally, many of the IAM policies attached to the IAM roles are scoped down to specific AWS resources. Let's look at a couple of examples of managing IAM permissions with this project.

 

The following policy restricts Amazon S3 access to a specific S3 bucket. This makes sure that the role the policy is attached to can access only this bucket. If the role needs to access any additional S3 buckets, the resource has to be explicitly added.

Policies:
  - PolicyName: GrantS3Read
    PolicyDocument:
      Statement:
        - Sid: GrantS3Read
          Effect: Allow
          Action:
            - s3:List*
            - s3:Get*
            - s3:Put*
          Resource: !Sub 'arn:aws:s3:::${S3Bucket}*'

Let’s look at the EC2ImageBuilderRole. A common scenario is needing to assume a role locally in order to perform an action on a resource. In this case, because you’re using AWS KMS to encrypt the S3 bucket, you need to assume a role that has permission to decrypt with the KMS key so that artifacts can be uploaded to the S3 bucket. In the following AssumeRolePolicyDocument, we allow the Amazon EC2, Systems Manager, and Image Builder services to assume this role. Additionally, we allow IAM users in the account to assume this role as well.

AssumeRolePolicyDocument:
  Version: 2012-10-17
  Statement:
    - Effect: Allow
      Principal:
        Service:
          - ec2.amazonaws.com
          - ssm.amazonaws.com
          - imagebuilder.amazonaws.com
        AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
      Action:
        - sts:AssumeRole

The principal !Sub 'arn:aws:iam::${AWS::AccountId}:root' allows any IAM user or role in this account to assume this role locally. Normally, this role should be scoped down to specific IAM users or roles. For the purpose of this post, we grant permissions to all users of the account.
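
If you want to scope the trust policy down, you could list specific principals instead of the account root. The following is a minimal sketch; the role and user names are hypothetical and should be replaced with your own:

AssumeRolePolicyDocument:
  Version: 2012-10-17
  Statement:
    - Effect: Allow
      Principal:
        Service:
          - ec2.amazonaws.com
          - ssm.amazonaws.com
          - imagebuilder.amazonaws.com
        AWS:
          # Hypothetical principals; replace with the users or roles that should assume this role
          - !Sub 'arn:aws:iam::${AWS::AccountId}:role/ImageBuilderOperatorRole'
          - !Sub 'arn:aws:iam::${AWS::AccountId}:user/build-admin'
      Action:
        - sts:AssumeRole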

 

Nginx configuration

The AMI built from the Image Builder pipeline contains all of the application and security configurations required to run Nginx as a web application. When an instance is launched from this AMI, no additional configuration is required.

We use Amazon EC2 launch templates to configure the application stack. The launch templates contain information such as the AMI ID, instance type, and security group. When a new AMI is provisioned, you update the launch template CloudFormation parameter with the new AMI ID and update the CloudFormation stack. From there, you can start an Auto Scaling group instance refresh, which replaces instances one at a time with instances launched from the updated AMI.
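
For example, after updating the launch template with the new AMI ID, you could start an instance refresh from the AWS CLI. This is a sketch; the Auto Scaling group name is a placeholder, and the preferences shown are assumptions you can tune:

aws autoscaling start-instance-refresh \
--auto-scaling-group-name <input_your_asg_name> \
--preferences '{"MinHealthyPercentage": 90, "InstanceWarmup": 60}'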

 

Amazon Inspector configuration

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. With Amazon Inspector, assessments are generated for exposure, vulnerabilities, and deviations from best practices.

After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports that are available via the Amazon Inspector console or API. We can use Amazon Inspector to assess our security posture against the CIS Level 1 standard that our Image Builder pipeline provisions. Let's look at how we configure Amazon Inspector.

A resource group defines a set of tags that, when queried, identify the AWS resources that make up the assessment target. Any EC2 instance that is launched with the tag specified in the resource group is in scope for Amazon Inspector assessment runs. The following code shows our configuration:

ResourceGroup:
  Type: "AWS::Inspector::ResourceGroup"
  Properties:
    ResourceGroupTags:
      - Key: "ResourceGroup"
        Value: "Nginx"

AssessmentTarget:
  Type: AWS::Inspector::AssessmentTarget
  Properties:
    AssessmentTargetName : "NginxAssessmentTarget"
    ResourceGroupArn : !Ref ResourceGroup

In the following code, we specify the tag set in the resource group, which makes sure that when an instance is launched from this AMI, it’s under the scope of Amazon Inspector:

IBImage:
  Type: AWS::ImageBuilder::Image
  Properties:
    ImageRecipeArn: !Ref Recipe
    InfrastructureConfigurationArn: !Ref Infrastructure
    DistributionConfigurationArn: !Ref Distribution
    ImageTestsConfiguration:
      ImageTestsEnabled: false
      TimeoutMinutes: 60
    Tags:
      ResourceGroup: 'Nginx'

 

Building and deploying a new image with Amazon Inspector tests enabled

In this final portion of the post, we build and deploy a new AMI with an Amazon Inspector evaluation.

1. In your text editor, open Templates/nginx-image-builder.yml and update the ImageTestsEnabled property of the IBImage resource to true.

The updated configuration should look like the following:

IBImage:
  Type: AWS::ImageBuilder::Image
  Properties:
    ImageRecipeArn: !Ref Recipe
    InfrastructureConfigurationArn: !Ref Infrastructure
    DistributionConfigurationArn: !Ref Distribution
    ImageTestsConfiguration:
      ImageTestsEnabled: true
      TimeoutMinutes: 60
    Tags:
      ResourceGroup: 'Nginx'

 

2. Update the stack with the new configuration:

aws cloudformation update-stack \
--stack-name cis-image-builder \
--template-body file://Templates/nginx-image-builder.yml \
--parameters file://Parameters/nginx-image-builder-params.json \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-1

This starts a new AMI build with an Amazon Inspector evaluation. The process can take up to 2 hours to complete.

3. On the Amazon Inspector console, choose Assessment Runs.

Figure: Shows Amazon Inspector Assessment Run

4. Under Reports, choose Download report.

5. For Select report type, select Findings report.

6. For Select report format, select PDF.

7. Choose Generate report.
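
You can also generate the same report from the AWS CLI. The following sketch lists recent assessment runs and then requests the findings report; the run ARN is a placeholder, and the report URL is returned once the assessment run has completed:

aws inspector list-assessment-runs --region us-east-1

aws inspector get-assessment-report \
--assessment-run-arn <input_your_assessment_run_arn> \
--report-file-format PDF \
--report-type FINDING \
--region us-east-1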

The following screenshot shows the findings report from the Amazon Inspector run.

This report generates an assessment against the CIS Level 1 standard. Any policies that don’t comply with the CIS Level 1 standard are explicitly called out in this report.

Section 3.1 lists any failed policies.

 

Figure: Shows Inspector findings

These failures are detailed later in the report, along with suggestions for remediation.

In section 4.1, locate the entry 1.3.2 Ensure filesystem integrity is regularly checked. This section shows the details of a failure from the Amazon Inspector findings report. You can also see suggestions on how to remediate the issue. Under Recommendation, the findings report suggests a specific command that you can use to remediate the issue.

 

Figure: Shows Inspector findings issue

You can use the Image Builder pipeline to update the Ansible playbooks with this setting, build a new AMI, deploy the new AMI to an EC2 instance, and run the Amazon Inspector report again to confirm that the issue has been resolved. Finally, the report shows the specific instances that were assessed and have this issue.

Organizations often customize security settings based on a given use case. Your organization may choose CIS Level 1 as a standard but elect not to apply all the recommendations. For example, you might choose not to use the FirewallD service on your Linux instances because you decide that Amazon EC2 security groups provide enough network security without an additional host firewall. Disabling FirewallD causes a high severity failure in the Amazon Inspector report. This is expected and can be ignored when evaluating the report.

 

Conclusion

In this post, we showed you how to use Image Builder to automate the creation of AMIs. We also showed you how to use the AWS CLI to deploy CloudFormation stacks. Finally, we walked through how to evaluate resources against the CIS Level 1 standard using Amazon Inspector.

 

About the Authors

Joe Keating is a Modernization Architect in Professional Services at Amazon Web Services. He works with AWS customers to design and implement a variety of solutions in the AWS Cloud. Joe enjoys cooking with a glass or two of wine and achieving mediocrity on the golf course.

Virginia Chu is a Sr. Cloud Infrastructure Architect in Professional Services at Amazon Web Services. She works with enterprise-scale customers around the globe to design and implement a variety of solutions in the AWS Cloud.

How to visualize multi-account Amazon Inspector findings with Amazon Elasticsearch Service

Post Syndicated from Moumita Saha original https://aws.amazon.com/blogs/security/how-to-visualize-multi-account-amazon-inspector-findings-with-amazon-elasticsearch-service/

Amazon Inspector helps to improve the security and compliance of your applications that are deployed on Amazon Web Services (AWS). It automatically assesses Amazon Elastic Compute Cloud (Amazon EC2) instances and applications on those instances. From that assessment, it generates findings related to exposure, potential vulnerabilities, and deviations from best practices.

You can use the findings from Amazon Inspector as part of a vulnerability management program for your Amazon EC2 fleet across multiple AWS Regions in multiple accounts. The ability to rank and efficiently respond to potential security issues reduces the time that potential vulnerabilities remain unresolved. Response is faster when the findings from all the accounts in your AWS environment are available in a single pane of glass.

Following AWS best practices, in a secure multi-account AWS environment, you can provision (using AWS Control Tower) a group of accounts, known as core accounts, for governing other accounts within the environment. One of the core accounts may be used as a central security account, which you can designate for governing the security and compliance posture across all accounts in your environment. Another core account is a centralized logging account, which you can provision and designate for central storage of log data.

In this blog post, I show you how to:

  1. Use Amazon Inspector, a fully managed security assessment service, to generate security findings.
  2. Gather findings from multiple Regions across multiple accounts using Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).
  3. Use AWS Lambda to send the findings to a central security account for deeper analysis and reporting.

In this solution, we send the findings to two services inside the central security account: an Amazon Elasticsearch Service (Amazon ES) domain for analysis and visualization, and an Amazon S3 bucket for central storage.

Solution overview

Overall architecture

The flow of events to implement the solution is shown in Figure 1 and described in the following process flow.

Figure 1: Solution overview architecture

Process flow

The flow of this architecture is divided into two types of processes—a one-time process and a scheduled process. The AWS resources that are part of the one-time process are triggered the first time an Amazon Inspector assessment template is created in each Region of each application account. The AWS resources of the scheduled process are triggered at the configured interval of the Amazon Inspector scan in each Region of each application account.

One-time process

  1. An event-based Amazon CloudWatch rule in each Region of every application account triggers a regional AWS Lambda function when an Amazon Inspector assessment template is created for the first time in that Region.

    Note: In order to restrict this event to trigger the Lambda function only the first time an assessment template is created, you must use a specific user-defined tag to trigger the Attach Inspector template to SNS Lambda function for only one Amazon Inspector template per Region. For more information on tags, see the Tagging AWS resources documentation.

  2. The Lambda function attaches the Amazon Inspector assessment template (created in application accounts) to the cross-account Amazon SNS topic (created in the security account). The function, the template, and the topic are all in the same AWS Region.

    Note: This step is needed because Amazon Inspector templates can only be attached to SNS topics in the same account via the AWS Management Console or AWS Command Line Interface (AWS CLI).

Scheduled process

  1. A scheduled Amazon CloudWatch Event in every Region of the application accounts starts the Amazon Inspector scan at a scheduled time interval, which you can configure.
  2. An Amazon Inspector agent conducts the scan on the EC2 instances of the Region where the assessment template is created and sends any findings to Amazon Inspector.
  3. Once the findings are generated, Amazon Inspector notifies the Amazon SNS topic of the security account in the same Region.
  4. The Amazon SNS topics from each Region of the central security account receive notifications of Amazon Inspector findings from all application accounts. The SNS topics then send the notifications to a central Amazon SQS queue in the primary Region of the security account.
  5. The Amazon SQS queue triggers the Send findings Lambda function (as shown in Figure 1) of the security account.

    Note: Each Amazon SQS message represents one Amazon Inspector finding.

  6. The Send findings Lambda function assumes a cross-account role to fetch the following information from all application accounts:
    1. Finding details from the Amazon Inspector API.
    2. Additional Amazon EC2 attributes—VPC, subnet, security group, and IP address—from EC2 instances with potential vulnerabilities.
  7. The Lambda function then sends all the gathered data to a central S3 bucket and a domain in Amazon ES—both in the central security account.

These Amazon Inspector findings, along with additional attributes on the scanned instances, can be used for further analysis and visualization via Kibana—a data visualization dashboard for Amazon ES. Storing a copy of these findings in an S3 bucket gives you the opportunity to forward the findings data to outside monitoring tools that don’t support direct data ingestion from AWS Lambda.

Prerequisites

The following resources must be set up before you can implement this solution:

  1. A multi-account structure. To learn how to set up a multi-account structure, see Setting up AWS Control Tower and AWS Landing Zone.
  2. Amazon Inspector agents must be installed on all EC2 instances. See Installing Amazon Inspector agents to learn how to set up Amazon Inspector agents on EC2 instances. Additionally, keep note of all the Regions where you install the Amazon Inspector agent.
  3. An Amazon ES domain with Kibana authentication. See Getting started with Amazon Elasticsearch Service and Use Amazon Cognito for Kibana access control.
  4. An S3 bucket for centralized storage of Amazon Inspector findings.
  5. An S3 bucket for storage of the Lambda source code for the solution.

Set up Amazon Inspector with Amazon ES and S3

Follow these steps to set up centralized Amazon Inspector findings with Amazon ES and Amazon S3:

  1. Upload the solution ZIP file to the S3 bucket used for Lambda code storage.
  2. Collect the input parameters for AWS CloudFormation deployment.
  3. Deploy the base template into the central security account.
  4. Deploy the second template in the primary Region of all application accounts to create global resources.
  5. Deploy the third template in all Regions of all application accounts.

Step 1: Upload the solution ZIP file to the S3 bucket used for Lambda code storage

  1. From GitHub, download the file Inspector-to-S3ES-crossAcnt.zip.
  2. Upload the ZIP file to the S3 bucket you created in the central security account for Lambda code storage. This code is used to create the Lambda function in the first CloudFormation stack set of the solution.
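
For example, you can upload the file from the AWS CLI; the bucket name here is a placeholder for your Lambda code bucket:

aws s3 cp Inspector-to-S3ES-crossAcnt.zip s3://<input_your_lambda_code_bucket>/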

Step 2: Collect input parameters for AWS CloudFormation deployment

In this solution, you deploy three AWS CloudFormation stack sets in succession. Each stack set should be created in the primary Region of the central security account. Underlying stacks are deployed across the central security account and in all the application accounts where the Amazon Inspector scan is performed. You can learn more in Working with AWS CloudFormation StackSets.

Before you proceed to the stack set deployment, you must collect the input parameters for the first stack set: Central-SecurityAcnt-BaseTemplate.yaml.

To collect input parameters for AWS CloudFormation deployment

  1. Fetch the account ID (CentralSecurityAccountID) of the AWS account where the stack set will be created and deployed. You can use the steps in Finding your AWS account ID to help you find the account ID.
  2. Values for the ES domain parameters can be fetched from the Amazon ES console.
    1. Open the Amazon ES Management Console and select the Region where the Amazon ES domain exists.
    2. Select the domain name to view the domain details.
    3. The value for ElasticsearchDomainName is displayed on the top left corner of the domain details.
    4. On the Overview tab in the domain details window, select and copy the URL value of the Endpoint to use as the ElasticsearchEndpoint parameter of the template. Make sure to exclude the https:// at the beginning of the URL.

      Figure 2: Details of the Amazon ES domain for fetching parameter values

  3. Get the values for the S3 bucket parameters from the Amazon S3 console.
    1. Open the Amazon S3 Management Console.
    2. Copy the name of the S3 bucket that you created for centralized storage of Amazon Inspector findings. Save this bucket name for the LoggingS3Bucket parameter value of the Central-SecurityAcnt-BaseTemplate.yaml template.
    3. Select the S3 bucket used for source code storage. Select the bucket name and copy the name of this bucket for the LambdaSourceCodeS3Bucket parameter of the template.

      Figure 3: The S3 bucket where Lambda code is uploaded

  4. On the bucket details page, select the source code ZIP file name that you previously uploaded to the bucket. In the detail page of the ZIP file, choose the Overview tab, and then copy the value in the Key field to use as the value for the LambdaCodeS3Key parameter of the template.

    Figure 4: Details of the Lambda code ZIP file uploaded in Amazon S3 showing the key prefix value

Note: All of the other input parameter values of the template are entered automatically, but you can change them during stack set creation if necessary.
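
If you prefer to collect these values from the AWS CLI instead of the console steps above, a sketch like the following should work. The domain name is a placeholder, and the Endpoint query applies to domains with public access (VPC-only domains expose their endpoint differently):

aws sts get-caller-identity --query Account --output text

aws es describe-elasticsearch-domain \
--domain-name <input_your_es_domain_name> \
--query 'DomainStatus.Endpoint' \
--output text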

Step 3: Deploy the base template into the central security account

Now that you’ve collected the input parameters, you’re ready to deploy the base template that will create the necessary resources for this solution implementation in the central security account.

Prerequisites for CloudFormation stack set deployment

There are two permission modes that you can choose from for deploying a stack set in AWS CloudFormation. If you’re using AWS Organizations and have all features enabled, you can use service-managed permissions; otherwise, self-managed permissions mode is recommended. To deploy this solution, you’ll use self-managed permissions mode. To run stack sets in self-managed permissions mode, your administrator account and the target accounts must have two IAM roles—AWSCloudFormationStackSetAdministrationRole and AWSCloudFormationStackSetExecutionRole—as prerequisites. In this solution, the administrator account is the central security account and the target accounts are the application accounts. You can create the necessary IAM roles by using the templates described in the AWS CloudFormation documentation on granting self-managed permissions for StackSets.

To deploy the base template

  1. Download the base template (Central-SecurityAcnt-BaseTemplate.yaml) from GitHub.
  2. Open the AWS CloudFormation Management Console and select the Region where all the stack sets will be created for deployment. This should be the primary Region of your environment.
  3. Select Create StackSet.
    1. In the Create StackSet window, select Template is ready and then select Upload a template file.
    2. Under Upload a template file, select Choose file and select the Central-SecurityAcnt-BaseTemplate.yaml template that you downloaded earlier.
    3. Choose Next.
  4. Add stack set details.
    1. Enter a name for the stack set in StackSet name.
    2. Under Parameters, most of the values are pre-populated except the values you collected in the previous procedure for CentralSecurityAccountID, ElasticsearchDomainName, ElasticsearchEndpoint, LoggingS3Bucket, LambdaSourceCodeS3Bucket, and LambdaCodeS3Key.
    3. After all the values are populated, choose Next.
  5. Configure StackSet options.
    1. (Optional) Add tags as described in the prerequisites to apply to the resources in the stack set that these rules will be deployed to. Tagging is a recommended best practice, because it enables you to add metadata information to resources during their creation.
    2. Under Permissions, choose the Self-service permissions mode to be used for deploying the stack set, and then select AWSCloudFormationStackSetAdministrationRole from the dropdown list.

      Figure 5: Permission mode to be selected for stack set deployment

    3. Choose Next.
  6. Add the account and Region details where the template will be deployed.
    1. Under Deployment locations, select Deploy stacks in accounts. Under Account numbers, enter the account ID of the security account that you collected earlier.

      Figure 6: Values to be provided during the deployment of the first stack set

    2. Under Specify regions, select all the Regions where the stacks will be created. This should be the list of Regions where you installed the Amazon Inspector agent. Keep note of this list of Regions to use in the deployment of the third template in an upcoming step.
      • Though an Amazon Inspector scan is performed in all the application accounts, the regional Amazon SNS topics that send scan finding notifications are created in the central security account. Therefore, this template is created in all the Regions where Amazon Inspector will notify SNS. The template has the logic needed to handle the creation of specific AWS resources only in the primary Region, even though the template executes in many Regions.
      • The order in which Regions are selected under Specify regions defines the order in which the stack is deployed in the Regions. So you must make sure that the primary Region of your deployment is the first one specified under Specify regions, followed by the other Regions of stack set deployment. This is required because global resources are created using one Region—ideally the primary Region—and so stack deployment in that Region should be done before deployment to other Regions in order to avoid any build dependencies.

        Figure 7: Showing the order of specifying the Regions of stack set deployment

  7. Review the template settings and select the check box to acknowledge the Capabilities section. This is required if your deployment template creates IAM resources. You can learn more at Controlling access with AWS Identity and Access Management.

    Figure 8: Acknowledge IAM resources creation by AWS CloudFormation

  8. Choose Submit to deploy the stack set.

Step 4: Deploy the second template in the primary Region of all application accounts to create global resources

This template creates the global resources required for sending Amazon Inspector findings to Amazon ES and Amazon S3.

To deploy the second template

  1. Download the template (ApplicationAcnts-RolesTemplate.yaml) from GitHub and use it to create the second CloudFormation stack set in the primary Region of the central security account.
  2. To deploy the template, follow the steps used to deploy the base template (described in the previous section) through Configure StackSet options.
  3. In Set deployment options, do the following:
    1. Under Account numbers, enter the account IDs of your application accounts as comma-separated values. You can use the steps in Finding your AWS account ID to help you gather the account IDs.
    2. Under Specify regions, select only your primary Region.

      Figure 9: Select account numbers and specify Regions

  4. The remaining steps are the same as for the base template deployment.

Step 5: Deploy the third template in all Regions of all application accounts

This template creates the resources in each Region of all application accounts needed for scheduled scanning of EC2 instances using Amazon Inspector. Notifications are sent to the SNS topics of each Region of the central security account.

To deploy the third template

  1. Download the template InspectorRun-SetupTemplate.yaml from GitHub and create the final AWS CloudFormation stack set. Similar to the previous stack sets, this one should also be created in the central security account.
  2. For deployment, follow the same steps you used to deploy the base template through Configure StackSet options.
  3. In Set deployment options:
    1. Under Account numbers, enter the same account IDs of your application accounts (comma-separated values) as you did for the second template deployment.
    2. Under Specify regions, select all the Regions where you installed the Amazon Inspector agent.

      Note: This list of Regions should be the same as the Regions where you deployed the base template.

  4. The remaining steps are the same as for the second template deployment.

Test the solution and delivery of the findings

After successful deployment of the architecture, you can test the solution either by waiting until the next scheduled Amazon Inspector scan or by using the following steps to run the scan manually.

To run the Amazon Inspector scan manually for testing the solution

  1. In any one of the application accounts, go to any Region where the Amazon Inspector scan was performed.
  2. Open the Amazon Inspector console.
  3. In the left navigation menu, select Assessment templates to see the available assessments.
  4. Choose the assessment template that was created by the third template.
  5. Choose Run to start the assessment immediately.
  6. When the run is complete, Last run status changes from Collecting data to Analysis Complete.

    Figure 10: Amazon Inspector assessment run

  7. You can see the recent scan findings in the Amazon Inspector console by selecting Assessment runs from the left navigation menu.

    Figure 11: The assessment run indicates total findings from the last Amazon Inspector run in this Region

  8. In the left navigation menu, select Findings to see details of each finding, or use the steps in the following section to verify the delivery of findings to the central security account.
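
You can also start the assessment run from the AWS CLI instead of the console. The following sketch lists the available assessment templates and then starts a run; the template ARN and Region are placeholders, and the run name is arbitrary:

aws inspector list-assessment-templates --region <input_your_region>

aws inspector start-assessment-run \
--assessment-template-arn <input_your_assessment_template_arn> \
--assessment-run-name manual-test-run \
--region <input_your_region>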

Test the delivery of the Amazon Inspector findings

This solution delivers the Amazon Inspector findings to two AWS services—Amazon ES and Amazon S3—in the primary Region of the central security account. You can either use Kibana to view the findings sent to Amazon ES or you can use the findings sent to Amazon S3 and forward them to the security monitoring software of your preference for further analysis.

To check whether the findings are delivered to Amazon ES

  1. Open the Amazon ES Management Console and select the Region where the Amazon ES domain is located.
  2. Select the domain name to view the domain details.
  3. On the domain details page, select the Kibana URL.

    Figure 12: Amazon ES domain details page

  4. Log in to Kibana using your preferred authentication method as set up in the prerequisites.
    1. In the left panel, select Discover.
    2. In the Discover window, select a Region to view the total number of findings in that Region.

      Figure 13: The total findings in Kibana for the chosen Region of an application account

To check whether the findings are delivered to Amazon S3

  1. Open the Amazon S3 Management Console.
  2. Select the S3 bucket that you created for storing Amazon Inspector findings.
  3. Select the bucket name to view the bucket details. The total number of findings for the chosen Region is at the top right corner of the Overview tab.

    Figure 14: The total security findings as stored in an S3 bucket for us-east-1 Region
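
You can also list the delivered findings objects from the AWS CLI; the bucket name is a placeholder for your findings bucket:

aws s3 ls s3://<input_your_findings_bucket>/ --recursive --summarize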

Visualization in Kibana

The data sent to the Amazon ES index can be used to create visualizations in Kibana that make it easier to identify potential security gaps and plan the remediation accordingly.

You can use Kibana to create a dashboard that gives an overview of the potential vulnerabilities identified in different instances of different AWS accounts. Figure 15 shows an example of such a dashboard. The dashboard can help you rank the need for remediation based on criteria such as:

  • The category of vulnerability
  • The most impacted AWS accounts
  • EC2 instances that need immediate attention

Figure 15: A sample Kibana dashboard showing findings from Amazon Inspector

You can build additional panels to visualize details of the vulnerability findings identified by Amazon Inspector, such as the CVE ID of the security vulnerability, its description, and recommendations on how to remove the vulnerabilities.

Figure 16: A sample Kibana dashboard panel listing the top identified vulnerabilities and their details

Conclusion

By using this solution to combine Amazon Inspector, Amazon SNS topics, Amazon SQS queues, Lambda functions, an Amazon ES domain, and S3 buckets, you can centrally analyze and monitor the vulnerability posture of EC2 instances across your AWS environment, including multiple Regions across multiple AWS accounts. This solution is built following least privilege access through AWS IAM roles and policies to help secure the cross-account architecture.

In this blog post, you learned how to send the findings directly to Amazon ES for visualization in Kibana. These visualizations can be used to build dashboards that security analysts can use for centralized monitoring. Better monitoring capability helps analysts to identify potentially vulnerable assets and perform remediation activities to improve security of your applications in AWS and their underlying assets. This solution also demonstrates how to store the findings from Amazon Inspector in an S3 bucket, which makes it easier for you to use those findings to create visualizations in your preferred security monitoring software.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Moumita Saha

Moumita is a Security Consultant with AWS Professional Services working to help enterprise customers secure their workloads in the cloud. She assists customers in secure cloud migration, designing automated solutions to protect against cyber threats in the cloud. She is passionate about cyber security, data privacy, and new, emerging cloud-security technologies.