Tag Archives: Security, Identity & Compliance

Introducing the AWS Best Practices for Security, Identity, & Compliance Webpage and Customer Polling Feature

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/introducing-aws-best-practices-security-identity-compliance-webpage-and-customer-polling-feature/

The AWS Security team has made it easier for you to find information and guidance on best practices for your cloud architecture. We’re pleased to share the Best Practices for Security, Identity, & Compliance webpage of the new AWS Architecture Center. Here you’ll find top recommendations for security design principles, workshops, and educational materials, and you can browse our full catalog of self-service content, including blogs, whitepapers, videos, training courses, reference implementations, and more.

We’re also running polls on the new AWS Architecture Center to gather your feedback. Want to learn more about how to protect account access? Or are you looking for recommendations on how to improve your incident response capabilities? Let us know by completing the poll. We will use your answers to help guide security topics for upcoming content.

Poll topics will change periodically, so bookmark the Security, Identity, & Compliance webpage for easy access to future questions, or to submit your topic ideas at any time. Our first poll, which asks what areas of the Well-Architected Security Pillar are most important for your use, is available now. We look forward to hearing from you.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle native and a Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

TISAX scope broadened

Post Syndicated from Kevin Quaid original https://aws.amazon.com/blogs/security/tisax-scope-broadened/

The Trusted Information Security Assessment Exchange (TISAX) provides automotive industry organizations the assurance needed to build secure applications and services on the cloud. In late June, AWS achieved the assessment objectives required for data with a very high need for protection according to TISAX criteria.

We’re happy to announce this broadened scope of our TISAX certification today, September 3, the same day that Ferdinand Porsche, credited with originating VW’s Beetle, pioneering hybrid electric-gasoline technology, and founding the Porsche car company, was born 145 years ago.

Automotive customers and their entire supply chain rely on AWS, including Volkswagen’s global supply chain, which comprises 122 manufacturing plants and 1,500 suppliers. This certification demonstrates that AWS information management systems meet industry standards.

“We rely on our partners and suppliers to achieve a unified level of information security established by TISAX. AWS recognizes the importance of this bar and demonstrates innovation by expanding program scope to include additional regions, control domains, and protection levels.”
    –Stefan Arnold, Director Technology & Acceleration, Porsche

AWS completed a scope extension assessment against the TISAX very high protection level (AL 3) for five additional regions. The seven regions in scope are Frankfurt, Ireland, US West (Oregon), US East (Ohio), US East (N. Virginia), Canada, and Seoul. Control domains in scope expanded to include data protection.

TISAX was established by the German Association of the Automotive Industry (VDA) and is governed by the European Network Exchange (ENX). The assessment was conducted by an accredited audit provider, and the results are retrievable from the ENX Portal. The scope ID and assessment ID are SP208R and AYZ38F-1, respectively.

For more information, see Trusted Information Security Assessment Exchange.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kevin Quaid

Kevin leads expansion initiatives for security assurance, supporting customers using and migrating to AWS. He previously managed data center site selection and qualification for AWS infrastructure. He is passionate about applying his decade-plus of risk management experience at Amazon to drive innovation and cloud adoption.

Deploying defense in depth using AWS Managed Rules for AWS WAF (part 2)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/deploying-defense-in-depth-using-aws-managed-rules-for-aws-waf-part-2/

In this post, I show you how to use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

In part 1 of this post, I described the technologies and methods that you can use to build and manage defense in depth for your network. In part 2, I show you how to use those tools, with AWS Managed Rules as the starting point, to build your defense in depth for optimal effectiveness.

You can manage policies for multiple environments with minimal administrative overhead and make them part of a deployment pipeline, programmatically enforcing broad edge network policies and protecting production workloads without compromising development speed or safety.

Robust security policy enforcement relies on a layered approach, and the same applies to securing your web applications. Edge policies, application policies, and even private or internal policy enforcement layers increase visibility into communication requests and support unified policy enforcement.

Using a layered AWS WAF deployment, such as the one deployed by the procedure that follows, gives you greater flexibility in the number of rules you can use and the option to standardize edge policies and production policies. This lets you test and develop new applications without compromising the production environments.

In the following example, the application load balancer is in us-east-1. To create a web ACL for Amazon CloudFront you need to deploy the stack in us-east-1. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template can create both web ACLs in this scenario.

Note: If you’re using CloudFront and hosting the origin in us-east-1, you only need to maintain one stack. If your origin is in another region, you need to deploy a stack in us-east-1 for CloudFront web ACLs and another in the region where your application load balancer is. That scenario isn’t covered in the following procedure. The example AWS CloudFormation templates provided deploy only the AWS WAF configurations, not the underlying infrastructure.

Solution overview

The following diagram illustrates the traffic flow, where traffic comes in via CloudFront, which serves it to the backend load balancers. Both CloudFront and the load balancers support AWS WAF. This is where dedicated web security policies can be enforced to build out defense-in-depth, multi-layered policy enforcement.
 

Figure 1: Defense in depth deployment on AWS WAF

Creating AWS Managed Rule web ACLs

During this process we create two web ACLs that are designed for policy enforcement for two dedicated layers. The process won’t deploy the required infrastructure, such as the CloudFront distribution or application load balancers. This example template deploys a single stack in us-east-1 where the CloudFront origin load balancer is located.

To create AWS Managed Rule web ACLs

  1. Download the Amazon-CloudFront-Application-Load-Balancer-AMR.yml template.
  2. Open the AWS Management Console and select the region where the origin application load balancer is deployed. The Amazon-CloudFront-Application-Load-Balancer-AMR.yml template that you downloaded deploys both web ACLs for CloudFront and the application load balancer.
     
    Figure 2: Select a region from the console

  3. Under Find Services, enter AWS CloudFormation and press Enter.
     
    Figure 3: Find and select AWS CloudFormation

  4. Select Create stack.
     
    Figure 4: Create stack

  5. Select a template file for the stack.
    1. In the Create stack window, select Template is ready and Upload a template file.
    2. Under Upload a template file, select Choose file and select the Amazon-CloudFront-Application-Load-Balancer-AMR.yml example AWS CloudFormation template you downloaded earlier.
    3. Choose Next.
    Figure 5: Prepare and choose a template

  6. Add stack details.
    1. Enter a name for the stack in Stack name.
    2. Enter a name for the Edge Network AWS WAF WebACL and for the Public Layer AWS WAF WebACL.
    3. Set a rate limit for HTTP GET requests in HTTP Get Flood Protection (this rate is applied per IP address over a 5-minute period).
    4. Set a rate limit for HTTP POST requests in HTTP Post Flood Protection.
    5. Use the Login URL to apply the limit to a targeted login page. If you want to rate-limit all HTTP POST requests, leave the login URL section blank.
    Figure 6: Set stack details

  7. By default, all the rules within the rule sets are in action override (count mode); this does not include the rate-based rules. If you want to deploy selected rules in block mode, remove them from the pre-populated list by highlighting and deleting them. It’s a best practice to evaluate firewall rules in count mode before changing them to block mode (see the sketch at the end of this procedure). Choose Next to move to the next step.
     
    Figure 7: Default managed rules options

  8. Here you can add tags to apply to the resources in the stack that these rules will be deployed to. Tagging is a recommended best practice because it enables you to add metadata to resources during creation. For more information on tagging, see the Tagging AWS resources documentation. Then choose Next. On the following page, choose Create stack.
     
    Figure 8: Add tags

  9. Wait until the stack has been deployed. When deployment is complete, the status of the stack will change to CREATE_COMPLETE.
     
    Figure 9: Stack deployment status
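As a reference for step 7, the count-mode default corresponds to a count override on each managed rule group in the web ACL that the template creates. The following is a minimal sketch of one such rule in the AWS WAF JSON format; the Name, Priority, and MetricName values are illustrative and not taken from the template:

{
  "Name": "AWS-AWSManagedRulesCommonRuleSet",
  "Priority": 1,
  "Statement": {
    "ManagedRuleGroupStatement": {
      "VendorName": "AWS",
      "Name": "AWSManagedRulesCommonRuleSet"
    }
  },
  "OverrideAction": {"Count": {}},
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "CommonRuleSetMetric"
  }
}

With "OverrideAction" set to "Count", matching requests are only recorded in metrics and logs; changing the override to {"None": {}} restores the rule group’s own block actions once you’ve evaluated it against production traffic.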

Associating the web ACLs to resources

During this process, we associate the two newly created web ACLs with the corresponding infrastructure resources. In this example, those are the CloudFront distribution and its origin load balancer, which should have been created beforehand.

To associate the web ACLs to resources

  1. In the console search for and select WAF & Shield.
     
    Figure 10: Select WAF & Shield

  2. Select Web ACLs from the list on the left.
     
    Figure 11: Select Web ACLs

  3. Select Global (CloudFront) from the drop-down list at the top of the page. Choose the Edge-Network-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs).
     
    Figure 12: Select the web ACL

  4. Next, select Associated AWS resources and then choose Add AWS resources.
     
    Figure 13: Add AWS resources

  5. Select the CloudFront distribution you want to protect. Choose Add.
     
    Figure 14: Select the CloudFront distribution to protect

  6. Select the region the application load balancer is deployed in (us-east-1 in this example) and repeat the association process from steps 3 and 4, this time for the application load balancer that serves as the CloudFront distribution origin: select US East (N. Virginia) from the drop-down list at the top of the page, choose the Public-Application-Layer-WebACL name that you created in step 6 of the previous procedure (Creating AWS Managed Rule web ACLs), and associate the load balancer with it.
     
    Figure 15: Application layer Web ACL association

Conclusion

By using AWS WAF to manage a multi-layer web application security enforcement policy, you can build a defense-in-depth stack for each specific web application. The configuration helps you maintain and deploy web application firewall configurations across deployment stages and across different types of applications. AWS Managed Rules provides prebuilt rule sets that can easily be deployed to create a layered defense that fits into your web application deployment pipelines. If you would like to centrally manage and control AWS WAF across your AWS Organization, consider AWS Firewall Manager.

The AWS CloudFormation templates used in this procedure are in this GitHub repository.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

Daniel worked personally on AWS Managed Rules over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with Security Competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS’s top security partners.

Defense in depth using AWS Managed Rules for AWS WAF (part 1)

Post Syndicated from Daniel Swart original https://aws.amazon.com/blogs/security/defense-in-depth-using-aws-managed-rules-for-aws-waf-part-1/

In this post, I discuss how you can use recent enhancements in AWS WAF to manage a multi-layer web application security enforcement policy. These enhancements will help you to maintain and deploy web application firewall configurations across deployment stages and across different types of applications.

The post is in two parts. This first part describes AWS Managed Rules for AWS WAF and how it can be used to provide defense in depth. The second part shows how to apply AWS Managed Rules for WAF.

AWS Managed Rules for AWS WAF is a service that provides groups of rules created by Amazon Web Services (AWS) or by an AWS technology partner. By using AWS Managed Rules, you can reduce the administrative overhead of configuring rules for AWS WAF. You still need a comprehensive strategy for web application policy enforcement to help you make the best use of AWS Managed Rules for your web applications.

By using a layered policy enforcement strategy, you can create policy enforcement that’s specific to each part of your applications. This helps you avoid having to maintain and manage monolithic AWS WAF configurations for each of your applications. When you can separate policies for the edge network and for the application layer network, replicating separate policies across larger workloads becomes modular. This makes your application security more agile and lets you protect public-facing web applications without writing new rules or including rules that aren’t relevant to your web application.

Policy enforcement becomes even less of an administrative burden when you use AWS Firewall Manager to enforce policies across all accounts. This helps ensure organizations have robust policy enforcement measures across multiple accounts, with increased application layer visibility.

The new AWS WAF JSON document-style configuration enables traditional code review processes. You can now easily manage AWS WAF configurations on multiple layers of your web applications. This has also enabled partners to create more dynamic and robust rules that they can deliver on AWS WAF, which ultimately helps those customers manage their web application security policies.

AWS WAF enhancements

AWS WAF uses web ACL capacity units (WCU) to calculate and control the operating resources that are used to run your rules, rule groups, and web ACLs.

You can use JSON key-value pair document-based configuration to more easily integrate AWS WAF into the development practices of your organization. Document-style configuration removes the need to use multiple API calls to create objects in the correct order before you can create and deploy a web ACL to protect your web applications.

This method lets you implement firewall changes through normal development and operations best practices, because the configuration is infrastructure as code. It enables version control and code review before you deploy updates to your production environment.
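To make this concrete, the following is a minimal sketch of a complete web ACL as a single AWS WAF JSON document, assuming a default allow action and one AWS Managed Rules rule group; the names and metric values are illustrative:

{
  "Name": "EdgeNetworkWebACL",
  "Scope": "CLOUDFRONT",
  "DefaultAction": {"Allow": {}},
  "Rules": [
    {
      "Name": "AWS-AWSManagedRulesAmazonIpReputationList",
      "Priority": 0,
      "Statement": {
        "ManagedRuleGroupStatement": {
          "VendorName": "AWS",
          "Name": "AWSManagedRulesAmazonIpReputationList"
        }
      },
      "OverrideAction": {"None": {}},
      "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "IpReputationListMetric"
      }
    }
  ],
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "EdgeNetworkWebACLMetric"
  }
}

Because the entire web ACL is one document, it can live in a repository and move through the same review and deployment pipeline as the rest of your infrastructure code.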

Solution overview

The following diagram illustrates the layers and functions of a defense-in-depth solution. The text that follows describes each layer.
 

Figure 1: Solution overview diagram

Edge network layer policy enforcement

The edge network is the first layer of policy enforcement and should be used for broad security policy enforcement. This is the ideal place for rules such as the AWS Managed Rules Core rule set (CRS), geographical location blocks, IP reputation lists, anonymous IP lists, and basic rate-limit enforcement. By limiting known bad traffic at the edge network, this layer reduces the exposure of the application layer to known bad IP address ranges, malicious requests, bad bots, and request floods. This provides broad protection to the inner application layer against malicious activity, and it can be applied regardless of the web application being served at the application layer.
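As a sketch of what an edge-layer rule can look like, the following AWS WAF JSON rule blocks requests from a set of countries using a geographic match statement; the rule name, priority, and country codes are placeholders for your own policy:

{
  "Name": "EdgeGeoBlock",
  "Priority": 2,
  "Statement": {
    "GeoMatchStatement": {
      "CountryCodes": ["KP", "IR"]
    }
  },
  "Action": {"Block": {}},
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "EdgeGeoBlockMetric"
  }
}

Rules like this sit alongside the managed rule groups in the edge web ACL and are evaluated in priority order.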

Amazon CloudFront, combined with the distributed denial of service (DDoS) mitigation capabilities of AWS Shield and support for AWS WAF, forms your outer layer of web application security enforcement.

It’s a common misconception that CloudFront is only a content delivery platform; it also has robust transparent reverse proxy capabilities. CloudFront can help protect your environment from a broad range of web application risks. For example, you can use CloudFront to ensure that HTTP requests conform to standards on the far outer layer of your web application environment while serving content closer to the user.

Application layer policy enforcement

The next level of enforcement should be an application load balancer in a public subnet with another web ACL at the CloudFront origin. This policy enforcement layer is where you create a regional web ACL for the CloudFront origin. In addition, this layer is where you apply application-specific rules. For example, if you have a web application that uses a LAMP stack, it would be best to use AWS Managed Rules for SQL Injection, Linux, and PHP as an enforcement layer.

Note: IP-based enforcement is not effective on this part of the environment. Consider configuring an origin custom header on the CloudFront distribution, and then using this custom header in a BLOCK rule, placed first in your web ACL rule list, to deny any request that arrives without it. This rule needs to be created manually and will not be configured by the supplied templates.
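A minimal sketch of such a rule follows, assuming CloudFront adds a custom header named x-origin-verify carrying a shared secret value; both the header name and the value here are hypothetical, and the secret should be stored and rotated securely (for example, in AWS Secrets Manager):

{
  "Name": "BlockRequestsMissingOriginHeader",
  "Priority": 0,
  "Statement": {
    "NotStatement": {
      "Statement": {
        "ByteMatchStatement": {
          "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
          "SearchString": "example-shared-secret",
          "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
          "PositionalConstraint": "EXACTLY"
        }
      }
    }
  },
  "Action": {"Block": {}},
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "OriginVerifyMetric"
  }
}

With Priority 0 the rule is evaluated first, so any request that reaches the load balancer without the expected header, meaning it didn’t come through your CloudFront distribution, is blocked before the other rules run.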

(Optional) Third-party web application firewall layer policy enforcement

AWS WAF enforces policies on inbound requests and doesn’t have outbound inspection capabilities. If you need to enforce policies based on outbound responses, you can use Amazon Machine Image (AMI) based web application firewalls, which are available via the AWS Marketplace.

An instance-based web application firewall works well here because most of the computational heavy lifting is done by the AWS WAF enforcement layers. The third-party layer is where you can enforce policies that require requests to be stateful.

Using an AMI from AWS Marketplace also gives you access to capabilities such as higher visibility, threat intelligence, and robust firewall rules. This adds an additional layer of security enhancement to your environment.

(Optional) Private layer policy enforcement

When working with a traditional three-tier web architecture, you can add an additional layer of enforcement on the private layer, which can be used for the web front ends. This stage is where you would deploy an application load balancer in a private subnet serving your web front ends. This load balancer is there for any computationally expensive regex-based rule enforcement that you don’t want to run on the instance-based WAF. It also gives you another layer of visibility before requests reach the web front ends themselves. This layout is shown in Figure 2 below for reference.

Use case examples

The AWS CloudFormation templates supplied can be deployed in a modular fashion. If the application load balancer is located in the us-east-1 region, you can deploy a single template called Amazon-CloudFront-Application-Load-Balancer-AMR.yml.

If the application load balancer isn’t located in us-east-1, you can use the Amazon-CloudFront-EdgeLayer-AMR.yml template to deploy the stack in us-east-1 to support the web ACL on CloudFront, and then deploy ApplicationLayer-Load-Balancer-AMR.yml in the region where the origin application load balancer was deployed for its web ACL.

All CloudFormation templates are available on the GitHub project page, and a summary of each can be found in the main readme.md file.

Note: All the individual rules in each rule set are set to ACTION OVERRIDE for initial deployment. If any of the rule actions in the group are set to block or allow, this override changes the behavior so that matching rules are only counted. You may change the setting to NO ACTION OVERRIDE after a period of evaluation to avoid disrupting production workloads with potential false positives.

Edge network and application load balancer origin using AWS Managed Rules for AWS WAF

When considering some of the web application best practices on AWS for resiliency and security, the recommendation is to use CloudFront where possible, because it can terminate TLS/SSL connections and serve cached content close to the end user. CloudFront has advanced mitigation capabilities such as SYN cookies and a massively distributed network separate from the traditional Amazon Elastic Compute Cloud (Amazon EC2) networking space. CloudFront also supports AWS WAF rate limits, IP blacklists, and broad security policies, which can be enforced at the edge network layer.

In the example Amazon-CloudFront-Application-Load-Balancer-AMR.yml template, we place a rate-limit for HTTP GET and HTTP POST methods. This is dependent upon expected traffic request rates. You can review Amazon CloudWatch metrics for your CloudFront distribution or application load balancer to determine the baseline for your rate limit based on the maximum expected requests per minute.

The rate limit is adjustable within the parameter options at deployment of the AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml. The HTTP POST rate limit also helps to slow down credential stuffing attacks—a form of brute force attack—on login pages. The ApplicationLayer-Load-Balancer-AMR.yml template used in part 2 of this post also deploys the Amazon IP reputation list to drop IP addresses based on Amazon internal threat intelligence.
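The following is a rough sketch of such a rate-based rule in the AWS WAF JSON format, scoped down to HTTP POST requests against a login path; the limit, rule name, and /login path are illustrative values that you would replace with your own baseline and URL:

{
  "Name": "HttpPostFloodProtection",
  "Priority": 3,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 1000,
      "AggregateKeyType": "IP",
      "ScopeDownStatement": {
        "AndStatement": {
          "Statements": [
            {
              "ByteMatchStatement": {
                "FieldToMatch": {"Method": {}},
                "SearchString": "POST",
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "EXACTLY"
              }
            },
            {
              "ByteMatchStatement": {
                "FieldToMatch": {"UriPath": {}},
                "SearchString": "/login",
                "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                "PositionalConstraint": "STARTS_WITH"
              }
            }
          ]
        }
      }
    }
  },
  "Action": {"Block": {}},
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "HttpPostFloodProtectionMetric"
  }
}

The limit is evaluated per source IP address over a trailing 5-minute window, matching the rate-limit parameters described in the deployment steps in part 2 of this post.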

We also use the AWS Managed Rules CommonRuleSet, which blocks cross-site scripting (XSS) attacks, requests with no user-agent, requests with known bad user-agents, large queries, posts, cookies, and URLs, and known LFI/RFI attacks.

Note: The size constraint rules aren’t recommended for protecting APIs or web applications with large HTTP POSTs or long cookies. Evaluate the possible effects of size constraint rules thoroughly before setting them to block requests.

There is also an AWS Managed Rule for known bad inputs, which is based on threat intelligence gathered by the AWS Threat Research Team. Finally, there is an admin protection rule set that drops requests to known management login pages; it’s not advisable for web applications to have front-door access to admin controls.

At the origin, it’s a good idea to use an application load balancer that also supports AWS WAF. This is where you want to apply application-specific web policies. For example, this is where you would apply rules to protect against a SQL injection attack if your web application uses a SQL database.

In the example AWS CloudFormation template Amazon-CloudFront-Application-Load-Balancer-AMR.yml, for the origin application load balancer, we use AWS Managed Rules for SQL injections, Linux rule set, Unix rule set, PHP rule set, and the WordPress rule set to cover most eventualities customers could be using on their web applications.

For the example solution in part 2 of this post, if the origin application load balancer is in us-east-1, you can use Amazon-CloudFront-Application-Load-Balancer-AMR.yml, which will deploy both web ACLs.

If the origin is not in us-east-1, you can use two example templates which are Amazon-CloudFront-EdgeLayer-AMR.yml for the edge network and ApplicationLayer-Load-Balancer-AMR.yml in the origin region.

Using AWS Managed WAF Rules on public and private application load balancers

Some customers have reasons not to use CloudFront and instead use two application load balancers: one for the public-facing environment serving the web front ends, and an internal load balancer for the application backends.

The following figure shows a deployment that uses two load balancers. A public load balancer works with the edge network WAF to connect to a web front end in a private subnet and an internal load balancer connects to the backend application.
 

Figure 2: Diagram of stacked load balancers

In this use case, we can still use the same structure of an edge network layer and an application layer network, now using only load balancers. When you deploy web applications with a three-tier approach, there will be an external-facing and an internal application load balancer, where you can deploy the same style of policy enforcement, but only on load balancers.

Note: To deploy something similar to this example, you can use the template EdgeLayerALB-PrivateLayerALB-AMR.yml in the relevant regions where the load balancers have been deployed.

Alarms and logging

After deploying these AWS CloudFormation templates, you should consider setting CloudWatch alarms on certain metrics for the HTTP GET and HTTP POST flood rules, as well as the reputation and anonymous IP lists. Customers who are comfortable with development may also opt to use Lambda responders triggered by CloudWatch Events to update a rule from COUNT to BLOCK. Enabling full logging for each web ACL will also give you higher visibility into each request and make potential investigations easier.
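As one example of the alarm approach, the following CloudFormation JSON snippet creates an Amazon SNS topic and a CloudWatch alarm on the BlockedRequests metric for a single rule in a regional web ACL; the web ACL and rule names, Region, and threshold are illustrative and should match your own deployment:

{
  "Resources": {
    "WafAlertTopic": {
      "Type": "AWS::SNS::Topic"
    },
    "PostFloodBlockedRequestsAlarm": {
      "Type": "AWS::CloudWatch::Alarm",
      "Properties": {
        "AlarmDescription": "Unusual volume of blocked HTTP POST requests",
        "Namespace": "AWS/WAFV2",
        "MetricName": "BlockedRequests",
        "Dimensions": [
          {"Name": "WebACL", "Value": "Public-Application-Layer-WebACL"},
          {"Name": "Region", "Value": "us-east-1"},
          {"Name": "Rule", "Value": "HttpPostFloodProtection"}
        ],
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": 100,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [{"Ref": "WafAlertTopic"}]
      }
    }
  }
}

Subscribing an email address or an incident-response function to the topic closes the loop between detection and action.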

Conclusion

The new enhancements of AWS WAF make it easier to manage a multi-layer web application security enforcement policy, using AWS WAF to maintain and deploy web application firewall configurations across different deployment stages and different types of applications. By making use of partner rules or AWS Managed Rules, you can significantly reduce administrative overhead, and with AWS Firewall Manager, you can enforce these policies across all of an organization’s accounts. Part 2 of this post shows one example of how this can be done.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Daniel Cisco Swart

Daniel worked personally on AWS Managed Rules over a number of years during his time with the AWS Threat Research Team. Currently, Daniel works with Security Competency technology partners from the AWS Partner Network as a Partner Solutions Architect, enabling customer success through technical collaboration with AWS’s top security partners.

New third-party test compares Amazon GuardDuty to network intrusion detection systems

Post Syndicated from Tim Winston original https://aws.amazon.com/blogs/security/new-third-party-test-compares-amazon-guardduty-to-network-intrusion-detection-systems/

A new whitepaper is available that summarizes the results of tests by Foregenix comparing Amazon GuardDuty with network intrusion detection systems (IDS) on threat detection of network layer attacks. GuardDuty is a cloud-centric IDS service that uses Amazon Web Services (AWS) data sources to detect a broad range of threat behaviors. Security engineers need to understand how Amazon GuardDuty compares to traditional solutions for network threat detection. Assessors have also asked for clarity on the effectiveness of GuardDuty for meeting compliance requirements, like Payment Card Industry (PCI) Data Security Standard (DSS) requirement 11.4, which requires intrusion detection techniques to be implemented at critical points within a network.

A traditional IDS typically relies on monitoring network traffic at specific network traffic control points, like firewalls and host network interfaces. This allows the IDS to use a set of preconfigured rules to examine incoming data packet information and identify patterns that closely align with network attack types. Traditional IDS have several challenges in the cloud:

  • Networks are virtualized. Data traffic control points are decentralized and traffic flow management is a shared responsibility with the cloud provider. This makes it difficult or impossible to monitor all network traffic for analysis.
  • Cloud applications are dynamic. Features like auto-scaling and load balancing continuously change how a network environment is configured as demand fluctuates.

Most traditional IDS require experienced technicians to maintain their effective operation and to avoid the common issue of an overwhelming number of false positive findings. As a compliance assessor, I have often seen IDS intentionally de-tuned to reduce false positive findings when expert, continuous support isn’t available.

GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon Virtual Private Cloud (Amazon VPC) flow logs, and Amazon Route 53 DNS logs. This gives GuardDuty the ability to analyze event data ranging from AWS API calls to AWS Identity and Access Management (IAM) login events, which is beyond the capabilities of traditional IDS solutions. Monitoring AWS API calls from CloudTrail also enables threat detection for AWS serverless services, which sets it apart from traditional IDS solutions. However, without inspection of packet contents, the question remained, “Is GuardDuty truly effective in detecting network level attacks that more traditional IDS solutions were specifically designed to detect?”

AWS asked Foregenix to conduct a test that would compare GuardDuty to market-leading IDS to help answer this question for us. AWS didn’t specify the attacks or architecture to be implemented within the test. It was left up to the independent tester to determine both the threat space covered by market-leading IDS and how to construct a test for determining the effectiveness of the threat detection capabilities of GuardDuty and traditional IDS solutions, which included open-source and commercial IDS.

Foregenix configured a lab environment to support tests that used extensive and complex attack playbooks. The lab environment simulated a real-world deployment composed of a web server, a bastion host, and an internal server used for centralized event logging. The environment was left running under normal operating conditions for more than 45 days, which allowed all tested solutions to build up a baseline of normal data traffic patterns prior to the anomaly detection testing exercises that followed.

Foregenix determined that GuardDuty is at least as effective at detecting network level attacks as other market-leading IDS. They found GuardDuty simple to deploy, requiring no specialized skills to configure the service to function effectively. Also, with its inherent capability of analyzing DNS requests, VPC flow logs, and CloudTrail events, they concluded that GuardDuty was able to effectively identify threats that other IDS could not natively detect and would require extensive manual customization to detect in the test environment. Foregenix recommended that adding a host-based IDS agent on Amazon Elastic Compute Cloud (Amazon EC2) instances would provide an enhanced level of threat defense when coupled with Amazon GuardDuty.

As a PCI Qualified Security Assessor (QSA) company, Foregenix states that they consider GuardDuty as a qualifying network intrusion technique for meeting PCI DSS requirement 11.4. This is important for AWS customers whose applications must maintain PCI DSS compliance. Customers should be aware that individual PCI QSAs might have different interpretations of the requirement, and should discuss this with their assessor before a PCI assessment.

Customer PCI QSAs can also speak with AWS Security Assurance Services, an AWS Professional Services team of PCI QSAs, to obtain more information on how customers can leverage AWS services to help them maintain PCI DSS Compliance. Customers can request Security Assurance Services support through their AWS Account Manager, Solutions Architect, or other AWS support.

We invite you to download the Foregenix Amazon GuardDuty Security Review whitepaper to see the details of the testing and the conclusions provided by Foregenix.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tim Winston

Tim is a long-time security and compliance consultant and currently a PCI QSA with AWS Security Assurance Services.

How to use trust policies with IAM roles

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/

November 3, 2022: We updated this post to fix some syntax errors in the policy statements and to add additional use cases.

August 30, 2021: This post is currently being updated. We will post another note when it’s complete.


AWS Identity and Access Management (IAM) roles are a significant component of the way that customers operate on Amazon Web Services (AWS). In this post, we will dive into the details of how role trust policies work and how you can use them to restrict how your roles are assumed.

There are several different scenarios where you might use IAM roles on AWS:

  • An AWS service or resource accesses another AWS resource in your account – When an AWS resource needs access to other AWS services, functions, or resources, you can create a role that has appropriate permissions for use by that AWS resource. Services like AWS Lambda and Amazon Elastic Container Service (Amazon ECS) assume roles to deliver temporary credentials to your code that’s running in them.
  • An AWS service generates AWS credentials to be used by devices running outside AWS – AWS IAM Roles Anywhere, AWS IoT Core, and AWS Systems Manager hybrid instances can deliver role session credentials to applications, devices, and servers that don’t run on AWS.
  • An AWS account accesses another AWS account – This use case is commonly referred to as a cross-account role pattern. It allows human or machine IAM principals from one AWS account to assume this role and act on resources within a second AWS account. A role is assumed to enable this behavior when the resource in the target account doesn’t have a resource-based policy that could be used to grant cross-account access.
  • An end user authenticated with a web identity provider or OpenID Connect (OIDC) needs access to your AWS resources – This use case allows identities from Facebook or OIDC providers such as GitHub, Amazon Cognito, or other generic OIDC providers to assume a role to access resources in your AWS account.
  • A customer performs workforce authentication using SAML 2.0 federation – This occurs when customers federate their users into AWS from their corporate identity provider (IdP) such as Okta, Microsoft Azure Active Directory, or Active Directory Federation Services (ADFS), or from AWS Single Sign-On (AWS SSO).

An IAM role is an IAM principal whose entitlements are assumed in one of the preceding use cases. An IAM role differs from an IAM user as follows:

  • An IAM role can’t have long-term AWS credentials associated with it. Rather, an authorized principal (an IAM user, AWS service, or other authenticated identity) assumes the IAM role and inherits the permissions assigned to that role.
  • Credentials associated with an IAM role are temporary and expire.
  • An IAM role has a trust policy that defines which conditions must be met to allow other principals to assume it.

Managing access to IAM roles

Let’s dive into how you can control access to IAM roles by understanding the policy types that you can apply to an IAM role.

There are three circumstances where policies are used for an IAM role:

  • Trust policy – The trust policy defines which principals can assume the role, and under which conditions. A trust policy is a specific type of resource-based policy for IAM roles. The trust policy is the focus of the rest of this blog post.
  • Identity-based policies (inline and managed) – These policies define the permissions that the user of the role is able to perform (or is denied from performing), and on which resources.
  • Permissions boundary – A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions for a role. A principal’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based permissions policies and its permissions boundaries. You can use permissions boundaries to delegate permissions management tasks, such as IAM role creation, to non-administrators so that they can create roles in self-service.

For the rest of this post, you’ll learn how to enforce the conditions under which roles can be assumed by configuring their trust policies.

An example of a simple trust policy

A common use case is when you need to provide a role in account A with access to assume a role in account B. To facilitate this, you add an entry to the trust policy of the role in account B that allows authenticated principals from account A to assume it through the sts:AssumeRole API call.

Important: If you reference :root in an IAM role’s trust policy, you might allow more principals to assume your role than you intended, so it’s a best practice to use the Principal element or conditions to only allow specific principals or paths to assume a role. Later in this post, we show you how to limit this access to more specific principals.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

This trust policy has the same structure as other IAM policies with Effect, Action, and Condition components. It also has the Principal element, but no Resource element. This is because the resource is the IAM role itself. For the same reason, the Action element will only ever be set to relevant actions for role assumption.

Note: The suffix :root in the policy’s Principal element equates to the principals in the account, not the root user of that account.

Using the Principal element to limit who can assume a role

In a trust policy, the Principal element indicates which other principals can assume the IAM role. In the preceding example, 111122223333 represents the AWS account number of the trusted account. This allows a principal in the 111122223333 account with sts:AssumeRole permissions to assume this role.

To allow a specific IAM role to assume a role, you can add that role within the Principal element. For example, the following trust policy would allow only the IAM role LiJuan from the 111122223333 account to assume the role it is attached to.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/LiJuan"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The principals included in the Principal element can be a principal defined within the IAM documentation, and can refer to an AWS or a federated principal. You can’t use a wildcard (“*” or “?”) within the Principal element for a trust policy, other than one case which we will cover later. You must define precisely which principal you are referring to because there is a translation that occurs when you submit your trust policy that ties it to each principal’s ID. For more information, see Why is there an unknown principal format in my IAM resource-based policy?

If an IAM role has a principal from the same account in its trust policy directly, that principal doesn’t need an explicit entitlement in its identity-attached policy to assume the role.

Using the Condition element in a trust policy

The Condition element in a role trust policy sets additional requirements for the Principal trying to assume the role. The Condition element is a flexible way to reduce the set of users that are able to assume the role without necessarily specifying the principals.

Condition elements of role trust policies behave identically to condition elements in identity-based policies and other resource policies on AWS.

Using SAML identity federation on AWS

Federated users from a SAML 2.0 compliant IdP are given permissions to access AWS accounts through the use of IAM roles. The mapping of which enterprise users get which roles is established within the directory used by the SAML 2.0 IdP and is placed inside the signed SAML assertion by the IdP.

The Principal element of a role trust policy for SAML federation contains the ARN of the SAML IdP in the same AWS account. IdPs in other accounts can’t be referenced. Roles assumed by SAML federation can use SAML-specific condition keys in their role trust policy.

A role trust policy for a role to be assumed by SAML places the ARN of the SAML IdP in the Principal element and checks the intended audience (SAML:aud) of the SAML assertion. Setting the audience condition is important because it means that only SAML assertions intended for AWS can be used to assume a role:

{
    "Version": "2012-10-17",
    "Statement": {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithSAML",
      "Principal": {"Federated": "arn:aws:iam::account-id:saml-provider/PROVIDER-NAME"},
      "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}}
    }
  }

The AWS documentation covers creating roles for SAML 2.0 federation in detail. For information about how to manage the role trust policies of roles assumed by SAML from multiple AWS Regions for resiliency, see the blog post How to use regional SAML endpoints for failover.

For federating workforce access to AWS, you can use AWS IAM Identity Center (successor to AWS Single Sign-On) to broker access to IAM roles through SAML. Roles managed by IAM Identity Center can’t have their trust policy modified by IAM directly.

SAML IdPs used in a role trust policy must be in the same account as the role.

Assuming a role with WebIdentity

Roles can also be assumed with tokens issued by web identity providers and OpenID Connect (OIDC) compliant providers.

After you’ve created an OpenID Connect identity provider in your account, you can configure roles to be assumed by that OpenID Connect identity provider.

The following is a trust policy that allows a role to be assumed by the identity provider auth.example.com where the value of the sub claim is equal to Administrator and the aud is equal to MyappWebIdentity.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:oidc-provider/auth.example.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "auth.example.com:sub": "Administrator",
                    "auth.example.com:aud": "MyappWebIdentity"
                }
            }
        }
    ]
}

The condition keys used for roles assumed by OIDC identity providers are always prefixed with the name of the OIDC identity provider (for example, auth.example.com). So to use claims in the ID Token like aud, sub, and amr, they are prefixed to become auth.example.com:aud, auth.example.com:sub, and auth.example.com:amr, respectively, in a trust policy to be evaluated as a condition key. Only ID Token claims listed in the STS documentation can be used in role trust policies as condition keys.

It’s important to set the aud condition in role trust policies to help verify that the tokens being used to assume roles in your AWS accounts are tokens that are intended to be used for that purpose, and are for your application or tenant if your web identity provider is a public or multi-tenant identity provider, such as Google or GitHub.

Amazon Elastic Kubernetes Service (Amazon EKS) clusters have OIDC identity provider capabilities and can be used to assume roles in AWS accounts.

OIDC identity providers used to assume a role must be in the same AWS account as the role.

Limiting role use based on an identifier

At times you might need to give a third-party access to your AWS resources. Suppose that you hired a third-party company, Example Corp, to monitor your AWS account and help optimize costs. To track your daily spending, Example Corp needs access to your AWS resources, so you allow them to assume an IAM role in your account. However, Example Corp also tracks spending for other customers, and there could be a configuration issue in the Example Corp environment that allows another customer to compel Example Corp to attempt to take an action in your AWS account, even though that customer should only be able to take the action in their own account. This is referred to as the cross-account confused deputy problem. This section shows you a way to mitigate this risk.

The following trust policy requires that principals from the Example Corp AWS account, 444455556666, have provided a special string, called an external ID, when making their request to assume the role. Adding this condition reduces the risk that someone from the 444455556666 account will assume this role by mistake. This string is configured by specifying an ExternalID conditional context key.

External IDs should be generated by the third party that assumes your role, like Example Corp, and associated with the assume role calls that Example Corp makes to assume a given customer’s role. By doing this, other Example Corp customers won’t be able to compel Example Corp to assume your roles on their behalf, because they can’t force Example Corp to use your external ID through their tenant even if they become aware of your external ID.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::444455556666:role/ExampleCorpRole"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "ExampleUniquePhrase"
        }
      }
    }
  ]
}

The external IDs should be unique for every customer of a service provider. AWS doesn’t treat external IDs as secrets—they can be seen by anyone with entitlements to view a role’s trust policy.

If you assume roles in your customers’ accounts, it’s a best practice to generate unique external ID values on behalf of your customers and associate them with your customers, and you shouldn’t allow your customers to specify an external ID.

Roles with the sts:ExternalId condition can’t be assumed through the AWS console, unless there is another Allow statement without that condition.

Limiting role use based on IP addresses or CIDR ranges

You can put IP address conditions into a role trust policy to limit the networks from which a role can be assumed. For example, you can limit role assumption to a corporate network or VPN range. The following example trust policy will only allow the role to be assumed if the call is made from within the 203.0.113.0/24 CIDR range.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/IpBoundedRole"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}

By using aws:SourceIp in the trust policy, you limit where the role can be assumed from, but this doesn’t limit where the credentials can be used after they are assumed. To restrict where the credentials can be used, you can use aws:SourceIp as a condition within the principal’s identity-based policy or the service control policies that apply to it. For more information on restricting where credentials can be used from, see Establishing a data perimeter on AWS.

Limiting role use based on tags

You can use IAM tagging capabilities to build flexible and adaptive trust policies. You can use an attribute-based access control (ABAC) model for assuming IAM roles in the same way that you can for accessing objects in an Amazon Simple Storage Service (Amazon S3) bucket. You can build trust policies that only permit principals that have already been tagged with a specific key and value to assume a specific role. The following example trust policy requires that IAM principals in the AWS account 111122223333 have the value of their principal tag department match the value of the IAM role’s tag owningDepartment.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/department": "${aws:ResourceTag/owningDepartment}"
        }
      }
    }
  ]
}

As an example, in the preceding policy, if the role is tagged with an owningDepartment of finance, then only principals within account 111122223333 who have a tag department with a value of finance will be able to assume the role.

When using ABAC, it’s important to have governance around who can set tags on resources, principals, and sessions. If someone can change or modify tags on principals, resources, or sessions, they might be able to access resources that you didn’t intend them to. Principals from AWS accounts outside of your control might have different tag governance practices than your own organization, and you should take this into account when using principal tags as part of cross-account role assumption. You can use tag policies to help govern tags within your organization, and later in this blog post, we show how to manage tags set on assumption by using role trust policies.

For more information, see the Attribute-Based Access Control (ABAC) for AWS page.

Limiting role assumption to only principals within your organization

The vast majority of enterprise customers that we work with use AWS Organizations, which was announced in 2016. This AWS service allows you to create an organizational structure for your accounts by creating logical boundaries, or organizational units (OUs), that allow grouping of AWS accounts that need common guardrails applied. You can use the PrincipalOrgID condition key to limit the use of roles solely to principals within your organization in AWS Organizations.

The following example shows a policy that denies assumption of this role except by AWS services or by principals that are a member of the o-abcd12efg1 organization. This statement can be broadly applied to prevent someone outside your AWS organization from assuming your roles.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalOrgID": "o-abcd12efg1"
                },
                "Bool": {
                    "aws:PrincipalIsAWSService": "false"
                }
            }
        }
    ]
}

In the preceding example, the StringNotEquals operator denies access to this role by a principal that doesn’t belong to a member account of the specified organization.

AWS roles that you intend to use with AWS services need to be able to be assumed by those AWS services. In the preceding example, we added the aws:PrincipalIsAWSService condition key so that an AWS service principal isn’t impacted by the explicit Deny statement. All principals, including AWS services, are still required to have an explicit Allow statement in a role’s trust policy to assume that role. When an AWS service principal makes a request to your resources, the aws:PrincipalIsAWSService condition key is set to true, which means that the preceding Deny statement won’t apply to a service principal, but an Allow statement will let a service principal assume the role.

You can also use the aws:PrincipalOrgPaths condition key to limit role assumption to member accounts within a specific OU of an organization if you want role assumption to be more fine-grained.
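As a sketch following the pattern of the preceding example, the following Deny statement restricts assumption to principals whose organization path falls under a specific OU; the organization, root, and OU IDs below are placeholders for your own values:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "ForAllValues:StringNotLike": {
                    "aws:PrincipalOrgPaths": "o-abcd12efg1/r-ab12/ou-ab12-11111111/*"
                },
                "Bool": {
                    "aws:PrincipalIsAWSService": "false"
                }
            }
        }
    ]
}

Because aws:PrincipalOrgPaths is a multivalued condition key, the negated check uses the ForAllValues set operator, so the Deny applies only when none of the principal’s organization paths fall under the specified OU; the aws:PrincipalIsAWSService exception mirrors the preceding example.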

Enforcing invariants with Deny statements

Only allowing principals in your organization to assume your roles is an example of a security invariant. Security invariants are security principles that you want to apply at all times. Deny statements are useful in trust policies to restrict conditions under which you would never want a role to be assumable. In AWS authorization, the presence of an applicable Deny statement overrides an applicable Allow statement, so a Deny statement whose conditions describe circumstances that should never occur, such as a role being assumed by a principal outside of your organization, is a powerful tool.

Setting the source identity on role sessions to help trace actions in CloudTrail

You can configure a role session to have a source identity when assumed. This is most common when customers federate users into IAM through SAML 2.0 or Web Identity/OpenID Connect to assume roles. You can configure your IdP to set the SourceIdentity attribute on the role session. Setting the source identity causes AWS CloudTrail logs for actions taken by this role session to contain the source identity, so that you can trace actions taken by roles back to the user that assumed them. The SourceIdentity attribute also follows the role session if it assumes another role.

To set a source identity, you need to grant the IdP the sts:SetSourceIdentity entitlement in the role’s trust policy.

{
    "Version": "2012-10-17",
    "Statement": {
      "Effect": "Allow",
      "Action": ["sts:AssumeRoleWithSAML","sts:SetSourceIdentity"],
      "Principal": {"Federated": "arn:aws:iam::111122223333:saml-provider/PROVIDER-NAME"},
      "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}}
    }
  }

In order for a role session that has a SourceIdentity set to assume a second role, it must also have the sts:SetSourceIdentity entitlement in that second role’s trust policy. If it doesn’t, the first role won’t be able to assume the second role.

You can also use the sts:SourceIdentity condition key to enforce that the SourceIdentity attribute that is being set conforms to an expected standard:

            "Condition": {
                "StringLike": {"sts:SourceIdentity": "*@example.org"}
            }

In the preceding Condition element, the role can be assumed only when the SourceIdentity value being set ends with @example.org.
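In practice your IdP sets the attribute during federation, but you can sketch the behavior with the AWS CLI; the role ARN and identity value below are placeholders:

# The caller must be allowed both sts:AssumeRole and
# sts:SetSourceIdentity by the role's trust policy.
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/ExampleCorpRole \
    --role-session-name example-session \
    --source-identity diego@example.org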

Setting tags on role sessions

You can set tags on role sessions, which can then be used in IAM and resource policy authorization decisions. Tags on role sessions are evaluated with the same condition key that tags on IAM roles are: aws:PrincipalTag/TagKey. Tag values that are set when a role is assumed have precedence over tag values that are attached to the role.

If you’re basing authorization on principal tags in your AWS accounts, it’s important that you control who can set the session tags and principal tags in your accounts so that access isn’t granted to unintended parties.

The ability to tag a role session must be granted in a role’s trust policy using the sts:TagSession permission, and you can use conditions and condition keys to restrict which tags can be set to which values.

The following is an example statement for a role trust policy that allows a principal from account 111122223333 to assume the role and requires that three session tags, Project, CostCenter, and Department, are set. The Department tag must have a value of either Engineering or Marketing. The third condition allows the Project and Department tags to be set as transitive when the role is assumed. Because the conditions for the tags are in the same Allow statement as the sts:AssumeRole permission, the tags must be set for the role to be assumed.

        {
            "Effect": "Allow",
            "Action": ["sts:TagSession","sts:AssumeRole"],
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/Project": "*",
                    "aws:RequestTag/CostCenter": "*"
                },
                "StringEquals": {
                    "aws:RequestTag/Department": [
                        "Engineering",
                        "Marketing"
                    ]
                },
                "ForAllValues:StringEquals": {
                    "sts:TransitiveTagKeys": [
                        "Project",
                        "Department"
                    ]
                }
            }
        }

When a role session assumes another role, transitive tags from the calling role session are set to the same value within the subsequent role session. For more information, see Chaining roles with session tags.
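To illustrate how a caller satisfies the preceding trust policy, here's an AWS CLI sketch; the role ARN and tag values are placeholders:

# Sets the three required session tags and marks Project and
# Department as transitive, as the trust policy above requires.
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/ExampleTaggedRole \
    --role-session-name tagged-session \
    --tags Key=Project,Value=Falcon Key=CostCenter,Value=1234 Key=Department,Value=Engineering \
    --transitive-tag-keys Project Department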

You can use Deny statements with the sts:TagSession operation to restrict certain tags from being set. In the following example, attempts to tag a session with an Admin tag would be denied:

{
    "Effect": "Deny",
    "Action": "sts:TagSession",
    "Principal": {"AWS": "*"},
    "Condition": {
        "Null": {
            "aws:RequestTag/Admin": false
        }
    }
}

In the following example statements, we deny tagging operations on role sessions where the Team tag is equal to Admin, but we allow the setting of a different tag value.

{
    "Effect": "Deny",
    "Action": "sts:TagSession",
    "Principal": {"AWS": "*"},
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/Team": "Admin"
        }
    }
},
{
    "Effect": "Allow",
    "Action": "sts:TagSession",
    "Principal": {"AWS": "*"},
    "Condition": {
        "StringLike": {
            "aws:RequestTag/Team": "*"
        }
    }
}

What happens when a role in a trust policy is deleted

When you specify a role in the Principal element of a trust policy, AWS uses that role’s unique RoleId to make the authorization decision. If the ExampleCorpRole role from the earlier policy examples was deleted and re-created in account 111122223333, then the unique RoleId would be different, and the new ExampleCorpRole wouldn’t be able to assume the roles that trusted it in the Principal element.

When a role is deleted, the trust policy of the remaining roles that referenced this now-deleted role will show the unique RoleId it trusted in the Principal element when viewed:

"Principal": {
				"AWS": "AROA1234567123456D"
			}

Because the policy references a now-invalid RoleId, it can't be modified until the invalid RoleId is removed from it. You can retrieve the original role ARNs by looking at CloudTrail logs for UpdateAssumeRolePolicy and CreateRole events for the role and reading the trust policy from the log entries.
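One way to find those events is with the AWS CLI; this is a sketch, and note that lookup-events only returns the last 90 days of management events in a region:

# Look for recent trust policy changes; repeat with
# AttributeValue=CreateRole to see the policy as originally created.
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=UpdateAssumeRolePolicy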

For more information about using the Principal element in policy statements, see IAM role principals.

Principals placed inside the Condition block of a trust policy statement are matched by the role's ARN rather than by its RoleId. The following trust policy statement would allow ExampleCorpRole to assume the role that trusts it, even if the ExampleCorpRole role was deleted and re-created.

  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
    "ArnEquals": {
      "aws:PrincipalArn": "arn:aws:iam::111122223333:role/ExampleCorpRole"
    }
  }
    }
  ]
}

When creating a role trust policy, you should determine the behavior that you want to occur when a role is deleted. Your organization’s security posture might dictate that a deleted and re-created role should no longer be able to assume a role in your account, so using a specific principal in the Principal element is appropriate. Or you might want to allow the role to be assumed in the event that a given principal is deleted and re-created.

If you use the aws:PrincipalArn condition with an account's :root principal to allow role assumption within the same account, the principal doing the assuming must also have the sts:AssumeRole action allowed in its identity-based policy.
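A minimal identity-based policy granting that action might look like the following sketch; the target role name is a placeholder:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111122223333:role/ExampleTargetRole"
        }
    ]
}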

Wildcarding principals

Earlier we noted that wildcards can't be placed in the Principal element of a policy as part of an ARN. However, wildcards can be used in the Condition block of a policy, so wildcarding is possible with the ArnLike and StringLike condition operators. This is useful when you don't know the specific roles, but you do have other controls that limit the path where known roles are created, such as delegated administrator models. The following policy allows a role from account 111122223333 under the /OpsRoles/ path to assume it.

  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
    "ArnLike": {
      "aws:PrincipalArn": "arn:aws:iam::111122223333:role/OpsRoles/*"
    }
  }
    }
  ]
}

It’s a best practice to restrict role assumption to specific paths or principals instead of allowing an entire account where possible.

Using multiple statements

So far, the examples in this post have been single policy statements. Trust policies, like other policies on AWS, can have multiple statements up to the quota for role trust policy length.

You can combine multiple statements together to create complex role trusts like the following, which allows ExampleRole to assume a role and tag the session, but only from the network range 203.0.113.0/24 while forbidding that the Admin tag be set:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:role/ExampleRole"
                ]
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "203.0.113.0/24"
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "sts:TagSession",
            "Principal": {
                "AWS": "*"
            },
            "Condition": {
                "Null": {
                    "aws:RequestTag/Admin": false 
                }
            }
        }
    ]
}

Although it's possible to use multiple statements, it's a best practice to use different IAM roles for different use cases and different AWS services, rather than reusing one role for unrelated purposes or sharing it across services. You should also avoid situations where different principals have access to the same IAM role.

Working with services that deliver role-session credentials

IAM Roles Anywhere, AWS IoT Core, and AWS Systems Manager can deliver AWS role session credentials to devices, servers, and applications running outside of AWS. These services assume the roles on your behalf and deliver the resulting session credentials after the device, server, or application authenticates to the respective service.

For more information about these services and their requirements, see the documentation for each service.

Role chaining

When a role assumes another role, it's called role chaining. Sessions created by role chaining have a maximum lifetime of 1 hour, regardless of the maximum session duration that the role is configured to allow.

Roles that are assumed by other means are not considered role chaining and are not subject to this restriction.
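For example, requesting a longer duration from a chained session fails; the following AWS CLI sketch uses placeholder values:

# When the calling credentials themselves come from an assumed role,
# any --duration-seconds value above 3600 is rejected by AWS STS.
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/SecondRole \
    --role-session-name chained-session \
    --duration-seconds 7200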

Conclusion

In this post, you learned how to craft trust policies for your IAM roles to restrict their assumption by specific principals and under certain conditions, and to combine multiple statements with different conditions. You also learned how to use features like source identity and session tags, how to protect against the cross-account confused deputy problem, and the nuances of the Principal element. You now have the tools that you need to build robust and effective trust policies for roles in your organization.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Liam Wadman

Liam is a Solutions Architect with the Identity Solutions team. When he’s not building exciting solutions on AWS or helping customers, he’s often found in the hills of British Columbia on his Mountain Bike. Liam points out that you cannot spell LIAM without IAM.

Discover sensitive data by using custom data identifiers with Amazon Macie

Post Syndicated from Kayla Jing original https://aws.amazon.com/blogs/security/discover-sensitive-data-by-using-custom-data-identifiers-with-amazon-macie/

As you put more and more data in the cloud, you need to rely on security automation to keep it secure at scale. AWS recently launched Amazon Macie, a fully managed service that uses machine learning and pattern matching to help you detect, classify, and better protect your sensitive data stored in the AWS Cloud.

Many data breaches are not the result of malicious activity from unauthorized users, but rather from mistakes made by authorized users. To monitor and manage the security of sensitive data, you must first be able to identify it. In this post, we show you how to use custom data identifiers with Macie to identify sensitive data. Once you know what’s sensitive, you can start designing security controls that operate at scale to monitor and remediate risk automatically.

Macie comes with a set of managed data identifiers that you can use to discover many types of sensitive data. These are somewhat generic and broadly applicable to many organizations. What makes Macie unique is its ability to help you address specific data needs. Macie enables you to expand your sensitive data detection through the new custom data identifiers. Custom data identifiers can be used to highlight organizational proprietary data, intellectual property, and specific scenarios.

Custom data identifiers in Macie help you find and identify sensitive data based on your own organization's specific needs. In this post, we show you a step-by-step walkthrough of how to define and run custom data identifiers to automatically discover specific sensitive data. Before you begin using custom data identifiers, you need to enable Macie and configure detailed logging, if you haven't already done so.

When to use the Custom Data Identifier resource

To begin, imagine you’re an IT administrator for a manufacturing company that’s headquartered in France. Your company has acquired a few additional local subsidiaries, including an R&D facility in São Paulo, Brazil. The company is migrating to AWS, and in the process is classifying registration information, employee information, and product data into encrypted and non-encrypted storage.

You want to identify sensitive data for the following three scenarios:

  • SIRET-NIC: SIRET-NIC is a unique number assigned to businesses in France. This number is issued by their National Institute of Statistics (INSEE) when a business is registered. A sample file that contains SIRET-NIC information is shown in the following figure. Each record in the file includes the GUID, employee name, employee email, the company name, the date it was issued, and the SIRET-NIC number.

    Figure 1: SIRET-NIC dataset

  • Brazil CPF (Cadastro de Pessoas Físicas – Natural Persons Register): CPF is a unique number assigned by the Brazilian revenue agency to people subject to taxes in the country. Each of your employees residing in the Brazilian office has a CPF.
  • Prototyping naming convention: Your company has products that are publicly available, but also products that are still in the prototyping stage and should be kept confidential. A sample file that contains Brazil CPF numbers and the prototype names is shown in the following figure.

    Figure 2: Brazil CPF and prototype number dataset

Configure the Custom Data Identifier resource in the Macie console

To use custom data identifiers to identify your organization’s sensitive information, you must:

  1. Create custom data identifiers.
  2. Create a job to scan your Amazon Simple Storage Service (Amazon S3) bucket to locate the data patterns that match your custom data identifiers.
  3. Respond to the returned results.

The following steps introduce you to the Custom Data Identifier resource in Macie.

Designing Custom Data Identifiers for use with Amazon Macie

In the previous section, you defined the three scenarios that your company would like to protect: SIRET-NIC, Brazil CPF, and your prototyping naming convention. You now need to create a specific regex pattern for each of these scenarios. There are different syntaxes and dialects of regular expression languages; Amazon Macie supports a subset of the Perl Compatible Regular Expressions (PCRE) library, which you can learn more about in the Regex support in custom data identifiers section of the Macie documentation. Once the patterns are ready, follow the instructions below to create the custom data identifiers.

Creating Custom Data Identifiers in Amazon Macie

  1. Sign in to the AWS Management Console.
  2. Enter Amazon Macie in the AWS services search box.
  3. Choose Amazon Macie.
  4. In the navigation pane on the left-hand side, under Settings, choose Custom data identifiers as shown in the following figure.

    Figure 3: Custom data identifiers console

Create a custom data identifier

  1. Choose Create on the custom data identifier console.
  2. Name: Enter a name for your custom data identifier. Make it descriptive so you know what it does. For example, enter SIRET-NIC for the SIRET-NIC number you use.
  3. Description: Enter a description of the custom data identifier.
  4. Regular expression (regex): Define the pattern you want to identify. Use a regular expression (regex) to create the desired pattern. For example, a SIRET-NIC number contains 14 digits—9 numbers followed by a hyphen and then 5 more numbers. The first part, 9 numbers, can stay together or be separated by spaces into 3 groups of 3. The specific regex pattern for this is \b(\d{3}\s?){2}\d{3}\-\d{5}\b
  5. Keywords: Define expressions that identify the text to match. The SIRET-NIC number itself is publicly accessible information. In this case, however, you want to flag the records of companies registered during the month the acquisition happened (April 2020) so that the information doesn't leak to your competitors. The keywords here are therefore all the dates in April.
  6. (Optional) Ignore words: Use this box to enter text that you want to be ignored. In this example scenario, you know your security training materials always use the example SIRET-NICs 12345789-12345 and 000000000-00000. You can enter these values here so that your security training materials aren't flagged as sensitive data containing SIRET-NICs.
  7. Maximum match distance: Use this box to define the proximity between the result and the keywords. If you enter 20, Macie will provide results that include the specified keyword and 20 characters on either side of it.

Note: Do not select Submit yet. After entering the settings and before selecting Submit, you should test your custom data identifier with sample data to confirm that it works.

With all the attributes set, your console will look like what is shown in Figure 4.

Figure 4: SIRET-NIC custom data identifier creation

Test your SIRET-NIC custom data identifier

Use the Evaluate section on the right-hand panel of the Macie console to confirm that the regex pattern and other configurations for your custom data identifier are correct.

Follow the steps below to use the Evaluate section.

  1. Enter test data in the sample data box.
  2. Select Submit. If the configuration is correct, there will be one match per record in the file, and your custom data identifier is ready. The following figure is an example of the Evaluate section using test data. The test data has 3 records; each record has 6 fields: GUID, employee name, employee email, company name, date the SIRET-NIC was issued, and the SIRET-NIC number.

    Figure 5: Evaluate, showing sample data

  3. After verifying your SIRET-NIC custom data identifier works in the Evaluate section, now select Submit on the New custom data identifier window to create the custom data identifier.
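If you prefer to script this step, you can create the same identifier with the AWS CLI. The following sketch mirrors the console walkthrough; the keyword list is abbreviated to two dates for readability:

aws macie2 create-custom-data-identifier \
    --name "SIRET-NIC" \
    --description "SIRET-NIC numbers issued in April 2020" \
    --regex '\b(\d{3}\s?){2}\d{3}\-\d{5}\b' \
    --keywords "April 1, 2020" "April 2, 2020" \
    --ignore-words "12345789-12345" "000000000-00000" \
    --maximum-match-distance 20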

Create a Brazil CPF Custom Data Identifier

Congrats on creating your first custom data identifier! Now use the same steps to create and test custom data identifiers for the Brazil CPF and prototyping naming convention scenarios. The Brazil CPF number usually shows up in the format of 000.000.000-00.

Use the following values for the Brazil CPF scenario, as shown in the following figure:

  • Name: Brazil CPF
  • Description: The format for Brazil CPF in our sample data is 000.000.000-00
  • Regular expression: \b(\d{3}\.){2}\d{3}\-\d{2}\b

    Figure 6: Brazil CPF custom data identifier

Create a Prototype Name Custom Data Identifier

Assume that your company has a strict and regular naming scheme for prototype part numbers: P, followed by a hyphen, and then 2 capital letters and 4 digits, for example P-AB1234. You want to identify objects in S3 that contain references to private prototype parts. This is a short pattern, so if you're not careful it will cause Macie to flag objects that don't actually contain one of your prototype numbers. We suggest adding \b at the beginning and the end of the regular expression. The \b symbol means a word boundary, and word boundaries are whitespace, punctuation, or other characters that aren't letters or numbers. With \b, you limit the pattern so that it only matches when the entire word matches. For example, P-AB1234 matches the pattern, but STEP-AB123456 and P-XY123 do not, as shown in the check below. This gives you finer-grained control and reduces false positives.
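You can sanity-check this boundary behavior locally with any PCRE-compatible tool before entering the pattern in Macie. For example, with a PCRE-enabled build of grep:

# Only the exact prototype format matches; STEP-AB123456 and
# P-XY123 are filtered out by the \b anchors and digit count.
printf 'P-AB1234\nSTEP-AB123456\nP-XY123\n' | grep -P '\bP\-[A-Z]{2}\d{4}\b'
# prints: P-AB1234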

Use the following values for the prototyping name scenario, as shown in the following figure:

  • Name: Prototyping Naming
  • Description: Any prototype name starting with P is private. The format for a private prototype name is P- followed by 2 capital letters and 4 numbers
  • Regular expression: \bP\-[A-Z]{2}\d{4}\b

    Figure 7: Prototyping naming custom data identifier

You should now see a page like the following figure, indicating that the SIRET-NIC, Brazil CPF, and Prototyping Naming custom data identifiers are successfully configured.

Figure 8: Successfully configured custom data identifier

Set up a Test Bucket to Demonstrate Macie

Before we can see Macie do its work, we have to create a bucket with some test data that we can scan. We’ve provided some sample data files that you can download. Follow these instructions to create a test bucket and load our test data into the test bucket.

  1. Download the sample data and unzip it.
  2. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  3. Choose Create bucket. The Create bucket wizard opens.
  4. In Bucket name, enter a DNS-compliant name for your bucket. The bucket name must:
    • Be unique across all of Amazon S3.
    • Be between 3 and 63 characters long.
    • Not contain uppercase characters.
    • Start with a lowercase letter or number.

    We created a bucket called bucketformacieuse; you have to choose another name because this one is already taken by us.

  5. In Region, choose the AWS Region where you want the bucket to reside.
  6. Select Create to finish creating the bucket.
  7. Open the bucket you just created and upload the two sample data files you downloaded in step 1.

Use Macie to create a job to scan your data

Now you can create a job to scan your Amazon S3 bucket to detect and locate the data patterns defined in the SIRET-NIC, Brazil CPF, and Prototyping Naming custom data identifiers.

To create a job

  1. In the navigation pane, choose Jobs, and then select Create Job on the upper right.
  2. Select Amazon S3 buckets: Select the S3 bucket you want to analyze. In this case, we are using the bucket previously created, bucketformacieuse.
  3. Review Amazon S3 buckets: Verify that you selected the S3 bucket you want the job to scan and analyze.
  4. Scope: Select how often you want the job to run: either one time or on a schedule. For this example, choose the One-time job option. If you choose a scheduled job, you can define how often you want your job to scan your Amazon S3 bucket.
  5. Custom data identifiers: Select the 3 custom data identifiers you created to be associated with this job, and then select Next. This is shown in the following figure.

    Figure 9: Select your custom data identifiers

  6. Name and description: Enter a name and description for the job.
  7. Review and create: Review and verify all your settings, and then select Create.

You now have a job in Macie to scan the Amazon S3 buckets you’ve chosen using the 3 custom data identifiers you created. More information about creating jobs is available in Running sensitive data discovery jobs in Amazon Macie.
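Jobs can also be created programmatically. The following AWS CLI sketch assumes you captured the three identifier IDs from the output of your create-custom-data-identifier calls; the IDs shown are placeholders:

aws macie2 create-classification-job \
    --job-type ONE_TIME \
    --name "custom-identifier-scan" \
    --custom-data-identifier-ids "id-siret" "id-cpf" "id-prototype" \
    --s3-job-definition '{"bucketDefinitions":[{"accountId":"111122223333","buckets":["bucketformacieuse"]}]}'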

Respond to results

Macie helps you stay secure only when you respond effectively to the findings that it produces. For this example, we'll show you how to review your findings manually. You can look at your findings by bucket, type, or job, or see a collective summary of all findings. In this example, let's look at all findings.

To review your results

  1. In the navigation pane on the left-hand side, choose Findings. Findings include the severity, the type, the resources affected, and when the findings were last updated.
  2. The following figure shows an example of the results you might see on the findings page. There are two findings for the selected job. The compagnie_français.csv and the empresa_brasileira.csv files contain the custom data identifiers that you created earlier and added to the job.

    Figure 10: Findings

  3. Let's look at the details of one of the findings so you can review the results. From the findings page, select the file that contains your custom data identifier for the Brazil CPF: empresa_brasileira.csv. The number of custom data identifiers found in the document is shown in the Result section on the right, as shown in the following figure.

    Figure 11: Findings detail page for the Brazil CPF custom data identifiers

  4. Now look at the findings details for the compagnie_français.csv file. It shows the number of custom data identifiers found in the file. In this case Macie found 13 SIRET-NIC numbers as shown in the following figure.

    Figure 12: Findings page for the French company file

  5. If you configured detailed logging, the results will be saved in the Amazon S3 bucket you specified. The S3 bucket location can be found in the Details section after Detailed result location as shown in the preceding figure.

Now that you’ve used Macie and the Custom Data Identifiers resource to obtain these findings, you can identify what data to place in encrypted storage, and what can be placed in non-encrypted storage when migrating to AWS. Macie and custom data identifiers provide an automated tool to help you enhance protection of your sensitive data by providing you the information to help detect and classify your data in the AWS Cloud.

Using Macie at Scale

Custom data identifiers help you tell Macie what to look for. As you move more and more data to the cloud, you'll need to make new identifiers and new rules. As your rules and identifiers grow, you'll need to create automation that responds to things that are found. For example, perhaps an AWS Lambda function turns on encryption in a bucket when sensitive data is found in that bucket. Or perhaps a function automatically applies tags to buckets where sensitive data is found, and those buckets and their owners start to appear on reports for audit and compliance. Once you've done this at small scale, think about how you will automate responses at larger scale.
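The remediation such a function performs can be as simple as turning on default encryption for the offending bucket. A sketch of the equivalent CLI call, using the bucket from this walkthrough:

# A Lambda function would make the equivalent SDK call after
# receiving a Macie finding for this bucket.
aws s3api put-bucket-encryption \
    --bucket bucketformacieuse \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'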

Conclusion

The new Custom Data Identifier resource in the newly enhanced Macie can help you detect, classify, and protect sensitive data types unique to your organization. This post focused on the functionality and use of custom data identifiers to automatically discover sensitive data stored in Amazon S3. You can also review the managed data identifiers to see a list of personally identifiable information (PII) that Macie can detect by default. Visit What is Amazon Macie? to learn more.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Macie forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kayla Jing

Kayla is a Solutions Architect at Amazon Web Services based out of Seattle. She has experience in data science with a focus on Data Analytics and Machine Learning.

Author

Joshua Choung

Joshua is a Solutions Architect based out of Seattle. He works with customers to provide architectural and technical guidance and training on their AWS cloud journey.

Author

Laura Reith

Laura is a Solutions Architect at Amazon Web Services. Before AWS, she worked as a Solutions Architect in Taiwan focusing on physical security and retail analytics.

How to think about cloud security governance

Post Syndicated from Paul Hawkins original https://aws.amazon.com/blogs/security/how-to-think-about-cloud-security-governance/

When customers first move to the cloud, their instinct might be to build a cloud security governance model based on one or more regulatory frameworks that are relevant to their industry. Although this can be a helpful first step, it’s also critically important that organizations understand what the control objectives for their workloads should be.

In this post, we discuss what you need to do both organizationally and technically with Amazon Web Services (AWS) to build an efficient and effective governance model. People who are taking their first steps in cloud can use this post to guide their thinking. It can also act as useful context for folks who have been running in the cloud for a while to evaluate their current governance approach.

But before you can build that model, it’s important to understand what governance is and to consider why you need it. Governance is how an organization ensures the consistent application of policies across all teams. The best way to implement consistent governance is by codifying as much of the process as possible. Security governance in particular is used to support business objectives by defining policies and controls to manage risk.

Moving to the cloud provides you with an opportunity to deliver features faster, react to the changing world in a more agile way, and return some decision making to the hands of the people closest to the business. In this fast-paced environment, it's important to have a way to maintain consistency, scalability, and security. This is where a strong governance model helps.

Creating the right governance model for your organization may seem like a complex task, but it doesn’t have to be.

Frameworks

Many customers use a standard framework that's relevant to their industry to inform their decision-making process. Some frameworks that are commonly used to develop a security governance model include the NIST Cybersecurity Framework (CSF), the Information Security Registered Assessors Program (IRAP), the Payment Card Industry Data Security Standard (PCI DSS), and ISO/IEC 27001:2013.

Some of these standards provide requirements that are specific to a particular regulator or region, while others are more widely applicable; choose one that fits the needs of your organization.

While frameworks are useful for setting the context of a security program and giving guidance on governance models, you shouldn't build either one only to check boxes on a particular standard. It's critical that you build for security first and then use the compliance standards as a way to demonstrate that you're doing the right things.

Control objectives

After you’ve selected a framework to use, the next considerations are controls. A control is a technical- or process-based implementation that’s designed to ensure that the likelihood or consequences of an identified risk are reduced to a level that’s acceptable to the organization’s risk appetite. Examples of controls include firewalls, logging mechanisms, access management tools, and many more.

Controls will evolve over time; sometimes they do so very quickly in the early stages of cloud adoption. During this rapid evolution, it’s easy to focus purely on the implementation of a control rather than the objective of it. However, if you want to build a robust and useful governance model, you must not lose sight of control objectives.

Consider the example of the firewall. When you use a firewall, you implement a control. The objective is to make sure that only traffic that should reach your environment is able to reach it. Although a firewall is one way to meet this objective, you can achieve the same outcome with a layered approach using Amazon Virtual Private Cloud (Amazon VPC) Security Groups, AWS WAF and Amazon VPC network access control lists (ACLs). Splitting the control implementation into multiple places can enable workload owners to have greater flexibility in how they configure resources while the baseline posture is delivered automatically.

Not all areas of a business necessarily have the same cloud maturity level, or use the same methods to deploy or run workloads. As a security architect, your job is to help those different parts of the business deliver outcomes in the way that is appropriate for their maturity or particular workload.

The best way to help drive this goal is for the security part of your organization to clearly communicate the necessary control objectives. As a security architect, it’s easier to have a discussion about the things that need tweaking in an application if the objectives are well communicated. It is much harder if the workload owner doesn’t know they have to meet certain security expectations.

What is the job of security?

At AWS, we talk to customers across a range of industries. One thing that consistently comes up in conversation is how to help customers understand the role of their security team in a distributed cloud-aware environment. The answer is always the same: we as security people are here to help the business deploy and run applications securely. Our job is to guide and educate the rest of the organization on the best way to meet the business objectives while meeting the security, risk, and compliance requirements.

So how do you do this?

Technology and culture are both important to an organization’s security posture, and they enable each other. AWS is a good example of an organization that has a strong culture of security ownership. One thing that all customers can take away from AWS: security is everyone’s job. When you understand that, it becomes easier to build the mechanisms that make the configuration and operation of appropriate security control objectives a reality.

The cloud environment that you build goes a long way to achieving this goal in two key ways. First, it provides guardrails and automated guidance for people building on the platform. Second, it allows solutions to scale.

One of the challenges organizations encounter is that there are more developers than there are security people. The traditional approach of point-in-time risk and control assessments performed by a human looking at an architecture diagram doesn’t scale. You need a way to scale that knowledge and capability without increasing the number of people. The best way to achieve this is to codify as much as possible, early in the build and release process.

One way to do this is to run the AWS platform as a product in its own right. Team members should be able to submit feature requests, and there should be metrics on the features that are enabled through the platform. The more security capability that teams building workloads can inherit from the platform, the less they have to implement at the workload level and the more time they can spend on product features. There will always be some security control objectives that can only be delivered by specific configuration at the workload level; this should build on top of what’s inherited from the cloud platform. Your security team and the other teams need to work together to make sure that the capabilities provided by the cloud platform are available to help people build and release securely.

One part of the governance model that we like to highlight is the concept of platform onboarding. The idea of this part of the governance model is to quickly and consistently get to a baseline set of controls that enable you to use a service safely in a particular environment. A good example here is to give developers access to evaluate a service in an experimentation account. To support this process, you don’t want to spend a long time building controls for every possible outcome. The best approach is to take advantage of the foundational controls that are delivered by the cloud platform as the starting point. Things like federation, logging, and service control policies can be used to provide guard rails that enable you to use services quickly. When the services are being evaluated, your security team can work together with your business to define more specific controls that make sense for the actual use cases.

AWS Well-Architected Framework

The cloud platform you use is the foundation of many of the security controls. These guard rails of federation, logging, service control polices, and automated response apply to workloads of all types. The security pillar in the AWS Well-Architected Framework builds on other risk management and compliance frameworks, provides you with best practices, and helps you to evaluate your architectures. These best practices are a great place to look for what you should do when building in the cloud. The categories—identity and access management, detection, infrastructure protection, data protection, and incident response—align with the most important areas to focus on when you build in AWS.

For example, identity is a foundational control in a cloud environment. One of the AWS Well-Architected security best practices is “Rely on a centralized identity provider.” You can use AWS Single Sign-On (AWS SSO) for this purpose or an equivalent centralized mechanism. If you centralize your identity provider, you can perform identity lifecycle management on users, provide them with access to only the resources that are required, and support users who move between teams. This can apply across the multiple AWS accounts in your AWS environment. AWS Organizations uses service control policies to enable you to use a subset of AWS services in particular environments; this is an identity-centric way of providing guard rails.
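As an illustration, a service control policy that limits an experimentation environment to an approved set of services might look like the following sketch; the service list is purely illustrative, not a recommendation:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LimitToApprovedServices",
            "Effect": "Deny",
            "NotAction": [
                "ec2:*",
                "s3:*",
                "cloudwatch:*",
                "logs:*",
                "iam:*"
            ],
            "Resource": "*"
        }
    ]
}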

In addition to federating users, it’s important to enable logging and monitoring services across your environment. This allows you to generate an event when something unexpected happens, such as a user trying to call AWS Key Management Service (AWS KMS) to decrypt data that they should have access to. Securely storing logs means that you can perform investigations to determine the causes of any issues you might encounter. AWS customers who use Amazon GuardDuty and AWS CloudTrail, and have a set of AWS Config rules enabled, have access to security monitoring and logging capabilities as they build their applications.

The layer cake model

When you think about cloud security, we find it useful to use the layer cake as a good mental model. The base of the cake is the understanding of the below-the-line capability that AWS provides. This includes self-serving the compliance documentation from AWS Artifact and understanding the AWS shared responsibility model.

The middle of the cake is the foundational controls, including those described previously in this post. This is the most important layer, because it’s where the most controls are and therefore where the most value is for the security team. You could describe it as the “solve it once, consume it many times” layer.

The top of the cake is the application-specific layer. This layer includes things that are more context dependent, such as the correct control objectives for a certain type of application or data classification. The work in the middle layer helps support this layer, because the middle layer provides the mechanisms that make it easier to automatically deliver the top layer capability.

The middle and top layers are not just technology layers. They also include the people and process parts of the equation. The technology is just there to support the processes.

One thing to be aware of is that you shouldn’t try to define every possible control for a service before you allow your business to use the service. Make use of the various environments in your organization—experimenting, development, testing, and production—to get the services in the hands of developers as quickly as possible with the minimum guardrails to avoid accidental misconfiguration. Then, use the time when the services are being assessed to collaborate with the developers on control implementation. Control implementations can then be rolled into the middle layer of the cake, and the services can be adopted by other parts of the business.

This is also the ideal time to apply practical threat modelling techniques so you can understand what threats and risks you must address. Working with your business to define recommended implementation patterns also helps provide context for how services are typically used. This means you can focus on the controls that are most relevant.

The architecture, platform, or cloud center of excellence (CoE) teams can help at this stage. They can likely make a quick determination of whether an AWS service fits in with your organization’s architectural direction. This quick triage helps the security team focus their efforts in helping get services safely in the hands of the business without being seen as blocking adoption. A good mechanism for streamlining the use of new services is to make sure the backlog is well communicated, typically on a platform team wiki. This helps the security and non-security parts of your organization prioritize their time on services that deliver the most business value. A consistent development approach means that the services that are used are probably being used in more places across the organization. This helps your organization get the benefits of scale as consistent approaches to control implementation are replicated between teams.

Simplicity, metrics, and culture

The world moves fast. You can’t just define a security posture and control objectives, and then walk away. New services are launched that make it easier to do more complex things, business priorities change, and the threat landscape evolves. How do you keep up with all of it?

The answer is a combination of simplicity, metrics, and culture.

Simplicity is hard, but useful. For example, if you have 100 application teams all building in a different way, you have a large number of different configurations that you must ensure are sensibly defined. Ideally, you do this programmatically, which means that the work to define and maintain that set of security controls is significant. If you have 100 application teams using only 10 main patterns, it’s easier to build controls. This has the added benefit of reducing the complexity at the operations end, which applies to both the day-to-day operations and to incident responses. Simplification of your control environment means that your monitoring is less complex, troubleshooting is easier, and people have time to focus on the development of new controls or processes.

Metrics are important because you can make informed decisions based on data. A good example of the usefulness of metrics is patching. Patching is one of the easiest ways to improve your security posture. Having metrics on patch age, presented where this information is most important in your environment, enables you to focus on the most valuable areas. For example, infrastructure on your edge is more important to keep patched than infrastructure that is behind multiple layers of controls. You should patch everything, but you need to make it easy for application teams to do so as part of their build and release cycles. Exposing metrics to teams and leadership helps your organization learn from high performing areas in the business. These could be teams that are regularly meeting the patching expectations or have low instances of needing to remediate penetration testing findings. Metrics and data about your control effectiveness enables you to provide assurance internally and externally that you’re meeting your control objectives.

This brings us to culture. Security as an enabler is something that we think is the most important concept to take away from this post. You must build capabilities that enable people in your organization to have the secure configuration or design choice be the easiest option. This is the role of security. You should also make sure that, when there are problems, your security team works with the business to help everyone learn the cause and improve for next time.

AWS has a culture that uses trouble ticketing for everything. If our employees think they have a security problem, we tell them to open a ticket; if they're not sure they have a security problem, we tell them to open a ticket anyway to get guidance. This kind of culture encourages people to communicate, which helps us identify and fix issues early. Issues that turn out to be less severe than first thought can be downgraded quickly. This culture of ticketing gives us data to inform what we build, which helps people be more secure. You can get started with a system like this in your own environment, or look to extend the capability if you've already started.

Take our recommendation to turn on GuardDuty across all your accounts. We recommend that the resulting high and medium alerts are sent to a ticketing system. Look at how you resolve those issues and use that to prioritize the next two weeks of work. Now you can build automation to fix the issues and, more importantly, build to prevent the issues from happening in the first place. Ask yourself, “What information did I need to diagnose the problem?” Then, build automation to enrich the findings so your tickets have that context. Iterate on the automation to understand the context. For example, you may want to include information to show whether the environment is production or non-production.
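A sketch of an Amazon EventBridge event pattern that routes those findings to a ticketing integration follows; GuardDuty reports medium severity as 4.0 and above, so the threshold here is one common choice rather than a requirement:

{
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
        "severity": [{ "numeric": [">=", 4] }]
    }
}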

Note that having production-like controls in non-production environments reduces the chance of deployment failures. It also gets teams used to working within the security guardrails. This increased rigor earlier in the process helps your change management team, too.

Summary

It doesn’t matter what security frameworks or standards you use to inform your business, and you might not even align with a particular industry standard. What does matter is building a governance model that empowers the people in your organization to consistently make good security decisions and provides the capability for your security team to enable this to happen. To get started or continue to evolve your governance model, follow the AWS Well-Architected security best practices. Then, make sure that the platform you implement helps you deliver the foundational security control objectives so that your business can spend more of its time on the business logic and security configuration that is specific to its workloads.

The technology and governance choices you make are the first step in building a positive security culture. Security is everyone’s job, and it’s key to make sure that your platform, automation, and metrics support making that job easy.

The areas of focus we’ve talked about in this post are what allow security to be an enabler for business and to ultimately help you better help your customers and earn their trust with everything you do.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Paul Hawkins

Paul helps customers of all sizes understand how to think about cloud security so they can build the technology and culture where security is a business enabler. He takes an optimistic approach to security and believes that getting the foundations right is the key to improving your security posture.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

How to import PFX-formatted certificates into AWS Certificate Manager using OpenSSL

Post Syndicated from Praveen Kumar Jeyarajan original https://aws.amazon.com/blogs/security/how-to-import-pfx-formatted-certificates-into-aws-certificate-manager-using-openssl/

In this blog post, we show you how to import PFX-formatted certificates into AWS Certificate Manager (ACM) using OpenSSL tools.

Secure Sockets Layer and Transport Layer Security (SSL/TLS) certificates are small data files that digitally bind a cryptographic key pair to an organization’s details. The key pair is used to secure network communications and establish the identity of websites over the internet and on private networks. These certificates are usually issued by a trusted certificate authority (CA). A CA acts as a trusted third party—trusted both by the subject (owner) of the certificate and by the party relying upon the certificate. The format of these certificates is specified by the X.509 or Europay, Mastercard, and Visa (EMV) standards. SSL/TLS certificates issued by a trusted CA are usually encoded in Personal Information Exchange (PFX) or Privacy-Enhanced Mail (PEM) format.

ACM lets you easily provision, manage, and deploy public and private SSL/TLS certificates for use with Amazon Web Services (AWS) and your internal connected resources. Certificates can be imported from outside AWS or created using AWS tools. Certificates can be used with ACM-integrated AWS services, such as Elastic Load Balancing, Amazon CloudFront distributions, and Amazon API Gateway.

To import a self–signed SSL/TLS certificate into ACM, you must provide the certificate and its private key in PEM format. To import a signed certificate, you must also include the certificate chain in PEM format. Prerequisites for Importing Certificates provides more detail.

Sometimes, the trusted CA issues the certificate, private key, and certificate chain details in PFX format. In this post, we show you how to convert a PFX-encoded certificate into PEM format and then import it into ACM.

Solution

The following solution converts a PFX-encoded certificate to PEM format using the OpenSSL command line tool. The certificate is then imported into ACM.

Figure 1: Use the OpenSSL Toolkit to convert the certificate, then import the certificate into ACM

The solution has two parts, shown in the preceding figure:

  1. Use the OpenSSL Toolkit to convert the PFX-encoded certificate into PEM format.
  2. Import the PEM certificate into ACM.

Prerequisites

We use the OpenSSL toolkit to convert a PFX encoded certificate to PEM format. OpenSSL is an open source toolkit for manipulating cryptographic files. It’s also a general-purpose cryptography library.

For this post, we use a password-protected PFX-encoded file—website.xyz.com.pfx—with an X.509 standard CA-signed certificate and 2048-bit RSA private key data.

  1. Download and install the OpenSSL toolkit.
  2. Add the OpenSSL binaries location to your system PATH variable, so that the binaries are available for command line use.

Convert the PFX encoded certificate into PEM format

Run the following commands to convert a PFX-encoded SSL certificate into PEM format. The procedure requires the PFX-encoded certificate and the passphrase used for encrypting it.

The procedure converts the PFX-encoded signed certificate file into three files in PEM format.

  • cert-file.pem – PEM file containing the SSL/TLS certificate for the resource.
  • withoutpw-privatekey.pem – PEM file containing the private key of the certificate with no password protection.
  • ca-chain.pem – PEM file containing the root certificate of the CA.

To convert the PFX encoded certificate

  1. Use the following command to extract the certificate private key from the PFX file. If your certificate is secured with a password, enter it when prompted. The command generates a PEM-encoded private key file named privatekey.pem. Enter a passphrase to protect the private key file when prompted to Enter a PEM pass phrase.
    
    openssl pkcs12 -in website.xyz.com.pfx -nocerts -out privatekey.pem
    

     

    Figure 2: Prompt to enter a PEM pass phrase

  2. The previous step generates a password-protected private key. To remove the password, run the following command. When prompted, provide the passphrase created in step 1. If successful, you will see writing RSA key.
    
    openssl rsa -in privatekey.pem -out withoutpw-privatekey.pem
    

     

    Figure 3: Writing RSA key

  3. Use the following command to transfer the certificate from the PFX file to a PEM file. This creates the PEM-encoded certificate file named cert-file.pem. If successful, you will see MAC verified OK.
    
    openssl pkcs12 -in website.xyz.com.pfx -clcerts -nokeys -out cert-file.pem
    

     

    Figure 4: MAC verified OK

  4. Finally, use the following command to extract the CA chain from the PFX file. This creates the CA chain file named ca-chain.pem. If successful, you will see MAC verified OK.
    
    openssl pkcs12 -in website.xyz.com.pfx -cacerts -nokeys -chain -out ca-chain.pem
    

     

    Figure 5: MAC verified OK

When the preceding steps are complete, the PFX-encoded signed certificate file is split and returned as three files in PEM format, shown in the following figure. To view the list of files in a directory, enter the command dir in Windows or type the command ls -l in Linux.

  • cert-file.pem
  • withoutpw-privatekey.pem
  • ca-chain.pem

    Figure 6: PEM-formatted files

Import the PEM certificates into ACM

Use the ACM console to import the PEM-encoded SSL certificate. You need the PEM files containing the SSL certificate (cert-file.pem), the private key (withoutpw-privatekey.pem), and the root certificate of the CA (ca-chain.pem) that you created in the previous procedure.

To import the certificates

  1. Open the ACM console. If this is your first time using ACM, look for the AWS Certificate Manager heading and select the Get started button.
  2. Select Import a certificate.
  3. Add the files you created in the previous procedure:
    1. Use a text-editing tool such as Notepad to open cert-file.pem. Copy the lines beginning at -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----. Paste them into the Certificate body text box.
    2. Open withoutpw-privatekey.pem. Copy the lines beginning at -----BEGIN RSA PRIVATE KEY----- and ending with -----END RSA PRIVATE KEY-----. Paste them into the Certificate private key text box.
    3. For Certificate chain, copy and paste the lines starting at -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE----- from the file ca-chain.pem.

      Figure 7: Add the files to import the certificate

  4. Select Next and add tags for the certificate. Each tag is a label consisting of a key and value that you define. Tags help you manage, identify, organize, search for, and filter resources.
  5. Select Review and import.
  6. Review the information about your certificate, then select Import.
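Alternatively, you can perform the same import with the AWS CLI from the directory that contains the three PEM files:

aws acm import-certificate \
    --certificate fileb://cert-file.pem \
    --private-key fileb://withoutpw-privatekey.pem \
    --certificate-chain fileb://ca-chain.pem

The command returns the ARN of the imported certificate, which you can then reference from ACM-integrated services.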

Conclusion

In this post, we discussed how you can use OpenSSL tools to import a PFX-encoded SSL/TLS certificate into ACM. You can use the imported certificate with any ACM-integrated AWS service. ACM makes it easier to set up SSL/TLS for a website or application on AWS. ACM can replace many of the manual processes usually associated with using and managing SSL/TLS certificates. ACM can also manage renewals, which can help you avoid downtime due to misconfigured, revoked, or expired certificates. You can renew an imported certificate by obtaining and importing a new certificate from your certificate issuer, or you can request a new certificate from ACM.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Praveen Kumar Jeyarajan

PraveenKumar is a DevOps Consultant in AWS supporting enterprise customers and their journey to the cloud. Before his work on AWS and cloud technologies, PraveenKumar focused on solving myriad technical challenges using the latest technologies. Outside of work, he enjoys watching movies and playing tennis.

Author

Viyoma Sachdeva

Viyoma is a DevOps Consultant in AWS supporting global customers and their journey to the cloud. Outside of work, she enjoys watching series and spending time with her family.