Tag Archives: Advanced (300)

Centrally manage AWS WAF (API v2) and AWS Managed Rules at scale with Firewall Manager

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/centrally-manage-aws-waf-api-v2-and-aws-managed-rules-at-scale-with-firewall-manager/

Since AWS Firewall Manager was introduced in 2018, it has evolved with many more features and today also supports the newest version of AWS WAF, as well as the latest AWS WAF APIs (AWS WAFV2), and AWS Managed Rules for AWS WAF. (Note that the original AWS WAF APIs are still available and supported under the name AWS WAF Classic. Firewall Manager already supported AWS WAF Classic and continues to do so.) In this blog, we walk you through the steps of setting up Firewall Manager policies for AWS WAF and highlight some of the options available. We also walk through how to use the recently launched feature that enables centralized logging for your AWS WAF policies. In addition, we share an AWS CloudFormation template that you can use to set up Firewall Manager policies, AWS WAF rule groups, and the related AWS WAF rules (both custom and managed rules). Before we get into the content of the blog, here’s some background information you should know.

AWS WAF rules define how to inspect web requests and what to do when a web request matches the inspection criteria. Firewall Manager simplifies the administration of AWS WAF rules at scale across multiple accounts.

Background

Web ACL capacity units

The web ACL capacity unit (WCU) is a new concept that we introduced to AWS WAF. WCU is a measurement that is used to calculate and control the operating resources that are used to run your rules with your web access control list (web ACL).
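
To estimate how many WCUs a rule or set of rules would consume before you add them to a rule group or web ACL, you can call the AWS WAF CheckCapacity API. Here's a minimal sketch using the AWS CLI; the rule shown (a simple geo match) is only an illustrative example, and REGIONAL scope is an assumption (use CLOUDFRONT for Amazon CloudFront distributions):

    aws wafv2 check-capacity \
      --scope REGIONAL \
      --rules '[{
        "Name": "geo-match-example",
        "Priority": 0,
        "Action": {"Block": {}},
        "VisibilityConfig": {
          "SampledRequestsEnabled": true,
          "CloudWatchMetricsEnabled": true,
          "MetricName": "geo-match-example"
        },
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}
      }]'

The response returns the capacity that the supplied rules would consume, which you can compare against the WCU limits for your rule groups and web ACLs.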

AWS Managed Rules

AWS Managed Rules (AMRs) are a set of AWS WAF rules curated and maintained by the AWS Threat Research Team. With just a few clicks, these rules can help protect your web applications from new and emerging risks, so you don’t need to spend time researching and writing your own rules. The core rule set covers some of the common security risks described in the OWASP Top 10 publication. AMRs also include IP reputation lists based on Amazon threat intelligence that can help reduce your exposure to bot traffic or exploitation attempts. You can add multiple AMRs to your web ACL or write hundreds of your own rules. Additional rule sets from AWS Marketplace sellers like Cyber Security Cloud and Fortinet are also available to use. If you subscribe to rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller.
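
If you want to explore the available managed rule groups programmatically, the AWS WAF API can list them and describe the rules that each one contains. A quick sketch with the AWS CLI (REGIONAL scope is assumed here):

    # List the managed rule groups available to your account
    aws wafv2 list-available-managed-rule-groups --scope REGIONAL

    # Inspect the rules inside the core rule set
    aws wafv2 describe-managed-rule-group \
      --scope REGIONAL \
      --vendor-name AWS \
      --name AWSManagedRulesCommonRuleSet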

Rule groups

A rule group is a group of AWS WAF rules. In the new AWS WAF, a rule group is defined as a standalone AWS WAF resource, and you can add rule groups to a web ACL as reusable sets of rules. With the addition of AMRs, customers can select from AWS Managed Rules rule groups in addition to Partner Managed and Custom Configured rule groups. So, you now have the option to create Firewall Manager policies by using either AWS WAF Classic rule groups or new AWS WAF rule groups.

Firewall Manager policy

A Firewall Manager policy is an entity that contains a set of rule groups that can be associated to and applied to your resources. In this policy, you also specify the resource types you want to protect in specific or all accounts. Based on the policy, Firewall Manager creates a Firewall Manager web ACL in the corresponding accounts. In addition, individual application teams can add more rules or rule groups to this web ACL.

Firewall Manager prerequisites

Firewall Manager has the following prerequisites:

  • AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. If you’re new to Organizations, use these steps to enable AWS Organizations in your account. For more information, see Creating an organization and Enabling all features in your organization.
  • A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the administrator for Firewall Manager. This gives the account permission to deploy AWS WAF rules across the organization. In the console of the AWS account that you want to designate as the Firewall Manager administrator, go to the Firewall Manager service, and on the Settings tab, choose Set administrator account to set that account as the administrator, as shown in figure 1. (For a CLI sketch of this step, see the example after this list.)
     
    Figure 1: Firewall Manager administrator account


  • AWS Config: You must enable AWS Config for all the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all the accounts in your organization, you can use the Enable AWS Config template on the StackSets Sample Templates page. For more information, see Getting Started with AWS Config.
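
If you prefer to script the administrator designation, the following sketch shows the equivalent AWS CLI calls, run from the organization's management account; the account ID is a placeholder. Enabling AWS Config across member accounts is still handled separately, for example with the StackSets template mentioned above.

    # Designate a member account as the Firewall Manager administrator
    aws fms associate-admin-account --admin-account 111122223333

    # Confirm the setting
    aws fms get-admin-account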

Walkthrough: Set up Firewall Manager policies

In the following steps, we walk you through creating and applying a Firewall Manager policy across AWS accounts and explain the implications of the options you can choose. This policy uses AWS WAF rules that include AMRs such as the core rule set, the Amazon IP reputation list, and the SQL database rule set.

To create and apply Firewall Manager policies

  1. In the AWS Management Console, navigate to the Firewall Manager service, choose Security Policies, and then choose the appropriate Region.
  2. Choose the Create Policy button. Under Policy type, you can see four options to choose from, as shown in figure 2. For this walkthrough, choose AWS WAF, which is the most recent version of AWS WAF, and then choose Next.
     
    Figure 2: Choosing the policy type


  3. You can now see options to add two sets of rule groups, first rule groups and last rule groups, as shown in figure 3.
    • First rule groups: When the web ACL inspects a web request, these rule groups are evaluated first. Note that these can be either custom-built rules or managed AWS WAF rules offered by AWS or other sellers.
    • Last rule groups: If the web request makes it through the first rule groups and the AWS WAF rules defined for this web ACL (which will be named in the format FMManagedWebACLV2PartialRuleName-policyXXXX), it is then evaluated against this last set of rule groups before the action defined in this rule set is applied.
    • Default web ACL action: This option specifies whether you want the web request evaluated by the rule to be allowed or blocked.
    Figure 3: Updating Rule groups


  4. See the AWS Managed Rules rule groups list to understand the rules that are defined in the managed rule set. You can choose to override the default action and instead apply the "count" setting to monitor the traffic for the rule group. This helps you monitor for false positives before deciding to allow or block certain web requests.
     
    You can apply this count mode to specific rules of the rule set by selecting the rule, choosing the Edit button, as shown in figure 4, and then setting the toggle for individual rules.
     
    Figure 4: Edit rule group to override the default action set


    Figure 5 shows the Override rules action setting toggled on and off for various rules. The AWS core rule set contains rules that protect against commonly occurring vulnerabilities described in OWASP publications. The two rules "SizeRestrictions_QUERYSTRING" and "SizeRestrictions_BODY" are set to count (monitoring) mode, which helps verify that the URI query string length and request body size are within the standard boundaries for web applications.
     

    Figure 5: Example of setting a managed rule to override the rules action


  5. On the same page, under Policy action, you can also set the policy action to either identify the resources and monitor those that don't comply with the policy rules, without auto-remediation, or to auto-remediate any of the non-compliant resources. This option is shown in figure 6.
     
    Auto-remediation: Here is where you can get creative with Firewall Manager policies based on various requirements. For example, you can create policies for specific departments by using AWS tags, or for all applications deployed in specific accounts, where you want to enforce certain AWS WAF rules to protect all the resources. Notice in figure 6 that you can choose to auto-remediate. This means that the AWS WAF rules are not only applied to all the resource types that you select to protect, but also that if someone tampers with this Firewall Manager web ACL by deleting the rule group, Firewall Manager recreates the rule group within a few minutes. In the background, Firewall Manager integrates tightly with AWS Config to monitor all the resources across the other accounts in the organization. Similarly, in the same account, you could create another policy for a different department or team where you don't want to enforce the AWS WAF rules, but instead let the application teams in those departments create their own web ACLs.
     
    Figure 6: Setting the default web ACL and auto-remediate actions


    In figure 6, you see an option to replace the existing web ACLs. You may have created custom AWS WAF rules and applied those to the resources by using a custom web ACL. With this option, you can either choose not to alter such existing resources that are already protected by another custom web ACL (and not by Firewall Manager–created web ACLs), or to disassociate the resources and have them protected by the new web ACLs created by this policy.

  6. In this last step, under Policy scope, you can decide to apply the policy to specific types of AWS resources, either to all accounts in that organization or just to some of them, and also filter based on tags. This option is shown in figure 7. In the example below, the policy is restricted to two accounts and their corresponding OUs, with tags limiting the scope to DeptName: PCI.
     
    Figure 7: Defining policy scope for specific accounts


Once these changes are applied, Firewall Manager works in the background with AWS Config enabled in the child accounts that are included in the policy, and web ACLs are created with names in the format "FMManagedWebACLV2<rulegroupname>XXXXXXXXXXXXX". All resources that meet the conditions called out in the policy are automatically protected.

Validation: You can now log in to one of the child accounts to verify whether the resources were created. In our example, all the Application Load Balancers with the tag DeptName: PCI will be associated with this web ACL. You can also add more AWS WAF rules to this web ACL.
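
You can also verify this from the command line in a child account. A sketch, assuming the policy was deployed with REGIONAL scope; the web ACL ARN is a placeholder that you would copy from the first command's output:

    # List web ACLs in the child account; look for the FMManagedWebACLV2... entry
    aws wafv2 list-web-acls --scope REGIONAL

    # List the Application Load Balancers associated with that web ACL
    aws wafv2 list-resources-for-web-acl \
      --web-acl-arn <web-acl-arn-from-previous-output> \
      --resource-type APPLICATION_LOAD_BALANCER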
 

Figure 8: Reviewing and managing web ACLs in participating child accounts


Firewall Manager logging capability

As part of the AWS WAF capability, you want to make sure that logging is enabled by using the recently announced feature for centralized logging of your AWS WAF policies. With this logging feature, you get detailed information about the web traffic within your organization.

The setup is similar to enabling logging for AWS WAF and consists of two steps. In the first step, an Amazon Kinesis Data Firehose delivery stream captures logs from your policy's web ACLs and delivers them to a configured storage destination through its HTTPS endpoint. In the second step, you enable logging in a Firewall Manager policy and select the delivery stream that you created in the first step.

Step 1: Set up a new Kinesis Data Firehose delivery stream

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Elasticsearch Service (ES), Splunk, and Amazon Redshift. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

To set up the delivery stream

  1. In the AWS Management Console, using your Firewall Manager administrator account, open the Amazon Kinesis Data Firehose service and choose the button to create a new delivery stream.
    1. For Delivery stream name, enter a name for your new stream that starts with aws-waf-logs- as shown in figure 9. AWS WAF filters all streams starting with the keyword aws-waf-logs when it displays the delivery streams. Note the name of your stream since you’ll need it again later in the walkthrough.
    2. For Source, choose Direct PUT, since AWS WAF logs will be the source in this walkthrough.
    3. We recommend that you also enable server-side encryption. You can choose either AWS-owned keys or customer-managed keys. In this example, we have chosen AWS-owned customer master keys (CMKs). If you have your own CMKs, select the second option, customer-managed keys, and pick the corresponding key from the dropdown list.
       
      Figure 9: Setting the source for the Kinesis Data Firehose delivery stream


  2. Next, you have the option to enable AWS Lambda if you need to transform your data before transferring it to your destination. (You can learn more about data transformation in the Amazon Kinesis Data Firehose documentation.) In this walkthrough, there are no transformations that need to be performed, so for Data transformation, choose Disabled. You also have the option to convert the JSON object to Apache Parquet or Apache ORC format for better query performance. In this example, for Record format conversion, choose Disabled. Figure 10 shows these options.
     
    Figure 10: Setting data transformation and format conversion


  3. On the Select destination screen, for Destination, choose Amazon S3, as shown in figure 11.
    1. For the S3 destination, you can either enter the name of an existing S3 bucket or create a new S3 bucket. Note the name of the S3 bucket, since you’ll need the bucket name in a later step in this walkthrough.
    2. (Optional) Set the S3 prefix and error bucket for logs.
       
      Figure 11: Selecting the destination


    3. On the Configure Settings screen, shown in figure 12, choose other S3 options, such as encryption and compression, and create an IAM role that grants the delivery stream access to your S3 bucket.
      1. You can leave the default values for S3 buffer values. We also recommend enabling compression and encryption.
      2. Either choose the option to create a new IAM role, or choose an existing role if you’ve already created one.
      Figure 12: Setting S3 options and IAM role


  4. In the next screen, review all the options you selected and choose the Create Delivery Stream button to create a Kinesis Data Firehose delivery stream.
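
If you'd rather script the delivery stream creation, here's a minimal AWS CLI sketch of the console steps above. The stream name, account ID, bucket, and IAM role are placeholders, and the role must already grant Kinesis Data Firehose access to the bucket:

    aws firehose create-delivery-stream \
      --delivery-stream-name aws-waf-logs-fms-example \
      --delivery-stream-type DirectPut \
      --delivery-stream-encryption-configuration-input KeyType=AWS_OWNED_CMK \
      --extended-s3-destination-configuration RoleARN=arn:aws:iam::111122223333:role/firehose-waf-logs-role,BucketARN=arn:aws:s3:::example-waf-logs-bucket,CompressionFormat=GZIP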

Step 2: Enable logging for an AWS WAF policy

In this step, you configure the Firewall Manager policy to direct log ingestion to the Kinesis Data Firehose delivery stream that you created in the previous step.

To enable logging for an AWS WAF policy

  1. On the AWS Management Console, search for AWS Firewall Manager and in the navigation pane, choose Security Policies.
  2. Choose the AWS WAF policy that you want to enable logging for, and on the Policy details tab, in the Policy rules section, choose Edit. For Logging configuration status, choose Enabled.
  3. Choose the Kinesis Data Firehose delivery stream that you created for your logging. You must choose a delivery stream whose name begins with "aws-waf-logs-".
     
    Figure 13: Enable Firewall Manager logs


  4. Review your settings, and then choose Save to save your changes to the policy.

    Note: Firewall Manager supports this option for the latest version of AWS WAF, but not for AWS WAF Classic.

With these two steps, you now have logging enabled for your Firewall Manager policies.
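
After traffic starts flowing through the protected resources, you can confirm that log objects are arriving in the destination bucket. A quick check with the AWS CLI (the bucket name is a placeholder):

    aws s3 ls s3://example-waf-logs-bucket/ --recursive --human-readable | tail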

Deploy Firewall Manager policy with a CloudFormation template

In this section, we provide an example CloudFormation template that deploys a Firewall Manager policy with a rule group that consists of both AWS Managed Rules rule groups and a custom AWS WAF rule. As part of the custom AWS WAF rule, this template creates an IP set in which you specify the list of IP addresses from which you want to block traffic. It also creates a rule with an AND statement that blocks requests that contain cross-site scripting attacks and originate from the IP addresses that you specified. You will also notice that the policy is applied to specific accounts that are entered in the parameters as comma-delimited values. Out of the many available AWS Managed Rules, we used two rule groups: AWSManagedRulesCommonRuleSet and AWSManagedRulesSQLiRuleSet. For the former, we set the override action to count, which means the requests won't be blocked but will be counted for further investigation. The latter contains rules that block request patterns associated with exploitation of SQL databases, such as SQL injection attacks, and can help prevent remote injection of unauthorized queries.

As a best practice, before using a rule group in production, test it in a non-production environment, with the action override set to count. Evaluate the rule group using Amazon CloudWatch metrics combined with AWS WAF sampled requests or AWS WAF logs. When you’re satisfied that the rule group does what you want, remove the override on the group.
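
While a rule group runs in count mode, you can review what it would have matched by pulling sampled requests for its metric. Here's a sketch for a REGIONAL web ACL; the web ACL ARN, metric name, and time window are placeholders:

    aws wafv2 get-sampled-requests \
      --scope REGIONAL \
      --web-acl-arn <web-acl-arn> \
      --rule-metric-name AWSManagedRulesCommonRuleSet \
      --time-window StartTime=2020-11-01T00:00:00Z,EndTime=2020-11-01T03:00:00Z \
      --max-items 100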

To deploy the CloudFormation template

  1. Copy the following template code and save it in a file named deploy-firewall-manager-policy.yaml.
    Description: Create IPSet, RuleGroup and Firewall Manager Policy. Firewall Policy contains two AWS managed rule groups and one custom rule group.
    Parameters:
      BlockIpAddressCIDR:
        Type: CommaDelimitedList
        Description: "Enter IP Address range by using CIDR notation separated by comma to block incoming traffic originating from them. For eg: To specify the IPV4 address 192.0.2.44 type 192.0.2.44/32 or 10.0.2.0/24"
      AWSAccountIds:
        Type: CommaDelimitedList
        Description: "Enter AWS Account IDs separated by comma in which you want to apply Firewall Manager policy"
    
    Resources:
      WAFIPSetFMS:
          Type: 'AWS::WAFv2::IPSet'
          Properties:
            Description: Block ranges of IP addresses using this IP Set
            Name: WAFIPSetFMS
            Scope: REGIONAL
            IPAddressVersion: IPV4	
            Addresses: !Ref BlockIpAddressCIDR
      RuleGroupXssFMS:
        Type: 'AWS::WAFv2::RuleGroup'
        DependsOn: WAFIPSetFMS
        Properties: 
          Capacity: 500
          Description: AWS WAF Rule Group to block web requests that contain cross site scripting injection attacks and originate from specific IP ranges.
          Name: RuleGroupXssFMS
          Scope: REGIONAL
          VisibilityConfig:
              SampledRequestsEnabled: true
              CloudWatchMetricsEnabled: true
              MetricName: RuleGroupXssFMS
          Rules: 
            - Name: xssException
              Priority: 0
              Action:
                Block: {}
              VisibilityConfig:
                  SampledRequestsEnabled: true
                  CloudWatchMetricsEnabled: true
                  MetricName: xssException
              Statement:
                AndStatement:
                  Statements:
                  - XssMatchStatement:
                      FieldToMatch:
                        Body: {}
                      TextTransformations:
                      - Type: HTML_ENTITY_DECODE
                        Priority: 0
                      - Type: LOWERCASE
                        Priority: 1
                  - IPSetReferenceStatement:
                      Arn: !GetAtt WAFIPSetFMS.Arn
      PolicyWAFv2:
        Type: AWS::FMS::Policy
        Properties:
          ExcludeResourceTags: false
          PolicyName: WAF-Policy
          IncludeMap: 
            ACCOUNT: !Ref AWSAccountIds
          RemediationEnabled: true
          ResourceType: AWS::ElasticLoadBalancingV2::LoadBalancer 
          SecurityServicePolicyData: 
            Type: WAFV2
            ManagedServiceData: !Sub '{"type":"WAFV2", 
                                      "preProcessRuleGroups":[{ 
                                      "ruleGroupType":"RuleGroup",
                                      "ruleGroupArn":"${RuleGroupXssFMS.Arn}",
                                      "overrideAction":{"type":"NONE"}},{
                                      "managedRuleGroupIdentifier":{
                                      "managedRuleGroupName":"AWSManagedRulesCommonRuleSet", 
                                      "vendorName":"AWS"},
                                      "overrideAction":{"type":"COUNT"}, 
                                      "excludeRules":[],"ruleGroupType":"ManagedRuleGroup"},{
                                      "managedRuleGroupIdentifier":{
                                      "managedRuleGroupName":"AWSManagedRulesSQLiRuleSet", 
                                      "vendorName":"AWS"},
                                      "overrideAction":{"type":"NONE"}, 
                                      "excludeRules":[],"ruleGroupType":"ManagedRuleGroup"}],
                                      "postProcessRuleGroups":[],
                                      "defaultAction":{"type":"BLOCK"}}'
    
    

  2. Run the following AWS CLI command to deploy the stack. If you haven't configured the AWS CLI, refer to this quickstart.
    aws cloudformation create-stack --stack-name firewall-manager-policy-stack \
          --template-body file://deploy-firewall-manager-policy.yaml \
          --parameters \
          ParameterKey=BlockIpAddressCIDR,ParameterValue=<Enter-BlockIpAddressCIDR> \
          ParameterKey=AWSAccountIds,ParameterValue=<Enter-AWSAccountIds>
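
    You can then wait for the deployment to finish and confirm the stack status before reviewing the policy in the Firewall Manager console:

    aws cloudformation wait stack-create-complete --stack-name firewall-manager-policy-stack
    aws cloudformation describe-stacks --stack-name firewall-manager-policy-stack \
          --query 'Stacks[0].StackStatus'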
    

Pricing

As of today, the price per AWS WAF protection policy per Region is $100.00. To get an overall idea of pricing, we recommend that you review the AWS Firewall Manager pricing guide, which covers a few scenarios.

AWS WAF charges based on the number of web access control lists (web ACLs) that you create, the number of rules that you add per web ACL, and the number of web requests that you receive. There are no upfront commitments. Learn more about AWS WAF pricing.

There is no additional charge for using AWS Managed Rules for AWS WAF. When you subscribe to a Managed Rule Group provided by an AWS Marketplace seller, you will be charged additional fees based on the price set by the seller.

AWS Shield Advanced customers can use Firewall Manager to apply AWS Shield Advanced and AWS WAF protections across their entire organization at no additional cost.

Conclusion

This blog post describes how you can create Firewall Manager policies with the new version of AWS WAF rules, by using the web console or AWS CloudFormation. You can also create these rules by using the command line interface (CLI), or programmatically with the SDK and other similar scripting tools. Using both AWS WAF and Firewall Manager, you can create a deployment strategy to safeguard all your accounts centrally at the organization level, and also choose to automatically remediate the AWS WAF rules if anything is changed after deployment.

For further reading, see the AWS Firewall Manager Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Umesh Kumar Ramesh

Umesh is a Senior Cloud Infrastructure Architect with AWS who delivers proof-of-concept projects and topical workshops, and leads implementation projects. He holds a bachelor’s degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, he enjoys watching documentaries, biking, practicing meditation and discussing spirituality.

Author

Mahek Pavagadhi

Mahek is a Cloud Infrastructure Architect at AWS in San Francisco, CA. She has a master’s degree in Software Engineering with a major in Cloud Computing. She is passionate about cloud services and building solutions with it. Outside of work, she is an avid traveler who loves to explore local cafeterias.

Investigate VPC flow with Amazon Detective

Post Syndicated from Ross Warren original https://aws.amazon.com/blogs/security/investigate-vpc-flow-with-amazon-detective/

Many Amazon Web Services (AWS) customers need enhanced insight into IP network flow. Traditionally, cost, the complexity of collection, and the time required for analysis have led to incomplete investigations of network flows. Having good telemetry is paramount, and VPC Flow Logs are a very important part of a robust centralized logging architecture. The information that VPC Flow Logs provide is frequently used by security analysts to determine the scope of security issues, to validate that network access rules are working as expected, and to help analysts investigate issues and diagnose network behaviors. Flow logs capture information about the IP traffic going to and from EC2 network interfaces in a VPC. Each record describes aspects of the traffic flow, such as where it originated and where it was sent, what network ports were used, and how many bytes were sent.
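
Although Detective collects and analyzes this data for you without any flow log configuration on your side (as described below), many teams also publish VPC Flow Logs to their own centralized logging destination. As an illustration only, and not a requirement for Detective, here's a sketch of enabling flow logs for a VPC with the AWS CLI; the VPC ID and bucket ARN are placeholders:

    aws ec2 create-flow-logs \
      --resource-type VPC \
      --resource-ids vpc-0123456789abcdef0 \
      --traffic-type ALL \
      --log-destination-type s3 \
      --log-destination arn:aws:s3:::example-central-flow-log-bucket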

Amazon Detective now enables you to interactively examine the details of the virtual private cloud (VPC) network flows of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Detective automatically collects VPC flow logs from your monitored accounts, aggregates them by EC2 instance, and presents visual summaries and analytics about these network flows. Detective doesn’t require VPC Flow Logs to be configured and doesn’t impact existing flow log collection.

In this blog post, I describe how to use the new VPC flow feature in Detective to investigate an UnauthorizedAccess:EC2/TorClient finding from Amazon GuardDuty. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty documentation states that this alert can indicate unauthorized access to your AWS resources with the intent of hiding the unauthorized user’s true identity. I’ll demonstrate how to use Amazon Detective to investigate an instance that was flagged by Amazon GuardDuty to determine whether it is compromised or not.

Starting the investigation in GuardDuty

In my GuardDuty console, I’m going to select the UnauthorizedAccess:EC2/TorClient finding shown in Figure 1, choose the Actions menu, and select Investigate.
 

Figure 1: Investigating from the GuardDuty console


This opens a new browser tab and launches the Amazon Detective console, where I’m presented with the profile page for this finding, shown in Figure 2. You must have Detective enabled to pivot between a GuardDuty finding and Detective. Detective provides profile pages for supported GuardDuty findings and AWS resources (for example, IP address, EC2 instance, user, and role) that include information and data visualizations that summarize observed behaviors and give guidance for interpreting them. Profiles help analysts to determine whether the finding is of genuine concern or a false positive. For resources, profiles provide supporting details for an investigation into a finding or for a general hunt for suspicious activity.
 

Figure 2: Finding profile panel


In this case, the profile page for this GuardDuty UnauthorizedAccess:EC2/TorClient finding provides contextual and behavioral data about the EC2 instance on which GuardDuty has noted the issue. As I dive into this finding, I’m going to be asking questions that help assess whether the instance was in fact accessed unintentionally, such as, “What IP port or network service was in use at that time?,” “Were any large data transfers involved?,” “Was the traffic allowed by my security groups?” Profile pages in Detective organize content that helps security analysts investigate GuardDuty findings, examine unexpected network behavior, and identify other AWS resources that might be affected by a potential security issue.

I begin scrolling down the page and notice the Findings associated with EC2 instance i-9999999999999999 panel. Detective displays related findings to provide analysts with additional evidence and context about potentially related issues. The finding I'm investigating is listed there, as well as an Unusual Behaviors/VM/Behavior:EC2-NetworkPortUnusual finding. GuardDuty builds a baseline of your network traffic and generates findings when traffic falls outside the calculated norm. While we might not investigate every instance of anomalous traffic, having these alerts correlated by Detective provides context for validating the issue. Keeping this in mind as I scroll down, at the bottom of this profile page, I find the Overall VPC flow volume panel. If you choose the Info link next to the panel title, you can see helpful tips that describe how to use the visualizations and provide ideas for questions to ask within your investigation. These info links are available throughout Detective. Check them out!

Investigating VPC flow in Detective

In this investigation, I’m very curious about the two large spikes in inbound traffic that I see in the Overall VPC flow volume panel, which seem to be visually associated with some unusual outbound traffic spikes. It’s most likely that these outbound spikes are related to the Unusual Behaviors/VM/Behavior:EC2-NetworkPortUnusual finding I mentioned earlier. To start the investigation, I choose the display details for scope time button, shown circled at the bottom of Figure 2. This expands the VPC Flow Details, shown in Figure 3.
 

Figure 3: Our first look at VPC Flow Details


We now can see that each entry displays the volume of inbound traffic, the volume of outbound traffic, and whether the access request was accepted or rejected. Detective provides annotations on the VPC flows to help guide your investigation. These From finding annotations make it clear which flows and resources were involved in the finding. In this case, we can easily see (in Figure 3) the three IP addresses at the top of the list that triggered this GuardDuty finding.

I’m first going to focus on the spikes in traffic that are above the baseline. When I click on one of the spikes in the graph, the time window for the VPC flow activity now matches the dates of these spikes I’m investigating.

If I choose the Inbound Traffic column header, shown in Figure 4, I can find the flows that contributed to the spike during this time window.
 

Figure 4: Inbound traffic spikes


Note that the two large inbound spikes aren’t associated with the IP address from the UnauthorizedAccess:EC2/TorClient finding, based on the Detective annotation From finding. Let’s check the outbound traffic. If I do a quick sort of the table based on the outbound traffic column, as shown in Figure 5, we can also see the outbound spikes, and it isn’t immediately evident whether the spikes are associated with this finding. I could continue to investigate the spikes (because they are a visual anomaly), or focus just on the VPC flow traffic that GuardDuty and Detective have labeled as associated with this TOR finding.
 

Figure 5: Outbound traffic spikes


Let’s focus on the outbound and inbound spikes and see if we can determine what’s happening. The inbound spikes are on port 443, typically an HTTPS port, or a secure web connection. The outbound spikes are on port 22 (ssh), but go to IP addresses that look to be internal based on their addresses of 172.16.x.x. The port 443 traffic might indicate a web server that’s open to the internet and receiving traffic. With further investigation, we can determine if this idea is valid, and continue hunting for potentially malicious traffic.

A good next step would be to investigate the two specific IP addresses to rule out their involvement in the finding. I can do this by right-clicking on either of the external IP addresses and opening a new tab, where I can focus on investigating these two specific IP addresses. I would take this line of investigation to possibly rule out the involvement of these IP addresses in this finding, determine if they regularly communicate with my resources, find out what instance(s) they’re related to, and see if there are other findings associated with these instances or IP addresses. This deeper investigation is outside the scope of this blog post, but it’s something you should be doing in your own environment.

IP addresses in AWS are ephemeral in nature. The unique identifier in VPC flow logs is the Instance ID. At the time of this investigation, 172.16.0.7 is assigned to the instance related to this finding, so let’s continue to take a look at the internal 172.16.0.7 IP address with 218 MB outbound traffic on port 22. I choose 172.16.0.7, and Detective opens up the profile page for this specific IP address, as shown in Figure 6. Here we see some interesting correlations: two other GuardDuty findings related to SSH brute-force attacks. These could be related to our outbound port 22 spikes, because they’re certainly in the window of time we’re investigating.
 

Figure 6: IP address profile panel


As part of a deeper investigation, you would investigate the SSH brute-force findings for 198.51.100.254 and 203.0.113.83, but for now I'm interested in what this IP address is involved in. Detective easily associates this 172.16.0.7 IP address with the instance that was assigned the IP during the scope time. I scroll down to the bottom of the profile page for 172.16.0.7 and investigate the i-9999999999999999 instance by choosing the instance name.

Filtering VPC flow activity

In Detective, as the investigator, we are now looking at an instance profile panel similar to the one in Figure 2. Since we're interested in VPC flow details, I'm going to scroll down and select display details for scope time.

To focus on specific activity, I can filter the activity details by the following values:

  • IP address
  • Local or remote port
  • Direction
  • Protocol
  • Whether the request was accepted or rejected

I’m going to filter these VPC flow details and just look at port 22 (sshd) inbound traffic. I select the Filter check box and select Local Port and 22, as shown in Figure 7. Detective fills in all the available ports for you, making it easy to complete this filter.
 

Figure 7: Port 22 traffic


The activity details show a few IP addresses related to port 22, and we’re still following the large outbound spikes of traffic. It’s outside the scope of this blog post, but now it would be time to start looking at your security groups and network access control lists (ACLs) and determine why port 22 is open to the internet and sending all this traffic.
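
A quick way to find security groups that leave port 22 open to the world is to filter on their ingress rules. Here's a sketch with the AWS CLI; the filters shown are a reasonable starting point rather than an exhaustive audit:

    aws ec2 describe-security-groups \
      --filters Name=ip-permission.from-port,Values=22 \
                Name=ip-permission.to-port,Values=22 \
                Name=ip-permission.cidr,Values=0.0.0.0/0 \
      --query 'SecurityGroups[].[GroupId,GroupName]' \
      --output table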

Understanding traffic behavior

As an investigator, I now have a good picture of the traffic related to the initial finding, and by diving deeper we’re able to discover other interesting traffic during the same timeframe. While we may not always determine “who has done it,” the goal should be to improve our understanding of the behavior of our environment and gather important technical evidence. Detective helps you identify and investigate anomalies to give you insight into your environment. If we were to continue our investigation into the finding, here are some actions we can take within Detective.

Investigate VPC findings with Detective:

  • Perform ports and utilization analysis
    • Identify service and ephemeral ports
    • Determine whether traffic was accepted or rejected based on security groups and NACL configurations
    • Investigate possible reconnaissance traffic by exploring the significant amount of rejected traffic
  • Correlate EC2 instances to TCP/IP ports and IPs
  • Analyze traffic spikes and anomalies
  • Discover traffic patterns and make behavioral correlations

Explore EC2 instance behavior with Detective:

  • Perform directional traffic analysis
  • Investigate possible data exfiltration events by digging into large transfers
  • Enumerate distinct IP connections and sort and filter by protocol, amount of traffic, and traffic direction
  • Gather data related to a spike in port count from a single IP address (potential brute force) or multiple IP addresses (distributed denial of service (DDoS))

Additional forensics steps to consider

  • Snapshot the EC2 volumes (see the CLI sketch after this list)
  • Take a memory dump of the EC2 instance
  • Isolate the EC2 instance
  • Review your authentication strategy and assess whether the chosen authentication method is sufficient to protect your asset
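
Here's a minimal AWS CLI sketch of the first and third items; the volume ID, instance ID, and the empty quarantine security group are placeholders that you would prepare in advance:

    # Preserve the disk state for later analysis
    aws ec2 create-snapshot \
      --volume-id vol-0123456789abcdef0 \
      --description "Forensic snapshot - i-9999999999999999"

    # Isolate the instance by replacing its security groups with a quarantine group
    aws ec2 modify-instance-attribute \
      --instance-id i-9999999999999999 \
      --groups sg-0123456789abcdef0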

Summary

Without requiring you to set up infrastructure or spend time configuring log ingestion, Detective collects, organizes, and presents relevant data for your threat analysis and investigations. Security and operations teams will find this new capability helpful for simplifying EC2 traffic analysis, validating security group permissions, and diagnosing EC2 instance behavior. Detective does the heavy lifting of storing and analyzing VPC flow data so that you can focus on quickly answering your investigative questions. VPC network flow details are available now in all Detective supported Regions and are included as part of your service subscription.

To get started, you can enable a 30-day free trial of Amazon Detective. See the AWS Regional Services page for all the regions where Detective is available. To learn more, visit the Amazon Detective product page.

Are you a visual learner? Check out Amazon Detective Overview and Demonstration. This video helps you learn how and when to use Amazon Detective to improve the security of your AWS resources.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ross Warren

Ross Warren is a Solution Architect at AWS based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Ross has worked at a handful of startups and has enjoyed the transition to AWS because he can continue to build solutions for customers on today’s most innovative platform.

Author

Jim Miller

Jim is a Solution Architect at AWS based in Connecticut. Jim has worked within cyber security his entire career with areas of focus including cyber security architecture and incident response. At AWS he loves building secure solutions for customers to enable teams to build and innovate with confidence.

Round 2 post-quantum TLS is now supported in AWS KMS

Post Syndicated from Alex Weibel original https://aws.amazon.com/blogs/security/round-2-post-quantum-tls-is-now-supported-in-aws-kms/

AWS Key Management Service (AWS KMS) now supports three new hybrid post-quantum key exchange algorithms for the Transport Layer Security (TLS) 1.2 encryption protocol that's used when connecting to AWS KMS API endpoints. These new hybrid post-quantum algorithms combine the proven security of a classical key exchange with the potential quantum-safe properties of new post-quantum key exchanges undergoing evaluation for standardization. The fastest of these algorithms adds approximately 0.3 milliseconds of overhead compared to a classical TLS handshake. The new post-quantum key exchange algorithms added are Round 2 versions of Kyber, Bit Flipping Key Encapsulation (BIKE), and Supersingular Isogeny Key Encapsulation (SIKE). Each organization has submitted their algorithms to the National Institute of Standards and Technology (NIST) as part of NIST's post-quantum cryptography standardization process. This process spans several rounds of evaluation over multiple years, and is likely to continue beyond 2021.

In our previous hybrid post-quantum TLS blog post, we announced that AWS KMS had launched hybrid post-quantum TLS 1.2 with Round 1 versions of BIKE and SIKE. The Round 1 post-quantum algorithms are still supported by AWS KMS, but at a lower priority than the Round 2 algorithms. You can choose to upgrade your client to enable negotiation of Round 2 algorithms.

Why post-quantum TLS is important

A large-scale quantum computer would be able to break the current public-key cryptography that’s used for key exchange in classical TLS connections. While a large-scale quantum computer isn’t available today, it’s still important to think about and plan for your long-term security needs. TLS traffic using classical algorithms recorded today could be decrypted by a large-scale quantum computer in the future. If you’re developing applications that rely on the long-term confidentiality of data passed over a TLS connection, you should consider a plan to migrate to post-quantum cryptography before the lifespan of the sensitivity of your data would be susceptible to an unauthorized user with a large-scale quantum computer. As an example, this means that if you believe that a large-scale quantum computer is 25 years away, and your data must be secure for 20 years, you should migrate to post-quantum schemes within the next 5 years. AWS is working to prepare for this future, and we want you to be prepared too.

We’re offering this feature now instead of waiting for standardization efforts to be complete so you have a way to measure the potential performance impact to your applications. Offering this feature now also gives you the protection afforded by the proposed post-quantum schemes today. While we believe that the use of this feature raises the already high security bar for connecting to AWS KMS endpoints, these new cipher suites will impact bandwidth utilization and latency. Using these new algorithms could also cause connection failures in intermediate systems that proxy TLS connections. We’d like to get feedback from you on the effectiveness of our implementation or any issues found so we can improve it over time.

Hybrid post-quantum TLS 1.2

Hybrid post-quantum TLS is a feature that provides the security protections of both the classical and post-quantum key exchange algorithms in a single TLS handshake. Figure 1 shows the differences in the connection secret derivation process between classical and hybrid post-quantum TLS 1.2. Hybrid post-quantum TLS 1.2 has three major differences from classical TLS 1.2:

  • The negotiated post-quantum key is appended to the ECDHE key before being used as the hash-based message authentication code (HMAC) key.
  • The text hybrid in its ASCII representation is prepended to the beginning of the HMAC message.
  • The entire client key exchange message from the TLS handshake is appended to the end of the HMAC message.
Figure 1: Differences in the connection secret derivation process between classical and hybrid post-quantum TLS 1.2


Some background on post-quantum TLS

Today, all requests to AWS KMS use TLS with key exchange algorithms that provide perfect forward secrecy and use one of the following classical schemes:

  • Finite Field Diffie-Hellman Ephemeral (FFDHE)
  • Elliptic Curve Diffie-Hellman Ephemeral (ECDHE)

While existing FFDHE and ECDHE schemes use perfect forward secrecy to protect against the compromise of the server’s long-term secret key, these schemes don’t protect against large-scale quantum computers. In the future, a sufficiently capable large-scale quantum computer could run Shor’s Algorithm to recover the TLS session key of a recorded classical session, and thereby gain access to the data inside. Using a post-quantum key exchange algorithm during the TLS handshake protects against attacks from a large-scale quantum computer.

The possibility of large-scale quantum computing has spurred the development of new quantum-resistant cryptographic algorithms. NIST has started the process of standardizing post-quantum key encapsulation mechanisms (KEMs). A KEM is a type of key exchange that’s used to establish a shared symmetric key. AWS has chosen three NIST KEM submissions to adopt in our post-quantum efforts:

  • Kyber
  • Bit Flipping Key Encapsulation (BIKE)
  • Supersingular Isogeny Key Encapsulation (SIKE)

Hybrid mode ensures that the negotiated key is as strong as the weakest key agreement scheme. If one of the schemes is broken, the communications remain confidential. The Internet Engineering Task Force (IETF) Hybrid Post-Quantum Key Encapsulation Methods for Transport Layer Security 1.2 draft describes how to combine post-quantum KEMs with ECDHE to create new cipher suites for TLS 1.2.

These cipher suites use a hybrid key exchange that performs two independent key exchanges during the TLS handshake. The key exchange then cryptographically combines the keys from each into a single TLS session key. This strategy combines the proven security of a classical key exchange with the potential quantum-safe properties of new post-quantum key exchanges being analyzed by NIST.

The effect of hybrid post-quantum TLS on performance

Post-quantum cipher suites have a different performance profile and bandwidth usage from traditional cipher suites. AWS has measured bandwidth and latency across 2,000 TLS handshakes between an Amazon Elastic Compute Cloud (Amazon EC2) C5n.4xlarge client and the public AWS KMS endpoint, which were both in the us-west-2 Region. Your own performance characteristics might differ, and will depend on your environment, including your:

  • Hardware–CPU speed and number of cores.
  • Existing workloads–how often you call AWS KMS and what other work your application performs.
  • Network–location and capacity.

The following graphs and table show latency measurements performed by AWS for all newly supported Round 2 post-quantum algorithms, in addition to the classical ECDHE key exchange algorithm currently used by most customers.

Figure 2 shows the latency differences of all hybrid post-quantum algorithms compared with classical ECDHE alone, and shows that compared to ECDHE alone, SIKE adds approximately 101 milliseconds of overhead, BIKE adds approximately 9.5 milliseconds of overhead, and Kyber adds approximately 0.3 milliseconds of overhead.
 

Figure 2: TLS handshake latency at varying percentiles for four key exchange algorithms


Figure 3 shows the latency differences between ECDHE with Kyber, and ECDHE alone. The addition of Kyber adds approximately 0.3 milliseconds of overhead.
 

Figure 3: TLS handshake latency at varying percentiles, with only top two performing key exchange algorithms


The following table shows the total amount of data (in bytes) needed to complete the TLS handshake for each cipher suite, the average latency, and latency at varying percentiles. All measurements were gathered from 2,000 TLS handshakes. The time was measured on the client from the start of the handshake until the handshake was completed, and includes all network transfer time. All connections used RSA authentication with a 2048-bit key, and ECDHE used the secp256r1 curve. All hybrid post-quantum tests used the NIST Round 2 versions. The Kyber test used the Kyber-512 parameter, the BIKE test used the BIKE-1 Level 1 parameter, and the SIKE test used the SIKEp434 parameter.

Item             | Bandwidth (bytes) | Total handshakes | Average (ms) | p0 (ms) | p50 (ms) | p90 (ms) | p99 (ms)
ECDHE (classic)  | 3,574             | 2,000            | 3.08         | 2.07    | 3.02     | 3.95     | 4.71
ECDHE + Kyber R2 | 5,898             | 2,000            | 3.36         | 2.38    | 3.17     | 4.28     | 5.35
ECDHE + BIKE R2  | 12,456            | 2,000            | 14.91        | 11.59   | 14.16    | 18.27    | 23.58
ECDHE + SIKE R2  | 4,628             | 2,000            | 112.40       | 103.22  | 108.87   | 126.80   | 146.56

By default, the AWS SDK client performs a TLS handshake once to set up a new TLS connection, and then reuses that TLS connection for multiple requests. This means that the increased cost of a hybrid post-quantum TLS handshake is amortized over multiple requests sent over the TLS connection. You should take the amortization into account when evaluating the overall additional cost of using post-quantum algorithms; otherwise performance data could be skewed.

AWS KMS has chosen Kyber Round 2 to be KMS’s highest prioritized post-quantum algorithm, with BIKE Round 2, and SIKE Round 2 next in priority order for post-quantum algorithms. This is because Kyber’s performance is closest to the classical ECDHE performance that most AWS KMS customers are using today and are accustomed to.

How to use hybrid post-quantum cipher suites

To use the post-quantum cipher suites with AWS KMS, you need the preview release of the AWS Common Runtime (CRT) HTTP client for the AWS SDK for Java 2.x. Also, you will need to configure the AWS CRT HTTP client to use the s2n post-quantum hybrid cipher suites. Post-quantum TLS for AWS KMS is available in all AWS Regions except for AWS GovCloud (US-East), AWS GovCloud (US-West), AWS China (Beijing) Region operated by Beijing Sinnet Technology Co. Ltd (“Sinnet”), and AWS China (Ningxia) Region operated by Ningxia Western Cloud Data Technology Co. Ltd. (“NWCD”). Since NIST has not yet standardized post-quantum cryptography, connections that require Federal Information Processing Standards (FIPS) compliance cannot use the hybrid key exchange. For example, kms.<region>.amazonaws.com supports the use of post-quantum cipher suites, while kms-fips.<region>.amazonaws.com does not.

  1. If you’re using the AWS SDK for Java 2.x, you must add the preview release of the AWS Common Runtime client to your Maven dependencies.
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-crt-client</artifactId>
        <version>2.14.13-PREVIEW</version>
    </dependency>
    

  2. You then must configure the new SDK and cipher suite in the existing initialization code of your application:
    if(!TLS_CIPHER_PREF_KMS_PQ_TLSv1_0_2020_07.isSupported()){
        throw new RuntimeException("Post Quantum Ciphers not supported on this Platform");
    }
    
    SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
              .tlsCipherPreference(TLS_CIPHER_PREF_KMS_PQ_TLSv1_0_2020_07)
              .build();
              
    KmsAsyncClient kms = KmsAsyncClient.builder()
             .httpClient(awsCrtHttpClient)
             .build();
             
    ListKeysResponse response = kms.listKeys().get();
    

Now, all connections made to AWS KMS in supported Regions will use the new hybrid post-quantum cipher suites! To see a complete example of everything set up, check out the example application here.

Things to try

Here are some ideas about how to use this post-quantum-enabled client:

  • Run load tests and benchmarks. These new cipher suites perform differently than traditional key exchange algorithms. You might need to adjust your connection timeouts to allow for the longer handshake times or, if you’re running inside an AWS Lambda function, extend the execution timeout setting.
  • Try connecting from different locations. Depending on the network path your request takes, you might discover that intermediate hosts, proxies, or firewalls with deep packet inspection (DPI) block the request. This could be due to the new cipher suites in the ClientHello or the larger key exchange messages. If this is the case, you might need to work with your security team or IT administrators to update the relevant configuration to unblock the new TLS cipher suites. We’d like to hear from you about how your infrastructure interacts with this new variant of TLS traffic. If you have questions or feedback, please start a new thread on the AWS KMS discussion forum.

Conclusion

In this blog post, I announced support for Round 2 hybrid post-quantum algorithms in AWS KMS, and showed you how to begin experimenting with hybrid post-quantum key exchange algorithms for TLS when connecting to AWS KMS endpoints.

More info

If you’d like to learn more about post-quantum cryptography check out:

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Alex Weibel

Alex is a Senior Software Engineer on the AWS Crypto Algorithms team. He’s one of the maintainers for Amazon’s TLS Library s2n. Previously, Alex worked on TLS termination and request proxying for S3 and the Elastic Load Balancing Service developing new features for customers. Alex holds a Bachelor of Science degree in Computer Science from the University of Texas at Austin.

How to record a video of Amazon AppStream 2.0 streaming sessions

Post Syndicated from Nicolas Malaval original https://aws.amazon.com/blogs/security/how-to-record-video-of-amazon-appstream-2-0-streaming-sessions/

Amazon AppStream 2.0 is a fully managed service that lets you stream applications and desktops to your users. In this post, I’ll show you how to record a video of AppStream 2.0 streaming sessions by using FFmpeg, a popular media framework.

There are many use cases for session recording, such as auditing administrative access, troubleshooting user issues, or quality assurance. For example, you could publish administrative tools with AppStream 2.0, such as a Remote Desktop Protocol (RDP) client, to protect access to your backend systems (see How to use Amazon AppStream 2.0 to reduce your bastion host attack surface) and you may want to record a video of what your administrators do when accessing and operating backend systems. You may also want to see what a user did to reproduce an issue, or view activities in a call center setting, such as call handling or customer support, for review and training.

This solution is not designed or intended for people surveillance, or for the collection of evidence for legal proceedings. You are responsible for complying with all applicable laws and regulations when using this solution.

Overview and architecture

In this section, you can learn about the steps for recording AppStream 2.0 streaming sessions and see an overview of the solution architecture. Later in this post, you can find instructions about how to implement and test the solution.

AppStream 2.0 enables you to run custom scripts to prepare the streaming instance before the applications launch or after the streaming session has completed. Figure 1 shows a simplified description of what happens before, during and after a streaming session.
 

Figure 1: Solution architecture


  1. Before the streaming session starts, AppStream 2.0 runs script A, which uses PsExec, a utility that enables administrators to run commands on local or remote computers, to launch script B. Script B then runs during the entire streaming session. PsExec can run the script as the LocalSystem account, a service account that has extensive privileges on a local system, while it interacts with the desktop of another session. Using the LocalSystem account, you can use FFmpeg to record the session screen and prevent AppStream 2.0 users from stopping or tampering with the solution, as long as they aren’t granted local administrator rights.
  2. Script B launches FFmpeg and starts recording the desktop. The solution uses the FFmpeg built-in screen-grabber to capture the desktop across all the available screens.
  3. When FFmpeg starts recording, it captures the area covered by the desktop at that time. If the number of screens or the resolution changes, a portion of the desktop might be outside the recorded area. In that case, script B stops the recording and starts FFmpeg again.
  4. After the streaming session ends, AppStream 2.0 runs script C, which notifies script B that it must end the recording and close. Script B stops FFmpeg.
  5. Before exiting, script B uploads the video files that FFmpeg generated to Amazon Simple Storage Service (Amazon S3). It also stores user and session metadata in Amazon S3, along with the video files, for easy retrieval of session recordings.
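
The following minimal Python sketch shows the kind of FFmpeg invocation that step 2 describes. It is illustrative only: the actual solution builds an equivalent command from PowerShell in script B, and the executable path, frame rate, segment length, and output pattern below are assumptions rather than the solution’s exact values.

import subprocess

# Illustrative sketch: script B performs the equivalent steps from PowerShell.
ffmpeg_cmd = [
    r"C:\SessionRecording\Bin\ffmpeg.exe",
    "-f", "gdigrab",            # FFmpeg's built-in Windows screen grabber
    "-framerate", "10",         # assumed frames per second
    "-i", "desktop",            # capture the area covered by the desktop
    "-f", "segment",            # split the recording into multiple files
    "-segment_time", "300",     # assumed maximum duration per video file, in seconds
    "-reset_timestamps", "1",
    r"C:\SessionRecording\Output\video-%03d.mp4",
]
recorder = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)

# When the session ends (step 4), ask FFmpeg to stop gracefully so the
# last segment is finalized before the files are uploaded to Amazon S3.
recorder.stdin.write(b"q")
recorder.stdin.flush()
recorder.wait()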

For a more comprehensive understanding of how the session scripts work, you can refer to the GitHub repository that contains the solution artifacts, where I go into the details of each script.

Implementing and testing the solution

Now that you understand the architecture of this solution, you can follow the instructions in this section to implement this blog post’s solution in your AWS account. You will:

  1. Create a virtual private cloud (VPC), an S3 bucket and an AWS Identity and Access Management (IAM) role with AWS CloudFormation.
  2. Create an AppStream 2.0 image builder.
  3. Configure the solution scripts on the image builder.
  4. Specify an application to publish and create an image.
  5. Create an AppStream 2.0 fleet.
  6. Create an AppStream 2.0 stack.
  7. Create a user in the AppStream 2.0 user pool.
  8. Launch a streaming session and test the solution.

Step 1: Create a VPC, an S3 bucket, and an IAM role with AWS CloudFormation

For the first step in the solution, you create a new VPC where AppStream 2.0 will be deployed (or choose an existing VPC), a new S3 bucket to store the session recordings, and a new IAM role to grant AppStream 2.0 the necessary IAM permissions.

To create the VPC, the S3 bucket, and the IAM role with AWS CloudFormation

  1. Select the following Launch Stack button to open the CloudFormation console and create a CloudFormation stack from the template. You can change the Region where resources are deployed in the navigation bar.
     
    Select the Launch Stack button to launch the template

    The latest template can also be downloaded on GitHub.

  2. Choose Next. For VPC ID, Subnet 1 ID and Subnet 2 ID, you can optionally select a VPC and two subnets, if you want to deploy the solution in an existing VPC, or leave these fields blank to create a new VPC. Then follow the on-screen instructions. AWS CloudFormation creates the following resources:
    • (If you chose to create a new VPC) An Amazon Virtual Private Cloud (Amazon VPC) with an internet gateway attached.
    • (If you chose to create a new VPC) Two public subnets on this Amazon VPC with a new route table to make them publicly accessible.
    • An S3 bucket to store the session recordings.
    • An IAM role to grant AppStream 2.0 permissions to upload video and metadata files to Amazon S3.
  3. After the stack creation has completed, choose the Outputs tab in the CloudFormation console and note the values that the process returned: the name and Region of the S3 bucket, the name of the IAM role, the ID of the VPC, and the two subnets.

Step 2: Create an AppStream 2.0 image builder

The next step is to create a new AppStream 2.0 image builder. An image builder is a virtual machine that you can use to install and configure applications for streaming, and then create a custom image.

To create the AppStream 2.0 image builder

  1. Open the AppStream 2.0 console and select the Region in the navigation bar. Choose Get Started then Skip if you are new to the console.
  2. Choose Images in the left pane, and then choose Image Builder. Choose Launch Image Builder.
  3. In Step 1: Choose Image:
    1. Select the name of the latest AppStream 2.0 base image for the Windows Server version of your choice. You can find its name in the AppStream 2.0 base image version history. For example, at the time of writing, the name of the latest Windows Server 2019 base image is AppStream-WinServer2019-07-16-2020.
    2. Choose Next.
  4. In Step 2: Configure Image Builder:
    1. For Name, enter session-recording.
    2. For Instance Type, choose stream.standard.medium.
    3. For IAM role, select the IAM role that AWS CloudFormation created.
    4. Choose Next.
  5. In Step 3: Configure Network:
    1. Choose Default Internet Access to provide internet access to your image builder.
    2. For VPC, select the ID of the VPC, and for Subnet 1, select the ID of Subnet 1.
    3. For Security group(s), select the ID of the security group. Refer back to the Outputs tab of the CloudFormation stack if you are unsure which VPC, subnet and security group to select.
    4. Choose Review.
  6. In Step 4: Review, choose Launch.

Step 3: Configure the solution scripts on the image builder

The session scripts to run before streaming sessions start or after sessions end are specified within an AppStream 2.0 image. In this step, you install the solution scripts on your image builder and specify the scripts to run in the session scripts configuration file.

To configure the solution scripts on the image builder

  1. Wait until the image builder is in the Running state, and then choose Connect.
  2. Within the AppStream 2.0 streaming session, on the Local User tab, choose Administrator.
  3. To install the solution scripts:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell, and then choose Run as Administrator.
    3. Run the following commands in the PowerShell terminal to create the required folders, and to copy the solution scripts and the session scripts configuration file from public objects in GitHub to the local disk. If you aren’t using Google Chrome or the AppStream 2.0 client, you need to choose the Clipboard icon in the AppStream 2.0 navigation bar, and then select Paste to remote session.
      # Create the working folders for the scripts, the video output, and the executables
      New-Item -Path C:\SessionRecording -ItemType directory
      New-Item -Path C:\SessionRecording\Scripts -ItemType directory
      New-Item -Path C:\SessionRecording\Output -ItemType directory
      New-Item -Path C:\SessionRecording\Bin -ItemType directory
      
      # Restrict access to C:\SessionRecording to Administrators, SYSTEM, and
      # ImageBuilderAdmin, so that streaming users cannot tamper with the recordings
      $Acl = Get-Acl C:\SessionRecording
      $Acl.SetAccessRuleProtection($true,$false)
      $AccessRule1 = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule1)
      $AccessRule2 = New-Object System.Security.AccessControl.FileSystemAccessRule("SYSTEM","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule2)
      $AccessRule3 = New-Object System.Security.AccessControl.FileSystemAccessRule("ImageBuilderAdmin","FullControl","ContainerInherit,ObjectInherit","None","Allow")
      $Acl.SetAccessRule($AccessRule3)
      Set-Acl C:\SessionRecording $Acl
      
      # Download the solution scripts and the session scripts configuration file from GitHub
      [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_a.ps1 -OutFile C:\SessionRecording\Scripts\script_a.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_b.ps1 -OutFile C:\SessionRecording\Scripts\script_b.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/script_c.ps1 -OutFile C:\SessionRecording\Scripts\script_c.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/variables.ps1 -OutFile C:\SessionRecording\Scripts\variables.ps1
      Invoke-WebRequest -URI https://github.com/aws-samples/appstream-session-recording/raw/main/config.json -OutFile C:\AppStream\SessionScripts\config.json
      

    4. Close the PowerShell terminal.
  4. To edit the variables.ps1 file with your own values:
    1. From the image builder desktop, choose Start in the Windows taskbar.
    2. Open the context (right-click) menu for Windows PowerShell ISE, and then choose Run as Administrator.
    3. Choose File, then Open. Navigate to the folder C:\SessionRecording\Scripts\ and open the file variables.ps1.
    4. Edit the name and the Region of the S3 bucket with the values returned by AWS CloudFormation in the Outputs tab. You can also customize the number of frames per second and the maximum duration in seconds of each video file.
    5. Save and close the file.
  5. To download the latest FFmpeg and PsExec executables to the image builder:
    1. From the image builder desktop, open the Firefox desktop icon.
    2. Navigate to the URL https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-github and choose the link that contains essentials_build.zip to download FFmpeg. Choose Open to download and extract the ZIP archive. Copy the file ffmpeg.exe in the bin folder of the ZIP archive to C:\SessionRecording\Bin\.

      Note: FFmpeg only provides source code; compiled packages are available from third-party locations. If the link above is invalid, go to the FFmpeg download page and follow the instructions to download the latest release build for Windows.

    3. Navigate to the URL https://download.sysinternals.com/files/PSTools.zip to download PsExec. Choose Open to download and extract the ZIP archive. Copy the file PsExec64.exe to C:\SessionRecording\Bin\. You must agree with the license terms, because the solution in this blog post automatically accepts them.
    4. Close Firefox.

Step 4: Specify an application to publish and create an image

In this step, you publish Firefox on your image builder and create an AppStream 2.0 custom image. I chose Firefox because it’s easy to test later in the procedure. You can choose other or additional applications to publish, if needed.

To specify the application to publish and create the image

  1. From the image builder desktop, open the Image Assistant icon available on the desktop. Image Assistant guides you through the image creation process.
  2. In 1. Add Apps:
    1. Choose + Add App.
    2. Enter the location C:\Program Files (x86)\Mozilla Firefox\firefox.exe to add Firefox.
    3. Choose Open. Keep the default settings and choose Save.
    4. Choose Next multiple times until you see 4. Optimize.
  3. In 4. Optimize:
    1. Choose Launch.
    2. Choose Continue until you can see 5. Configure Image.
  4. In 5. Configure Image:
    1. For Name, enter session-recording for your image name.
    2. Choose Next.
  5. In 6. Review:
    1. Choose Disconnect and Create Image.
  6. Back in the AppStream 2.0 console:
    1. Choose Images in the left pane, and then choose the Image Registry tab.
    2. Change All Images to Private and shared with others. You will see your new AppStream 2.0 image.
    3. Wait until the image is in the Available state. This can take more than 30 minutes.

Step 5: Create an AppStream 2.0 fleet

Next, create an AppStream 2.0 fleet that consists of streaming instances that run your custom image.

To create the AppStream 2.0 fleet

  1. In the left pane of the AppStream 2.0 console, choose Fleets, and then choose Create Fleet.
  2. In Step 1: Provide Fleet Details:
    1. For Name, enter session-recording-fleet.
    2. Choose Next.
  3. In Step 2: Choose an Image:
    1. Select the name of the custom image that you created with the image builder.
    2. Choose Next.
  4. In Step 3: Configure Fleet:
    1. For Instance Type, select stream.standard.medium.
    2. For Fleet Type, choose Always-on.
    3. For Stream view, you can choose to stream either the applications or the entire desktop.
    4. For IAM role, select the IAM role.
    5. Keep the defaults for all other parameters, and choose Next.
  5. In Step 4: Configure Network:
    1. Choose Default Internet Access to provide internet access to your fleet instances.
    2. Select the VPC, the two subnets, and the security group.
    3. Choose Next.
  6. In Step 5: Review, choose Create.
  7. Wait until the fleet is in the Running state.

Step 6: Create an AppStream 2.0 stack

Create an AppStream 2.0 stack and associate it with the fleet that you just created.

To create the AppStream 2.0 stack

  1. In the left pane of the AppStream 2.0 console, choose Stacks, and then choose Create Stack.
  2. In Step 1: Stack Details:
    1. For Name, enter session-recording-stack.
    2. For Fleet, select the fleet that you created.
  3. Then follow the on-screen instructions and keep the defaults for all other parameters until the stack is created.

Step 7: Create a user in the AppStream 2.0 user pool

The AppStream 2.0 user pool provides a simplified way to manage access to applications for your users. In this step, you create a user in the user pool that you will use later in the procedure to test the solution.

To create the user in the AppStream 2.0 user pool

  1. In the left pane of the AppStream 2.0 console, choose User Pool, and then choose Create User.
  2. Enter your email address, first name, and last name. Choose Create User.
  3. Select the user you just created. Choose Actions, and then choose Assign stack.
  4. Select the stack, and then choose Assign stack.

Step 8: Test the solution

Now, sign in to AppStream 2.0 with the user that you just created, launch a streaming session, and check that the session recordings are delivered to Amazon S3.

To launch a streaming session and test the solution

  1. AppStream 2.0 sends you a notification email. Connect to the sign-in portal by entering the information included in the notification email, and set a permanent password.
  2. Sign in to AppStream 2.0 by entering your email address and the permanent password.
  3. After you sign in, you can view the application catalog. Choose Firefox to launch a Firefox window and browse any websites you’d like.
  4. Choose the user icon at the top-right corner, and then choose Logout to end the session.

In the Amazon S3 console, navigate to the S3 bucket to browse the session recordings. For the session you just terminated, you can find one text file that contains user and instance metadata, and one or more video files that you can download and play with a media player like VLC.
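
If you prefer to retrieve recordings programmatically rather than browsing the console, a short boto3 sketch like the following can list and download what was uploaded. The bucket name and prefix are placeholders: use the bucket name from the CloudFormation outputs, and adjust the prefix to match the key structure you see in your bucket.

import boto3

BUCKET = "your-session-recording-bucket"  # placeholder: value from the CloudFormation outputs
PREFIX = ""                               # placeholder: for example, a user- or date-based prefix

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"], obj["LastModified"])

# Download a specific recording or metadata file for review, for example:
# s3.download_file(BUCKET, "path/to/recording.mp4", "recording.mp4")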

Step 9: Clean up resources

You can now clean up the resources that were created for this solution.

To clean up resources

  1. To delete the image builder:
    1. In the left pane of the AppStream 2.0 console, choose Images, and then choose Image Builder.
    2. Select the image builder. Choose Actions, then choose Delete.
  2. To delete the stack:
    1. In the left pane of the AppStream 2.0 console, choose Stacks.
    2. Select the stack. Choose Actions, then choose Disassociate Fleet. Choose Disassociate to confirm.
    3. Choose Actions, then choose Delete.
  3. To delete the fleet:
    1. In the left pane of the AppStream 2.0 console, choose Fleets.
    2. Select the fleet. Choose Actions, then choose Stop. Choose Stop to confirm.
    3. Wait until the fleet is in the Stopped state.
    4. Choose Actions, then choose Delete.
  4. To disable the user in the user pool:
    1. In the left pane of the AppStream 2.0 console, choose User Pool.
    2. Select the user. Choose Actions, then choose Disable user. Choose Disable User to confirm.
  5. Empty the S3 bucket that CloudFormation created (see How do I empty an S3 bucket?). Repeat the same operation with the buckets that AppStream 2.0 created, whose names start with appstream-settings, appstream-logs and appstream2.
  6. Delete the CloudFormation stack on the AWS CloudFormation console (see Deleting a stack on the AWS CloudFormation console).

Conclusion

In this blog post, I showed you a way to record AppStream 2.0 sessions to video files for administrative access auditing, troubleshooting, or quality assurance. While this blog post focuses on Amazon AppStream 2.0, you could adapt and deploy the solution in Amazon Workspaces or in Amazon Elastic Compute Cloud (Amazon EC2) Windows instances.

For a deep-dive explanation of how the solution scripts function, you can refer to the GitHub repository that contains the solution artifacts.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon AppStream 2.0 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nicolas Malaval

Nicolas is a Solution Architect for Amazon Web Services. He lives in Paris and helps our healthcare customers in France adopt cloud technology and innovate with AWS. Before that, he spent three years as a Consultant for AWS Professional Services, working with enterprise customers.

Combining encryption and signing with AWS asymmetric keys

Post Syndicated from J.D. Bean original https://aws.amazon.com/blogs/security/combining-encryption-and-signing-with-aws-asymmetric-keys/

In this post, I discuss how to use AWS Key Management Service (KMS) to combine asymmetric digital signature and asymmetric encryption of the same data.

The addition of support for asymmetric keys in AWS KMS has exciting use cases for customers. The ability to create, manage, and use public and private key pairs with KMS enables you to perform digital signing operations using RSA and Elliptic Curve Cryptography (ECC) keys. AWS KMS asymmetric keys can also be used to perform digital encryption operations using RSA keys. You can use these features together to digitally sign and encrypt the same data.

Another notable property of AWS KMS asymmetric keys is that they enable disconnected use cases. For example, AWS KMS asymmetric keys can be used to cryptographically verify a digital signature client-side without the need for a network connection. AWS KMS asymmetric keys also enable scenarios where customers can use KMS to securely manage decryption of data that has been encrypted by a partner’s system that does not integrate with AWS APIs or have access to AWS account credentials. For the sake of simplicity, however, the example that I discuss in this post describes a connected use case where all cryptographic actions are performed server-side in AWS KMS using AWS credentials. The use of AWS KMS asymmetric keys throughout this post allows the overall approach to be adapted to disconnected and/or non-AWS-integrated use cases.
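
To make the disconnected scenario concrete, the following sketch exports the public half of a signing CMK once (a connected step) and then verifies an ECDSA_SHA_512 signature locally with the third-party cryptography package, without calling AWS at verification time. The key alias and file names are examples that happen to match the walkthrough later in this post; this sketch illustrates the pattern and is not a step you need to perform.

import boto3
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import load_der_public_key

kms = boto3.client("kms")

# Connected, one-time step: export the public key of an ECC SIGN_VERIFY CMK.
public_key_der = kms.get_public_key(KeyId="alias/sample-sign-verify-key")["PublicKey"]

# Disconnected step: verify a signature with no network call to AWS.
public_key = load_der_public_key(public_key_der)
message = open("message.json", "rb").read()
signature = open("message.sig", "rb").read()

# Raises cryptography.exceptions.InvalidSignature if either file was altered.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA512()))
print("Signature verified offline")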

Overview

This post contains three basic steps.

  1. Create and configure AWS asymmetric customer master keys (CMK), AWS Identity and Access Management (IAM) roles, and key policies.
  2. Use your asymmetric CMKs to encrypt and sign a sample message in the role of a sender.
  3. Use AWS KMS to decrypt and verify the message signature of the sample message archive you generated in the previous procedure using your asymmetric CMKs in the role of a receiver.

Prerequisites

The commands I use in this tutorial were tested using AWS Command Line Interface (AWS CLI) version 2.50 on Amazon Linux 2. To run these commands in your own local environment, ensure that you have first installed and updated the AWS CLI.

I assume you have at least one administrator identity available to you that has broad rights for creating roles, assuming roles, as well as creating, managing and using KMS keys. This can be a federated identity (for example, from your corporate identity provider or from a social identity), or it can be an AWS IAM user. Where no AWS identity is mentioned, I assume that you will be accessing the AWS Management Console or the AWS CLI using this administrator identity.

For simplicity, I create the KMS keys in the same Region as each other. You must specify an AWS Region when using the AWS CLI, either explicitly or by setting a default Region. Before beginning, select an AWS Region to work in, such as US East (N. Virginia). If you have not configured the AWS CLI in your environment, review the Configuration basics section of the AWS Command Line Interface User Guide for instructions. You may revert this configuration once you have finished if you do not wish to continue using a default Region with your AWS CLI. Take note of your selected Region. When working in the AWS Console, if you do not see resources, such as AWS KMS keys, that you expect, confirm that you are viewing resources in your chosen Region. For more information on selecting your Region in the AWS Console, see Choosing a Region in the AWS Management Console Getting Started Guide.

Create and configure resources

In the first phase of this tutorial, you create two asymmetric AWS KMS CMKs and two AWS IAM roles, and you configure the key policies for both of your KMS CMKs to grant permissions to the roles, as shown in the following figure.
 

Figure 1: Create keys, roles, and key policies

Create asymmetric signing and encryption key pairs

In the first step, you create two asymmetric customer master keys (CMKs). One is configured for signing and verifying digital signatures, while the other is configured for encrypting and decrypting data.

Note: The CMKs configured for this post are examples. RSA and elliptic curve (ECC) CMK key specs differ in a variety of dimensions. The RSA or elliptic curve key spec that you choose might be determined by your security standards or the requirements of your task. Different CMK key specs are priced differently and are subject to different request quotas because they each have different performance profiles. In general, use RSA or ECC keys with the highest security level that is practical and affordable for your task. For more information on CMK configuration options, please review the How to choose your CMK configuration section of the KMS Developer Guide.

To create a CMK for encryption and decryption

  1. Use the KMS CreateKey API. Pass RSA_4096 for the CustomerMasterKeySpec parameter and ENCRYPT_DECRYPT for the KeyUsage parameter in the AWS CLI example command below in order to generate an RSA 4096 key pair for encryption and decryption using AWS KMS.
    aws kms create-key --customer-master-key-spec RSA_4096 \
        --key-usage ENCRYPT_DECRYPT \
        --description "Sample Digital Encryption Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value in this object.

  2. As a best practice, assign an alias for your key. Use the following command to assign an alias of sample-encrypt-decrypt-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID). Mapping a human-readable alias to the KeyID will make it easier to identify, use, and manage.
    aws kms create-alias \
        --alias-name alias/sample-encrypt-decrypt-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    

To create a CMK for signature and verification

  1. Use the KMS CreateKey API. Pass ECC_NIST_P521 for the CustomerMasterKeySpec parameter and SIGN_VERIFY for the KeyUsage parameter in the AWS CLI example command below in order to generate an elliptic curve (ECC) key pair for signature creation and verification using AWS KMS.
    aws kms create-key --customer-master-key-spec \
        ECC_NIST_P521  \
        --key-usage SIGN_VERIFY \
        --description "Sample Digital Signature Key Pair"
    

    Note: If successful, this command returns a KeyMetadata object. Take note of the KeyID value.

  2. Use the following command to assign an alias of sample-sign-verify-key to your newly created CMK (replace the target-key-id value of 1234abcd-12ab-34cd-56ef-1234567890ab with your KeyID).
    aws kms create-alias \
        --alias-name alias/sample-sign-verify-key \
        --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    

Create sender and receiver roles

For the next step of this tutorial, you create two AWS principals. Use the steps that follow to create two roles—a sender principal and a receiver principal. Later, you will grant permissions to perform private key operations (sign and decrypt) and public key operations (verify and encrypt) to these roles.

To create and configure the roles

  1. Navigate to the AWS Identity and Access Management (IAM) Create role console dialogue and choose the option that allows entities in a specified AWS account to assume the role. Enter your account ID and choose Next, as shown in the following figure.

    Note: If you don't know your AWS account ID, please read Finding your AWS account ID in the AWS IAM User Guide for guidance on how to obtain this information.

    Figure 2: Enter your account ID to begin creating a role in AWS IAM

  2. Select Next through the next two screens.

    Note: By clicking next through these dialogues you do not attach an IAM permissions policy or a tag to this new role.

  3. On the final screen, enter a Role name of SenderRole and a Role description of your choice, as shown in the following figure.
     
    Figure 3: Create the sender role

  4. Choose Create role to finish creating the sender role.
  5. To create the receiver role, repeat the preceding role creation process. However, in step 3, substitute the name ReceiverRole for SenderRole.

Configure key policy permissions

Best practice is to adhere to the principle of least privilege and provide each AWS principal with the minimal permissions necessary to perform its tasks. The sender and receiver roles that you created in the previous step currently have no permissions in your account. For this scenario, the receiver principal must be granted permission to verify digital signatures and decrypt data in AWS KMS using your asymmetric CMKs, and the sender principal must be granted permission to create digital signatures and encrypt data in KMS using your asymmetric CMKs.

To provide access control permissions for AWS KMS actions to your AWS principals, attach a key policy to each of your CMKs.

Modify the CMK key policy

For the sample-encrypt-decrypt-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Encrypt permissions and the IAM role for the receiver principal (ReceiverRole) kms:Decrypt permissions in the CMK key policy.

To modify the CMK key policy (console)

  1. Navigate to the AWS KMS page in the AWS Console and select customer-managed keys.
  2. Select your sample-encrypt-decrypt-key CMK.
  3. In the key policy section, choose edit.
  4. To allow your receiver principal to use the CMK to decrypt data encrypted under that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for decryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Decrypt",
        "Resource": "*"
    }
    

  5. To allow your sender principal to use the CMK to encrypt data, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for encryption",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Encrypt",
        "Resource": "*"
    }
    

  6. Choose Save changes.

Note: The kms:Encrypt permission is sufficient to permit the sender principal to encrypt small amounts of arbitrary data using your CMK directly.
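
As a rough guide to what “small amounts” means, the maximum plaintext for a single direct RSA encryption is the key size minus the OAEP padding overhead. The following sketch works this out for an RSA_4096 CMK with the RSAES_OAEP_SHA_256 algorithm used later in this post; larger payloads require envelope encryption with a data key instead.

# Maximum OAEP plaintext = modulus size - 2 * hash size - 2 (all in bytes)
key_bytes = 4096 // 8       # RSA_4096 modulus
hash_bytes = 256 // 8       # SHA-256 digest used by RSAES_OAEP_SHA_256
print(key_bytes - 2 * hash_bytes - 2)   # 446 bytes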

Grant sign and verify permissions to the CMK key policy

For the sample-sign-verify-key CMK, grant the IAM role for the sender principal (SenderRole) kms:Sign permissions in the CMK key policy and the IAM role for the receiver principal (ReceiverRole) kms:Verify permissions in the CMK key policy.

To grant sign and verify permissions

  1. Using the same process as above, navigate to the key policy edit dialog for the sample-sign-verify-key CMK in the AWS console.
  2. To allow your sender principal to use the CMK to create digital signatures, append the following statement to the key policy (replace the account ID value of 111122223333 with your own).
    {
        "Sid": "Allow use of the CMK for digital signing",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/SenderRole"},
        "Action": "kms:Sign",
        "Resource": "*"
    }
    

  3. To allow your receiver principal to use the CMK to verify signatures created by that CMK, append the following statement to the key policy (replace the account ID value of 111122223333 with your own):
    {
        "Sid": "Allow use of the CMK for digital signature verification",
        "Effect": "Allow",
        "Principal": {"AWS":"arn:aws:iam::111122223333:role/ReceiverRole"},
        "Action": "kms:Verify",
        "Resource": "*"
    }
    

  4. Choose Save changes.

Key permissions summary

When these key policy edits have been completed, the sender principal:

  • Will have permissions to encrypt data using the sample-encrypt-decrypt-key CMK and generate digital signatures using the sample-sign-verify-key CMK.
  • Will not have permissions to decrypt or to verify signatures using the CMKs.

The receiver principal:

  • Will have permissions to decrypt data which has been encrypted using the sample-encrypt-decrypt-key CMK and to verify signatures created using the sample-sign-verify-key CMK.
  • Will not have permissions to encrypt or to generate signatures using the CMKs.

Figure 4: Summary of key policy permissions

Signing and encrypting a sample message

So far, you’ve created two asymmetric CMKs, created a set of sender and receiver roles, and configured permissions for those roles in each of your CMK key policies. In the second phase of this tutorial, you assume the role of sender and use your asymmetric signature and verification CMK to sign a sample message. You then bundle the sample message and its corresponding digital signature together into an archive and use your encryption and decryption asymmetric CMK to encrypt the archive.
 

Figure 5: Creating a message signature and encrypting the message along with its signature

Note: The order of operations in this process is that the message is first signed, and then the signature and the message are encrypted together. This order is intentional. When a message is signed and then encrypted, neither the contents nor the identity of the sender is available to unauthorized third parties. If the order of operations were reversed, however, and a message was first encrypted and then signed, it could leak information about the sender’s identity to unauthorized third parties. Moreover, when a message is encrypted and then signed, an unauthorized third party with access to the files could discard the authentic signature created by the sender and replace it with a valid signature created by their own key. This creates the potential for a third party to deceptively create the appearance that they are the legitimate sender of the message and exploit that misperception further.

Assume the sender role

Start by assuming the sender role. In order to successfully assume a role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the sender role.

To assume the sender role

  1. Run the following command, but be sure to replace the account ID value of 111122223333 with your account ID:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/SenderRole \
        --role-session-name AWSCLI-Session
    

  2. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  3. Confirm that you’ve successfully assumed the sender role by issuing:
    aws sts get-caller-identity
    

    Note: If the output of this command contains the text assumed-role/SenderRole, then you’ve successfully assumed the sender role.

Create a message

Now, create a sample message file called message.json.

To create a message

Run the following command to create a message with the following content:

echo "
{ 
    "message": "The Magic Words are Squeamish Ossifrage", 
    "sender": "Sender Principal" 
}
" > ./message.json 

Create a digital signature

Creating and verifying a digital signature for the message provides confidence that the message contents haven’t been altered after being sent. This characteristic is known as integrity. Furthermore, when access to a signing key is scoped to a particular principal, creating and verifying a digital signature for the message provides confidence in the sender’s identity. This characteristic is known as authenticity. Finally, a high degree of confidence in both the integrity and authenticity of a message limits the plausible ability of a sender to fraudulently deny having signed a message. This characteristic is known as non-repudiation.

To create a digital signature

Run the following command to create a digital signature for message.json:

aws kms sign \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --signing-algorithm ECDSA_SHA_512 \
    --message fileb://message.json \
    --output text \
    --query Signature | base64 --decode > message.sig

This generates an independent digital signature file, message.sig, for message.json. Any modification to the contents of message.json, such as changing the sender or message fields, will now cause signature validation of message.sig to fail for message.json.

Encrypt the message and signature

Even with the benefits of a digital signature, the message could still be viewed by any party with access to the file. In order to provide confidence that the message contents aren’t exposed to unauthorized parties, you can encrypt the message. This characteristic is known as confidentiality. In order to retain the benefits of your digital signature you can encrypt the message and corresponding signature together in a single package.

To encrypt the message and signature

  1. Combine your message and signature into an archive. For example, with the GNU Tar utility you can issue the following:
    tar -czvf message.tar.gz message.sig message.json
    

    This will create a new archive file named message.tar.gz containing both your message and message signature.

  2. Encrypt the archive using AWS KMS. To do so, issue the following command:
    aws kms encrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --plaintext fileb://message.tar.gz \
        --output text \
        --query CiphertextBlob | base64 --decode > message.enc
    

    This will output a message.enc file containing an encrypted copy of the message.tar.gz archive.

Decrypting and verifying a sample message

Now that you’ve created, signed, and encrypted a message, let’s change gears and see what working with this message.enc file is like from the perspective of a receiving party. In the final phase of this tutorial you assume the role of receiver and use your asymmetric CMKs to decrypt the encrypted message archive and verify the digital signature that you created. Finally, you will view your message. The process is shown in the following figure.
 

Figure 6: Decrypting a message archive and verifying the message signature

Assume the receiver role

Assume the receiver role so that you can simulate receiving a signed and encrypted message. As before, in order to assume the receiver role, you must authenticate as an IAM principal that has permission to perform sts:AssumeRole. If the principal you are authenticated as lacks this permission, you will not be able to assume the receiver role.

To assume the receiver role

  1. Copy the message.enc file to a new directory to create a clean working space and navigate there in a terminal session.
  2. Assume your receiver role. To do so, execute the following command, replacing the account ID value of 111122223333 with your own:
    aws sts assume-role \
        --role-arn arn:aws:iam::111122223333:role/ReceiverRole \
    	--role-session-name AWSCLI-Session
    

  3. The return value for this command provides an access key ID, secret key, and session token. Substitute them into their respective places in the following commands and execute:
    export AWS_ACCESS_KEY_ID=ExampleAccessKeyID1
    export AWS_SECRET_ACCESS_KEY=ExampleSecretKey1
    export AWS_SESSION_TOKEN=ExampleSessionToken1
    

  4. Confirm that you have successfully assumed the receiver role by issuing:
    aws sts get-caller-identity
    

If the output of this command contains the text assumed-role/ReceiverRole then you have successfully assumed the receiver role.

Decrypt the encrypted message archive in AWS KMS

Decrypt the encrypted message archive to access the plaintext of the message and message signature files.

To decrypt the encrypted message archive

  1. Issue the following command:
    aws kms decrypt \
        --key-id alias/sample-encrypt-decrypt-key \
        --ciphertext-blob fileb://message.enc \
        --encryption-algorithm RSAES_OAEP_SHA_256 \
        --output text \
        --query Plaintext | base64 --decode > message.tar.gz
    

  2. This will create an unencrypted message.tar.gz file that you can unpack with:
    tar -xzvf message.tar.gz
    

This, in turn, will expand the archive contents message.sig and message.json in your working directory.

Verify the message signature

To verify the signature on the message issue the following command:

aws kms verify \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --message fileb://message.json \
    --signing-algorithm ECDSA_SHA_512 \
    --signature fileb://message.sig

In the response you should see that SignatureValid is marked true indicating that the signature has been verified using the specified sample-sign-verify-key that you granted the sender principal permission to generate signatures with.

View the message

Finally, open message.json and view the file’s contents by issuing the following command:

less message.json

You will see that the contents of the file have not been modified and still read:

{ 
    "message": "The Magic Words are Squeamish Ossifrage", 
    "sender": "Sender Principal" 
}

Note: Be careful to avoid making any changes to the contents of this file. Even a minor modification of the message contents will compromise the integrity of the message and cause future attempts at signature validation using your message.sig file to fail.
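
If you’d like to see that failure mode for yourself while still using the receiver role, the following boto3 sketch verifies the original message and then a deliberately tampered copy. The file names and key alias match the walkthrough above; the tampering shown is just an in-memory example.

import boto3

kms = boto3.client("kms")
signature = open("message.sig", "rb").read()
original = open("message.json", "rb").read()

# Verifying the untouched message succeeds.
result = kms.verify(
    KeyId="alias/sample-sign-verify-key",
    Message=original,
    MessageType="RAW",
    SigningAlgorithm="ECDSA_SHA_512",
    Signature=signature,
)
print("Original message valid:", result["SignatureValid"])

# Even a one-character change breaks verification.
tampered = original.replace(b"Squeamish", b"Squeamish!")
try:
    kms.verify(
        KeyId="alias/sample-sign-verify-key",
        Message=tampered,
        MessageType="RAW",
        SigningAlgorithm="ECDSA_SHA_512",
        Signature=signature,
    )
except kms.exceptions.KMSInvalidSignatureException:
    print("Tampered message rejected: signature is no longer valid")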

Summary

In this tutorial, you signed and encrypted data using two AWS KMS asymmetric CMKs and later decrypted and verified your signature using those CMKs.

You first created two asymmetric CMKs in AWS KMS, one for creating and verifying digital signatures and the other for encrypting and decrypting data. You then configured key policy permissions for your sender and receiver principals. Acting as your sender principal, you digitally signed a message in AWS KMS, added the message and signature to an archive and then encrypted that archive in AWS KMS. Next you assumed your receiver role and decrypted the archive in AWS KMS, viewed your message, and verified its signature in AWS KMS.

To learn more about the asymmetric keys feature of AWS KMS, please read the AWS KMS Developer Guide. If you have questions about the asymmetric keys feature, please start a new thread on the AWS KMS forum. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

J.D. Bean

J.D. is a Senior Solutions Architect at AWS working with public sector organizations and financial institutions based out of New York City. His interests include security, privacy, and compliance. He is passionate about his work enabling AWS customers’ successful cloud journeys. J.D. holds a Bachelor of Arts from The George Washington University and a Juris Doctor from New York University School of Law.

Aligning IAM policies to user personas for AWS Security Hub

Post Syndicated from Vaibhawa Kumar original https://aws.amazon.com/blogs/security/aligning-iam-policies-to-user-personas-for-aws-security-hub/

AWS Security Hub provides you with a comprehensive view of your security posture across your accounts in Amazon Web Services (AWS) and gives you the ability to take action on your high-priority security alerts. There are several different user personas that use Security Hub, and they typically require different AWS Identity and Access Management (IAM) permissions. Those personas include: a security administrator; a security analyst or engineer on a central security team or in a Cloud Center of Excellence; and a Developer Operations (DevOps) engineer or application builder who is the primary owner of an AWS account. In this post, we show how to deploy sample IAM policies for these three personas.

The first persona, a security administrator or cloud system administrator (sysadmin), is responsible for setting up and configuring Security Hub, and they typically need access to all of the Security Hub APIs to do this work. As part of this work, the sysadmin enables Security Hub on various accounts and Regions, decides which standards and controls should be enabled on which accounts, enables product integrations, creates insights, sets up custom actions and automated remediations, and configures IAM policies for other users.

The second persona, a security analyst or engineer using Security Hub, is part of a central security team, often part of a Cloud Center of Excellence. We often see that cloud security efforts are centralized in Cloud Centers of Excellence within a company. These security analysts or engineers typically have access to the master account in Security Hub and can view and take action on findings from any of the connected member accounts. They typically are not configuring Security Hub, so they don’t need permissions to do so.

The third persona is a DevOps engineer or application builder. This user needs the ability to view findings and take action only on the findings associated with their account. For cloud workloads, security is often decentralized down to these users. Security Hub enables them to take more proactive responsibility for the security of their own account by directly viewing and taking action on findings in their account. They typically don’t need permissions to set up and configure Security Hub, because that is done by a central sysadmin.

Overview

The following reference architecture presents an overview of a Security Hub master-member account structure and three personas: a security administrator, a security analyst/engineer, and a DevOps engineer.
 

Figure 1: Reference architecture

In this blog post, we show how you can create and use the following AWS managed and customer managed IAM policies to support these three personas:

  • The sysadmin persona needs permissions to configure and manage Security Hub, account memberships, insights, and integrations, and to create remediations and take actions and perform record and workflow updates. The AWS managed IAM policy called AWSSecurityHubFullAccess provides the permissions for this persona. An IAM user or role with these permissions can deploy and configure Security Hub in master and member accounts. They can also update findings. The sysadmin also requires permissions to configure AWS Config and Amazon CloudWatch event rules to set up automated responses and remediations.
  • The security analyst persona needs permissions to read, list, and describe findings, standards, controls, and products; to update findings; and to create and update insights for Security Hub resources in the master account. The AWS managed IAM policy called AWSSecurityHubReadOnlyAccess provides the permissions needed for read, list, and describe actions, and a customer managed policy will be attached to give permissions to create and update insights and to update findings.
  • The DevOps engineer persona needs the same permissions as the security analyst persona, but they will only have the ability to access their own Security Hub member AWS account(s) and won’t have access to the master account.

Depending on your specific use case, you might want to provide additional permissions to the security analyst and DevOps engineer personas. For example, you might want to also grant them permissions to create custom actions by using the UpdateActionTarget API. In that case, you should also ensure that they have appropriate permissions to create CloudWatch event rules. You can also restrict these personas to only be able to update certain fields in findings (for example, only update workflow status but not severity) by using IAM context keys.

Prerequisites

You must have already enabled Security Hub with one account as master and other associated accounts as members. You will use the following AWS services:

Implementation

To create the required customer managed policies and associate them to users and roles, you will perform these tasks, described in more detail later in this section:

  1. Create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  2. Create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  3. Create a sysadmin user and role and associate it with the AWS managed policy for AWSSecurityHubFullAccess, along with AWS Config and CloudWatch event rule permissions in the Security Hub master account.

The following policy JSON script is for those two customer managed policies.

Security Hub - Security Analyst policy (master account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SecurityAnalystMasterCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
	  			"securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}

Security Hub – DevOps Engineer policy (member account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DevOpsMemberCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
                "securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}

Step 1: Create the user and role for the security analyst persona

First, create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – Security Analyst policy JSON shown earlier in this section, and paste it into the JSON editor. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you’re creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes and try using the user and role after a few minutes.

Step 2: Create the user and role for the DevOps persona

Next, create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub member account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – DevOps Engineer policy JSON shown earlier in this section, and paste it into the JSON editor. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you are creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes and try using the user and role after a few minutes.

Step 3: Create the user and role for the sysadmin persona

Next, create a user and role for the sysadmin persona in the Security Hub master account, and associate it with the AWSSecurityHubFullAccess and CloudWatchEventsFullAccess AWS managed policies, along with a policy that grants full access to AWS Config.

To create the IAM user and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role.
  3. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  4. Choose the permissions category Attach existing policies directly to filter and select managed policies, and attach the managed policies AWSSecurityHubFullAccess and CloudWatchEventsFullAccess.
  5. Create another policy to grant full access to AWS Config as described in the AWS Config Developer Guide, and attach the policy to the user or role. Review the permissions, and choose Add permissions.
  6. Save all the changes and try using the user and role after a few minutes.

Test the users and roles in the Security Hub master and member accounts

Finally, test the three users and roles that you created for the respective personas in the preceding steps.

To test the users and roles

  1. Security analyst user and role:
    1. Sign in to the AWS Management Console in the master account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the Security Analyst can view them.
    3. Navigate to a finding and change the workflow status as described in the topic Setting the workflow status for findings.
  2. DevOps engineer user and role:
    1. Sign in to the AWS Management Console in the member account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the DevOps Engineer can view them.
    3. Create a custom insight for the member account and other attribute groupings.
  3. Sysadmin user and role:
    1. Sign in to the AWS Management Console in the master or member account, and open Security Hub.
    2. Try admin-related operations such as create a custom action, invite another account, and so on.

Adding policy conditions

You might want to further restrict the permissions for the security analyst and DevOps personas. IAM policies for Security Hub’s BatchUpdateFindings API enable you to specify conditions to prevent a user from making any update to a specific finding field. The following example disallows setting the Workflow Status field to Suppressed.

{
	"Sid": "CMPCondition",
	"Effect": "Deny",
	"Action": "securityhub:BatchUpdateFindings",
	"Resource": "*",
	"Condition": {
		"StringEquals": {
			"securityhub:ASFFSyntaxPath/Workflow.Status": "SUPPRESSED"
		}
	}
}
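
To see what this condition means at the API level, here is a hedged boto3 sketch of the calls a security analyst might make. The finding ID and product ARN are placeholders for a real finding in your account. With the deny statement above in place, the first update (to NOTIFIED) is allowed by the customer managed policy shown earlier, while the second (to SUPPRESSED) is expected to be denied.

import boto3

securityhub = boto3.client("securityhub")

# Placeholders: substitute the Id and ProductArn of a real finding,
# for example from the console or a GetFindings call.
finding = {
    "Id": "arn:aws:securityhub:us-east-1:111122223333:subscription/example/finding-id",
    "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub",
}

# Allowed: setting a workflow status other than SUPPRESSED.
securityhub.batch_update_findings(
    FindingIdentifiers=[finding],
    Workflow={"Status": "NOTIFIED"},
)

# Expected to be denied by the condition above: setting SUPPRESSED.
securityhub.batch_update_findings(
    FindingIdentifiers=[finding],
    Workflow={"Status": "SUPPRESSED"},
)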

Summary

In this post, we showed you how to align AWS managed and customer managed IAM policies to different user personas, so that you can allow different users to access Security Hub with least privilege permissions. Security Hub enables both central security teams and individual DevOps engineers to understand and improve the security posture of the AWS accounts in their organization.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vaibhawa Kumar

Vaibhawa is a Cloud Infra Architect with AWS Professional Services. He helps customers on their cloud journey of critical workloads to the AWS cloud with infrastructure security and automations. In his free time, you can find him spending time with family, sports, and cooking.

Author

Sanjay Patel

Sanjay is a Senior Cloud Application Architect with AWS Professional Services. He has a diverse background in software design, enterprise architecture, and API integrations. He has helped AWS customers automate infrastructure security. He enjoys working with AWS customers to identify and implement the best fit solution.

Author

Ely Kahn

Ely is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

Rapid and flexible Infrastructure as Code using the AWS CDK with AWS Solutions Constructs

Post Syndicated from Biff Gaut original https://aws.amazon.com/blogs/devops/rapid-flexible-infrastructure-with-solutions-constructs-cdk/

Introduction

As workloads move to the cloud and all infrastructure becomes virtual, infrastructure as code (IaC) becomes essential to leverage the agility of this new world. JSON and YAML are the powerful, declarative modeling languages of AWS CloudFormation, allowing you to define complex architectures using IaC. Just as higher level languages like BASIC and C abstracted away the details of assembly language and made developers more productive, the AWS Cloud Development Kit (AWS CDK) provides a programming model above the native template languages, a model that makes developers more productive when creating IaC. When you instantiate CDK objects in your Typescript (or Python, Java, etc.) application, those objects “compile” into a YAML template that the CDK deploys as an AWS CloudFormation stack.

AWS Solutions Constructs take this simplification a step further by providing a library of common service patterns built on top of the CDK. These multi-service patterns allow you to deploy multiple resources with a single object, resources that follow best practices by default – both independently and throughout their interaction.

Figure: Application development stack vs. IaC development stack – comparing an application stack of assembly language, 4th-generation languages, and object libraries such as Hibernate with an IaC stack of CloudFormation, the AWS CDK, and AWS Solutions Constructs

Solution overview

To demonstrate how using Solutions Constructs can accelerate the development of IaC, in this post you will create an architecture that ingests and stores sensor readings using Amazon Kinesis Data Streams, AWS Lambda, and Amazon DynamoDB.

Figure: Solution architecture – sensor readings are sent to a Kinesis data stream; a Lambda function receives the Kinesis records and stores them in a DynamoDB table.

Prerequisite – Setting up the CDK environment

Tip – If you want to try this example but are concerned about the impact of changing the tools or versions on your workstation, try running it on AWS Cloud9. An AWS Cloud9 environment is launched with an AWS Identity and Access Management (AWS IAM) role and doesn’t require configuring with an access key. It uses the current region as the default for all CDK infrastructure.

To prepare your workstation for CDK development, confirm the following:

  • Node.js 10.3.0 or later is installed on your workstation (regardless of the language used to write CDK apps).
  • You have configured credentials for your environment. If you’re running locally you can do this by configuring the AWS Command Line Interface (AWS CLI).
  • TypeScript 2.7 or later is installed globally (npm -g install typescript)

Before creating your CDK project, install the CDK toolkit using the following command:

npm install -g aws-cdk

Create the CDK project

  1. First create a project folder called stream-ingestion with these two commands:

mkdir stream-ingestion
cd stream-ingestion

  2. Now create your CDK application using this command:

npx aws-cdk@1.68.0 init app --language=typescript

Tip – This example will be written in TypeScript – you can also specify other languages for your projects.

At this time, you must use the same version of the CDK and Solutions Constructs. We’re using version 1.68.0 of both based upon what’s available at publication time, but you can update this with a later version for your projects in the future.

Let’s explore the files in the application this command created:

  • bin/stream-ingestion.ts – This is the module that launches the application. The key line of code is:

new StreamIngestionStack(app, 'StreamIngestionStack');

This creates the actual stack, and it’s in StreamIngestionStack that you will write the CDK code that defines the resources in your architecture.

  • lib/stream-ingestion-stack.ts – This is the important class. In the constructor of StreamIngestionStack you will add the constructs that will create your architecture.

During the deployment process, the CDK uploads your Lambda function to an Amazon S3 bucket so it can be incorporated into your stack.

  3. To create that S3 bucket and any other infrastructure the CDK requires, run this command:

cdk bootstrap

The CDK uses the same supporting infrastructure for all projects within a region, so you only need to run the bootstrap command once in any region in which you create CDK stacks.
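
If your credentials can resolve more than one account or Region, you can also bootstrap a specific environment explicitly. For example (the account ID and Region below are placeholders for your own values):

cdk bootstrap aws://123456789012/us-east-1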

  4. To install the required Solutions Constructs packages for our architecture, run these two commands from the command line:

npm install @aws-solutions-constructs/aws-kinesisstreams-lambda@1.68.0
npm install @aws-solutions-constructs/aws-lambda-dynamodb@1.68.0

Write the code

First you will write the Lambda function that processes the Kinesis data stream messages.

  1. Create a folder named lambda under stream-ingestion
  2. Within the lambda folder save a file called lambdaFunction.js with the following contents:
var AWS = require("aws-sdk");

// Create the DynamoDB service object
var ddb = new AWS.DynamoDB({ apiVersion: "2012-08-10" });

AWS.config.update({ region: process.env.AWS_REGION });

// We will configure our construct to 
// look for the .handler function
exports.handler = async function (event) {
  try {
    // Kinesis will deliver records 
    // in batches, so we need to iterate through
    // each record in the batch
    for (let record of event.Records) {
      const reading = parsePayload(record.kinesis.data);
      await writeRecord(record.kinesis.partitionKey, reading);
    };
  } catch (err) {
    console.log(`Write failed, err:\n${JSON.stringify(err, null, 2)}`);
    throw err;
  }
  return;
};

// Write the provided sensor reading data to the DynamoDB table
async function writeRecord(partitionKey, reading) {

  var params = {
    // Notice that Constructs automatically sets up 
    // an environment variable with the table name.
    TableName: process.env.DDB_TABLE_NAME,
    Item: {
      partitionKey: { S: partitionKey },  // sensor Id
      timestamp: { S: reading.timestamp },
      value: { N: reading.value}
    },
  };

  // Call DynamoDB to add the item to the table
  await ddb.putItem(params).promise();
}

// Decode the payload and extract the sensor data from it
function parsePayload(payload) {

  const decodedPayload = Buffer.from(payload, "base64").toString(
    "ascii"
  );

  // Our CLI command will send the records to Kinesis
  // with the values delimited by '|'
  const payloadValues = decodedPayload.split("|", 2)
  return {
    value: payloadValues[0],
    timestamp: payloadValues[1]
  }
}

We won’t spend a lot of time explaining this function – it’s pretty straightforward and heavily commented. It receives an event with one or more sensor readings, and for each reading it extracts the pertinent data and saves it to the DynamoDB table.

You will use two Solutions Constructs to create your infrastructure:

The aws-kinesisstreams-lambda construct deploys an Amazon Kinesis data stream and a Lambda function.

  • aws-kinesisstreams-lambda creates the Kinesis data stream and Lambda function that subscribes to that stream. To support this, it also creates other resources, such as IAM roles and encryption keys.

The aws-lambda-dynamodb construct deploys a Lambda function and a DynamoDB table.

  • aws-lambda-dynamodb creates an Amazon DynamoDB table and a Lambda function with permission to access the table.
  3. To deploy the first of these two constructs, replace the code in lib/stream-ingestion-stack.ts with the following code:
import * as cdk from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
import { KinesisStreamsToLambda } from "@aws-solutions-constructs/aws-kinesisstreams-lambda";

import * as ddb from "@aws-cdk/aws-dynamodb";
import { LambdaToDynamoDB } from "@aws-solutions-constructs/aws-lambda-dynamodb";

export class StreamIngestionStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const kinesisLambda = new KinesisStreamsToLambda(
      this,
      "KinesisLambdaConstruct",
      {
        lambdaFunctionProps: {
          // Where the CDK can find the lambda function code
          runtime: lambda.Runtime.NODEJS_10_X,
          handler: "lambdaFunction.handler",
          code: lambda.Code.fromAsset("lambda"),
        },
      }
    );

    // Next Solutions Construct goes here
  }
}

Let’s explore this code:

  • It instantiates a new KinesisStreamsToLambda object. This Solutions Construct will launch a new Kinesis data stream and a new Lambda function, setting up the Lambda function to receive all the messages in the Kinesis data stream. It will also deploy all the additional resources and policies required for the architecture to follow best practices.
  • The third argument to the constructor is the properties object, where you specify overrides of default values or any other information the construct needs. In this case you provide properties for the encapsulated Lambda function that informs the CDK where to find the code for the Lambda function that you stored as lambda/lambdaFunction.js earlier.
  4. Now you’ll add the second construct that connects the Lambda function to a new DynamoDB table. In the same lib/stream-ingestion-stack.ts file, replace the line // Next Solutions Construct goes here with the following code:
    // Define the primary key for the new DynamoDB table
    const primaryKeyAttribute: ddb.Attribute = {
      name: "partitionKey",
      type: ddb.AttributeType.STRING,
    };

    // Define the sort key for the new DynamoDB table
    const sortKeyAttribute: ddb.Attribute = {
      name: "timestamp",
      type: ddb.AttributeType.STRING,
    };

    const lambdaDynamoDB = new LambdaToDynamoDB(
      this,
      "LambdaDynamodbConstruct",
      {
        // Tell construct to use the Lambda function in
        // the first construct rather than deploy a new one
        existingLambdaObj: kinesisLambda.lambdaFunction,
        tablePermissions: "Write",
        dynamoTableProps: {
          partitionKey: primaryKeyAttribute,
          sortKey: sortKeyAttribute,
          billingMode: ddb.BillingMode.PROVISIONED,
          removalPolicy: cdk.RemovalPolicy.DESTROY
        },
      }
    );

    // Add autoscaling
    const readScaling = lambdaDynamoDB.dynamoTable.autoScaleReadCapacity({
      minCapacity: 1,
      maxCapacity: 50,
    });

    readScaling.scaleOnUtilization({
      targetUtilizationPercent: 50,
    });

Let’s explore this code:

  • The first two const objects define the names and types for the partition key and sort key of the DynamoDB table.
  • The LambdaToDynamoDB construct instantiated creates a new DynamoDB table and grants access to your Lambda function. The key to this call is the properties object you pass in the third argument.
    • The first property sent to LambdaToDynamoDB is existingLambdaObj – by setting this value to the Lambda function created by KinesisStreamsToLambda, you’re telling the construct to not create a new Lambda function, but to grant the Lambda function in the other Solutions Construct access to the DynamoDB table. This illustrates how you can chain many Solutions Constructs together to create complex architectures.
    • The second property sent to LambdaToDynamoDB tells the construct to limit the Lambda function’s access to the table to write only.
    • The third property sent to LambdaToDynamoDB is actually a full properties object defining the DynamoDB table. It provides the two attribute definitions you created earlier as well as the billing mode. It also sets the RemovalPolicy to DESTROY. This policy setting ensures that the table is deleted when you delete this stack – in most cases you should accept the default setting to protect your data.
  • The last two lines of code show how you can use statements to modify a construct outside the constructor. In this case we set up auto scaling on the new DynamoDB table, which we can access with the dynamoTable property on the construct we just instantiated.

That’s all it takes to create all the resources needed to deploy your architecture.
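
As an aside, the same pattern extends naturally to other table settings. The following is a small optional sketch (not required for the demo) that adds write-capacity auto scaling; it assumes it is placed in the same constructor, directly after the read-scaling statements above:

    // Add write-capacity auto scaling as well
    const writeScaling = lambdaDynamoDB.dynamoTable.autoScaleWriteCapacity({
      minCapacity: 1,
      maxCapacity: 50,
    });

    writeScaling.scaleOnUtilization({
      targetUtilizationPercent: 50,
    });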

  5. Save all the files, then compile the TypeScript into a CDK program using this command:

npm run build

  6. Finally, launch the stack using this command:

cdk deploy

(Enter “y” in response to Do you wish to deploy all these changes (y/n)?)

You will see some warnings where you override CDK default values. Because you are doing this intentionally you may disregard these, but it’s always a good idea to review these warnings when they occur.

Tip – Many mysterious CDK project errors stem from mismatched versions. If you get stuck on an inexplicable error, check package.json and confirm that all CDK and Solutions Constructs libraries have the same version number (with no leading caret ^). If necessary, correct the version numbers, delete the package-lock.json file and node_modules tree and run npm install. Think of this as the “turn it off and on again” first response to CDK errors.

You have now deployed the entire architecture for the demo – open the CloudFormation stack in the AWS Management Console and take a few minutes to explore all 12 resources that the program deployed (and the 380-line template generated to create them).

Feed the Stream

Now use the CLI to send some data through the stack.

Go to the Kinesis Data Streams console and copy the name of the data stream. Replace the stream name in the following command and run it from the command line.

aws kinesis put-records \
--stream-name StreamIngestionStack-KinesisLambdaConstructKinesisStreamXXXXXXXX-XXXXXXXXXXXX \
--records \
PartitionKey=1301,'Data=15.4|2020-08-22T01:16:36+00:00' \
PartitionKey=1503,'Data=39.1|2020-08-22T01:08:15+00:00'

Tip – If you are using the AWS CLI v2, the previous command will result in an “Invalid base64…” error because v2 expects the inputs to be Base64 encoded by default. Adding the argument --cli-binary-format raw-in-base64-out will fix the issue.

To confirm that the messages made it through the service, open the DynamoDB console – you should see the two records in the table.
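
You can also verify this from the command line by scanning the table (a quick check; the table name below is a placeholder for the generated name, which you can find in the DynamoDB console or in the CloudFormation stack resources):

aws dynamodb scan \
--table-name StreamIngestionStack-LambdaDynamodbConstructDynamoTableXXXXXXXX-XXXXXXXXXXXX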

Now that you’ve got it working, pause to think about what you just did. You deployed a system that can ingest and store sensor readings and scale to handle heavy loads. You did that by instantiating two objects – well under 60 lines of code. Experiment with changing some property values and deploying the changes by running npm run build and cdk deploy again.
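
For example, here is a minimal sketch of one such change; it assumes that the version of aws-kinesisstreams-lambda you installed exposes the kinesisStreamProps override, and it increases the stream’s shard count by passing an additional property to the first construct:

    const kinesisLambda = new KinesisStreamsToLambda(
      this,
      "KinesisLambdaConstruct",
      {
        // Assumption: kinesisStreamProps is available in your construct version
        kinesisStreamProps: {
          shardCount: 2,
        },
        lambdaFunctionProps: {
          runtime: lambda.Runtime.NODEJS_10_X,
          handler: "lambdaFunction.handler",
          code: lambda.Code.fromAsset("lambda"),
        },
      }
    );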

Cleanup

To clean up the resources in the stack, run this command:

cdk destroy

Conclusion

Just as languages like BASIC and C allowed developers to write programs at a higher level of abstraction than assembly language, the AWS CDK and AWS Solutions Constructs allow us to create CloudFormation stacks in TypeScript, Java, or Python instead of JSON or YAML. Just as there will always be a place for assembly language, there will always be situations where we want to write CloudFormation templates manually – but for most situations, we can now use the AWS CDK and AWS Solutions Constructs to create complex and complete architectures in a fraction of the time with very little code.

AWS Solutions Constructs can currently be used in CDK applications written in TypeScript, JavaScript, Java, and Python, and will be available in C# applications soon.

About the Author

Biff Gaut has been shipping software since 1983, from small startups to large IT shops. Along the way he has contributed to 2 books, spoken at several conferences and written many blog posts. He is now a Principal Solutions Architect at AWS working on the AWS Solutions Constructs team, helping customers deploy better architectures more quickly.

How to implement password-less authentication with Amazon Cognito and WebAuthn

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-implement-password-less-authentication-with-amazon-cognito-and-webauthn/

In this blog post, I show you how to offer a password-less authentication experience to your customers. To do this, you’ll allow physical security keys or platform authenticators (like finger-print scanners) to be used as the authentication factor to your web or mobile applications that use Amazon Cognito user pools for authentication.

An Amazon Cognito user pool is a user directory that Amazon Web Services (AWS) customers use to manage their customer identities. Web Authentication (WebAuthn) is a W3C standard that lets users authenticate to web applications using public-key cryptography. Using public-key cryptography enables you to implement a stronger authentication mechanism that’s less dependent on passwords.

Mobile and web applications can use WebAuthn together with browser and device support for the Client-To-Authenticator-Protocol (CTAP) to implement Fast ID Online (FIDO) authentication. To learn more about the flow of WebAuthn and CTAP, visit the FIDO Alliance.

How it works

Amazon Cognito user pools allow you to build a custom authentication flow that uses AWS Lambda functions to authenticate users based on one or more challenge/response cycles. You can use this flow to implement password-less authentication that is based on custom challenges. To use this flow to implement FIDO authentication, you need to create credentials during the registration phase and reference these credentials in the user’s profile. You can then validate these credentials during the authentication phase in a custom challenge.

During registration, a new set of credentials bound to your application (the relying party) is created through a FIDO authenticator, for example a platform authenticator with a biometric sensor or a roaming authenticator like a physical security key. The private key of this credential set never leaves the authenticator; the public key, together with a credential identifier, is saved in a custom attribute that’s part of the user profile in Amazon Cognito.

During authentication, an Amazon Cognito custom authentication flow is used to implement authentication through a custom challenge. The application prompts the user to sign in using the authenticator that they used during registration. The response from the authenticator is then passed as a challenge response to Amazon Cognito and verified using the stored public key.

In this password-less flow, the private key never leaves the physical device, and the authenticator also validates that the relying party in the authentication request matches the relying party that was used to create the credentials. This combination provides a more secure authentication flow that uses stronger credentials, protects users from phishing, and provides a better user experience.

About the demo project

This blog post and the diagrams below explain a scenario that uses FIDO as the only authentication factor, to implement password-less authentication. To help with implementation details, I created a project to demonstrate WebAuthn integration with Amazon Cognito that provides sample code for three scenarios:

  • A scenario that uses FIDO as the only factor (password-less)
  • A scenario that uses FIDO as a second factor (with password)
  • A scenario that lets users sign-in with only a password

This project is only a demonstration and shouldn’t be used as-is in production environments. When using FIDO as an authentication factor, it is a best practice to allow users to register multiple authenticators, and you need to implement an account recovery workflow in case of a lost authenticator.

Implementing FIDO Authentication

Let’s take a deeper dive into the design and components involved in implementing this solution. To deploy this project in your development environment, follow the instructions in the WebAuthn integration with Amazon Cognito project.

Creating and configuring user pool

The first step is to create a Cognito user pool and triggers that orchestrate a custom authentication flow. You do that by deploying the CloudFormation stack that will create all resources as explained in the demo project.

A few implementation details to note about the user pool:

  • The template creates a user pool, app client, triggers, and Lambda functions to use for custom authentication.
  • The template creates a custom attribute called publicKeyCred. This is the custom attribute that holds a base64 encoded representation of the credential identifier and public key for the user’s authenticator.
  • The app client defines what authentication flows are allowed. You can limit allowed flows according to your use-case. To support FIDO authentication, you must allow CUSTOM_AUTH flow.
  • The app client has “write” permissions to the custom attribute publicKeyCred but not “read” permissions. This allows your application to write the attribute during registration or profile updates but excludes this attribute from the user’s id_token. Since this attribute is considered back-end data that is only used during authentication, it doesn’t need to be part of the user profile in the id_token.

User registration flow

The registration flow needs to create credentials using the authenticator and store the public-key in user’s profile. Let’s take a closer look at the sequence of calls and involved components to implement this flow.
 

Figure 1: WebAuthn user registration process

  1. The user navigates to your application, www.example.com (relying party), and creates an account. A request is sent to the relying party to build a credentials options object and send it back to the browser. (in the demo project, this starts in the createCredentials function in webauthn-client.js and creates the credentials options object by making a call to createCredRequest in authn.js)
  2. The browser uses built-in WebAuthn APIs to create the new credentials with an available authenticator using the credentials options object that was created in first step. This is done by making a call to navigator.credentials.create API (this API is available in browsers and platforms that support FIDO and WebAuthn).
  3. The user experience in this step depends on the OS, browser, and the authenticator. For example, the browser could prompt the user to attach a security key or, on devices that support it, to use a biometric scanner.
     
    Figure 2: User registration and browser alert to use an authenticator in Firefox

  4. The user interacts with an authenticator (by touching a security key or scanning finger on a touch-id device), which generates new credentials bound to the relying party and returns a response object to the browser.
  5. The browser sends a credential response object to the relying party to parse and validate the response on the server side. The credential identifier and public key are extracted from the credential response. At this step, your application can also check additional authenticator data and use it to make authentication decisions. For example, your application can check whether the authenticator was able to verify the user’s identity through a PIN or biometrics (UV flag), or whether only user presence (UP flag) was verified. In the demo project, this is still part of the createCredentials function, and server-side parsing and validation is done in parseCredResponse, which is implemented in authn.js.
  6. To complete the user registration, the browser passes the profile attributes that were collected during registration through Amazon Cognito APIs as custom attributes. This step is performed in the signUp function in webauthn-client.js.

At the conclusion of this process, a new user will be created in Amazon Cognito and the custom attribute “publicKeyCred” will be populated with a base64-encoded string that includes a credential identifier and the public key generated by the authenticator. This attribute is not considered secret or sensitive data; rather, it holds the public key that will be used to verify the authenticator response during subsequent authentications.

User authentication flow

The following diagram describes the custom authentication flow to implement password-less authentication.
 

Figure 3: WebAuthn user authentication process

  1. The user provides their user name and selects the sign-in button. A script running in the browser starts the sign-in process using the Amazon Cognito InitiateAuth API, passing the user name and indicating that the authentication flow is CUSTOM_AUTH. In the demo project, this part is performed in the signIn function in webauthn-client.js.
  2. The Amazon Cognito service passes control to the Define Auth Challenge Lambda trigger. The trigger then determines that this is the first step in the authentication and returns CUSTOM_CHALLENGE as the next challenge to the user.
  3. Control then moves to the Create Auth Challenge Lambda trigger to create the custom challenge. This trigger creates a random challenge (a 64-byte random string), extracts the credential identifier from the user profile (the value passed initially during the sign-up process), combines them, and returns them as a custom challenge to the client. This is performed in the CreateAuthChallenge Lambda function.
  4. The browser then prompts the user to activate an authenticator. At this stage, the authenticator verifies that credentials exist for the identifier and that the relying party matches the one that is bound to the credentials. This is implemented by making a call to navigator.credentials.get API that is available in browsers and devices that support FIDO2 and WebAuthn.
     
    Figure 4: Authentication and browser prompt to use a registered authenticator

  5. If credentials exist and the relying party is verified, the authenticator requests user attention or verification. Depending on the type of authenticator, user verification through biometrics or a PIN code is performed, and the credentials response is passed back to the browser.
     
    Figure 5: Authentication examples from different browsers and platforms

  6. The signIn function continues the sign-in process by calling respondToAuthChallenge API and sending the credentials response to Amazon Cognito.
  7. Amazon Cognito sends the response to the Verify Auth Challenge Lambda trigger. This trigger extracts the public key from the user profile, parses and validates the credentials response, and if the signature is valid, it responds with success. This is performed in VerifyAuthChallenge Lambda trigger.
  8. Lastly, Amazon Cognito returns control to the Define Auth Challenge trigger to determine the next step. If the results from Verify Auth Challenge indicate a successful response, authentication succeeds and Amazon Cognito responds with ID, access, and refresh tokens. (A minimal sketch of such a Define Auth Challenge trigger follows this list.)
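
The demo project contains the full trigger implementations. For orientation, here is a minimal, illustrative sketch (not the demo project’s actual code) of what a Define Auth Challenge trigger for this password-less flow might look like; it issues the custom challenge on the first attempt and issues tokens only when the Verify Auth Challenge trigger reports success:

exports.handler = async (event) => {
  const session = event.request.session || [];

  if (session.length === 0) {
    // First attempt: ask the client for the WebAuthn custom challenge
    event.response.challengeName = 'CUSTOM_CHALLENGE';
    event.response.issueTokens = false;
    event.response.failAuthentication = false;
  } else if (
    session.length === 1 &&
    session[0].challengeName === 'CUSTOM_CHALLENGE' &&
    session[0].challengeResult === true
  ) {
    // Verify Auth Challenge confirmed the authenticator response
    event.response.issueTokens = true;
    event.response.failAuthentication = false;
  } else {
    // Anything else fails the authentication
    event.response.issueTokens = false;
    event.response.failAuthentication = true;
  }

  return event;
};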

Conclusion

When building customer-facing applications, you as the application owner and developer need to balance security with usability. Reducing the risk of account take-over and phishing relies on using strong credentials and strong second factors, and on minimizing the role of passwords. The flexibility of the Amazon Cognito custom authentication flow, integrated with WebAuthn, offers a technical path to make this possible, in addition to offering a better user experience to your customers.

Check out the WebAuthn with Amazon Cognito project for code samples and deployment steps, deploy it in your development environment to see this integration in action, and go build an awesome password-less experience in your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Building, bundling, and deploying applications with the AWS CDK

Post Syndicated from Cory Hall original https://aws.amazon.com/blogs/devops/building-apps-with-aws-cdk/

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages.

The post CDK Pipelines: Continuous delivery for AWS CDK applications showed how you can use CDK Pipelines to deploy a TypeScript-based AWS Lambda function. In that post, you learned how to add additional build commands to the pipeline to compile the TypeScript code to JavaScript, which is needed to create the Lambda deployment package.

In this post, we dive deeper into how you can perform these build commands as part of your AWS CDK build process by using the native AWS CDK bundling functionality.

If you’re working with Python, TypeScript, or JavaScript-based Lambda functions, you may already be familiar with the PythonFunction and NodejsFunction constructs, which use the bundling functionality. This post describes how to write your own bundling logic for instances where a higher-level construct either doesn’t already exist or doesn’t meet your needs. To illustrate this, I walk through two different examples: a Lambda function written in Golang and a static site created with Nuxt.js.

Concepts

A typical CI/CD pipeline contains steps to build and compile your source code, bundle it into a deployable artifact, push it to artifact stores, and deploy to an environment. In this post, we focus on the building, compiling, and bundling stages of the pipeline.

The AWS CDK has the concept of bundling source code into a deployable artifact. As of this writing, this works for two main types of assets: Docker images published to Amazon Elastic Container Registry (Amazon ECR) and files published to Amazon Simple Storage Service (Amazon S3). For files published to Amazon S3, this can be as simple as pointing to a local file or directory, which the AWS CDK uploads to Amazon S3 for you.
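
For example, the following sketch (assuming it runs inside a stack and that the @aws-cdk/aws-s3-assets module is installed) publishes a local directory to Amazon S3 as a zipped file asset:

import * as path from 'path';
import * as s3_assets from '@aws-cdk/aws-s3-assets';

// Inside a Stack or Construct: upload the local ./config directory
// to Amazon S3 as a zipped file asset
const configAsset = new s3_assets.Asset(this, 'ConfigAsset', {
  path: path.join(__dirname, 'config'),
});

// configAsset.s3BucketName and configAsset.s3ObjectKey identify the uploaded object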

When you build an AWS CDK application (by running cdk synth), a cloud assembly is produced. The cloud assembly consists of a set of files and directories that define your deployable AWS CDK application. In the context of the AWS CDK, it might include the following:

  • AWS CloudFormation templates and instructions on where to deploy them
  • Dockerfiles, corresponding application source code, and information about where to build and push the images to
  • File assets and information about which S3 buckets to upload the files to

Use case

For this use case, our application consists of front-end and backend components. The example code is available in the GitHub repo. In the repository, I have split the example into two separate AWS CDK applications. The repo also contains the Golang Lambda example app and the Nuxt.js static site.

Golang Lambda function

To create a Golang-based Lambda function, you must first create a Lambda function deployment package. For Go, this consists of a .zip file containing a Go executable. Because we don’t commit the Go executable to our source repository, our CI/CD pipeline must perform the necessary steps to create it.

In the context of the AWS CDK, when we create a Lambda function, we have to tell the AWS CDK where to find the deployment package. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  runtime: lambda.Runtime.GO_1_X,
  handler: 'main',
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-go-executable')),
});

In the preceding code, the lambda.Code.fromAsset() method tells the AWS CDK where to find the Golang executable. When we run cdk synth, it stages this Go executable in the cloud assembly, which it zips and publishes to Amazon S3 as part of the PublishAssets stage.

If we’re running the AWS CDK as part of a CI/CD pipeline, this executable doesn’t exist yet, so how do we create it? One method is CDK bundling. The lambda.Code.fromAsset() method takes a second optional argument, AssetOptions, which contains the bundling parameter. With this bundling parameter, we can tell the AWS CDK to perform steps prior to staging the files in the cloud assembly.

Breaking down the BundlingOptions parameter further, we can perform the build inside a Docker container or locally.

Building inside a Docker container

For this to work, we need to make sure that we have Docker running on our build machine. In AWS CodeBuild, this means setting privileged: true. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
    bundling: {
      image: lambda.Runtime.GO_1_X.bundlingDockerImage,
      command: [
        'bash', '-c', [
          'go test -v',
          'GOOS=linux go build -o /asset-output/main',
        ].join(' && '),
      ],
    },
  })
  ...
});

We specify two parameters:

  • image (required) – The Docker image to perform the build commands in
  • command (optional) – The command to run within the container

The AWS CDK mounts the folder specified as the first argument to fromAsset at /asset-input inside the container, and mounts the asset output directory (where the cloud assembly is staged) at /asset-output inside the container.

After we perform the build commands, we need to make sure we copy the Golang executable to the /asset-output location (or specify it as the build output location like in the preceding example).

This is the equivalent of running something like the following code:

docker run \
  --rm \
  -v folder-containing-source-code:/asset-input \
  -v cdk.out/asset.1234a4b5/:/asset-output \
  lambci/lambda:build-go1.x \
  bash -c 'GOOS=linux go build -o /asset-output/main'

Building locally

To build locally (not in a Docker container), we have to provide the local parameter. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
    bundling: {
      image: lambda.Runtime.GO_1_X.bundlingDockerImage,
      command: [],
      local: {
        tryBundle(outputDir: string) {
          try {
            spawnSync('go version')
          } catch {
            return false
          }

          spawnSync(`GOOS=linux go build -o ${path.join(outputDir, 'main')}`);
          return true
        },
      },
    },
  })
  ...
});

The local parameter must implement the ILocalBundling interface. The tryBundle method is passed the asset output directory, and expects you to return a boolean (true or false). If you return true, the AWS CDK doesn’t try to perform Docker bundling. If you return false, it falls back to Docker bundling. Just like with Docker bundling, you must make sure that you place the Go executable in the outputDir.

Typically, you should perform some validation steps to ensure that you have the required dependencies installed locally to perform the build. This could be checking to see if you have go installed, or checking a specific version of go. This can be useful if you don’t have control over what type of build environment this might run in (for example, if you’re building a construct to be consumed by others).
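
For example, a local bundling check could look something like the following sketch; the minimum version numbers and the parsing of the go version output are illustrative assumptions:

import { spawnSync } from 'child_process';

// Returns true only if a new-enough Go toolchain is available locally;
// returning false makes the AWS CDK fall back to Docker bundling.
function canBundleGoLocally(minMajor = 1, minMinor = 14): boolean {
  const result = spawnSync('go', ['version']);
  if (result.error || result.status !== 0) {
    return false;
  }
  // Example output: "go version go1.16.5 linux/amd64"
  const match = /go(\d+)\.(\d+)/.exec(result.stdout.toString());
  if (!match) {
    return false;
  }
  const major = Number(match[1]);
  const minor = Number(match[2]);
  return major > minMajor || (major === minMajor && minor >= minMinor);
}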

If we run cdk synth on this, we see a new message telling us that the AWS CDK is bundling the asset. If we include additional commands like go test, we also see the output of those commands. This is especially useful if you want to fail a build when tests fail. See the following code:

$ cdk synth
Bundling asset GolangLambdaStack/MyGoFunction/Code/Stage...
✓  . (9ms)
✓  clients (5ms)

DONE 8 tests in 11.476s
✓  clients (5ms) (coverage: 84.6% of statements)
✓  . (6ms) (coverage: 78.4% of statements)

DONE 8 tests in 2.464s

Cloud Assembly

If we look at the cloud assembly that was generated (located in the cdk.out directory), we see the CloudFormation template, the asset manifest, and the bundled assets that make up our deployable application.


It contains our GolangLambdaStack CloudFormation template that defines our Lambda function, as well as our Golang executable, bundled at asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952/main.

Let’s look at how the AWS CDK uses this information. The GolangLambdaStack.assets.json file contains all the information necessary for the AWS CDK to know where and how to publish our assets (in this use case, our Golang Lambda executable). See the following code:

{
  "version": "5.0.0",
  "files": {
    "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952": {
      "source": {
        "path": "asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952",
        "packaging": "zip"
      },
      "destinations": {
        "current_account-current_region": {
          "bucketName": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}",
          "objectKey": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/cdk-hnb659fds-file-publishing-role-${AWS::AccountId}-${AWS::Region}"
        }
      }
    }
  }
}

The file contains information about where to find the source files (source.path) and what type of packaging (source.packaging). It also tells the AWS CDK where to publish this .zip file (bucketName and objectKey) and what AWS Identity and Access Management (IAM) role to use (assumeRoleArn). In this use case, we only deploy to a single account and Region, but if you have multiple accounts or Regions, you see multiple destinations in this file.

The GolangLambdaStack.template.json file that defines our Lambda resource looks something like the following code:

{
  "Resources": {
    "MyGoFunction0AB33E85": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": {
            "Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}"
          },
          "S3Key": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip"
        },
        "Handler": "main",
        ...
      }
    },
    ...
  }
}

The S3Bucket and S3Key match the bucketName and objectKey from the assets.json file. By default, the S3Key is generated by calculating a hash of the contents of the folder that you pass to lambda.Code.fromAsset() (for this post, folder-containing-source-code). This means that any time we update our source code, this calculated hash changes and a new Lambda function deployment is triggered.

Nuxt.js static site

In this section, I walk through building a static site using the Nuxt.js framework. You can apply the same logic to any static site framework that requires you to run a build step prior to deploying.

To deploy this static site, we use the BucketDeployment construct. This is a construct that allows you to populate an S3 bucket with the contents of .zip files from other S3 buckets or from a local disk.

Typically, we simply tell the BucketDeployment construct where to find the files that it needs to deploy to the S3 bucket. See the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-directory')),
  ],
  destinationBucket: myBucket
});

To deploy a static site built with a framework like Nuxt.js, we need to first run a build step to compile the site into something that can be deployed. For Nuxt.js, we run the following two commands:

  • yarn install – Installs all our dependencies
  • yarn generate – Builds the application and generates every route as an HTML file (used for static hosting)

This creates a dist directory, which you can deploy to Amazon S3.

Just like with the Golang Lambda example, we can perform these steps as part of the AWS CDK through either local or Docker bundling.

Building inside a Docker container

To build inside a Docker container, use the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-nuxtjs-project'), {
      bundling: {
        image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
        command: [
          'bash', '-c', [
            'yarn install',
            'yarn generate',
            'cp -r /asset-input/dist/* /asset-output/',
          ].join(' && '),
        ],
      },
    }),
  ],
  ...
});

For this post, we build inside the publicly available node:lts image hosted on DockerHub. Inside the container, we run our build commands yarn install && yarn generate, and copy the generated dist directory to our output directory (the cloud assembly).

The parameters are the same as described in the Golang example we walked through earlier.

Building locally

To build locally, use the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-nuxtjs-project'), {
      bundling: {
        local: {
          tryBundle(outputDir: string) {
            try {
              spawnSync('yarn --version');
            } catch {
              return false
            }

            spawnSync('yarn install && yarn generate');

            fs.copySync(path.join(__dirname, 'path-to-nuxtjs-project', 'dist'), outputDir);
            return true
          },
        },
        image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
        command: [],
      },
    }),
  ],
  ...
});

Building locally works the same as the Golang example we walked through earlier, with one exception. We have one additional command to run that copies the generated dist folder to our output directory (cloud assembly).

Conclusion

This post showed how you can easily compile your backend and front-end applications using the AWS CDK. You can find the example code for this post in this GitHub repo. If you have any questions or comments, please comment on the GitHub repo. If you have any additional examples you want to add, we encourage you to create a Pull Request with your example!

Our code also contains examples of deploying the applications using CDK Pipelines, so if you’re interested in deploying the example yourself, check out the example repo.

 

About the author

Cory Hall

Cory is a Solutions Architect at Amazon Web Services with a passion for DevOps and is based in Charlotte, NC. Cory works with enterprise AWS customers to help them design, deploy, and scale applications to achieve their business goals.

How to configure Duo multi-factor authentication with Amazon Cognito

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-configure-duo-multi-factor-authentication-with-amazon-cognito/

Adding multi-factor authentication (MFA) reduces the risk of user account take-over, phishing attacks, and password theft. Adding MFA while providing a frictionless sign-in experience requires you to offer a variety of MFA options that support a wide range of users and devices. Let’s see how you can achieve that with Amazon Cognito and Duo Multi-Factor Authentication (MFA).

Amazon Cognito user pools are user directories that are used by Amazon Web Services (AWS) customers to manage the identities of their customers and to add sign-in, sign-up and user management features to their customer-facing web and mobile applications. Duo Security is an APN Partner that provides unified access security and multi-factor authentication solutions.

In this blog post, I show you how to use Amazon Cognito custom authentication flow to integrate Duo Multi-Factor Authentication (MFA) into your sign-in flow and offer a wide range of MFA options to your customers. Some second factors available through Duo MFA are mobile phone SMS passcodes, approval of login via phone call, push-notification-based approval on smartphones, biometrics on devices that support it, and security keys that can be attached via USB.

How it works

Amazon Cognito user pools enable you to build a custom authentication flow that authenticates users based on one or more challenge/response cycles. You can use this flow to integrate Duo MFA into your authentication as a custom challenge.

Duo Web offers a software development kit to make it easier for you to integrate your web applications with Duo MFA. You need an account with Duo and an application to protect (which can be created from the Duo admin dashboard). When you create your application in the Duo admin dashboard, note the integration key (ikey), secret key (skey), and API hostname. These details, together with a random string (akey) that you generate, are the primary factors used to integrate your Amazon Cognito user pool with Duo MFA.

Note: ikey, skey, and akey are referred to as Duo keys.

Duo MFA will be integrated into the sign-in flow as a custom challenge. To do that, you need to generate a signed challenge request using Duo APIs and use it to load Duo MFA in an iframe and request the user’s second factor. When the challenge is answered by the user, a signed response is returned to your application and sent to Amazon Cognito for verification. If the response is valid then the MFA challenge is successful.

Let’s take a closer look at the sequence of calls and components involved in this flow.

Implementation details

In this section, I walk you through the end-to-end flow of integrating Duo MFA with Amazon Cognito using a custom authentication flow. To help you with this integration, I built a demo project that provides deployment steps and sample code to create a working demo in your environment.

Create and configure a user pool

The first step is to create the AWS resources needed for the demo. You can do that by deploying the AWS CloudFormation stack as described in the demo project.

A few implementation details to be aware of:

  • The template creates an Amazon Cognito user pool, application client, and AWS Lambda triggers that are used for the custom authentication.
  • The template also accepts ikey, skey, and akey as inputs. For security, the parameters are masked in the AWS CloudFormation console. These parameters are stored in a secret in AWS Secrets Manager with a resource policy that allows relevant Lambda functions read access to that secret.
  • Duo keys are loaded from Secrets Manager when the create auth challenge and verify auth challenge Lambda triggers initialize, and are used to create the sign-request and to verify the sign-response.

Authentication flow

Figure 1: User authentication process for the custom authentication flow

The preceding sequence diagram (Figure 1) illustrates the sequence of calls to sign in a user, which are as follows:

  1. In your application, the user is presented with a sign-in UI that captures their user name and password and starts the sign-in flow. A script—running in the browser—starts the sign-in process using the Amazon Cognito authenticateUser API with CUSTOM_AUTH set as the authentication flow. This validates the user’s credentials using Secure Remote Password (SRP) protocol and moves on to the second challenge if the credentials are valid.

    Note: The authenticateUser API automatically starts the authentication process with SRP. The first challenge that’s sent to Amazon Cognito is SRP_A. This is followed by PASSWORD_VERIFIER to verify the user’s credentials.

  2. After the SRP challenge step, the define auth challenge Lambda trigger will return CUSTOM_CHALLENGE and this will move control to the create auth challenge trigger.
  3. The create auth challenge Lambda trigger creates a Duo signed request using the Duo keys plus the username and returns the signed request as a challenge to the client. Here is sample code showing what the create auth challenge trigger could look like:
    
    const AWS = require('aws-sdk');
    const duo_web = require('duo_web');

    const secretsManagerClient = new AWS.SecretsManager();
    // Assumption: the name of the Secrets Manager secret is provided through an environment variable
    const secretName = process.env.DUO_SECRET_NAME;
    let ikey, skey, akey, secret;

    exports.handler = async (event) => {
    
        //load duo keys from secrets manager and store them in global variables
    
        if(ikey == null || skey == null || akey == null){ 
          const promise = new Promise(function(resolve, reject) {
              secretsManagerClient.getSecretValue({SecretId: secretName}, function(err, data) {
                    if (err) {throw err; }
                    else {
                        if ('SecretString' in data) {
                            secret = JSON.parse(data.SecretString);
                            ikey = secret['duo-ikey'];
                            skey = secret['duo-skey'];
                            akey = secret['duo-akey'];
                        }
                    }
                    resolve();
                });
            })
            
            await promise; 
        }
    
        
        var username = event.userName;
        var sig_request = duo_web.sign_request(ikey, skey, akey, username);
        
        event.response.publicChallengeParameters = {
            sig_request: sig_request
        };
        
        return event;
    };
    

  4. The client initializes the Duo Web library with the signed request and displays Duo MFA in an iframe to request a second factor from the user. To initialize the Duo library, you need the api_hostname that is generated for your application in the Duo dashboard, the sign-request that was received as a challenge, and a callback function to invoke after the MFA step is completed by the user. This is done on the client side as follows:
          //render Duo MFA iframe
      $("#duo-mfa").html('<iframe id="duo_iframe" title="Two-Factor Authentication"></iframe>');
            
          Duo.init({
            'host': api_hostname,
            'sig_request': challengeParameters.sig_request,
            'submit_callback': mfa_callback
          });
    

  5. Through the Duo iframe, the user can set up their MFA preferences and respond to an MFA challenge. After successful MFA setup, a signed response from the Duo Web library will be returned to the client and passed to the callback function that was provided in Duo.init call.
     
    Figure 2: The first time a user signs in, Duo MFA displays a Start setup screen

  6. The client sends the Duo signed response to the Amazon Cognito service as a challenge response.
  7. Amazon Cognito sends the response to the verify auth challenge Lambda trigger, which uses Duo keys and username to verify the response.
    const duo_web = require('duo_web');
    exports.handler = async (event) => {
    
        //load duo keys from secrets manager and store them in global variables
        
        var username = event.userName;
        
        //-------get challenge response
        const sig_response = event.request.challengeAnswer;
        const verificationResult = duo_web.verify_response(ikey, skey, akey, sig_response);
        
        if (verificationResult === username) {
            event.response.answerCorrect = true;
        } else {
            event.response.answerCorrect = false;
        }
        return event;
    };
    

  8. Validation results and current state are passed once again to the define auth challenge Lambda trigger. If the user response is valid, then the Duo MFA challenge is successful. You can then decide to introduce additional challenges to the user, or issue tokens and complete the authentication process. (A client-side sketch that ties these steps together follows this list.)
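
Putting the client-side pieces together, the following minimal sketch shows how steps 1, 4, and 6 could be wired up with the amazon-cognito-identity-js library. This is illustrative only and not the demo project’s code; in particular, reading the signed response from the Duo form’s sig_response field is an assumption about the Duo Web form layout.

// Assumes userPool, username, password, api_hostname, and the Duo Web
// library (Duo) are already available on the page.
const cognitoUser = new AmazonCognitoIdentity.CognitoUser({
  Username: username,
  Pool: userPool,
});
const authDetails = new AmazonCognitoIdentity.AuthenticationDetails({
  Username: username,
  Password: password,
});

const callbacks = {
  customChallenge: function (challengeParameters) {
    // Step 4: initialize the Duo iframe with the signed request from the challenge
    Duo.init({
      host: api_hostname,
      sig_request: challengeParameters.sig_request,
      submit_callback: function (duoForm) {
        // Step 6: send the Duo signed response back to Amazon Cognito
        // (assumption: the response is in the form's sig_response field)
        const sig_response = duoForm.elements.sig_response.value;
        cognitoUser.sendCustomChallengeAnswer(sig_response, callbacks);
      },
    });
  },
  onSuccess: function (session) {
    console.log('Authentication succeeded', session);
  },
  onFailure: function (err) {
    console.error('Authentication failed', err);
  },
};

// Step 1: start the custom authentication flow (SRP password verification first)
cognitoUser.setAuthenticationFlowType('CUSTOM_AUTH');
cognitoUser.authenticateUser(authDetails, callbacks);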

Conclusion

As you build your mobile or web application, keep in mind that using multi-factor authentication is an effective and recommended approach to protect your customers from account take-over, phishing, and the risks of weak or compromised passwords. Making multi-factor authentication easy for your customers enables you to offer an authentication experience that protects their accounts but doesn’t slow them down.

Visit the security pillar of AWS Well-Architected Framework to learn more about AWS security best practices and recommendations.

In this blog post, I showed you how to integrate Duo MFA with an Amazon Cognito user pool. Visit the demo application and review the code samples in it to learn how to integrate this with your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Improving customer experience and reducing cost with CodeGuru Profiler

Post Syndicated from Rajesh original https://aws.amazon.com/blogs/devops/improving-customer-experience-and-reducing-cost-with-codeguru-profiler/

Amazon CodeGuru is a set of developer tools powered by machine learning that provides intelligent recommendations for improving code quality and identifying an application’s most expensive lines of code. Amazon CodeGuru Profiler allows you to profile your applications in a low-impact, always-on manner. It helps you improve your application’s performance, reduce cost, and diagnose application issues through rich data visualization and proactive recommendations. CodeGuru Profiler was a very successful and widely used service within Amazon before it was offered as a public service. This post discusses a few ways in which internal Amazon teams have used and benefited from continuous profiling of their production applications. These use cases can provide you with better insights on how to reap similar benefits for your applications using CodeGuru Profiler.

Inside Amazon, over 100,000 applications currently use CodeGuru Profiler across various environments globally. Over the last few years, CodeGuru Profiler has served as an indispensable tool for resolving issues in the following three categories:

  1. Performance bottlenecks, high latency and CPU utilization
  2. Cost and Infrastructure utilization
  3. Diagnosis of an application impacting event

API latency improvement for CodeGuru Profiler

What could be a better example than CodeGuru Profiler using itself to improve its own performance?
CodeGuru Profiler offers an API called BatchGetFrameMetricData, which allows you to fetch time series data for a set of frames or methods. We noticed that the 99th percentile latency (that is, the slowest 1 percent of requests over a 5-minute period) metric for this API was approximately 5 seconds, higher than what we wanted for our customers.

Solution

CodeGuru Profiler is built on a micro service architecture, with the BatchGetFrameMetricData API implemented as set of AWS Lambda functions. It also leverages other AWS services such as Amazon DynamoDB to store data and Amazon CloudWatch to record performance metrics.

When investigating the latency issue, the team found that the 5-second latency spikes were happening during certain time intervals rather than continuously, which made it difficult to reproduce and determine the root cause of the issue in a pre-production environment. The new Lambda profiling feature in CodeGuru came in handy, so the team decided to enable profiling for all of its Lambda functions. The low-impact, continuous profiling capability of CodeGuru Profiler allowed the team to capture comprehensive profiles over a period of time, including when the latency spikes occurred, enabling the team to better understand the issue.
After capturing the profiles, the team went through the flame graphs of one of the Lambda functions (TimeSeriesMetricsGeneratorLambda) and learned that all of its CPU time was spent by the thread responsible for publishing metrics to CloudWatch. The following screenshot shows a flame graph during one of these spikes.

TimeSeriesMetricsGeneratorLambda taking 100% CPU

As seen in the flame graph above, there is a single call stack visible, indicating that all the CPU time was taken by the thread invoking that code. This helped the team immediately understand what was happening: the code was part of the thread responsible for publishing the CloudWatch metrics. This thread published the metrics in a synchronized block, and because it took most of the CPU, it caused all other threads to wait and the latency to spike. To fix the issue, the team simply changed the TimeSeriesMetricsGeneratorLambda code to publish CloudWatch metrics at the end of the function, which eliminated contention between this thread and all other threads.

Improvement

After the fix was deployed, the 5 second latency spikes were gone, as seen in the following graph.

Latency reduction for BatchGetFrameMetricData API

Cost, infrastructure and other improvements for CAGE

CAGE is an internal Amazon retail service that does royalty aggregation for digital products, such as Kindle eBooks, MP3 songs, albums, and more. Like many other Amazon services, CAGE is also a customer of CodeGuru Profiler.

CAGE was experiencing increased latency and growing infrastructure cost, and wanted to reduce both. Thanks to CodeGuru Profiler’s always-on profiling capabilities, rich visualization, and recommendations, the team was able to successfully diagnose the issues, determine the root cause, and fix them.

Solution

With the help of CodeGuru Profiler, the CAGE team identified several reasons for their degraded service performance and increased hardware utilization:

  • Excessive garbage collection activity – The team reviewed the service flame graphs (see the following screenshot) and identified that a lot of CPU time was spent on garbage collection activities, 65.07% of the total service CPU.

Excessive garbage collection activities for CAGE

  • Metadata overhead – The team followed a CodeGuru Profiler recommendation to identify that the service’s DynamoDB responses were consuming more CPU than expected, 2.86% of total CPU time. This was due to response metadata caching in the AWS SDK v1.x HTTP client, which is turned on by default and causes higher CPU overhead for high-throughput applications such as CAGE. The following screenshot shows the relevant recommendation.

Response metadata recommendation for CAGE

  • Excessive logging – The team also identified excessive logging of its internal Amazon ION structures. The team initially added this logging for debugging purposes, but was unaware of its impact on CPU cost, which was taking 2.28% of the overall service CPU. The following screenshot is part of the flame graph that helped identify the logging impact.

Excessive logging in CAGE service

The team used these flame graphs and the recommendations provided by CodeGuru Profiler to determine the root cause of the issues and systematically resolve them by doing the following:

  • Switching to a more efficient garbage collector
  • Removing excessive logging
  • Disabling metadata caching for DynamoDB responses

Improvements

After making these changes, the team was able to reduce its infrastructure cost by 25%, saving close to $2,600 per month. Service latency also improved, with the service's 99th percentile latency dropping from approximately 2,500 milliseconds to 250 milliseconds in the North America (NA) region, as shown below.

CAGE Latency Reduction

The team also realized a side benefit of the reduced log verbosity: log size shrank by 55%.

Event Analysis of increased checkout latency for Amazon.com

During one of the high-traffic periods, Amazon retail customers experienced higher than normal latency on the checkout page. The issue was due to one of the downstream services' APIs experiencing high latency and CPU utilization. While the team quickly mitigated the issue by adding more servers to the service, the always-on CodeGuru Profiler came to the rescue to help diagnose and fix the issue permanently.

Solution

The team analyzed the flame graphs from CodeGuru Profiler at the time of the event and noticed excessive CPU consumption (69.47%) when logging exceptions using Log4j2. See the following screenshot, taken from an earlier version of the CodeGuru Profiler user interface.

Excessive CPU consumption when logging exceptions using Log4j2

With the CodeGuru Profiler flame graph and other metrics, the team quickly confirmed that the issue was due to excessive exception logging using Log4j2. This downstream service had recently upgraded to Log4j2 version 2.8, in which exception logging can be expensive because of the way Log4j2 handles class loading of certain stack frames. Log4j 2.x enables this class loading by default (it was disabled in 1.x), which caused the increased latency and CPU utilization. The team was not able to detect this issue in the pre-production environment, because the impact was observable only in high-traffic situations.

Improvement

After they understood the issue, the team rolled out a fix that removed the unnecessary exception trace logging. Performance issues like this one, and many others, are surfaced proactively as CodeGuru Profiler recommendations, so that you can learn about such issues in your applications and quickly resolve them.

Conclusion

I hope this post provided a glimpse into various ways CodeGuru Profiler can benefit your business and applications. To get started using CodeGuru Profiler, see Setting up CodeGuru Profiler.
For more information about CodeGuru Profiler, see the following:

Investigating performance issues with Amazon CodeGuru Profiler

Optimizing application performance with Amazon CodeGuru Profiler

Find Your Application’s Most Expensive Lines of Code and Improve Code Quality with Amazon CodeGuru

 

Building a cross-account CI/CD pipeline for single-tenant SaaS solutions

Post Syndicated from Rafael Ramos original https://aws.amazon.com/blogs/devops/cross-account-ci-cd-pipeline-single-tenant-saas/

With the increasing demand from enterprise customers for a pay-as-you-go consumption model, more and more independent software vendors (ISVs) are shifting their business model towards software as a service (SaaS). This kind of solution is usually architected using a multi-tenant model, meaning that the infrastructure resources and applications are shared across multiple customers, with mechanisms in place to isolate their environments from each other. However, you may not want to share resources, or may not be able to for security or compliance reasons, and therefore need a single-tenant environment.

To achieve this higher level of segregation across tenants, it's recommended to isolate the environments at the AWS account level. This strategy brings benefits, such as no network overlap, no shared account limits, and simplified usage tracking and billing, but it comes with challenges from an operational standpoint. Whereas multi-tenant solutions require management of a single shared production environment, single-tenant installations consist of dedicated production environments for each customer, without any shared resources across tenants. When the number of tenants starts to grow, delivering new features at a rapid pace becomes harder to accomplish, because each new version needs to be manually deployed on each tenant environment.

This post describes how to automate this deployment process to deliver software quickly, securely, and with fewer errors for each existing tenant. I demonstrate all the steps to build and configure a CI/CD pipeline using AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation. For each new version, the pipeline automatically deploys the same application version to the multiple tenant AWS accounts.

There are several caveats to building such cross-account CI/CD pipelines on AWS. Because of that, I use the AWS Command Line Interface (AWS CLI) to go through the process manually and demonstrate in detail the various configuration aspects you have to handle, such as artifact encryption, cross-account permission granting, and pipeline actions.

Single-tenancy vs. multi-tenancy

One of the first aspects to consider when architecting your SaaS solution is its tenancy model. Each model brings its own benefits and architectural challenges. In multi-tenant installations, each customer shares the same set of resources, including databases and applications. With this model, you can use server capacity more efficiently, which generally leads to significant cost-saving opportunities. On the other hand, you have to carefully secure your solution to prevent one customer from accessing another customer's sensitive data. Designing for high availability becomes even more critical on multi-tenant workloads, because more customers are affected in the event of downtime.

Because the environments are by definition isolated from each other, single-tenant solutions are simpler to design when it comes to security, networking isolation, and data segregation. Likewise, you can customize the applications per customer, and have different versions for specific tenants. You also have the advantage of eliminating the noisy-neighbor effect, and can plan the infrastructure for the customer’s scalability requirements. As a drawback, in comparison with multi-tenant, the single-tenant model is operationally more complex because you have more servers and applications to maintain.

Which tenancy model to choose ultimately depends on whether you can meet your customers' needs. They might have specific governance requirements, be bound to a certain industry regulation, or have compliance criteria that influence which model they can choose. For more information about modeling your SaaS solutions, see SaaS on AWS.

Solution overview

To demonstrate this solution, I consider a fictitious single-tenant ISV with two customers: Unicorn and Gnome. It uses one central account where the tools reside (the Tooling account), and two other accounts, each representing a tenant (the Unicorn and Gnome accounts). As depicted in the following architecture diagram, when a developer pushes code changes to CodeCommit, Amazon CloudWatch Events triggers the CodePipeline CI/CD pipeline, which automatically deploys a new version to each tenant's AWS account. This ensures that the fictitious ISV doesn't have the operational burden of manually re-deploying the same version for each end customer.

Architecture diagram of a CI/CD pipeline for single-tenant SaaS solutions

For illustration purposes, the sample application I use in this post is an AWS Lambda function that returns a simple JSON object when invoked.

Prerequisites

Before getting started, you must have the following prerequisites:

Setting up the Git repository

Your first step is to set up your Git repository.

  1. Create a CodeCommit repository to host the source code.

The CI/CD pipeline is automatically triggered every time new code is pushed to that repository.
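As a minimal sketch, you could create the repository from the terminal; the repository name used here is only an example:

aws codecommit create-repository \
    --repository-name single-tenant-saas-sample \
    --repository-description "Sample application for the cross-account pipeline"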

  2. Make sure Git is configured to use IAM credentials to access AWS CodeCommit via HTTP by running the following commands from the terminal:
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
  3. Clone the newly created repository locally, and add two files in the root folder: index.js and application.yaml.

The first file is the JavaScript code for the Lambda function that represents the sample application. For our use case, the function returns a JSON response object with statusCode: 200 and the body Hello!\n. See the following code:

exports.handler = async (event) => {
    const response = {
        statusCode: 200,
        body: `Hello!\n`,
    };
    return response;
};

The second file is where the infrastructure is defined using AWS CloudFormation. The sample application consists of a Lambda function, and we use AWS Serverless Application Model (AWS SAM) to simplify the resources creation. See the following code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Sample Application.

Parameters:
    S3Bucket:
        Type: String
    S3Key:
        Type: String
    ApplicationName:
        Type: String
        
Resources:
    SampleApplication:
        Type: 'AWS::Serverless::Function'
        Properties:
            FunctionName: !Ref ApplicationName
            Handler: index.handler
            Runtime: nodejs12.x
            CodeUri:
                Bucket: !Ref S3Bucket
                Key: !Ref S3Key
            Description: Hello Lambda.
            MemorySize: 128
            Timeout: 10
  4. Push both files to the remote Git repository.
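Optionally, you can sanity-check the CloudFormation template locally before relying on the pipeline, for example:

aws cloudformation validate-template \
    --template-body file://application.yaml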

Creating the artifact store encryption key

By default, CodePipeline uses server-side encryption with an AWS Key Management Service (AWS KMS) managed customer master key (CMK) to encrypt the release artifacts. Because the Unicorn and Gnome accounts need to decrypt those release artifacts, you need to create a customer managed CMK in the Tooling account.

From the terminal, run the following command to create the artifact encryption key:

aws kms create-key --region <YOUR_REGION>

This command returns a JSON object with the key ARN property if run successfully. Its format is similar to arn:aws:kms:<YOUR_REGION>:<TOOLING_ACCOUNT_ID>:key/<KEY_ID>. Record this value to use in the following steps.
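If you prefer to script these steps, one way to capture the key ARN directly into a shell variable is with the --query option:

# Create the key and keep only its ARN for later use
KEY_ARN=$(aws kms create-key --region <YOUR_REGION> \
    --query KeyMetadata.Arn --output text)
echo "${KEY_ARN}"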

The encryption key has been created manually for educational purposes only, but it’s considered a best practice to have it as part of the Infrastructure as Code (IaC) bundle.

Creating an Amazon S3 artifact store and configuring a bucket policy

Our use case uses Amazon Simple Storage Service (Amazon S3) as artifact store. Every release artifact is encrypted and stored as an object in an S3 bucket that lives in the Tooling account.

To create and configure the artifact store, follow these steps in the Tooling account:

  1. From the terminal, create an S3 bucket and give it a unique name:
aws s3api create-bucket \
    --bucket <BUCKET_UNIQUE_NAME> \
    --region <YOUR_REGION> \
    --create-bucket-configuration LocationConstraint=<YOUR_REGION>
  2. Configure the bucket to use the customer managed CMK created in the previous step. This makes sure the objects stored in this bucket are encrypted using that key. Replace <KEY_ARN> with the ARN property from the previous step:
aws s3api put-bucket-encryption \
    --bucket <BUCKET_UNIQUE_NAME> \
    --server-side-encryption-configuration \
        '{
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "<KEY_ARN>"
                    }
                }
            ]
        }'
  3. The artifacts stored in the bucket need to be accessed from the Unicorn and Gnome accounts. Configure the bucket policy to allow cross-account access:
aws s3api put-bucket-policy \
    --bucket <BUCKET_UNIQUE_NAME> \
    --policy \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": [
                        "s3:GetBucket*",
                        "s3:List*"
                    ],
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:root",
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:root"
                        ]
                    },
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>"
                    ]
                },
                {
                    "Action": [
                        "s3:GetObject*"
                    ],
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:root",
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:root"
                        ]
                    },
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>/CrossAccountPipeline/*"
                    ]
                }
            ]
        }' 

This S3 bucket has been created manually for educational purposes only, but it’s considered a best practice to have it as part of the IaC bundle.

Creating a cross-account IAM role in each tenant account

Following the security best practice of granting least privilege, each action declared in CodePipeline should have its own IAM role. For this use case, the pipeline needs to perform changes in the Unicorn and Gnome accounts from the Tooling account, so you need to create a cross-account IAM role in each tenant account.

Repeat the following steps for each tenant account to allow CodePipeline to assume a role in those accounts:

  1. Configure a named CLI profile for the tenant account to allow running commands using the correct access keys.
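For example, the following command prompts for the tenant account's access key ID, secret access key, and default Region, and stores them under a named profile (the profile name unicorn is only an example):

aws configure --profile unicorn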
  2. Create an IAM role that can be assumed from another AWS account, replacing <TENANT_PROFILE_NAME> with the profile name you defined in the previous step:
aws iam create-role \
    --role-name CodePipelineCrossAccountRole \
    --profile <TENANT_PROFILE_NAME> \
    --assume-role-policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::<TOOLING_ACCOUNT_ID>:root"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }'
  3. Create an IAM policy that grants access to the artifact store S3 bucket and to the artifact encryption key:
aws iam create-policy \
    --policy-name CodePipelineCrossAccountArtifactReadPolicy \
    --profile <TENANT_PROFILE_NAME> \
    --policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": [
                        "s3:GetBucket*",
                        "s3:ListBucket"
                    ],
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>"
                    ],
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "s3:GetObject*",
                        "s3:Put*"
                    ],
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>/CrossAccountPipeline/*"
                    ],
                    "Effect": "Allow"
                },
                {
                    "Action": [ 
                        "kms:DescribeKey", 
                        "kms:GenerateDataKey*", 
                        "kms:Encrypt", 
                        "kms:ReEncrypt*", 
                        "kms:Decrypt" 
                    ], 
                    "Resource": "<KEY_ARN>",
                    "Effect": "Allow"
                }
            ]
        }'
  4. Attach the CodePipelineCrossAccountArtifactReadPolicy IAM policy to the CodePipelineCrossAccountRole IAM role:
aws iam attach-role-policy \
    --profile <TENANT_PROFILE_NAME> \
    --role-name CodePipelineCrossAccountRole \
    --policy-arn arn:aws:iam::<TENANT_ACCOUNT_ID>:policy/CodePipelineCrossAccountArtifactReadPolicy
  5. Create an IAM policy that allows CodePipeline to pass the CloudFormationDeploymentRole IAM role to CloudFormation and to perform CloudFormation actions on the application stack:
aws iam create-policy \
    --policy-name CodePipelineCrossAccountCfnPolicy \
    --profile <TENANT_PROFILE_NAME> \
    --policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": [
                        "iam:PassRole"
                    ],
                    "Resource": "arn:aws:iam::<TENANT_ACCOUNT_ID>:role/CloudFormationDeploymentRole",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "cloudformation:*"
                    ],
                    "Resource": "arn:aws:cloudformation:<YOUR_REGION>:<TENANT_ACCOUNT_ID>:stack/SampleApplication*/*",
                    "Effect": "Allow"
                }
            ]
        }'
  6. Attach the CodePipelineCrossAccountCfnPolicy IAM policy to the CodePipelineCrossAccountRole IAM role:
aws iam attach-role-policy \
    --profile <TENANT_PROFILE_NAME> \
    --role-name CodePipelineCrossAccountRole \
    --policy-arn arn:aws:iam::<TENANT_ACCOUNT_ID>:policy/CodePipelineCrossAccountCfnPolicy

Additional configuration is needed in the Tooling account to allow access, which you complete later on.

Creating a deployment IAM role in each tenant account

After CodePipeline assumes the CodePipelineCrossAccountRole IAM role in the tenant account, it triggers AWS CloudFormation to provision the infrastructure based on the template defined in the application.yaml file. For that, AWS CloudFormation needs to assume an IAM role that grants it privileges to create resources in the tenant AWS account.

Repeat the following steps for each tenant account to allow AWS CloudFormation to create resources in those accounts:

  1. Create an IAM role that can be assumed by AWS CloudFormation:
aws iam create-role \
    --role-name CloudFormationDeploymentRole \
    --profile <TENANT_PROFILE_NAME> \
    --assume-role-policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudformation.amazonaws.com"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }'
  2. Create an IAM policy that grants permissions to create AWS resources:
aws iam create-policy \
    --policy-name CloudFormationDeploymentPolicy \
    --profile <TENANT_PROFILE_NAME> \
    --policy-document \
        '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "iam:PassRole",
                    "Resource": "arn:aws:iam::<TENANT_ACCOUNT_ID>:role/*",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "iam:GetRole",
                        "iam:CreateRole",
                        "iam:DeleteRole",
                        "iam:AttachRolePolicy",
                        "iam:DetachRolePolicy"
                    ],
                    "Resource": "arn:aws:iam::<TENANT_ACCOUNT_ID>:role/*",
                    "Effect": "Allow"
                },
                {
                    "Action": "lambda:*",
                    "Resource": "*",
                    "Effect": "Allow"
                },
                {
                    "Action": "codedeploy:*",
                    "Resource": "*",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "s3:GetObject*",
                        "s3:GetBucket*",
                        "s3:List*"
                    ],
                    "Resource": [
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>",
                        "arn:aws:s3:::<BUCKET_UNIQUE_NAME>/*"
                    ],
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "kms:Decrypt",
                        "kms:DescribeKey"
                    ],
                    "Resource": "<KEY_ARN>",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "cloudformation:CreateStack",
                        "cloudformation:DescribeStack*",
                        "cloudformation:GetStackPolicy",
                        "cloudformation:GetTemplate*",
                        "cloudformation:SetStackPolicy",
                        "cloudformation:UpdateStack",
                        "cloudformation:ValidateTemplate"
                    ],
                    "Resource": "arn:aws:cloudformation:<YOUR_REGION>:<TENANT_ACCOUNT_ID>:stack/SampleApplication*/*",
                    "Effect": "Allow"
                },
                {
                    "Action": [
                        "cloudformation:CreateChangeSet"
                    ],
                    "Resource": "arn:aws:cloudformation:<YOUR_REGION>:aws:transform/Serverless-2016-10-31",
                    "Effect": "Allow"
                }
            ]
        }'

The permissions granted in this IAM policy depend on the resources your application needs to provision. Because the application in our use case consists of a simple Lambda function, the IAM policy only needs permissions over Lambda. The other permissions declared are to access and decrypt the Lambda code from the artifact store, use AWS CodeDeploy to deploy the function, and create and attach the Lambda execution role.

  3. Attach the IAM policy to the IAM role:
aws iam attach-role-policy \
    --profile <TENANT_PROFILE_NAME> \
    --role-name CloudFormationDeploymentRole \
    --policy-arn arn:aws:iam::<TENANT_ACCOUNT_ID>:policy/CloudFormationDeploymentPolicy

Configuring an artifact store encryption key

Even though the IAM roles created in the tenant accounts declare permissions to use the CMK encryption key, that's not enough to have access to the key. To grant access, you must also update the CMK key policy.

From the terminal, run the following command to attach the new policy:

aws kms put-key-policy \
    --key-id <KEY_ARN> \
    --policy-name default \
    --region <YOUR_REGION> \
    --policy \
        '{
             "Id": "TenantAccountAccess",
             "Version": "2012-10-17",
             "Statement": [
                {
                    "Sid": "Enable IAM User Permissions",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::<TOOLING_ACCOUNT_ID>:root"
                    },
                    "Action": "kms:*",
                    "Resource": "*"
                },
                {
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:role/CloudFormationDeploymentRole",
                            "arn:aws:iam::<GNOME_ACCOUNT_ID>:role/CodePipelineCrossAccountRole",
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:role/CloudFormationDeploymentRole",
                            "arn:aws:iam::<UNICORN_ACCOUNT_ID>:role/CodePipelineCrossAccountRole"
                        ]
                    },
                    "Action": [
                        "kms:Decrypt",
                        "kms:DescribeKey"
                    ],
                    "Resource": "*"
                }
             ]
         }'

Provisioning the CI/CD pipeline

Each CodePipeline workflow consists of two or more stages, which are composed by a series of parallel or serial actions. For our use case, the pipeline is made up of four stages:

  • Source – Declares CodeCommit as the source control for the application code.
  • Build – Using CodeBuild, this stage installs the dependencies and builds deployable artifacts. In this use case, the sample application is simple, so the stage is included mainly for illustration purposes.
  • Deploy_Dev – Deploys the sample application on a sandbox environment. At this point, the deployable artifacts generated at the Build stage are used to create a CloudFormation stack and deploy the Lambda function.
  • Deploy_Prod – Similar to Deploy_Dev, at this stage the sample application is deployed on the tenant production environments. For that, it contains two actions (one per tenant) that are run in parallel. CodePipeline uses CodePipelineCrossAccountRole to assume a role on the tenant account, and from there, CloudFormationDeploymentRole is used to effectively deploy the application.

To provision your resources, complete the following steps from the terminal:

  1. Download the CloudFormation pipeline template:
curl -LO https://cross-account-ci-cd-pipeline-single-tenant-saas.s3.amazonaws.com/pipeline.yaml
  2. Deploy the CloudFormation stack using the pipeline template:
aws cloudformation deploy \
    --template-file pipeline.yaml \
    --region <YOUR_REGION> \
    --stack-name <YOUR_PIPELINE_STACK_NAME> \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        ArtifactBucketName=<BUCKET_UNIQUE_NAME> \
        ArtifactEncryptionKeyArn=<KMS_KEY_ARN> \
        UnicornAccountId=<UNICORN_TENANT_ACCOUNT_ID> \
        GnomeAccountId=<GNOME_TENANT_ACCOUNT_ID> \
        SampleApplicationRepositoryName=<YOUR_CODECOMMIT_REPOSITORY_NAME> \
        RepositoryBranch=<YOUR_CODECOMMIT_MAIN_BRANCH>

This is the list of the required parameters to deploy the template:

    • ArtifactBucketName – The name of the S3 bucket where the deployment artifacts are to be stored.
    • ArtifactEncryptionKeyArn – The ARN of the customer managed CMK to be used as artifact encryption key.
    • UnicornAccountId – The AWS account ID for the first tenant (Unicorn) where the application is to be deployed.
    • GnomeAccountId – The AWS account ID for the second tenant (Gnome) where the application is to be deployed.
    • SampleApplicationRepositoryName – The name of the CodeCommit repository where source changes are detected.
    • RepositoryBranch – The name of the CodeCommit branch where source changes are detected. The default value is master in case no value is provided.
  3. Wait for AWS CloudFormation to create the resources.

When stack creation is complete, the pipeline starts automatically.
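To follow the pipeline's progress from the terminal, you can query its state. The pipeline name is defined by the CloudFormation template, so replace the placeholder with the name shown in the stack resources or in the CodePipeline console:

# Show each stage of the pipeline and its latest execution status
aws codepipeline get-pipeline-state \
    --name <YOUR_PIPELINE_NAME> \
    --query "stageStates[].{stage:stageName,status:latestExecution.status}"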

For each existing tenant, an action is declared within the Deploy_Prod stage. The following code is a snippet of how these actions are configured to deploy the application on a different account:

RoleArn: !Sub arn:aws:iam::${UnicornAccountId}:role/CodePipelineCrossAccountRole
Configuration:
    ActionMode: CREATE_UPDATE
    Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
    StackName: !Sub SampleApplication-unicorn-stack-${AWS::Region}
    RoleArn: !Sub arn:aws:iam::${UnicornAccountId}:role/CloudFormationDeploymentRole
    TemplatePath: CodeCommitSource::application.yaml
    ParameterOverrides: !Sub | 
        { 
            "ApplicationName": "SampleApplication-Unicorn",
            "S3Bucket": { "Fn::GetArtifactAtt" : [ "ApplicationBuildOutput", "BucketName" ] },
            "S3Key": { "Fn::GetArtifactAtt" : [ "ApplicationBuildOutput", "ObjectKey" ] }
        }

The code declares two IAM roles. The first is the IAM role assumed by the CodePipeline action to access the tenant AWS account, whereas the second is the IAM role used by AWS CloudFormation to create AWS resources in the tenant AWS account. The ParameterOverrides configuration declares where the release artifact is located. The S3 bucket and key are in the Tooling account and encrypted using the customer managed CMK. That's why it was necessary to grant access from external accounts using bucket and KMS key policies.

Besides the CI/CD pipeline itself, this CloudFormation template declares IAM roles that are used by the pipeline and its actions. The main IAM role is named CrossAccountPipelineRole, which is used by the CodePipeline service. It contains permissions to assume the action roles. See the following code:

{
    "Action": "sts:AssumeRole",
    "Effect": "Allow",
    "Resource": [
        "arn:aws:iam::<TOOLING_ACCOUNT_ID>:role/<PipelineSourceActionRole>",
        "arn:aws:iam::<TOOLING_ACCOUNT_ID>:role/<PipelineApplicationBuildActionRole>",
        "arn:aws:iam::<TOOLING_ACCOUNT_ID>:role/<PipelineDeployDevActionRole>",
        "arn:aws:iam::<UNICORN_ACCOUNT_ID>:role/CodePipelineCrossAccountRole",
        "arn:aws:iam::<GNOME_ACCOUNT_ID>:role/CodePipelineCrossAccountRole"
    ]
}

When you have more tenant accounts, you must add additional roles to the list.

After CodePipeline runs successfully, test the sample application by invoking the Lambda function on each tenant account:

aws lambda invoke --function-name SampleApplication --profile <TENANT_PROFILE_NAME> --region <YOUR_REGION> out

The output should be:

{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
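The function's return value is written to the out file. Printing it should show the JSON object returned by the sample application:

cat out

The contents should look similar to the following:

{"statusCode":200,"body":"Hello!\n"}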

Cleaning up

Follow these steps to delete the components and avoid incurring future charges:

  1. Delete the production application stack from each tenant account:
aws cloudformation delete-stack --profile <TENANT_PROFILE_NAME> --region <YOUR_REGION> --stack-name SampleApplication-<TENANT_NAME>-stack-<YOUR_REGION>
  2. Delete the dev application stack from the Tooling account:
aws cloudformation delete-stack --region <YOUR_REGION> --stack-name SampleApplication-dev-stack-<YOUR_REGION>
  3. Delete the pipeline stack from the Tooling account:
aws cloudformation delete-stack --region <YOUR_REGION> --stack-name <YOUR_PIPELINE_STACK_NAME>
  4. Delete the customer managed CMK from the Tooling account:
aws kms schedule-key-deletion --region <YOUR_REGION> --key-id <KEY_ARN>
  5. Delete the S3 bucket from the Tooling account:
aws s3 rb s3://<BUCKET_UNIQUE_NAME> --force
  6. Optionally, delete the IAM roles and policies you created in the tenant accounts.

Conclusion

This post demonstrated what it takes to build a CI/CD pipeline for single-tenant SaaS solutions isolated on the AWS account level. It covered how to grant cross-account access to artifact stores on Amazon S3 and artifact encryption keys on AWS KMS using policies and IAM roles. This approach is less error-prone because it avoids human errors when manually deploying the exact same application for multiple tenants.

For this use case, we performed most of the steps manually to better illustrate all the steps and components involved. For even more automation, consider using the AWS Cloud Development Kit (AWS CDK) and its pipeline construct to create your CI/CD pipeline and have everything as code. Moreover, for production scenarios, consider having integration tests as part of the pipeline.

Rafael Ramos

Rafael is a Solutions Architect at AWS, where he helps ISVs on their journey to the cloud. He spent over 13 years working as a software developer, and is passionate about DevOps and serverless. Outside of work, he enjoys playing tabletop RPG, cooking and running marathons.

Automate AWS Firewall Manager onboarding using AWS Centralized WAF and VPC Security Group Management solution

Post Syndicated from Satheesh Kumar original https://aws.amazon.com/blogs/security/automate-aws-firewall-manager-onboarding-using-aws-centralized-waf-and-vpc-security-group-management-solution/

Many customers—especially large enterprises—run workloads across multiple AWS accounts and in multiple AWS Regions. The AWS Firewall Manager service, launched in April 2018, enables customers to centrally configure and manage AWS WAF rules, audit Amazon VPC security group rules across accounts and applications in AWS Organizations, and protect resources against distributed denial of service (DDoS) attacks.

In this blog post, we show you how to onboard your accounts and your AWS Organizations into the AWS Firewall Manager service and start centrally managing the security policies of your AWS Organizations member accounts. We also show you examples of how to perform operations on the Firewall Manager policies after you’ve deployed the solution, so you can adjust your security posture over time.

As more and more customers began using Firewall Manager for centralized management, they gave us feedback on how it could be improved. We heard that the process of defining policies and configuring rule sets can be challenging and time consuming, especially in a multi-account, multi-region scenario. We built the AWS Centralized WAF and VPC Security Group Management solution to make it easier and faster.

Solution overview

The AWS Centralized WAF and VPC Security Group Management solution fully automates the deployment of Firewall Manager with a set of opinionated defaults for the policies. We call it “opinionated,” because there’s no set of security rules that is right for absolutely every customer. We’re providing an example opinion, but you might have a different opinion, given your unique circumstance. Our examples block traffic that you might have a reason to allow. We install the following policies when you deploy this solution:

  • AWS WAF global policy for Amazon CloudFront distributions and regional policy for Application Load Balancer and Amazon API Gateway: Both the AWS WAF policies include the following AWS Managed Rules for AWS WAF. You can further customize these rules to suit your WAF requirements.
    • AWSManagedRulesCommonRuleSet
    • AWSManagedRulesAdminProtectionRuleSet
    • AWSManagedRulesKnownBadInputsRuleSet
    • AWSManagedRulesSQLiRuleSet
  • Amazon VPC security group usage audit and content audit policy: These policies flag security groups that are unused or redundant. Automatic remediation is turned off by default, but you can turn it on by customizing the solution.
  • AWS Shield Advanced policy: If your account has enabled Shield Advanced, then Shield Advanced protection is enabled for CloudFront, Application Load Balancer, and Elastic IPs.

The core of the solution is the AWS CloudFormation template aws-centralized-waf-and-security-group-management. This template deploys the components shown in Figure 1:

Figure 1: Main solutions template – aws-centralized-waf-and-vpc-security-group-management.template

  1. After deployment, you can update the three AWS Systems Manager parameters—/FMS/OUs, /FMS/Regions, and /FMS/Tags—with appropriate values to control the scope and applicability of the Firewall Manager security policies.
  2. On update of the values in the Systems Manager parameters, the Amazon EventBridge rule captures the parameter update event.
  3. Amazon EventBridge triggers an AWS Lambda function to deploy the Firewall Manager policies.
  4. The Lambda function deploys the Firewall Manager security policies across the OUs and Regions specified in step 1.
  5. The Lambda function updates the Amazon DynamoDB table with Firewall Manager policy metadata.

Configuring Prerequisites Automatically

There are a few important prerequisites that must be configured in your account before you deploy this solution. We have built a template called aws-fms-prereq that will launch the solution prerequisites.

When you execute the aws-fms-prereq template, the following things will happen:

Note: If the Firewall Manager prerequisites described above are already met, you can skip this step and go directly to the next step: Deploy the solution template.

If you run this prerequisite template in the organization's primary account that is also your Firewall Manager admin account, the solution template will be deployed automatically. If you do this, you can skip the step of deploying the solution template and jump straight to Manage your Firewall Manager security policies.

Figure 2 shows how the prerequisite template creates a Lambda function. That function validates and installs the prerequisites and AWS CloudFormation stack sets to enable AWS Config across all member accounts in the organization.

Figure 2: Solution prerequisites

Figure 2: Solution prerequisites

Installing Prerequisites

If the Firewall Manager prerequisites are already met, skip this step and go directly to the next step, below: Deploy the solution template.

  1. Install the AWS CLI

    Note: If you already have the AWS Command Line Interface v2 installed on your workstation, you can skip this step.

    The template deployment can be done using the AWS Management Console or the AWS CLI. This procedure uses the AWS CLI to do the deployment. To get started, install the AWS CLI and configure it with credentials that have the required IAM permissions to create resources.

  2. Deploy the prerequisite template. If this is the first time you're using Firewall Manager, download the aws-fms-prereq.template and run the following AWS CLI command in the primary account of your AWS Organization to check for and complete the prerequisites. Replace the variable <your_FW_account_ID> with the account ID of your Firewall Manager admin account. This deployment typically takes 2–3 minutes, but sometimes a little longer if there is a large number of member accounts in your organization. If you want AWS Config to be enabled in your member accounts, set the EnableConfig parameter to Yes; if AWS Config is already enabled, set it to No.
    aws cloudformation create-stack \
    --stack-name fms-prereq-stack \
    --template-body file://aws-fms-prereq.template \
    --parameters ParameterKey=FMSAdmin,ParameterValue=<your_FW_account_ID> ParameterKey=EnableConfig,ParameterValue=Yes
    

Deploy the solution template

Deploy the solution using aws-centralized-waf-and-vpc-security-group-management.template—an AWS CloudFormation template—in the Firewall Manager admin account that you configured with the prerequisite template.

This template deploys the AWS Centralized WAF and VPC Security Group Management solution shown in Figure 1, with all the resources and integrations.

The following AWS CLI command will deploy the solution template:

aws cloudformation create-stack \
--stack-name fms-central-policy-mgmt-stack \
--template-body file://aws-centralized-waf-and-vpc-security-group-management.template \
--capabilities CAPABILITY_IAM

If the command runs successfully, it prints very basic output that simply identifies the stack's ID, as shown below.

{
 "StackId": "arn:aws:cloudformation:us-east-1:<your_FW_admin_account_ID>:stack/fms-central-policy-mgmt-stack/<stack ARN>"
}

You can run the following CLI command to check the status of the stack deployment. Once you see that “StackStatus” is set to “CREATE_COMPLETE” in the output, you can proceed to the next step.

aws cloudformation describe-stacks \
--stack-name fms-central-policy-mgmt-stack
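If you prefer a one-line status check, you can filter the output with the --query option, for example:

aws cloudformation describe-stacks \
    --stack-name fms-central-policy-mgmt-stack \
    --query "Stacks[0].StackStatus" \
    --output text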

Manage your Firewall Manager security policies

Once the solution is deployed, you can roll out the Firewall Manager policies to the organization's member accounts by updating the values in Parameter Store. These changes to Parameter Store are picked up by the EventBridge rule, which triggers the Lambda function to automatically create, delete, or modify the policies as required.

Note: All of the following commands must be executed in the Firewall Manager admin account, which might not be the root account of your AWS Organization.

  1. Add OUs to the scope of Firewall Manager policies. To begin, define the OUs that the Firewall Manager policies should apply to, and store the list of OUs in a parameter named /FMS/OUs. The following AWS CLI command stores a comma-separated list of OU IDs in that parameter.
    aws ssm put-parameter \
     --name "/FMS/OUs" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_OU_IDs>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  2. Add AWS Regions to the scope of Firewall Manager policies. Next, add the AWS Regions where you want these policies to be applied. In the AWS CLI command that follows, you can customize the Region list depending on where you run your workloads. If, for example, you want to use us-west-2, us-east-1, and eu-west-1, provide us-west-2,us-east-1,eu-west-1 as the value in the command below.
    aws ssm put-parameter \
     --name "/FMS/Regions" \
     --type "StringList" \
     --value "<comma_separated_list_of_your_regions>" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
    "Version": 2,
     "Tier": "Standard"
    }
    

  3. (Optional) Add resource tags. This is an optional step. Resource tags are a way to apply these Firewall Manager policies to some resources, but not others. Imagine that we only want to apply these policies to resources that have the tag Environment set to the value Prod. The following command does that:
    aws ssm put-parameter \
     --name "/FMS/Tags" \
     --type "String" \
     --value "{\"ResourceTags\":[{\"Key\":\"Environment\",\"Value\":\"Prod\"}],\"ExcludeResourceTags\":false}" \
     --overwrite
    

    If it is successful, the output of the command will be a simple acknowledgement, like the following:

    {
     "Version": 2,
     "Tier": "Standard"
    }
    

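Before moving on, you can optionally read the three parameters back to confirm the values the solution will act on, for example:

aws ssm get-parameters \
    --names "/FMS/OUs" "/FMS/Regions" "/FMS/Tags" \
    --query "Parameters[].{Name:Name,Value:Value}"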
Test the solution

Test the solution by creating a global CloudFront distribution and an Amazon VPC security group in one of your member accounts, and configuring these resources to be noncompliant with the policies enforced by Firewall Manager.

Deploy test resources in one of your member accounts

Run the following AWS CLI command to deploy the demo template in one of the member accounts. It creates a sample CloudFront distribution and an Amazon VPC security group that aren't compliant with the default Firewall Manager policies.

aws cloudformation create-stack \
  --stack-name demo-stack \
  --template-body file://aws-fms-demo.template \
  --capabilities CAPABILITY_IAM

Check for web ACL and security group audit results

To check the web ACL resource association in the member account, follow these steps:

  1. Log in to the management console of the member account where you deployed the demo template, and go to the WAF & Shield service page.
  2. You will see a web ACL with the prefix FMManagedWebACLV2FMS-WAF-01; select that web ACL.
  3. Go to the Associated AWS resources tab.
  4. You should see the newly created CloudFront distribution listed there.

To check the compliance status of the newly created security group, follow these steps:

  1. Log in to the Firewall Manager admin account management console and go to the AWS Firewall Manager service page.
  2. In the Security policies section, you will find the FMS-SecGroup-02 policy. Select the policy FMS-SecGroup-02.
  3. You will see that the member account (where the demo template was deployed) is marked as noncompliant. Select the member account number to see the newly created security group, along with the reason for the noncompliant finding.
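If you prefer the CLI to the console, a rough sketch for the same check from the Firewall Manager admin account is to list the policies, note the policy ID of FMS-SecGroup-02, and then query its compliance status; the policy ID placeholder is yours to fill in:

aws fms list-policies --region <YOUR_REGION>
aws fms list-compliance-status \
    --policy-id <FMS_SECGROUP_POLICY_ID> \
    --region <YOUR_REGION>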

Clean-up test resources in member account

Follow these steps to clean up the test resources that the demo template created in your member account:

  1. Log in to the member account management console, and go to the CloudFront service page.
  2. Select the CloudFront distribution. (The origin will have the stack name as its prefix.)
  3. Go to the Behaviors tab, select the behavior, and choose Edit.
  4. Remove the Lambda function association and save your changes.
  5. Run this CLI command to delete the stack that you had created earlier:
    aws cloudformation delete-stack \
      --stack-name demo-stack
    

Customizing the solution’s source code

This solution deploys Firewall Manager policies with some opinionated default rules defined in the manifest.json file. They probably aren’t all that you will need. You could customize these policies to fit your own needs, but it takes a bit of development skill. Also, the steps are too long to include here. If, for example, you want to add another AWS WAF rule group to the default AWS WAF security policy, you’ll need to edit the manifest and redeploy the solution. See how to do this customization and several others.

(Optional) Clean-up resources

Once you’ve tried the solution, if you want to clean up the stack you can do so in two steps:

  1. Go to the Firewall Manager admin account management console, navigate to Parameter Store, and update the /FMS/OUs parameter with the value delete. This ensures that the Firewall Manager policies and their related resources are deleted.
  2. Run the following AWS CLI command in the Firewall Manager admin account to delete the solution stack you deployed earlier:
    aws cloudformation delete-stack --stack-name fms-central-policy-mgmt-stack
    

Conclusion

The AWS Centralized WAF and VPC Security Group Management solution addresses the feedback we heard from you, our customers. You told us the process of defining policies and configuring rule sets is challenging and time consuming, so we built this solution to make it easier and faster. In this post, we showed you how to deploy the solution following a two-step process using AWS CloudFormation. We also showed you ways that you can use the solution to easily manage your Firewall Manager policies by updating values in Systems Manager Parameter Store. Updating the values automates the deployment based on your choice of OUs, Regions, and resources. Finally, we showed you how to further customize the solution to suit your needs.

This post is just an introduction to what is possible and the overall objective. Before you start using this solution for anything important, we recommend that you review the solution implementation guide. It contains step-by-step directions and more example use cases. The guide also includes security recommendations and some cost estimates for the various supported scenarios.

To learn more about other AWS solutions, visit the AWS Solutions Library.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Satheesh Kumar

Satheesh is a Senior Solutions Architect based out of Bangalore, India. He helps large enterprise customers build solutions using AWS services.

Author

Ramanan Kannan

Ramanan is a Senior Solutions Architect and works with large enterprise customers in the financial domain. He is based out of Chennai, India.

Use AWS Firewall Manager to deploy protection at scale in AWS Organizations

Post Syndicated from Chamandeep Singh original https://aws.amazon.com/blogs/security/use-aws-firewall-manager-to-deploy-protection-at-scale-in-aws-organizations/

Security teams that are responsible for securing workloads in hundreds of Amazon Web Services (AWS) accounts in different organizational units aim for a consistent approach across AWS Organizations. Key goals include enforcing preventative measures to mitigate known security issues, having a central approach for notifying the SecOps team about potential distributed denial of service (DDoS) attacks, and continuing to maintain compliance obligations. AWS Firewall Manager works at the organizational level to help you achieve your intended security posture while it provides reporting for non-compliant resources in all your AWS accounts. This post provides step-by-step instructions to deploy and manage security policies across your AWS Organizations implementation by using Firewall Manager.

You can use Firewall Manager to centrally manage AWS WAF, AWS Shield Advanced, and Amazon Virtual Private Cloud (Amazon VPC) security groups across all your AWS accounts. Firewall Manager helps to protect resources across different accounts, and it can protect resources with specific tags or resources in a group of AWS accounts that are in specific organizational units (OUs). With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes.

Architecture diagram

Figure 1 shows an example organizational structure in AWS Organizations, with several OUs that we’ll use in the example policy sets in this blog post.

Figure 1: AWS Organizations and OU structure


Firewall Manager can be associated with either the AWS master payer account or one of the member AWS accounts that has appropriate permissions as a delegated administrator. Following the best practices for organizational units, in this post we use a dedicated Security Tooling AWS account (named Security in the diagram) to operate the Firewall Manager administrator deployment under the Security OU. The Security OU is used for hosting security-related access and services. The Security OU, its child OUs, and the associated AWS accounts should be owned and managed by your security organization.

Firewall Manager prerequisites

Firewall Manager has the following prerequisites that you must complete before you create and apply a Firewall Manager policy:

  1. AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an organization and Enabling all features in your organization.
  2. A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the Firewall Manager administrator for Firewall Manager. This gives the account permission to deploy security policies across the organization.
  3. AWS Config: You must enable AWS Config for all of the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, use the Enable AWS Config template from the StackSets sample templates.
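For the second prerequisite, one way to designate the Firewall Manager administrator account is with the following command, run from the organization's management account with appropriate permissions; the account ID placeholder is yours to fill in:

aws fms associate-admin-account \
    --admin-account <FIREWALL_MANAGER_ADMIN_ACCOUNT_ID>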

Deployment of security policies

In the following sections, we explain how to create AWS WAF rules, Shield Advanced protections, and Amazon VPC security groups by using Firewall Manager. We further explain how you can deploy these different policy types to protect resources across your accounts in AWS Organizations. Each Firewall Manager policy is specific to an individual resource type. If you want to enforce multiple policy types across accounts, you should create multiple policies. You can create more than one policy for each type. If you add a new account to an organization that you created with AWS Organizations, Firewall Manager automatically applies the policy to the resources in that account that are within scope of the policy. This is a scalable approach to assist you in deploying the necessary configuration when developers create resources. For instance, you can create an AWS WAF policy that will result in a known set of AWS WAF rules being deployed whenever someone creates an Amazon CloudFront distribution.

Policy 1: Create and manage security groups

You can use Firewall Manager to centrally configure and manage Amazon VPC security groups across all your AWS accounts in AWS Organizations. A previous AWS Security blog post walks you through how to apply common security group rules, audit your security groups, and detect unused and redundant rules in your security groups across your AWS environment.

Firewall Manager automatically audits new resources and rules as customers add resources or security group rules to their accounts. You can audit overly permissive security group rules, such as rules with a wide range of ports or Classless Inter-Domain Routing (CIDR) ranges, or rules that have enabled all protocols to access resources. To audit security group policies, you can use application and protocol lists to specify what’s allowed and what’s denied by the policy.

In this blog post, we use a security policy to audit security groups for overly permissive rules and for high-risk applications that are open to local CIDR ranges (for example, 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). We created a custom application list named Bastion Host for port 22 and a custom protocol list named Allowed Protocol that allows the child accounts to create rules only for the TCP protocol. See the Firewall Manager documentation for how to create custom managed application and protocol lists.
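As a rough sketch only (the exact JSON shapes here are assumptions; check the Firewall Manager API reference before relying on them), the custom lists could also be created from the CLI in the Firewall Manager admin account:

# Custom application list for SSH bastion hosts (assumed request shape)
aws fms put-apps-list \
    --apps-list '{"ListName": "Bastion Host", "AppsList": [{"AppName": "ssh", "Protocol": "tcp", "Port": 22}]}'

# Custom protocol list allowing only TCP (assumed request shape)
aws fms put-protocols-list \
    --protocols-list '{"ListName": "Allowed Protocol", "ProtocolsList": ["tcp"]}'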

To create audit security group policies

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, select Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources; you can choose the Region from the drop-down list on the service page. In this example, we selected the Sydney (ap-southeast-2) Region because all of our resources are in the Sydney Region.
  3. Create the policy, and in Policy details, choose Security group. For Region, select a Region (we selected Sydney (ap-southeast-2)), and then choose Next.
  4. For Security group policy type, choose Auditing and enforcement of security group rules, and then choose Next.
  5. Enter a policy name. We named our policy AWS_FMS_Audit_SecurityGroup.
  6. For Policy rule options, for this example, we chose Configure managed audit policy rules.
  7. Under Policy rules, choose the following:
    1. For Security group rules to audit, choose Inbound Rules.
    2. For Rules, select the following:
      1. Select Audit over permissive security group rules.
        • For Allowed security group rules, choose Add Protocol list and select the custom protocol list Allowed Protocols that we created earlier.
        • For Denied security group rules, select Deny rules with the allow ‘ALL’ protocol.
      2. Select Audit high risk applications.
        • Choose Applications that can only access local CIDR ranges. Then choose Add application list and select the custom application list Bastion host that we created earlier.
  8. For Policy action, for the example in this post, we chose Auto remediate any noncompliant resources. Choose Next.

    Figure 2: Policy rules for the security group audit policy


  9. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included Organizational units, select OU (example – Non-Prod Accounts).
    2. For Resource type, select EC2 Instance, Security Group, and Elastic Network Interface.
    3. For Resources, choose Include all resources that match the selected resource type.
  10. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Audit_Security_group.

Important: Migrating AWS accounts from one organizational unit to another won’t remove or detach the existing security group policy applied by Firewall Manager. For example, in the reference architecture in Figure 1 we have the AWS account Tenant-5 under the Staging OU. We’ve created a different Firewall Manager security group policy for the Pre-Prod OU and Prod OU. If you move the Tenant-5 account to Prod OU from Staging OU, the resources associated with Tenant-5 will continue to have the security group policies that are defined for both Prod and Staging OU unless you select otherwise before relocating the AWS account. Firewall Manager supports the detach option in case of policy deletion, because moving accounts across the OU may have unintended impacts such as loss of connectivity or protection, and therefore Firewall Manager won’t remove the security group.

Policy 2: Managing AWS WAF v2 policy

A Firewall Manager AWS WAF policy contains the rule groups that you want to apply to your resources. When you apply the policy, Firewall Manager creates a Firewall Manager web access control list (web ACL) in each account that’s within the policy scope.

Note: Creating an Amazon Kinesis Data Firehose delivery stream in us-east-1 is a prerequisite for configuring web ACL logging in step 8 (example name – aws-waf-logs-lab-waf-logs).
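
If you haven’t created the delivery stream yet, the following AWS CLI sketch shows one way to do it. The S3 bucket and IAM role ARNs are placeholders for resources you own, and the stream name must start with the aws-waf-logs- prefix.

# Minimal sketch: create the logging delivery stream in us-east-1
# (the role and bucket ARNs below are placeholders)
aws firehose create-delivery-stream \
    --delivery-stream-name aws-waf-logs-lab-waf-logs \
    --delivery-stream-type DirectPut \
    --extended-s3-destination-configuration RoleARN=arn:aws:iam::111122223333:role/firehose-waf-logs-role,BucketARN=arn:aws:s3:::example-waf-log-bucket \
    --region us-east-1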

To create a Firewall Manager – AWS WAF v2 policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select a Region. The Region selector is in the Firewall Manager console navigation bar. For this example, we selected Global, because the policy protects CloudFront resources.
  3. Create the policy. Under Policy details, choose AWS WAF and for Region, choose Global. Then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_WAF_Rule.
  5. On the Policy rule page, under Web ACL configuration, add rule groups. AWS WAF supports custom rule groups (the customer creates the rules), AWS Managed Rules rule groups (AWS manages the rules), and AWS Marketplace managed rule groups. For this example, we chose AWS Managed Rules rule groups.
  6. For this example, for First rule groups, we chose the AWS Managed Rules rule group, AWS Core rule set. For Last rule groups, we chose the AWS Managed Rules rule group, Amazon IP reputation list.
  7. For Default web ACL action for requests that don’t match any rules in the web ACL, choose a default action. We chose Allow.
  8. Firewall Manager enables logging for a specific web ACL, applies it to all in-scope accounts, and delivers the logs to a single centralized account. To enable centralized logging of AWS WAF logs:
    1. For Logging configuration status, choose Enabled.
    2. For IAM role, Firewall Manager creates an AWS WAF service-role for logging. Your security account should have the necessary IAM permissions. Learn more about access requirements for logging.
    3. Select the Kinesis Data Firehose delivery stream that you created earlier in us-east-1 (aws-waf-logs-lab-waf-logs), because this policy protects CloudFront resources.
    4. For Redacted fields, for this example select HTTP method, Query String, URI, and Header. You can also add a new header. For more information, see Configure logging for an AWS Firewall Manager AWS WAF policy.
  9. For Policy action, for this example, we chose Auto remediate any noncompliant resources. To replace the existing web ACL that is currently associated with the resource, select Replace web ACLs that are currently associated with in-scope resources with the web ACLs created by this policy. Choose Next.

    Note: If a resource is associated with another web ACL that is managed by a different active Firewall Manager policy, this policy doesn’t affect that resource.

    Figure 3: Policy rules for the AWS WAF security policy

  10. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included organizational units, select OU (example – Pre-Prod Accounts).
    2. For Resource type, choose CloudFront distribution.
    3. For Resources, choose Include all resources that match the selected resource type.
  11. You can create tags for the security policy. For the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to WAF_Policy.
  12. Review the security policy, and then choose Create Policy.

    Note: For the AWS WAF v2 policy, the web ACL pushed by Firewall Manager can’t be modified at the individual account level. The account owner can only add new rule groups.

  13. In the policy’s first and last rule group sets, you can add rule groups at the linked AWS account level to provide additional security based on application requirements. You can use managed rule groups, which AWS Managed Rules and AWS Marketplace sellers create and maintain for you. For example, you can use the WordPress application rule group, which contains rules that block request patterns associated with the exploitation of vulnerabilities specific to a WordPress site. You can also manage and use your own rule groups. For more information about all of these options, see Rule groups. Another example is a rate-based rule that tracks the rate of requests from each originating IP address and triggers the rule action on IPs whose rates exceed a limit. Learn more about rate-based rules.
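
To verify that Firewall Manager created the web ACL in a member account, you can list the CloudFront-scoped web ACLs with the AWS CLI. The following is a sketch only; run it from the member account, use the Name and Id values returned by the first command in the second one, and note that CloudFront-scoped AWS WAF resources are always managed through the us-east-1 Region.

# List the web ACLs that protect CloudFront distributions in this account
aws wafv2 list-web-acls --scope CLOUDFRONT --region us-east-1

# Inspect a specific web ACL (Name and Id below are placeholders taken from the output above)
aws wafv2 get-web-acl --name ExampleWebACLName --scope CLOUDFRONT \
    --id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 --region us-east-1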

Policy 3: Managing AWS Shield Advanced policy

AWS Shield Advanced is a paid service that provides additional protections for internet-facing applications. If you have Business or Enterprise Support, you can engage the 24x7 AWS DDoS Response Team (DRT), who can write rules on your behalf to mitigate Layer 7 DDoS attacks. Refer to the Shield Advanced pricing page for more information before proceeding with a Shield Advanced Firewall Manager policy.

After you complete the prerequisites outlined earlier, you’ll create a Shield Advanced policy that contains the accounts and resources you want to protect with Shield Advanced. The purpose of this policy is to activate Shield Advanced in the accounts within the OU’s scope and to add the selected resources to the Shield Advanced protection list.

To create a Firewall Manager – Shield Advanced policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources. The Region selector is in the Firewall Manager console navigation bar. In this post, we selected the Sydney (ap-southeast-2) Region because all of our resources are in the Sydney Region.

    Note: To protect CloudFront resources, select the Global option.

  3. Create the policy, and in Policy details, choose AWS Shield Advanced. For Region, select a Region (example – ap-southeast-2), and then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_ShieldAdvanced Rule.
  5. For Policy action, for the example in this post, we chose Auto remediate any non-compliant resources. Alternatively, if you choose Create but do not apply this policy to existing or new resources, Firewall Manager doesn’t apply Shield Advanced protection to any resources. You must apply the policy to resources later. Choose Next.
  6. For Policy scope, this example uses the OU structure as the container of multiple accounts with similar requirements:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational units. For Included organizational units, select OU (example – Staging Accounts OU).
    2. For Resource type, select Application Load Balancer and Elastic IP.
    3. For Resources, choose Include all resources that match the selected resource type.
      Figure 4: Policy scope page for creating the Shield Advanced security policy

      Note: If you want to protect only the resources with specific tags, or alternatively exclude resources with specific tags, choose Use tags to include/exclude resources, enter the tags, and then choose either Include or Exclude. Tags enable you to categorize AWS resources in different ways, for example by indicating an environment, owner, or team to include or exclude in Firewall Manager policy. Firewall Manager combines the tags with “AND” so that, if you add more than one tag to a policy scope, a resource must have all the specified tags to be included or excluded.

      Important: Shield Advanced supports protection for Amazon Route 53 and AWS Global Accelerator. However, protection for these resources cannot be deployed with the help of Firewall Manager security policy at this time. If you need to protect these resources with Shield Advanced, you should use individual AWS account access through the API or console to activate Shield Advanced protection for the intended resources.

  7. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Shield_Advanced_Policy. You can use the tags in the Resource element of IAM permission policy statements to either allow or deny users to make changes to security policy.
  8. Review the security policy, and then choose Create Policy.

Now you’ve successfully created a Firewall Manager security policy. Using the organizational units in AWS Organizations as a method to deploy the Firewall Manager security policy, when you add an account to the OU or to any of its child OUs, Firewall Manager automatically applies the policy to the new account.

Important: You don’t need to manually subscribe Shield Advanced on the member accounts. Firewall Manager subscribes Shield Advanced on the member accounts as part of creating the policy.

Operational visibility and compliance report

Firewall Manager offers centralized incident notification for DDoS incidents reported by Shield Advanced. You can create an Amazon SNS topic to monitor the protected resources for potential DDoS activities and send notifications accordingly. Learn how to create an SNS topic. If you have resources in different Regions, the SNS topic needs to be created in each intended Region. You must perform this step from the Firewall Manager delegated administrator account (for example, Security Tooling) to receive alerts across the AWS accounts in your organization.

As a best practice, you should set up notifications for all the Regions where you have a production workload under Shield Advanced protection.

To create an SNS topic in the Firewall Manager administrative console

  1. In the AWS Management Console, sign in to the Security Tooling account or the AWS Firewall Manager delegated administrator account. In the left navigation pane, under AWS Firewall Manager, choose Settings.
  2. Select the SNS topic that you created earlier to be used for the Firewall Manager central notification mechanism. For this example, we created a new SNS topic in the Sydney Region (ap-southeast-2) named SNS_Topic_Syd.
  3. For Recipient email address, enter the email address that the SNS topic will be sent to. Choose Configure SNS configuration.
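
The same configuration can be scripted with the AWS CLI. The following sketch assumes an SNS topic named SNS_Topic_Syd in ap-southeast-2; the account ID, email address, and service-linked role ARN are placeholders that you would adjust for your environment.

# Create the topic and subscribe an email address (placeholder values)
aws sns create-topic --name SNS_Topic_Syd --region ap-southeast-2
aws sns subscribe \
    --topic-arn arn:aws:sns:ap-southeast-2:111122223333:SNS_Topic_Syd \
    --protocol email --notification-endpoint security-team@example.com \
    --region ap-southeast-2

# Register the topic as the Firewall Manager notification channel
aws fms put-notification-channel \
    --sns-topic-arn arn:aws:sns:ap-southeast-2:111122223333:SNS_Topic_Syd \
    --sns-role-name arn:aws:iam::111122223333:role/aws-service-role/fms.amazonaws.com/AWSServiceRoleForFMS \
    --region ap-southeast-2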

After you create the SNS configuration, you can see the SNS topic in the appropriate Region, as in the following example.

Figure 5: An SNS topic for centralized incident notification

AWS Shield Advanced records metrics in Amazon CloudWatch to monitor the protected resources, and you can also create Amazon CloudWatch alarms on those metrics. For simplicity, we used email notification in this example. In a security operations environment, you should integrate the SNS notifications with your existing ticketing or paging system for real-time response.

Important: You can also use the CloudWatch dashboard to monitor potential DDoS activity. It collects and processes raw data from Shield Advanced into readable, near real-time metrics.
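
For example, Shield Advanced publishes a DDoSDetected metric for each protected resource, and you can alarm on it with the AWS CLI. The following is a sketch; the resource ARN and SNS topic ARN are placeholders, and because Shield Advanced publishes its metrics in us-east-1, the alarm is created there.

# Alarm when Shield Advanced reports a DDoS event for a protected resource
aws cloudwatch put-metric-alarm \
    --alarm-name shield-ddos-detected-example \
    --namespace AWS/DDoSProtection \
    --metric-name DDoSDetected \
    --dimensions Name=ResourceArn,Value=arn:aws:elasticloadbalancing:ap-southeast-2:111122223333:loadbalancer/app/example/1234567890abcdef \
    --statistic Sum --period 60 --evaluation-periods 1 \
    --threshold 0 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:shield-alerts-example \
    --region us-east-1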

You can automatically enforce policies on AWS resources that currently exist or are created in the future, in order to promote compliance with firewall rules across the organization. For all policies, you can view the compliance status for in-scope accounts and resources by using the API or AWS Command Line Interface (AWS CLI) method. For content audit security group policies, you can also view detailed violation information for in-scope resources. This information can help you to better understand and manage your security risk.

View all the policies in the Firewall Manager administrative account

For our example, we created three security policies in the Firewall Manager delegated administrator account. We can check policy compliance status for all three policies by using the AWS Management Console, AWS CLI, or API methods. The AWS CLI example that follows can be further extended to build an automation for notifying the non-compliant resource owners.

To list all the policies in FMS

 aws fms list-policies --region ap-southeast-2
{
    "PolicyList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "RemediationEnabled": false, 
            "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:222222222222:policy/78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "SecurityServiceType": "WAFV2", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58"
        },
        {
            "PolicyName": "AWS_FMS_Audit_SecurityGroup", 
            "RemediationEnabled": true, 
            "ResourceType": "ResourceTypeList", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:<Account-Id>:policy/d44f3f38-ed6f-4af3-b5b3-78e9583051cf", 
            "SecurityServiceType": "SECURITY_GROUPS_CONTENT_AUDIT", 
            "PolicyId": "d44f3f38-ed6f-4af3-b5b3-78e9583051cf"
        }
    ]
}

Now that we have the policy ID, we can check the compliance status:

aws fms list-compliance-status --policy-id 78edcc79-c0b1-46ed-b7b9-d166b9fd3b58
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601360994.0, 
            "MemberAccount": "444444444444", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 0, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, member account 444444444444, which is associated with the policy, is compliant. The following example shows the status for the second policy.

aws fms list-compliance-status --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "AWS_FMS_WAF_Rule", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601361231.0, 
            "MemberAccount": "555555555555", 
            "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 3, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "NON_COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, member account 555555555555, which is associated with the policy, is non-compliant.

To provide detailed compliance information about the specified member account, the output includes resources that are in and out of compliance with the specified policy, as shown in the following example.

aws fms get-compliance-detail --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d --member-account 555555555555
{
    "PolicyComplianceDetail": {
        "Violators": [
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2: 555555555555:loadbalancer/app/FMSTest2/c2da4e99d4d13cf4", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }, 
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2:555555555555:loadbalancer/app/fmstest/1e70668ce77eb61b", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }
        ], 
        "EvaluationLimitExceeded": false, 
        "PolicyOwner": "222222222222", 
        "ExpiredAt": 1601362402.0, 
        "MemberAccount": "555555555555", 
        "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
        "IssueInfoMap": {}
    }
}

In the preceding example, two Application Load Balancers (ALBs) are not associated with a web ACL. You can introduce further automation by using AWS Lambda functions to isolate the non-compliant resources, or trigger an alert so that the account owner can start manual remediation.
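
As a starting point for that automation, the following shell sketch (which assumes the jq utility is installed) loops over the member accounts in scope of a policy and prints the violating resources for each non-compliant account. The policy ID is a placeholder.

POLICY_ID=44c0b677-e7d4-4d8a-801f-60be2630a48d   # placeholder policy ID
REGION=ap-southeast-2

# For each non-compliant member account, print its violation reasons and resource ARNs
for account in $(aws fms list-compliance-status --policy-id "$POLICY_ID" --region "$REGION" \
    | jq -r '.PolicyComplianceStatusList[] | select(.EvaluationResults[].ComplianceStatus=="NON_COMPLIANT") | .MemberAccount'); do
  echo "Non-compliant account: $account"
  aws fms get-compliance-detail --policy-id "$POLICY_ID" --member-account "$account" --region "$REGION" \
    | jq -r '.PolicyComplianceDetail.Violators[] | "\(.ViolationReason): \(.ResourceId)"'
done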

Resource cleanup

You can delete a Firewall Manager policy by performing the following steps.

To delete a policy (console)

  1. In the navigation pane, choose Security policies.
  2. Choose the option next to the policy that you want to delete. We created three policies, which need to be deleted one at a time.
  3. Choose Delete.

Important: When you delete a Firewall Manager Shield Advanced policy, the policy is deleted, but your accounts remain subscribed to Shield Advanced.
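
You can also delete a policy with the AWS CLI. In the following sketch the policy ID is a placeholder; the optional --delete-all-policy-resources flag asks Firewall Manager to also clean up the resources it created for the policy, so review its effect before using it.

# Delete a Firewall Manager policy (policy ID is a placeholder)
aws fms delete-policy \
    --policy-id 78edcc79-c0b1-46ed-b7b9-d166b9fd3b58 \
    --delete-all-policy-resources \
    --region ap-southeast-2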

Conclusion

In this post, you learned how you can use Firewall Manager to enforce required preventative policies from a central delegated AWS account managed by your security team. You can extend this strategy to all AWS OUs to meet your future needs as new AWS accounts or resources get added to AWS Organizations. A central notification delivery to your Security Operations team is crucial from a visibility perspective, and with the help of Firewall Manager you can build a scalable approach to stay protected, informed, and compliant. Firewall Manager simplifies your AWS WAF, AWS Shield Advanced, and Amazon VPC security group administration and maintenance tasks across multiple accounts and resources.

For further reading and updates, see the Firewall Manager Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chamandeep Singh

Chamandeep is a Senior Technical Account Manager and member of the Global Security field team at AWS. He works with financial sector enterprise customers to support operations and security, and also designs scalable cloud solutions. He lives in Australia and enjoys travelling around the world.

Author

Prabhakaran Thirumeni

Prabhakaran is a Cloud Architect with AWS, specializing in network security and cloud infrastructure. His focus is helping customers design and build solutions for their enterprises. Outside of work he stays active with badminton, running, and exploring the world.

How to add authentication to a single-page web application with Amazon Cognito OAuth2 implementation

Post Syndicated from George Conti original https://aws.amazon.com/blogs/security/how-to-add-authentication-single-page-web-application-with-amazon-cognito-oauth2-implementation/

In this post, I’ll be showing you how to configure Amazon Cognito as an OpenID provider (OP) with a single-page web application.

This use case describes using Amazon Cognito to integrate with an existing authorization system following the OpenID Connect (OIDC) specification. OIDC is an identity layer on top of the OAuth 2.0 protocol to enable clients to verify the identity of users. Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Some key reasons customers select Amazon Cognito include:

  • Simplicity of implementation: The console is very intuitive; it takes a short time to understand how to configure and use Amazon Cognito. Amazon Cognito also has key out-of-the-box functionality, including social sign-in, multi-factor authentication (MFA), forgotten password support, and infrastructure as code (AWS CloudFormation) support.
  • Ability to customize workflows: Amazon Cognito offers the option of a hosted UI where users can sign-in directly to Amazon Cognito or sign-in via social identity providers such as Amazon, Google, Apple, and Facebook. The Amazon Cognito hosted UI and workflows help save your team significant time and effort.
  • OIDC support: Amazon Cognito can securely pass user profile information to an existing authorization system following the OIDC authorization code flow. The authorization system uses the user profile information to secure access to the app.

Amazon Cognito overview

Amazon Cognito follows the OIDC specification to authenticate users of web and mobile apps. Users can sign in directly through the Amazon Cognito hosted UI or through a federated identity provider, such as Amazon, Facebook, Apple, or Google. The hosted UI workflows include sign-in and sign-up, password reset, and MFA. Since not all customer workflows are the same, you can customize Amazon Cognito workflows at key points with AWS Lambda functions, allowing you to run code without provisioning or managing servers. After a user authenticates, Amazon Cognito returns standard OIDC tokens. You can use the user profile information in the ID token to grant your users access to your own resources or you can use the tokens to grant access to APIs hosted by Amazon API Gateway. You can also exchange the tokens for temporary AWS credentials to access other AWS services.

Figure 1: Amazon Cognito sign-in flow

OAuth 2.0 and OIDC

OAuth 2.0 is an open standard that allows a user to delegate access to their information to other websites or applications without handing over credentials. OIDC is an identity layer on top of OAuth 2.0 that uses OAuth 2.0 flows. OAuth 2.0 defines a number of flows to manage the interaction between the application, user, and authorization server. The right flow to use depends on the type of application.

The client credentials flow is used in machine-to-machine communications. You can use the client credentials flow to request an access token to access your own resources, which means you can use this flow when your app is requesting the token on its own behalf, not on behalf of a user. The authorization code grant flow is used to return an authorization code that is then exchanged for user pool tokens. Because the tokens are never exposed directly to the user, they are less likely to be shared broadly or accessed by an unauthorized party. However, a custom application is required on the back end to exchange the authorization code for user pool tokens. For security reasons, we recommend the authorization code flow with Proof Key for Code Exchange (PKCE) for public clients, such as single-page apps or native mobile apps.

The following table shows recommended flows per application type.

Application | Flow | Description
Machine | Client credentials | Use this flow when your application is requesting the token on its own behalf, not on behalf of the user
Web app on a server | Authorization code grant | A regular web app on a web server
Single-page app | Authorization code grant with PKCE | An app running in the browser, such as JavaScript
Mobile app | Authorization code grant with PKCE | iOS or Android app

Securing the authorization code flow

Amazon Cognito can help you achieve compliance with regulatory frameworks and certifications, but it’s your responsibility to use the service in a way that remains compliant and secure. You need to determine the sensitivity of the user profile data in Amazon Cognito; adhere to your company’s security requirements, applicable laws and regulations; and configure your application and corresponding Amazon Cognito settings appropriately for your use case.

Note: You can learn more about regulatory frameworks and certifications at AWS Services in Scope by Compliance Program. You can download compliance reports from AWS Artifact.

We recommend that you use the authorization code flow with PKCE for single-page apps. Applications that use PKCE generate a random code verifier that’s created for every authorization request. Proof Key for Code Exchange by OAuth Public Clients has more information on use of a code verifier. In the following sections, I will show you how to set up the Amazon Cognito authorization endpoint for your app to support a code verifier.

The authorization code flow

In OpenID terms, the app is the relying party (RP) and Amazon Cognito is the OP. The flow for the authorization code flow with PKCE is as follows:

  1. The user enters the app home page URL in the browser and the browser fetches the app.
  2. The app generates the PKCE code challenge and redirects the request to the Amazon Cognito OAuth2 authorization endpoint (/oauth2/authorize).
  3. Amazon Cognito responds back to the user’s browser with the Amazon Cognito hosted sign-in page.
  4. The user signs in with their user name and password, signs up as a new user, or signs in with a federated sign-in. After a successful sign-in, Amazon Cognito returns the authorization code to the browser, which redirects the authorization code back to the app.
  5. The app sends a request to the Amazon Cognito OAuth2 token endpoint (/oauth2/token) with the authorization code, its client credentials, and the PKCE verifier.
  6. Amazon Cognito authenticates the app with the supplied credentials, validates the authorization code, validates the request with the code verifier, and returns the OpenID tokens, access token, ID token, and refresh token.
  7. The app validates the OpenID ID token and then uses the user profile information (claims) in the ID token to provide access to resources. (Optional) The app can use the access token to retrieve the user profile information from the Amazon Cognito user information endpoint (/userInfo).
  8. Amazon Cognito returns the user profile information (claims) about the authenticated user to the app. The app then uses the claims to provide access to resources.

The following diagram shows the authorization code flow with PKCE.

Figure 2: Authorization code flow

Implementing an app with Amazon Cognito authentication

Now that you’ve learned about Amazon Cognito OAuth implementation, let’s create a working example app that uses Amazon Cognito OAuth implementation. You’ll create an Amazon Cognito user pool along with an app client, the app, an Amazon Simple Storage Service (Amazon S3) bucket, and an Amazon CloudFront distribution for the app, and you’ll configure the app client.

Step 1. Create a user pool

Start by creating your user pool with the default configuration.

Create a user pool:

  1. Go to the Amazon Cognito console and select Manage User Pools. This takes you to the User Pools Directory.
  2. Select Create a user pool in the upper corner.
  3. Enter a Pool name, select Review defaults, and select Create pool.
  4. Copy the Pool ID, which will be used later to create your single-page app. It will be something like region_xxxxx. You will use it to replace the variable YOUR_USERPOOL_ID in a later step. (Optional) You can add additional features to the user pool, but this demonstration uses the default configuration. For more information, see the Amazon Cognito documentation.
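
If you prefer to script this step, the following AWS CLI sketch creates a user pool with default settings and prints the pool ID; the pool name is an example.

# Create a user pool with default settings and capture its ID
aws cognito-idp create-user-pool \
    --pool-name my-blog-user-pool \
    --query 'UserPool.Id' --output text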

The following figure shows you entering the user pool name.

Figure 3: Enter a name for the user pool

The following figure shows the resulting user pool configuration.

Figure 4: Completed user pool configuration

Step 2. Create a domain name

The Amazon Cognito hosted UI lets you use your own domain name or you can add a prefix to the Amazon Cognito domain. This example uses an Amazon Cognito domain with a prefix.

Create a domain name:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under App integration, select Domain name.
  3. In the Amazon Cognito domain section, add your Domain prefix (for example, myblog).
  4. Select Check availability. If your domain isn’t available, change the domain prefix and try again.
  5. When your domain is confirmed as available, copy the Domain prefix to use when you create your single-page app. You will use it to replace the variable YOUR_COGNITO_DOMAIN_PREFIX in a later step.
  6. Choose Save changes.
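
The equivalent AWS CLI call is shown in the following sketch; the domain prefix and user pool ID are placeholders.

# Add an Amazon Cognito hosted UI domain prefix to the user pool
aws cognito-idp create-user-pool-domain \
    --domain myblog \
    --user-pool-id YOUR_USERPOOL_ID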

The following figure shows creating an Amazon Cognito hosted domain.

Figure 5: Creating an Amazon Cognito hosted UI domain

Step 3. Create an app client

Now create an app client in the user pool. An app client is where you register your app with the user pool. Generally, you create an app client for each app platform. For example, you might create an app client for a single-page app and another app client for a mobile app. Each app client has its own ID, authentication flows, and permissions to access user attributes.

Create an app client:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under General settings, select App clients.
  3. Choose Add an app client.
  4. Enter a name for the app client in the App client name field.
  5. Uncheck Generate client secret and accept the remaining default configurations.

    Note: The client secret is used to authenticate the app client to the user pool. Generate client secret is unchecked because you don’t want to send the client secret on the URL using client-side JavaScript. The client secret is used by applications that have a server-side component that can secure the client secret.

  6. Choose Create app client as shown in the following figure.

    Figure 6: Create and configure an app client

  7. Copy the App client ID. You will use it to replace the variable YOUR_APPCLIENT_ID in a later step.
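
As a scripted alternative, the following sketch creates an app client without a client secret and prints its ID; the client name is an example.

# Create an app client with no client secret and print its ID
aws cognito-idp create-user-pool-client \
    --user-pool-id YOUR_USERPOOL_ID \
    --client-name my-spa-client \
    --no-generate-secret \
    --query 'UserPoolClient.ClientId' --output text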

The following figure shows the App client ID which is automatically generated when the app client is created.

Figure 7: App client configuration

Step 4. Create an Amazon S3 website bucket

Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. We use Amazon S3 here to host a static website.

Create an Amazon S3 bucket with the following settings:

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. Choose Create bucket to start the Create bucket wizard.
  3. In Bucket name, enter a DNS-compliant name for your bucket. You will use this in a later step to replace the YOURS3BUCKETNAME variable.
  4. In Region, choose the AWS Region where you want the bucket to reside.

    Note: It’s recommended to create the Amazon S3 bucket in the same AWS Region as Amazon Cognito.

  5. Look up the region code from the region table (for example, US-East [N. Virginia] has a region code of us-east-1). You will use the region code to replace the variable YOUR_REGION in a later step.
  6. Choose Next.
  7. Select the Versioning checkbox.
  8. Choose Next.
  9. Choose Next.
  10. Choose Create bucket.
  11. Select the bucket you just created from the Amazon S3 bucket list.
  12. Select the Properties tab.
  13. Choose Static website hosting.
  14. Choose Use this bucket to host a website.
  15. For the index document, enter index.html and then choose Save.
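
The same bucket setup can be sketched with the AWS CLI as follows. The bucket name and Region are placeholders, and outside us-east-1 the create-bucket call also needs a LocationConstraint.

# Create the bucket (add --create-bucket-configuration LocationConstraint=<region>
# for Regions other than us-east-1)
aws s3api create-bucket --bucket YOURS3BUCKETNAME --region us-east-1

# Turn on versioning and static website hosting with index.html as the index document
aws s3api put-bucket-versioning --bucket YOURS3BUCKETNAME \
    --versioning-configuration Status=Enabled
aws s3 website s3://YOURS3BUCKETNAME/ --index-document index.html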

Step 5. Create a CloudFront distribution

Amazon CloudFront is a fast content delivery network service that helps securely deliver data, videos, applications, and APIs to customers globally with low latency and high transfer speeds—all within a developer-friendly environment. In this step, we use CloudFront to set up an HTTPS-enabled domain for the static website hosted on Amazon S3.

Create a CloudFront distribution (web distribution) with the following modified default settings:

  1. Sign into the AWS Management Console and open the CloudFront console.
  2. Choose Create Distribution.
  3. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  4. Choose the Origin Domain Name from the dropdown list. It will be YOURS3BUCKETNAME.s3.amazonaws.com.
  5. For Restrict Bucket Access, select Yes.
  6. For Origin Access Identity, select Create a New Identity.
  7. For Grant Read Permission on Bucket, select Yes, Update Bucket Policy.
  8. For the Viewer Protocol Policy, select Redirect HTTP to HTTPS.
  9. For Cache Policy, select Managed-Caching Disabled.
  10. Set the Default Root Object to index.html. (Optional) Add a comment. Comments are a good place to describe the purpose of your distribution, for example, “Amazon Cognito SPA.”
  11. Select Create Distribution. The distribution will take a few minutes to create and update.
  12. Copy the Domain Name. This is the CloudFront distribution domain name, which you will use in a later step as the DOMAINNAME value in the YOUR_REDIRECT_URI variable.
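
If you want to script a basic distribution, the AWS CLI accepts a simplified form, as in the sketch below. This shorthand doesn’t configure the origin access identity, HTTPS redirect, or cache policy from the console steps; for those you would pass a full JSON document to --distribution-config.

# Create a basic CloudFront distribution in front of the S3 bucket and print its domain name
# (origin access identity and viewer protocol policy still need to be configured separately)
aws cloudfront create-distribution \
    --origin-domain-name YOURS3BUCKETNAME.s3.amazonaws.com \
    --default-root-object index.html \
    --query 'Distribution.DomainName' --output text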

Step 6. Create the app

Now that you’ve created the Amazon S3 bucket for static website hosting and the CloudFront distribution for the site, you’re ready to use the code that follows to create a sample app.

Use the following information from the previous steps:

  1. YOUR_COGNITO_DOMAIN_PREFIX is from Step 2.
  2. YOUR_REGION is the AWS region you used in Step 4 when you created your Amazon S3 bucket.
  3. YOUR_APPCLIENT_ID is the App client ID from Step 3.
  4. YOUR_USERPOOL_ID is the Pool ID from Step 1.
  5. YOUR_REDIRECT_URI, which is https://DOMAINNAME/index.html, where DOMAINNAME is your domain name from Step 5.

Create userprofile.js

Use the following text to create the userprofile.js file. Substitute the values you collected in the previous steps for the variables in the text.

var myHeaders = new Headers();
myHeaders.set('Cache-Control', 'no-store');
var urlParams = new URLSearchParams(window.location.search);
var tokens;
var domain = "YOUR_COGNITO_DOMAIN_PREFIX";
var region = "YOUR_REGION";
var appClientId = "YOUR_APPCLIENT_ID";
var userPoolId = "YOUR_USERPOOL_ID";
var redirectURI = "YOUR_REDIRECT_URI";

//Convert Payload from Base64-URL to JSON
const decodePayload = payload => {
  const cleanedPayload = payload.replace(/-/g, '+').replace(/_/g, '/');
  const decodedPayload = atob(cleanedPayload)
  const uriEncodedPayload = Array.from(decodedPayload).reduce((acc, char) => {
    const uriEncodedChar = ('00' + char.charCodeAt(0).toString(16)).slice(-2)
    return `${acc}%${uriEncodedChar}`
  }, '')
  const jsonPayload = decodeURIComponent(uriEncodedPayload);

  return JSON.parse(jsonPayload)
}

//Parse JWT Payload
const parseJWTPayload = token => {
    const [header, payload, signature] = token.split('.');
    const jsonPayload = decodePayload(payload)

    return jsonPayload
};

//Parse JWT Header
const parseJWTHeader = token => {
    const [header, payload, signature] = token.split('.');
    const jsonHeader = decodePayload(header)

    return jsonHeader
};

//Generate a Random String
const getRandomString = () => {
    const randomItems = new Uint32Array(28);
    crypto.getRandomValues(randomItems);
    const binaryStringItems = randomItems.map(dec => `0${dec.toString(16).substr(-2)}`)
    return binaryStringItems.reduce((acc, item) => `${acc}${item}`, '');
}

//Encrypt a String with SHA256
const encryptStringWithSHA256 = async str => {
    const PROTOCOL = 'SHA-256'
    const textEncoder = new TextEncoder();
    const encodedData = textEncoder.encode(str);
    return crypto.subtle.digest(PROTOCOL, encodedData);
}

//Convert Hash to Base64-URL
const hashToBase64url = arrayBuffer => {
    const items = new Uint8Array(arrayBuffer)
    const stringifiedArrayHash = items.reduce((acc, i) => `${acc}${String.fromCharCode(i)}`, '')
    const decodedHash = btoa(stringifiedArrayHash)

    const base64URL = decodedHash.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
    return base64URL
}

// Main Function
async function main() {
  var code = urlParams.get('code');

  //If code not present then request code else request tokens
  if (code == null){

    // Create random "state"
    var state = getRandomString();
    sessionStorage.setItem("pkce_state", state);

    // Create PKCE code verifier
    var code_verifier = getRandomString();
    sessionStorage.setItem("code_verifier", code_verifier);

    // Create code challenge
    var arrayHash = await encryptStringWithSHA256(code_verifier);
    var code_challenge = hashToBase64url(arrayHash);
    sessionStorage.setItem("code_challenge", code_challenge)

    // Redirect user-agent to /authorize endpoint
    location.href = "https://"+domain+".auth."+region+".amazoncognito.com/oauth2/authorize?response_type=code&state="+state+"&client_id="+appClientId+"&redirect_uri="+redirectURI+"&scope=openid&code_challenge_method=S256&code_challenge="+code_challenge;
  } else {

    // Verify state matches
    state = urlParams.get('state');
    if(sessionStorage.getItem("pkce_state") != state) {
        alert("Invalid state");
    } else {

    // Fetch OAuth2 tokens from Cognito
    code_verifier = sessionStorage.getItem('code_verifier');
  await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/token?grant_type=authorization_code&client_id="+appClientId+"&code_verifier="+code_verifier+"&redirect_uri="+redirectURI+"&code="+ code,{
  method: 'post',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded'
  }})
  .then((response) => {
    return response.json();
  })
  .then((data) => {

    // Verify id_token
    tokens=data;
    var idVerified = verifyToken (tokens.id_token);
    Promise.resolve(idVerified).then(function(value) {
      if (value.localeCompare("verified")){
        alert("Invalid ID Token - "+ value);
        return;
      }
      });
    // Display tokens
    document.getElementById("id_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.id_token),null,'\t');
    document.getElementById("access_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.access_token),null,'\t');
  });

    // Fetch user profile information from /oauth2/userInfo
    await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/userInfo",{
      method: 'post',
      headers: {
        'authorization': 'Bearer ' + tokens.access_token
    }})
    .then((response) => {
      return response.json();
    })
    .then((data) => {
      // Display user information
      document.getElementById("userInfo").innerHTML = JSON.stringify(data, null,'\t');
    });
  }}}
  main();

Create the verifier.js file

Use the following text to create the verifier.js file.

var key_id;
var keys;
var key_index;

//verify token
async function verifyToken (token) {
  //get Cognito keys
  keys_url = 'https://cognito-idp.'+ region +'.amazonaws.com/' + userPoolId + '/.well-known/jwks.json';
  await fetch(keys_url)
  .then((response) => {
    return response.json();
  })
  .then((data) => {
    keys = data['keys'];
  });

  //Get the kid (key id)
  var tokenHeader = parseJWTHeader(token);
  key_id = tokenHeader.kid;

  //search for the kid key id in the Cognito Keys
  const key = keys.find(key => key.kid === key_id)
  if (key === undefined) {
    return "Public key not found in Cognito jwks.json";
  }

  //verify JWT Signature
  var keyObj = KEYUTIL.getKey(key);
  var isValid = KJUR.jws.JWS.verifyJWT(token, keyObj, {alg: ["RS256"]});
  if (!isValid) {
    return("Signature verification failed");
  }

  //verify token has not expired
  var tokenPayload = parseJWTPayload(token);
  if (Date.now() >= tokenPayload.exp * 1000) {
    return("Token expired");
  }

  //verify app_client_id
  var n = tokenPayload.aud.localeCompare(appClientId)
  if (n != 0) {
    return("Token was not issued for this audience");
  }
  return("verified");
};

Create an index.html file

Use the following text to create the index.html file.

<!doctype html>

<html lang="en">
<head>
<meta charset="utf-8">

<title>MyApp</title>
<meta name="description" content="My Application">
<meta name="author" content="Your Name">
</head>

<body>
<h2>Cognito User</h2>

<p style="white-space:pre-line;" id="token_status"></p>

<p>Id Token</p>
<p style="white-space:pre-line;" id="id_token"></p>

<p>Access Token</p>
<p style="white-space:pre-line;" id="access_token"></p>

<p>User Profile</p>
<p style="white-space:pre-line;" id="userInfo"></p>
<script language="JavaScript" type="text/javascript"
src="https://kjur.github.io/jsrsasign/jsrsasign-latest-all-min.js">
</script>
<script src="js/verifier.js"></script>
<script src="js/userprofile.js"></script>
</body>
</html>

Upload the files into the Amazon S3 Bucket you created earlier

Upload the files you just created to the Amazon S3 bucket that you created in Step 4. If you’re using Chrome or Firefox browsers, you can choose the folders and files to upload and then drag and drop them into the destination bucket. Dragging and dropping is the only way that you can upload folders.

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. In the Bucket name list, choose the name of the bucket that you created earlier in Step 4.
  3. In a window other than the console window, select the index.html file to upload. Then drag and drop the file into the console window that lists the destination bucket.
  4. In the Upload dialog box, choose Upload.
  5. Choose Create Folder.
  6. Enter the name js and choose Save.
  7. Choose the js folder.
  8. In a window other than the console window, select the userprofile.js and verifier.js files to upload. Then drag and drop the files into the console window js folder.

    Note: The Amazon S3 bucket root will contain the index.html file and a js folder. The js folder will contain the userprofile.js and verifier.js files.
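
If you’d rather upload from the command line, the following sketch copies the files into the layout described in the note; the bucket name is a placeholder.

# Upload the app files so the bucket root holds index.html and a js/ folder
aws s3 cp index.html s3://YOURS3BUCKETNAME/index.html
aws s3 cp userprofile.js s3://YOURS3BUCKETNAME/js/userprofile.js
aws s3 cp verifier.js s3://YOURS3BUCKETNAME/js/verifier.js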

Step 7. Configure the app client settings

Use the Amazon Cognito console to configure the app client settings, including identity providers, OAuth flows, and OAuth scopes.

Configure the app client settings:

  1. Go to the Amazon Cognito console.
  2. Choose Manage your User Pools.
  3. Select your user pool.
  4. Select App integration, and then select App client settings.
  5. Under Enabled Identity Providers, select Cognito User Pool. (Optional) You can add federated identity providers. Adding User Pool Sign-in Through a Third-Party has more information about how to add federation providers.
  6. Enter the Callback URL(s) where the user is to be redirected after successfully signing in. The callback URL is the URL of your web app that will receive the authorization code. In our example, this will be the Domain Name for the CloudFront distribution you created earlier. It will look something like https://DOMAINNAME/index.html where DOMAINNAME is xxxxxxx.cloudfront.net.

    Note: HTTPS is required for the Callback URLs. For this example, I used CloudFront as an HTTPS endpoint for the app in Amazon S3.

  7. Next, select Authorization code grant from the Allowed OAuth Flows and OpenID from Allowed OAuth Scopes. The OpenID scope will return the ID token and grant access to all user attributes that are readable by the client.
  8. Choose Save changes.
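
The same app client settings can be applied with the AWS CLI, as in the following sketch. The user pool ID, client ID, and callback URL are placeholders, and note that update-user-pool-client resets any settings you don’t pass explicitly.

# Enable the Cognito identity provider, the authorization code grant,
# the openid scope, and the CloudFront callback URL on the app client
aws cognito-idp update-user-pool-client \
    --user-pool-id YOUR_USERPOOL_ID \
    --client-id YOUR_APPCLIENT_ID \
    --supported-identity-providers COGNITO \
    --callback-urls "https://DOMAINNAME/index.html" \
    --allowed-o-auth-flows code \
    --allowed-o-auth-scopes openid \
    --allowed-o-auth-flows-user-pool-client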

Step 8. Show the app home page

Now that the Amazon Cognito user pool is configured and the sample app is built, you can test using Amazon Cognito as an OP from the sample JavaScript app you created in Step 6.

View the app’s home page:

  1. Open a web browser and enter the app’s home page URL using the CloudFront distribution to serve your index.html page created in Step 6 (https://DOMAINNAME/index.html) and the app will redirect the browser to the Amazon Cognito /authorize endpoint.
  2. The /authorize endpoint redirects the browser to the Amazon Cognito hosted UI, where the user can sign in or sign up. The following figure shows the user sign-in page.

    Figure 8: User sign-in page

Step 9. Create a user

You can use the Amazon Cognito user pool to manage your users or you can use a federated identity provider. Users can sign in or sign up from the Amazon Cognito hosted UI or from a federated identity provider. If you configured a federated identity provider, users will see a list of federated providers that they can choose from. When a user chooses a federated identity provider, they are redirected to the federated identity provider sign-in page. After signing in, the browser is directed back to Amazon Cognito. For this post, Amazon Cognito is the only identity provider, so you will use the Amazon Cognito hosted UI to create an Amazon Cognito user.

Create a new user using Amazon Cognito hosted UI:

  1. Create a new user by selecting Sign up and entering a username, password, and email address. Then select the Sign up button. The following figure shows the sign up screen.

    Figure 9: Sign up with a new account

  2. The Amazon Cognito sign up workflow will verify the email address by sending a verification code to that address. The following figure shows the prompt to enter the verification code.

    Figure 10: Enter the verification code

  3. Enter the code from the verification email in the Verification Code text box.
  4. Select Confirm Account.

Step 10. Viewing the Amazon Cognito tokens and profile information

After authentication, the app displays the tokens and user information. The following figure shows the OAuth2 access token and OIDC ID token that are returned from the /token endpoint and the user profile returned from the /userInfo endpoint. Now that the user has been authenticated, the application can use the user’s email address to look up the user’s account information in an application data store. Based on the user’s account information, the application can grant/restrict access to paid content or show account information like order history.

Figure 11: Token and user profile information

Note: Many browsers will cache redirects. If your browser is repeatedly redirecting to the index.html page, clear the browser cache.

Summary

In this post, we’ve shown you how easy it is to add user authentication to your web and mobile apps with Amazon Cognito.

We created a Cognito User Pool as our user directory, assigned a domain name to the Amazon Cognito hosted UI, and created an application client for our application. Then we created an Amazon S3 bucket to host our website. Next, we created a CloudFront distribution for our Amazon S3 bucket. Then we created our application and uploaded it to our Amazon S3 website bucket. From there, we configured the client app settings with our identity provider, OAuth flows, and scopes. Then we accessed our application and used the Amazon Cognito sign-in flow to create a username and password. Finally, we logged into our application to see the OAuth and OIDC tokens.

Amazon Cognito saves you time and effort when implementing authentication with an intuitive UI, OAuth2 and OIDC support, and customizable workflows. You can now focus on building features that are important to your core business.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

George Conti

George is a Solution Architect for the AWS Financial Services team. He is passionate about technology and helping financial services companies build solutions with AWS services.

Architecting for database encryption on AWS

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/architecting-for-database-encryption-on-aws/

In this post, I review the options you have to protect your customer data when migrating or building new databases in Amazon Web Services (AWS). I focus on how you can support sensitive workloads in ways that help you maintain compliance and regulatory obligations, and meet security objectives.

Understanding transparent data encryption

I commonly see enterprise customers migrating existing databases straight from on-premises to AWS without reviewing their design. This might seem simpler and faster, but they miss the opportunity to review the scalability, cost-savings, and feature capability of native cloud services. A straight lift and shift migration can also create unnecessary operational overheads, carry-over unneeded complexity, and result in more time spent troubleshooting and responding to events over time.

One example is when enterprise customers who are using Transparent Data Encryption (TDE) or Extensible Key Management (EKM) technologies want to reuse the same technologies in their migration to AWS. TDE and EKM are database technologies that encrypt and decrypt database records as the records are written and read to the underlying storage medium. Customers use TDE features in Microsoft SQL Server, Oracle 10g and 11g, and Oracle Enterprise Edition to meet requirements for data-at-rest encryption. This shouldn’t mean that TDE is the requirement. It’s infrequent that an organizational policy or compliance framework specifies a technology such as TDE in the actual requirement. For example, the Payment Card Industry Data Security Standard (PCI-DSS) standard requires that sensitive data must be protected using “Strong cryptography with associated key-management processes and procedures.” Nowhere does PCI-DSS endorse or require the use of a specific technology.

Understanding risks

It’s important that you understand the risks that encryption-at-rest mitigates before selecting a technology to use. Encryption-at-rest, in the context of databases, generally manages the risk that one of the disks used to store database data is physically stolen and thus compromised. In on-premises scenarios, TDE is an effective technology used to manage this risk. All data from the database—up to and including the disk—is encrypted. The database manages all key management and cryptographic operations. You can also use TDE with a hardware security module (HSM) so that the keys and cryptography for the database are managed outside of the database itself. In TDE implementations, the HSM is used only to manage the key encryption keys (KEK), and not the data encryption keys (DEK) themselves. The DEKs are in volatile memory in the database at runtime, and so the cryptographic operations occur on the database itself.

You can also use native operating system encryption technologies such as dm-crypt or LUKS (Linux Unified Key Setup). Dm-crypt is a full disk encryption (FDE) subsystem in Linux kernel version 2.6 and beyond. Dm-crypt can be used on its own or with LUKS as an extension to add more features. When using dm-crypt, the operating system kernel is responsible for encrypting and decrypting data as it’s written and read from the attached volumes. This would achieve the same outcome as TDE—data written and read to the disk volume is encrypted, and the risk related to physical disk compromise is managed. DEKs are in runtime memory of the machine running the database.

With some TDE implementations, you can encrypt tables, rows, columns, and cells with different DEKs to achieve granular separation of duties between operators. Customers can then configure TDE to authorize access to each DEK based on database login credentials and job function, helping to manage risks associated with unauthorized access. However, the most common configuration I’ve seen is to rely on whole database encryption when using TDE. This configuration gives similar protection against the identified risks as dm-crypt with LUKS used without an HSM, since the DEKs and KEKs are stored within the instance in both cases and the result is that the database data on disk is encrypted.

Using encryption to manage data at rest risks in AWS

When you move to AWS, you gain additional security capabilities that can simplify your security implementations. Since the announcement of the AWS Key Management Service (AWS KMS) in 2014, it has been tightly integrated with Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), and dozens of other services on AWS. This means that data is encrypted on disk by checking a single check box. Furthermore, you get the benefits of AWS KMS for key management and cryptographic operations, while being transparent to the Amazon Elastic Compute Cloud (Amazon EC2) instance where the data is being encrypted and decrypted. For simplicity, the authorization for access to the data is managed entirely by AWS Identity and Access Management (IAM) and AWS KMS key resource policies.

If you need more granular access control to the data, you can use the AWS Encryption SDK to encrypt data at the application layer. That provides the same effect as TDE cell-level protection, with a FIPS140-2 Level 2 validated HSM, as might be required by a recognized standard.

If you must use a FIPS140-2 Level 3 validated HSM to meet more stringent compliance standards or regulations, then you can use the Custom Key Store capability of AWS KMS to achieve that—again in a transparent way. This option has a trade-off, as there is additional operational overhead in terms of managing an AWS CloudHSM cluster.

Many customers choose to migrate their database into the managed Amazon Relational Database Service (Amazon RDS), rather than managing the database instance themselves. Like the Amazon EC2 service, RDS uses Amazon EBS volumes for its data storage, and so can seamlessly use AWS KMS for encryption at rest functionality. When you do so, your management overhead for the protection of data-at-rest reduces to almost zero. This lets you focus on business value while AWS is responsible for the management of your database and the protection of the underlying data. The next section reviews this option and others in more detail.

You can review the available Amazon RDS database engines and versions via the Amazon RDS User Guide documentation, or by running the following AWS Command Line Interface (AWS CLI) command:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBEngineVersionDescription" --region <regionIdentifier>

Recommended solutions

If you’re moving an existing database to AWS, you have the following solutions for data at rest encryption. I go into more detail for each option below.

Table 1 – Encryption options

Option | Database management | Host | Encryption | Key management
1 | Amazon managed | Amazon RDS | Amazon EBS | AWS KMS
2 | Amazon managed | Amazon RDS | Amazon EBS | AWS KMS custom key store
3 | Customer managed | Amazon EC2 | Amazon EBS | AWS KMS
4 | Customer managed | Amazon EC2 | Amazon EBS | AWS KMS custom key store
5 | Customer managed | Amazon EC2 | Amazon EBS | LUKS
6 | Customer managed | Amazon EC2 | Database | Database TDE
7 | Customer managed | Amazon EC2 | Database | CloudHSM

Option 1 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within an AWS Region to provide resiliency. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended as your default database encryption approach.

Benefits

  • No key management requirement on host; key management is automated and performed by AWS KMS
  • Meets FIPS140-2 Level 2 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the operating system and database engine
  • Well-recognized configuration, with support offered through AWS Support
  • AWS KMS costs are comparatively low

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level
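
As an illustration of option 1, the following sketch creates an encrypted MySQL instance whose Amazon EBS storage is protected with the AWS managed key for Amazon RDS. The identifier, instance class, and credentials are placeholders, and in practice you should source the password from a secrets store rather than the command line.

# Create an RDS instance with storage encrypted by AWS KMS
# (identifier, class, and credentials below are placeholders)
aws rds create-db-instance \
    --db-instance-identifier example-encrypted-db \
    --db-instance-class db.t3.medium \
    --engine mysql \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password 'ReplaceWithAStrongPassword1!' \
    --storage-encrypted \
    --kms-key-id alias/aws/rds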

Option 2 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS custom key store

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within a Region to provide resiliency. CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended where FIPS140-2 Level 3 validation is a specified compliance requirement.

Benefits

  • No key management requirement on host; key management is performed by AWS KMS
  • Meets FIPS140-2 Level 3 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the database engine
  • Well-recognized configuration with support offered through AWS Support

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Might require additional controls to manage unauthorized access at table, row, column or cell level

Option 3 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS

In this approach, the key difference is that you’re responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn’t support the desired database engine type or version.
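If you build this yourself on Amazon EC2, encryption is requested when the volume is created. The following AWS CLI sketch creates a KMS-encrypted data volume and attaches it to an instance; the Availability Zone, size, key alias, and resource IDs are placeholder assumptions.

# Minimal sketch: create an encrypted EBS data volume and attach it to the
# database instance. IDs, AZ, and key alias are placeholders.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 200 \
    --volume-type gp2 \
    --encrypted \
    --kms-key-id alias/example-db-volume-key

aws ec2 attach-volume \
    --volume-id <volumeId> \
    --instance-id <instanceId> \
    --device /dev/sdf

You can also run aws ec2 enable-ebs-encryption-by-default once per Region so that new volumes are encrypted even if the --encrypted flag is omitted.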

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Key rotation and management is handled transparently by AWS
  • Data encryption keys are managed by the hypervisor, not by your EC2 instance
  • AWS KMS costs are comparatively low

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 4 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS custom key store

In this approach, you are again responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. And similar to Option 2, CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn't support the desired database engine type or version and when FIPS 140-2 Level 3 compliance is required.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Data encryption keys managed by the hypervisor, not by your EC2 instance
  • Keys are managed by a FIPS 140-2 Level 3 validated HSM

Challenges

  • You’re responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 5 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by LUKS

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. You also need to install LUKS onto the Linux instance to manage the encryption of data on Amazon EBS.
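As a rough sketch of what this involves, the commands below initialize LUKS on an attached EBS device and mount the mapped volume for the database to use; the device name, mapping name, and mount point are assumptions, and the interactive passphrase prompt stands in for whatever key-handling process you put in place.

# Minimal LUKS sketch (run on the EC2 instance). Device, mapper name, and
# mount point are placeholders; luksFormat destroys existing data on the device.
sudo cryptsetup luksFormat /dev/xvdf
sudo cryptsetup luksOpen /dev/xvdf dbdata

# Create a filesystem on the decrypted mapping and mount it for the database.
sudo mkfs.ext4 /dev/mapper/dbdata
sudo mkdir -p /var/lib/dbdata
sudo mount /dev/mapper/dbdata /var/lib/dbdata

Note that you, not AWS, own the passphrase or key file used here, which is exactly the key management burden called out in the challenges below.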

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Transparent encryption is managed by OS with LUKS

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Data encryption keys are managed directly on the EC2 instance, not by a dedicated key management system
  • Scaling must be vertical, which is slow and costly
  • LUKS support comes from the open-source community rather than AWS Support
  • Support for backup and recovery is LUKS specific and requires additional consideration
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Note: This approach limits you to Linux instances and requires the most technical knowledge and effort on your part. Options such as BitLocker and SQL Server Always Encrypted exist for Windows hosts, and their complexity and challenges are similar to those of LUKS.

Option 6 – Customer-managed database platform hosted on Amazon EC2 with database encryption and key management provided by TDE

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by the database engine to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Table, row, column, and cell level encryption are managed by TDE, reducing endpoint risks relating to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • Data encryption keys are managed directly on the EC2 instance
  • Scaling is dependent on TDE functionality and Amazon EC2 scaling
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Option 7 – Customer-managed database platform hosted on Amazon EC2 with database encryption performed by TDE and key management provided by CloudHSM

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by a CloudHSM cluster to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Wallet keys (KEKs) are managed by a FIPS 140-2 Level 3 validated HSM
  • Table, row, column, and cell level encryption are managed by TDE, reducing endpoint risks relating to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Integration and support of CloudHSM with TDE might vary
  • Scaling is dependent on TDE functionality, Amazon EC2 scaling, and the CloudHSM cluster
  • Data encryption keys are managed on the EC2 instance
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Summary

While you can operate in AWS much as you do in your on-premises environment, the preceding configurations and recommendations show how you can significantly reduce your challenges and increase your benefits by using cloud-native security services like AWS KMS, Amazon RDS, and CloudHSM. Specifically, using Amazon RDS with Amazon EBS volumes encrypted by AWS KMS provides a highly scalable, resilient, and secure way to manage your keys in AWS.

While there might be some architectural redesign and configuration work needed to move an on-premises database into Amazon RDS, you can leverage AWS services to help you meet your compliance requirements with less effort. By offloading the OS and database maintenance responsibility to AWS, you simultaneously reduce operational friction and increase security. By migrating this way, you can benefit from the scalability and resilience of the AWS global infrastructure and expertise. Lastly, to get started with migrating your database to AWS, I encourage you to use the AWS Database Migration Service.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Author

Scott Conklin

Scott is a Senior Security Consultant with AWS Professional Services (Global Specialty Practice). Based out of Chicago with 4 years tenure, he is an avid distance runner, crypto nerd, lover of unicorns, and enjoys camping, nature, playing Minecraft with his 3 kids, and binge watching Amazon Prime with his wife.

AWS Firewall Manager helps automate security group management: 3 scenarios

Post Syndicated from Sonakshi Pandey original https://aws.amazon.com/blogs/security/aws-firewall-manager-helps-automate-security-group-management-3-scenarios/

In this post, we walk you through scenarios that use AWS Firewall Manager to centrally manage security groups across your AWS Organizations implementation. Firewall Manager is a security management tool that helps you centralize, configure, and maintain AWS WAF rules, AWS Shield Advanced protections, and Amazon Virtual Private Cloud (Amazon VPC) security groups across AWS Organizations.

A multi-account strategy provides the highest level of resource isolation, and helps you to efficiently track costs and avoid running into any API limits. Creating a separate account for each project, business unit, and development stage also enforces logical separation of your resources.

As organizations innovate, developers are constantly updating applications and, in the process, setting up new resources. Managing security groups for new resources across multiple accounts becomes complex as the organization grows. To enable developers to have control over the configuration of their own applications, you can use Firewall Manager to automate the auditing and management of VPC security groups across multiple Amazon Web Services (AWS) accounts.

Firewall Manager enables you to create security group policies and automatically implement them. You can do this across your entire organization, or limit it to specified accounts and organizational units (OU). Also, Firewall Manager lets you use AWS Config to identify and review resources that don’t comply with the security group policy. You can choose to view the accounts and resources that are out of compliance without taking corrective action, or to automatically remediate noncompliant resources.

Scenarios where AWS Firewall Manager can help manage security groups

Scenario 1: Central security group management for required security groups

Let’s consider an example where you’re running an ecommerce website. You’ve decided to use Organizations to centrally manage billing and several aspects of access, compliance, security, and sharing resources across AWS accounts. As shown in the following figure, AWS accounts that belong to the same team are grouped into OUs. In this example, the organization has a foundational OU, and multiple business OUs—ecommerce, digital marketing, and product.

Figure 1: Overview of ecommerce website

The business OUs contain the development, test, and production accounts. Each of these accounts is managed by the developers in charge of development, test, and production stages used for the launch of the ecommerce website.

The product teams are responsible for configuring and maintaining the AWS environment according to the guidance from the security team. An intrusion detection system (IDS) has been set up to monitor the infrastructure for security events. The IDS architecture requires that an agent be installed on instances across multiple accounts. The IDS agent running on the Amazon Elastic Compute Cloud (Amazon EC2) instances protects the infrastructure from common security issues. The agent collects telemetry data used for analysis, and communicates with the central IDS instance that sits in the AWS security account. The central IDS instance analyzes the telemetry data and notifies the administrators with its findings.

For the host-based agent to communicate with the central system correctly, each Amazon EC2 instance must have specific inbound and outbound ports and specific destinations defined as allowed. To enable our product teams to focus on their applications, we want to use automation to ensure that the right network configuration is implemented so that instances can communicate with the central IDS.

You can address the preceding problem with Firewall Manager by implementing a common security group policy for required accounts. With Firewall Manager, you create a common IDS security group in the central security account and replicate it across other accounts in the ecommerce OU, as shown in the following figure.

Figure 2: Security groups central management with Firewall Manager

Changes made to these security groups can be seamlessly propagated to all the accounts. The changes can be tracked from the Firewall Manager console as shown in figure 3. Firewall Manager propagates changes to the security groups based on the tags attached to the Amazon EC2 instance.

As shown in figure 3, with Firewall Manager you can quickly view the compliance status for each policy by looking at how many accounts are included in the scope of the policy and how many out of those are compliant or non-compliant. Firewall Manager is also integrated with AWS Security Hub, which can trigger security automation based on findings.

Figure 3: Firewall Manager findings
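The common security group policy shown above can also be created programmatically from the Firewall Manager administrator account. The following AWS CLI sketch is one possible shape of that call, based on our reading of the put-policy API; the security group ID, OU ID, tag values, and Region are placeholder assumptions.

# Sketch: replicate a common IDS security group to tagged EC2 instances in an OU.
# The security group ID, OU ID, and tag values are placeholders.
aws fms put-policy --region us-east-1 --policy '{
  "PolicyName": "ids-agent-common-sg",
  "SecurityServicePolicyData": {
    "Type": "SECURITY_GROUPS_COMMON",
    "ManagedServiceData": "{\"type\":\"SECURITY_GROUPS_COMMON\",\"revertManualSecurityGroupChanges\":true,\"securityGroups\":[{\"id\":\"sg-0123456789abcdef0\"}]}"
  },
  "ResourceType": "AWS::EC2::Instance",
  "ResourceTags": [{"Key": "ids-agent", "Value": "true"}],
  "ExcludeResourceTags": false,
  "RemediationEnabled": true,
  "IncludeMap": {"ORGUNIT": ["ou-exampleroot-exampleou"]}
}'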

Scenario 2: Clean-up of unused and redundant security groups

Firewall Manager can also help manage the clean-up of unused and redundant security groups. In a development environment, instances are often terminated post testing, but the security groups associated with those instances might remain. We want to only remove the security groups that are no longer in use to avoid causing issues with running applications.

Figure 4: Ecommerce OU, accounts, and security groups

In our example, developers are testing features in a test account. In this scenario, once the testing is completed, the instances are terminated and the security groups remain in the account. The preceding figure shows unused security groups like Test1, Test2, and Test3 in the test account.

A Firewall Manager usage audit security group policy monitors your organization for unused and redundant security groups. You can configure Firewall Manager to automatically notify you of unused, redundant, or non-compliant security groups, and to automatically remove them. These actions are applied to existing and new accounts that are added to your organization.

Scenario 3: Audit and remediate overly permissive security groups across all AWS accounts

The security team is responsible for maintaining the security of the AWS environment and must monitor and remediate overly permissive security groups across all AWS accounts. Auditing security groups for overly permissive access is a critical security function and can become inefficient and time consuming when done manually.

You can use a Firewall Manager content audit security group policy to audit and enforce your organization's security policy for risky security groups, expressed as allowed or blocked security group rules. This enables you to set guardrails and monitor for overly permissive rules centrally. For example, we set an allow list policy that permits Secure Shell (SSH) access only from authorized IP addresses on the corporate network.

Firewall Manager enables you to create security group policies to protect all accounts across your organization. These policies are applied to accounts or to OUs that contain specific tags, as shown in figure 5. Using the Firewall Manager console, you can get a quick view of the non-compliant security groups across accounts in your organization. Additionally, Firewall Manager can be configured to send notifications to the security administrators or automatically remove non-compliant security groups.

In the policy scope, you can choose the AWS accounts this policy applies to, the resource type, and which resource to include based on the resource tags, as shown in figure 5.

Figure 5: Edit tags for policy scope
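A content audit policy can be created the same way. The sketch below audits security groups against an allow-list audit security group (for example, one that only permits SSH from the corporate network range); again, the policy shape follows our reading of the API, and all IDs are placeholders.

# Sketch: audit security groups against the rules defined in an allow-list
# audit security group. IDs and OU are placeholders.
aws fms put-policy --region us-east-1 --policy '{
  "PolicyName": "restrict-ssh-content-audit",
  "SecurityServicePolicyData": {
    "Type": "SECURITY_GROUPS_CONTENT_AUDIT",
    "ManagedServiceData": "{\"type\":\"SECURITY_GROUPS_CONTENT_AUDIT\",\"securityGroups\":[{\"id\":\"sg-0123456789abcdef0\"}],\"securityGroupAction\":{\"type\":\"ALLOW\"}}"
  },
  "ResourceType": "AWS::EC2::SecurityGroup",
  "ExcludeResourceTags": false,
  "RemediationEnabled": false,
  "IncludeMap": {"ORGUNIT": ["ou-exampleroot-exampleou"]}
}'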

Conclusion

This post shares a few core use cases that enable security practitioners to build the capability to centrally manage security groups across AWS Organizations. Developers can focus on building applications, while the audit and configuration of network controls is automated by Firewall Manager. The key use cases we discussed are:

  1. Common security group policies
  2. Content audit security groups policies
  3. Usage audit security group policies

Firewall Manager is useful in a dynamic and growing multi-account AWS environment. Follow the Getting Started with Firewall Manager guide to learn more about implementing this service in your AWS environment.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonakshi Pandey

Sonakshi is a Solutions Architect at Amazon Web Services. She helps customers migrate and optimize workloads on AWS. Sonakshi is based in Seattle and enjoys cooking, traveling, blogging, reading thriller novels, and spending time with her family.

Author

Laura Reith

Laura is a Solutions Architect at Amazon Web Services. Before AWS, she worked as a Solutions Architect in Taiwan focusing on physical security and retail analytics.

Author

Kevin Moraes

Kevin is a Partner Solutions Architect with AWS. Kevin enjoys working with customers and helping to build them in areas of Network Infrastructure, Security, and Migration conforming to best practices. When not at work, Kevin likes to travel, watch sports, and listen to music.

Isolating network access to your AWS Cloud9 environments

Post Syndicated from Brandon Wu original https://aws.amazon.com/blogs/security/isolating-network-access-to-your-aws-cloud9-environments/

In this post, I show you how to create isolated AWS Cloud9 environments for your developers without requiring ingress (inbound) access from the internet. I also walk you through optional steps to further isolate your AWS Cloud9 environment by removing egress (outbound) access. Until recently, AWS Cloud9 required you to allow ingress Secure Shell (SSH) access from authorized AWS Cloud9 IP addresses. Now AWS Cloud9 allows you to create and run your development environments within your isolated Amazon Virtual Private Cloud (Amazon VPC), without direct connectivity from the internet, adding an additional layer of security.

AWS Cloud9 is an integrated development environment (IDE) that lets you write, run, edit, and debug code using only a web browser. Developers who use AWS Cloud9 have access to an isolated environment where they can innovate, experiment, develop, and perform early testing without impacting the overall security and stability of other environments. By using AWS Cloud9, you can store your code securely in a version control system (like AWS CodeCommit), configure your AWS Cloud9 EC2 development environments to use encrypted Amazon Elastic Block Store (Amazon EBS) volumes, and share your environments within the same account.

Solution overview

Before enhanced virtual private cloud (VPC) support was available, AWS Cloud9 required you to allow ingress Secure Shell (SSH) access from authorized AWS Cloud9 IP addresses in order to use the IDE. The addition of private VPC support enables you to create and run AWS Cloud9 environments in private subnets without direct connectivity from the internet. You can use VPC security groups to configure the ingress and egress traffic that you allow, or choose to disallow all traffic.

Since this feature uses AWS Systems Manager to support AWS Cloud9 in private subnets, it's worth taking a minute to read and understand a bit about it before you continue. Systems Manager Session Manager provides an interactive shell connection between AWS Cloud9 and its associated Amazon Elastic Compute Cloud (Amazon EC2) instance in the Amazon Virtual Private Cloud (Amazon VPC). The AWS Cloud9 instance initiates an egress connection to the Session Manager service using the pre-installed Systems Manager agent. To use this feature, your developers' IAM policies must allow access to instances managed by Session Manager.

When you create an AWS Cloud9 no-ingress EC2 instance (with access via Systems Manager) into a private subnet, its security group doesn’t have an ingress rule to allow incoming network traffic. The security group does, however, have an egress rule that permits egress traffic from the instance. AWS Cloud9 requires this to download packages and libraries to keep the AWS Cloud9 IDE up to date.

If you want to prevent egress connectivity in addition to ingress traffic for the instance, you can configure Systems Manager to use an interface VPC endpoint. This allows you to restrict egress connections from your environment and ensure the encrypted connections between the AWS Cloud9 EC2 instance and Systems Manager are carried over the AWS global network. The architecture of accessing your AWS Cloud9 instance using Systems Manager and interface VPC endpoints is shown in Figure 1.
 

Figure 1: Accessing AWS Cloud9 environment via AWS Systems Manager and Interface VPC Endpoints

Note: The use of interface VPC endpoints incurs an additional charge for each hour your VPC endpoints remain provisioned. This is in addition to the AWS Cloud9 EC2 instance cost.

Prerequisites

You must have a VPC configured with an attached internet gateway, public and private subnets, and a network address translation (NAT) gateway created in your public subnet. Your VPC must also have DNS resolution and DNS hostnames options enabled. To learn more, you can visit Working with VPCs and subnets, Internet gateways, and NAT gateways.

You must also give your developers access to their AWS Cloud9 environments managed by Session Manager.

AWS Cloud9 requires egress access to the internet for some features, including downloading required libraries or packages needed for updates to the IDE and running AWS Lambda functions. If you don’t want to allow egress internet access for your environment, you can create your VPC without an attached internet gateway, public subnet, and NAT gateway.

Implement the solution

To set up AWS Cloud9 with access via Systems Manager:

  1. Optionally, if no egress access is required, set up interface VPC endpoints for Session Manager
  2. Create a no-ingress Amazon EC2 instance for your AWS Cloud9 environment

(Optional) Set up interface VPC endpoints for Session Manager

Note: For no-egress environments only.

You can skip this step if you don’t need your VPC to restrict egress access. If you need your environment to restrict egress access, continue.

Start by using the AWS Management Console to configure Systems Manager to use an interface VPC endpoint (powered by AWS PrivateLink). If you’d prefer, you can use this custom AWS CloudFormation template to configure the VPC endpoints.

Interface endpoints allow you to privately access Amazon EC2 and System Manager APIs by using a private IP address. This also restricts all traffic between your managed instances, Systems Manager, and Amazon EC2 to the Amazon network. Using the interface VPC endpoint, you don’t need to set up an internet gateway, a NAT device, or a virtual private gateway.
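If you prefer the command line, the following sketch creates the same three interface endpoints; the VPC, subnet, and security group IDs, and the Region, are placeholder assumptions. The console steps follow.

# Sketch: create interface VPC endpoints for the three Session Manager services.
# VPC, subnet, and security group IDs are placeholders.
for service in ssm ssmmessages ec2messages; do
  aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0123456789abcdef0 \
      --vpc-endpoint-type Interface \
      --service-name "com.amazonaws.us-east-1.${service}" \
      --subnet-ids subnet-0123456789abcdef0 \
      --security-group-ids sg-0123456789abcdef0 \
      --private-dns-enabled
done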

To set up interface VPC endpoints for Session Manager

  1. Create a VPC security group to allow ingress access over HTTPS (port 443) from the subnet where you will deploy your AWS Cloud9 environment. This is applied to your interface VPC endpoints to allow connections from your AWS Cloud9 instance to use Systems Manager.
  2. Create a VPC endpoint.
  3. In the list of Service Names, select com.amazonaws.<region>.ssm service as shown in Figure 2.
     
    Figure 2: AWS PrivateLink service selection filter

  4. Select your VPC and the private subnets that you want to associate the interface VPC endpoint with.
  5. Choose Enable for this endpoint for the Enable DNS name setting.
  6. Select the security group you created in Step 1.
  7. Add any optional tags for the interface VPC endpoint.
  8. Choose Create endpoint.
  9. Repeat Steps 2 through 8 to create interface VPC endpoints for the com.amazonaws.<region>.ssmmessages and com.amazonaws.<region>.ec2messages services.
  10. When all three interface VPC endpoints have a status of available, you can move to the next procedure.

Create a no-ingress Amazon EC2 instance for your AWS Cloud9 environment

Deploy a no-ingress Amazon EC2 instance for your AWS Cloud9 environment using the console. Optionally, you can use this custom AWS CloudFormation template to create the no-ingress Amazon EC2 instance. You can also use the AWS Command Line Interface, or AWS Cloud9 API to set up your AWS Cloud9 environment with access via Systems Manager.
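For reference, a minimal AWS CLI sketch of that call is shown below; the environment name, instance type, subnet, and image alias are assumptions (recent CLI versions require --image-id), and CONNECT_SSM is what selects access via Systems Manager.

# Sketch: create a no-ingress Cloud9 environment in a private subnet,
# accessed via Systems Manager. Name, subnet, and image alias are placeholders.
aws cloud9 create-environment-ec2 \
    --name isolated-dev-env \
    --description "No-ingress Cloud9 environment in a private subnet" \
    --instance-type t3.small \
    --image-id amazonlinux-2023-x86_64 \
    --subnet-id subnet-0123456789abcdef0 \
    --connection-type CONNECT_SSM \
    --automatic-stop-time-minutes 30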

As part of this process, AWS Cloud9 automatically creates three IAM resources pre-configured with the appropriate permissions:

  • An IAM service-linked role (AWSServiceRoleForAWSCloud9)
  • A service role (AWSCloud9SSMAccessRole)
  • An instance profile (AWSCloud9SSMInstanceProfile)

The AWSCloud9SSMAccessRole and AWSCloud9SSMInstanceProfile are attached to your AWS Cloud9 EC2 instance. This service role for Amazon EC2 is configured with the minimum permissions required to integrate with Session Manager. By default, AWS Cloud9 makes managed temporary AWS access credentials available to you in the environment. If you need to grant additional permissions to your AWS Cloud9 instance to access other services, you can create a new role and instance profile and attach it to your AWS Cloud9 instance.

By default, your AWS Cloud9 environment is created with a VPC security group that has no ingress access and allows egress access so that the AWS Cloud9 IDE can download the libraries or packages needed for updates to IDE plugins. You can optionally configure your AWS Cloud9 environment to restrict egress access by removing the egress rules in the security group. If you restrict egress access, some features won't work (for example, the AWS Lambda plugin and updates to IDE plugins).
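If you do decide to remove egress, a sketch of revoking the default allow-all egress rule with the AWS CLI looks like the following; the security group ID is a placeholder.

# Sketch: remove the default allow-all egress rule from the environment's
# security group. The group ID is a placeholder.
aws ec2 revoke-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'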

To use the console to create your AWS Cloud9 environment

  1. Navigate to the AWS Cloud9 console.
  2. Select Create environment on the top right of the console.
  3. Enter a Name and Description.
  4. Select Next step.
  5. Select Create a new no-ingress EC2 instance for your environment (access via Systems Manager) as shown in Figure 3.
     
    Figure 3: AWS Cloud9 environment settings

  6. Select your preferred Instance type, Platform, and Cost-saving setting.
  7. You can optionally configure the Network settings to select the Network (VPC) and private Subnet to create your AWS Cloud9 instance.
  8. Select Next step.

Your AWS Cloud9 environment is ready to use. You can access your AWS Cloud9 environment console via Session Manager using encrypted connections over the AWS global network as shown in Figure 4.
 

Figure 4: AWS Cloud9 instance console access

You can see that this AWS Cloud9 connection is using Session Manager by navigating to the Session Manager console and viewing the active sessions as shown in Figure 5.
 

Figure 5: AWS Systems Manager Session Manager active sessions

Summary

Security teams are charged with providing secure operating environments without inhibiting developer productivity. With the ability to deploy your AWS Cloud9 environment instances in a private subnet, you can provide a seamless experience for developing applications using the AWS Cloud9 IDE while enabling security teams to enforce key security controls to protect their corporate networks and intellectual property.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Cloud9 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Brandon Wu

Brandon is a security solutions architect helping financial services organizations secure their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and experimenting in the kitchen.

On-Demand SCIM provisioning of Azure AD to AWS SSO with PowerShell

Post Syndicated from Natalie Doerr original https://aws.amazon.com/blogs/security/on-demand-scim-provisioning-of-azure-ad-to-aws-sso-with-powershell/

In this post, I will demonstrate how you can use a PowerShell script to initiate an on-demand synchronization between Azure Active Directory and AWS Single Sign-On (AWS SSO) and avoid the default 40-minute synchronization schedule between both identity providers. This solution helps enterprises quickly synchronize changes made to users, groups, or permissions within Azure AD with AWS SSO. This allows user or permission changes to be quickly reflected in associated AWS accounts.

Prerequisites

You need the following to complete this walkthrough:

This post focuses on the steps needed to set up the on-demand sync solution. You can find specifics on how to set up and use PowerShell and the Azure PowerShell modules at Installing Azure PowerShell.
 

Figure 1: Triggering the SCIM Endpoint to sync all users and groups

Grant permission to the Graph API to access the Default Directory in Azure AD

To get started, grant the permissions needed for the application to have access to the directory endpoint.

To grant permissions

  1. Sign in to the Azure Portal and navigate to the Azure AD dashboard.
  2. From the left navigation pane, select App registrations. If you don’t see your application listed, select the All applications tab.
    For this example, I’m using an application named AWS.
     
    Figure 2: Select the AWS app registration

  3. Choose API permissions from the navigation pane.
  4. Choose the Add a permission option.
     
    Figure 3: Select the Add API permission

  5. From the settings page that opens, choose the Microsoft Graph option.
     
    Figure 4: Request API permissions

    Under What type of permissions does your application require, select Delegated permissions and enter directory.readwrite.all in the permissions search field. Select Directory.ReadWrite.All and choose Add permissions at the bottom of the page.
     

    Figure 5: Request API permissions – Add permissions

  6. On the API permissions page, choose Grant admin consent for Default Directory and select Yes.
     
    Figure 6: Grant permission for the account to have administrator permissions

Create a certificate and secret to access the application

To get started, create a certificate and secret which grants secure access to the AWS application.

To create a certificate and secret

  1. Choose Certificate & secrets from the left navigation menu and then choose New client secret.
     
    Figure 7: Creating a client secret for 1 year

  2. Select the desired expiration period for the client secret.
  3. Provide a description and choose Add.
    1. Copy the value of the secret that's generated and save it to use later in this process.
    2. After you’ve saved the value to use later, select Home from the top left corner of the screen.
    Figure 8: Make sure you click Copy to clipboard to store the value of the secret

Create a user with permissions to run the code

Now that you’ve given your application access to the directory, let’s create a user and assign the proper permissions to run the code.

To create a user and assign permissions

  1. Choose Azure Active Directory from the Azure services list.
  2. Choose Users and select New user. The User name, First name, and Last name fields are required. In this example, I set the User name and First name to Auth and the Last name to User.
    1. Take note of the password that is set for this user and save it to use later.
    2. Once completed, choose Create.
    Figure 9: Create a user in Azure AD

  3. Select the newly created user from the list.
    1. On the left navigation pane, select Assigned roles.
    2. Choose Add assignments.
    3. Choose Hybrid identity administrator and select Add.
    Figure 10: Assign the user the role to trigger the API

  4. Select Default Directory from the top of the navigation pane.
    1. Choose Enterprise applications.
    2. Choose the AWS application.
    3. Select Assign users and groups.
    Figure 11: Azure Enterprise applications – Assign users and groups

  5. Choose + Add user at the top of the window.
    1. Select the user you created earlier. I select Auth as that was the user I created earlier.
    2. Choose Select and then Assign.
    Figure 12: Select the user we created earlier from Figure 9

     

    Figure 13: Assign the user to the application

  6. Now that you’ve added the user, you can see that the user is assigned to the application.
     
    Figure 14: Screen now showing that the user has been assigned to the application

  7. It's recommended that you sign in to the Azure portal as the user you just created, using a new incognito or private browser session. As part of the first sign-in, you'll be prompted to change the password.

Prerequisites to trigger the SCIM endpoint

You need the following items to run the PowerShell code that triggers the endpoint.

  1. From the application registration, retrieve the items shown below. Note that you must use the client secret value that you saved earlier when you created it.
    • Tenant ID
    • Display name
    • Application ID
    • Client secret
    • User name
    • Password
  2. Copy the items to a notepad in the preceding order so you can enter all of them through a single copy and paste action while running the script.
  3. From the menu, select Azure Active Directory.
  4. Choose App registrations and select the AWS App that was set up.
  5. Copy the Application (client) ID and the Directory (tenant) ID.
Figure 15: App registration contains all the items needed for the PowerShell script

Trigger the SCIM endpoint with PowerShell

Now that you’ve completed all of the previous steps, you need to copy the code from the GitHub repository to your local machine and run it. We’ve configured the code to run manually, but you can also automate it to trigger an Azure Automation runbook when users are added to Azure through Alerts. You can also configure CloudWatch Events to run a Lambda function at periodic intervals.
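Under the hood, the script starts an on-demand provisioning cycle by calling the Microsoft Graph synchronization API. The following curl sketch illustrates the idea only; it assumes you already hold a valid Graph access token and have looked up the service principal object ID and provisioning job ID (all values shown are placeholders), and depending on your tenant the endpoint may live under the beta rather than the v1.0 version of the API.

# Illustration only: start an on-demand provisioning (synchronization) cycle.
# TOKEN, SP_OBJECT_ID, and JOB_ID are placeholders you must supply.
SP_OBJECT_ID="00000000-0000-0000-0000-000000000000"
JOB_ID="<provisioningJobId>"

curl -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://graph.microsoft.com/v1.0/servicePrincipals/${SP_OBJECT_ID}/synchronization/jobs/${JOB_ID}/start"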

To trigger the SCIM endpoint

  1. Copy the code from the GitHub repository.
  2. Save the code using the code editor of your choice, or you can download Visual Studio Code. Give the file a user-friendly name, such as Sync.ps1.
  3. Navigate to the location where you saved the file and run ./Sync.ps1.
  4. When prompted, enter the values from the notepad. You can paste these all at one time so you don’t have to copy and paste each individual item.

    Note: When copying and pasting in Windows, choose the PowerShell icon, then Edit > Paste.

     

    Figure 16: Windows Command Prompt – Select Paste to copy all items needed to trigger the sync

After you paste the values into the PowerShell window, you see the script input as shown in the following screenshot. The client secret and password are secure values and are masked for security purposes.
 

Figure 17: PowerShell script with input values pasted in

After the job has started in PowerShell, two messages are displayed: one indicating that synchronization is starting, and a second when synchronization has completed. Both are shown in the following figure.
 

Figure 18: Output from a successful run of the PowerShell script

View the synchronization status and logs

To verify that the job ran successfully, you can check the provisioning status in the Azure portal, which shows the completion time of the most recent cycle along with the current status.

To view the status and logs

  1. From the menu, choose Azure Active Directory.
  2. Choose Enterprise applications and select the AWS App.
  3. From the left navigation menu, choose Provisioning and then choose View provisioning details. This displays the last time the sync completed.
     
    Figure 19: View the Provisioning details about the job

Summary

In this post, I demonstrate how you can use a PowerShell script to trigger the SCIM endpoint to on-demand synchronize Azure AD with AWS Single Sign-On. You can find the code in this GitHub repository and use it to synchronize user and group changes on demand.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Aidan Keane

Aidan is a Senior Technical Account Manager for AWS Enterprise Support. He has been working with Cloud technologies for more than 5 years. Outside of technology, he is a sports enthusiast who enjoys golf, biking, and watching Liverpool FC. He spends his free time with his family and enjoys traveling to Ireland and South America.

Cross-account and cross-region deployment using GitHub actions and AWS CDK

Post Syndicated from DAMODAR SHENVI WAGLE original https://aws.amazon.com/blogs/devops/cross-account-and-cross-region-deployment-using-github-actions-and-aws-cdk/

GitHub Actions is a feature on GitHub’s popular development platform that helps you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

A cross-account deployment strategy is a CI/CD pattern or model in AWS. In this pattern, you have a designated AWS account called tools, where all CI/CD pipelines reside. Deployment is carried out by these pipelines across other AWS accounts, which may correspond to dev, staging, or prod. For more information about a cross-account strategy in reference to CI/CD pipelines on AWS, see Building a Secure Cross-Account Continuous Delivery Pipeline.

In this post, we show you how to use GitHub Actions to deploy an AWS Lambda-based API to an AWS account and Region using the cross-account deployment strategy.

Using GitHub Actions may have associated costs in addition to the cost associated with the AWS resources you create. For more information, see About billing for GitHub Actions.

Prerequisites

Before proceeding any further, you need to identify and designate two AWS accounts required for the solution to work:

  • Tools – Where you create an AWS Identity and Access Management (IAM) user for GitHub Actions to use to carry out deployment.
  • Target – Where deployment occurs. You can call this as your dev/stage/prod environment.

You also need to create two AWS account profiles in ~/.aws/credentials for the tools and target accounts, if you don’t already have them. These profiles need to have sufficient permissions to run an AWS Cloud Development Kit (AWS CDK) stack. They should be your private profiles and only be used during the course of this use case. So, it should be fine if you want to use admin privileges. Don’t share the profile details, especially if it has admin privileges. I recommend removing the profile when you’re finished with this walkthrough. For more information about creating an AWS account profile, see Configuring the AWS CLI.

Solution overview

You start by building the necessary resources in the tools account (an IAM user with permissions to assume a specific IAM role from the target account to carry out deployment). For simplicity, we refer to this IAM role as the cross-account role, as specified in the architecture diagram.

You also create the cross-account role in the target account that trusts the IAM user in the tools account and provides the required permissions for AWS CDK to bootstrap and initiate creating an AWS CloudFormation deployment stack in the target account. GitHub Actions uses the tools account IAM user credentials to assume the cross-account role to carry out deployment.
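Conceptually, this is the same as the tools account user calling STS AssumeRole. The sketch below shows the equivalent AWS CLI call, with the target account ID and role name taken from this walkthrough and the profile name as a placeholder.

# Sketch: what the workflow does under the hood - assume the cross-account
# role from the tools account. The profile and account ID are placeholders.
aws sts assume-role \
    --role-arn arn:aws:iam::<target-account-id>:role/git-action-cross-account-role \
    --role-session-name GitActionDeploymentSession \
    --duration-seconds 1200 \
    --profile <AWS-TOOLS-ACCOUNT-PROFILE-NAME>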

In addition, you create an AWS CloudFormation execution role in the target account, which the AWS CloudFormation service assumes during deployment. This role has permissions to create your API resources, such as a Lambda function and Amazon API Gateway, in the target account. AWS CDK passes this role to the AWS CloudFormation service.

You then configure your tools account IAM user credentials in your Git secrets and define the GitHub Actions workflow, which triggers upon pushing code to a specific branch of the repo. The workflow then assumes the cross-account role and initiates deployment.

The following diagram illustrates the solution architecture and shows AWS resources across the tools and target accounts.

Architecture diagram

Creating an IAM user

You start by creating an IAM user called git-action-deployment-user in the tools account. The user needs to have only programmatic access.

  1. Clone the GitHub repo aws-cross-account-cicd-git-actions-prereq and navigate to folder tools-account. Here you find the JSON parameter file src/cdk-stack-param.json, which contains the parameter CROSS_ACCOUNT_ROLE_ARN, which represents the ARN for the cross-account role we create in the next step in the target account. In the ARN, replace <target-account-id> with the actual account ID for your designated AWS target account.
  2. Run deploy.sh by passing the name of the tools AWS account profile you created earlier. The script compiles the code, builds a package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd aws-cross-account-cicd-git-actions-prereq/tools-account/
./deploy.sh "<AWS-TOOLS-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in the tools account: CDKToolkit and cf-GitActionDeploymentUserStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an Amazon Simple Storage Service (Amazon S3) bucket needed to hold deployment assets such as a CloudFormation template and Lambda code package. cf-GitActionDeploymentUserStack creates the IAM user with permission to assume git-action-cross-account-role (which you create in the next step). On the Outputs tab of the stack, you can find the user access key and the AWS Secrets Manager ARN that holds the user secret. To retrieve the secret, you need to go to Secrets Manager. Record the secret to use later.

Stack that creates IAM user with its secret stored in secrets manager

Creating a cross-account IAM role

In this step, you create two IAM roles in the target account: git-action-cross-account-role and git-action-cf-execution-role.

git-action-cross-account-role provides required deployment-specific permissions to the IAM user you created in the last step. The IAM user in the tools account can assume this role and perform the following tasks:

  • Upload deployment assets such as the CloudFormation template and Lambda code package to a designated S3 bucket via AWS CDK
  • Create a CloudFormation stack that deploys API Gateway and Lambda using AWS CDK

AWS CDK passes git-action-cf-execution-role to AWS CloudFormation to create, update, and delete the CloudFormation stack. It has permissions to create API Gateway and Lambda resources in the target account.

To deploy these two roles using AWS CDK, complete the following steps:

  1. In the already cloned repo from the previous step, navigate to the folder target-account. This folder contains the JSON parameter file cdk-stack-param.json, which contains the parameter TOOLS_ACCOUNT_USER_ARN, which represents the ARN for the IAM user you previously created in the tools account. In the ARN, replace <tools-account-id> with the actual account ID for your designated AWS tools account.
  2. Run deploy.sh by passing the name of the target AWS account profile you created earlier. The script compiles the code, builds the package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd ../target-account/
./deploy.sh "<AWS-TARGET-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in your target account: CDKToolkit and cf-CrossAccountRolesStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an S3 bucket to hold deployment assets such as the CloudFormation template and Lambda code package. The cf-CrossAccountRolesStack creates the two IAM roles we discussed at the beginning of this step. The IAM role git-action-cross-account-role now has the IAM user added to its trust policy. On the Outputs tab of the stack, you can find these roles’ ARNs. Record these ARNs as you conclude this step.

Stack that creates IAM roles to carry out cross account deployment

Configuring secrets

One of the GitHub actions we use is aws-actions/configure-aws-credentials@v1. This action configures AWS credentials and Region environment variables for use in the GitHub Actions workflow. The AWS CDK CLI detects the environment variables to determine the credentials and Region to use for deployment.

For our cross-account deployment use case, aws-actions/configure-aws-credentials@v1 takes three pieces of sensitive information besides the Region: an AWS access key ID, an AWS secret access key, and the ARN of the cross-account role to assume. Secrets are recommended for storing sensitive pieces of information in the GitHub repo; they keep the information in an encrypted format. For more information about referencing secrets in the workflow, see Creating and storing encrypted secrets.

Before we continue, you need your own empty GitHub repo to complete this step. Use an existing repo if you have one, or create a new repo. You configure secrets in this repo. In the next section, you check in the code provided by the post to deploy a Lambda-based API CDK stack into this repo.

  1. On the GitHub console, navigate to your repo settings and choose the Secrets tab.
  2. Add a new secret with name as TOOLS_ACCOUNT_ACCESS_KEY_ID.
  3. Copy the access key ID from the output OutGitActionDeploymentUserAccessKey of the stack cf-GitActionDeploymentUserStack in the tools account.
  4. Enter the ID in the Value field.
  5. Repeat this step to add two more secrets:
    • TOOLS_ACCOUNT_SECRET_ACCESS_KEY (value retrieved from the AWS Secrets Manager in tools account)
    • CROSS_ACCOUNT_ROLE (value copied from the output OutCrossAccountRoleArn of the stack cf-CrossAccountRolesStack in target account)

You should now have three secrets as shown below.

All required git secrets

Deploying with GitHub Actions

As the final step, first clone your empty repo where you set up your secrets. Download and copy the code from the GitHub repo into your empty repo. The folder structure of your repo should mimic the folder structure of source repo. See the following screenshot.

Folder structure of the Lambda API code

We can take a detailed look at the code base. First and foremost, we use TypeScript to deploy our Lambda API, so we need an AWS CDK app and AWS CDK stack. The app is defined in app.ts under the repo root folder location. The stack definition is located under the stack-specific folder src/git-action-demo-api-stack. The Lambda code is located under the Lambda-specific folder src/git-action-demo-api-stack/lambda/git-action-demo-lambda.

We also have a deployment script deploy.sh, which compiles the app and Lambda code, packages the Lambda code into a .zip file, bootstraps the app by copying the assets to an S3 bucket, and deploys the stack. To deploy the stack, AWS CDK has to pass CFN_EXECUTION_ROLE to AWS CloudFormation; this role is configured in src/params/cdk-stack-param.json. Replace <target-account-id> with your own designated AWS target account ID.

Update cdk-stack-param.json in git-actions-cross-account-cicd repo with TARGET account id

Finally, we define the GitHub Actions workflow under the .github/workflows/ folder per the specifications defined by GitHub Actions. GitHub Actions automatically identifies the workflow in this location and triggers it if conditions match. Our workflow .yml file is named in the format cicd-workflow-<region>.yml, where <region> in the file name identifies the deployment Region in the target account. In our use case, we use us-east-1 and us-west-2; the Region is also defined as an environment variable in each workflow.

The GitHub Actions workflow has a standard hierarchy. The workflow is a collection of jobs, which are collections of one or more steps. Each job runs on a virtual machine called a runner, which can either be GitHub-hosted or self-hosted. We use the GitHub-hosted runner ubuntu-latest because it works well for our use case. For more information about GitHub-hosted runners, see Virtual environments for GitHub-hosted runners. For more information about the software preinstalled on GitHub-hosted runners, see Software installed on GitHub-hosted runners.

The workflow also has a trigger condition specified at the top. You can schedule the trigger based on the cron settings or trigger it upon code pushed to a specific branch in the repo. See the following code:

name: Lambda API CICD Workflow
# This workflow is triggered on pushes to the repository branch master.
on:
  push:
    branches:
      - master

# Initializes environment variables for the workflow
env:
  REGION: us-east-1 # Deployment Region

jobs:
  deploy:
    name: Build And Deploy
    # This job runs on Linux
    runs-on: ubuntu-latest
    steps:
      # Checkout code from git repo branch configured above, under folder $GITHUB_WORKSPACE.
      - name: Checkout
        uses: actions/checkout@v2
      # Sets up AWS profile.
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.TOOLS_ACCOUNT_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.TOOLS_ACCOUNT_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}
          role-to-assume: ${{ secrets.CROSS_ACCOUNT_ROLE }}
          role-duration-seconds: 1200
          role-session-name: GitActionDeploymentSession
      # Installs CDK and other prerequisites
      - name: Prerequisite Installation
        run: |
          sudo npm install -g aws-cdk
          cdk --version
          aws s3 ls
      # Build and Deploy CDK application
      - name: Build & Deploy
        run: |
          cd $GITHUB_WORKSPACE
          ls -a
          chmod 700 deploy.sh
          ./deploy.sh

For more information about triggering workflows, see Triggering a workflow with events.

We have configured a single job workflow for our use case that runs on ubuntu-latest and is triggered upon a code push to the master branch. When you create an empty repo, master branch becomes the default branch. The workflow has four steps:

  1. Check out the code from the repo, for which we use a standard Git action, actions/checkout@v2. The code is checked out into a folder defined by the variable $GITHUB_WORKSPACE, so it becomes the root location of our code.
  2. Configure AWS credentials using aws-actions/configure-aws-credentials@v1. This action is configured as explained in the previous section.
  3. Install your prerequisites. In our use case, the only prerequisite we need is AWS CDK. Upon installing AWS CDK, we can do a quick test using the AWS Command Line Interface (AWS CLI) command aws s3 ls. If cross-account access was successfully established in the previous step of the workflow, this command should return a list of buckets in the target account.
  4. Navigate to root location of the code $GITHUB_WORKSPACE and run the deploy.sh script.

You can check in the code into the master branch of your repo. This should trigger the workflow, which you can monitor on the Actions tab of your repo. The commit message you provide is displayed for the respective run of the workflow.

Workflow for region us-east-1 Workflow for region us-west-2

You can choose the workflow link and monitor the log for each individual step of the workflow.

Git action workflow steps

In the target account, you should now see the CloudFormation stack cf-GitActionDemoApiStack in us-east-1 and us-west-2.

Lambda API stack in us-east-1 Lambda API stack in us-west-2

The API resource URL DocUploadRestApiResourceUrl is located on the Outputs tab of the stack. You can invoke your API by choosing this URL on the browser.

API Invocation Output

Clean up

To remove all the resources from the target and tools accounts, complete the following steps in their given order:

  1. Delete the CloudFormation stack cf-GitActionDemoApiStack from the target account. This step removes the Lambda and API Gateway resources and their associated IAM roles.
  2. Delete the CloudFormation stack cf-CrossAccountRolesStack from the target account. This removes the cross-account role and CloudFormation execution role you created.
  3. Go to the CDKToolkit stack in the target account and note the BucketName on the Output tab. Empty that bucket and then delete the stack.
  4. Delete the CloudFormation stack cf-GitActionDeploymentUserStack from the tools account. This removes the git-action-deployment-user IAM user.
  5. Go to the CDKToolkit stack in the tools account and note the BucketName on the Output tab. Empty that bucket and then delete the stack.

Security considerations

Cross-account IAM roles are very powerful and need to be handled carefully. For this post, we strictly limited the cross-account IAM role to specific Amazon S3 and CloudFormation permissions. This makes sure that the cross-account role can only do those things. The actual creation of Lambda, API Gateway, and Amazon DynamoDB resources happens via the AWS CloudFormation IAM role, which AWS CloudFormation assumes in the target AWS account.

Make sure that you use secrets to store your sensitive workflow configurations, as specified in the section Configuring secrets.

Conclusion

In this post, we showed how you can use GitHub's popular software development platform to securely deploy to AWS accounts and Regions by using GitHub Actions and the AWS CDK.

Build your own GitHub Actions CI/CD workflow as shown in this post.

About the author

 

Damodar Shenvi Wagle is a Cloud Application Architect at AWS Professional Services. His areas of expertise include architecting serverless solutions, CI/CD, and automation.