Tag Archives: Security, Identity & Compliance

New AWS workbook for New Zealand financial services customers

Post Syndicated from Julian Busic original https://aws.amazon.com/blogs/security/new-aws-workbook-for-new-zealand-financial-services-customers/

We are pleased to announce a new AWS workbook designed to help New Zealand financial services customers align with the Reserve Bank of New Zealand (RBNZ) Guidance on Cyber Resilience.

The RBNZ Guidance on Cyber Resilience sets out the RBNZ's expectations for its regulated entities regarding cyber resilience, and aims to raise awareness of, and promote, cyber resilience across the financial sector, especially at the board and senior management level. The guidance applies to all entities regulated by the RBNZ, including registered banks, licensed non-bank deposit takers, licensed insurers, and designated financial market infrastructures.

While the RBNZ describes its guidance as “a set of recommendations rather than requirements” which are not legally enforceable, it also states that it expects regulated entities to “proactively consider how their current approach to cyber risk management lines up with the recommendations in [the] guidance and look for [opportunities] for improvement as early as possible.”

Security and compliance is a shared responsibility between AWS and the customer. This division of responsibility is commonly referred to as the AWS Shared Responsibility Model, in which AWS is responsible for security of the cloud, and the customer is responsible for security in the cloud. The new AWS Reserve Bank of New Zealand Guidance on Cyber Resilience (RBNZ-GCR) Workbook helps customers align with the RBNZ Guidance on Cyber Resilience by providing control mappings for the following:

  • Security in the cloud by mapping RBNZ Guidance on Cyber Resilience practices to the five pillars of the AWS Well-Architected Framework.
  • Security of the cloud by mapping RBNZ Guidance on Cyber Resilience practices to control statements from the AWS Compliance Program.

The downloadable AWS RBNZ-GCR Workbook contains two embedded formats:

  • Microsoft Excel – Coverage includes AWS responsibility control statements and Well-Architected Framework best practices.
  • Dynamic HTML – Coverage is the same as in the Microsoft Excel format, with the added feature that the Well-Architected Framework best practices are mapped to AWS Config managed rules and Amazon GuardDuty findings, where available or applicable.

The AWS RBNZ-GCR Workbook is available for download in AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Julian Busic

Julian is a Security Solutions Architect with a focus on regulatory engagement. He works with our customers, their regulators, and AWS teams to help customers raise the bar on secure cloud adoption and usage. Julian has over 15 years of experience working in risk and technology across the financial services industry in Australia and New Zealand.

Introducing the Security at the Edge: Core Principles whitepaper

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/introducing-the-security-at-the-edge-core-principles-whitepaper/

Amazon Web Services (AWS) recently released the Security at the Edge: Core Principles whitepaper. Today’s business leaders know that it’s critical to ensure that both the security of their environments and the security present in traditional cloud networks are extended to workloads at the edge. The whitepaper provides security executives with the foundations for implementing a defense-in-depth strategy for security at the edge by addressing three areas of edge security:

  • AWS services at AWS edge locations
  • How those services and others can be used to implement the best practices outlined in the design principles of the AWS Well-Architected Framework Security Pillar
  • Additional AWS edge services, which customers can use to help secure their edge environments or expand operations into new, previously unsupported environments

Together, these elements offer core principles for designing a security strategy at the edge, and demonstrate how AWS services can provide a secure environment extending from the core cloud to the edge of the AWS network and out to customer edge devices and endpoints. You can find more information in the Security at the Edge: Core Principles whitepaper.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Author

Jana Kay

Since 2018, Jana has been a cloud security strategist with the AWS Security Growth Strategies team. She develops innovative ways to help AWS customers achieve their objectives, such as security table top exercises and other strategic initiatives. Previously, she was a cyber, counter-terrorism, and Middle East expert for 16 years in the Pentagon’s Office of the Secretary of Defense.

Enabling data classification for an Amazon RDS database with Amazon Macie

Post Syndicated from Bruno Silveira original https://aws.amazon.com/blogs/security/enabling-data-classification-for-amazon-rds-database-with-amazon-macie/

Customers have been asking us about ways to use Amazon Macie data discovery on their Amazon Relational Database Service (Amazon RDS) instances. This post shows how to do so by using AWS Database Migration Service (AWS DMS) to extract data from Amazon RDS, store it in Amazon Simple Storage Service (Amazon S3), and then classify the data using Macie. The resulting Macie findings are also made available for the appropriate teams to query with Amazon Athena.

The challenge

Let’s suppose you need to find sensitive data in an RDS-hosted database using Macie, which currently supports only S3 as a data source. You therefore need to extract the data from RDS and store it in S3. In addition, you need an interface for audit teams to review these findings.

Solution overview

Figure 1: Solution architecture workflow

The architecture of the solution shown in Figure 1 works as follows:

  1. A MySQL engine running on RDS is populated with the Sakila sample database.
  2. A DMS task connects to the Sakila database, transforms the data into a set of Parquet compressed files, and loads them into the dcp-macie bucket.
  3. A Macie classification job analyzes the objects in the dcp-macie bucket using a combination of techniques such as machine learning and pattern matching to determine whether the objects contain sensitive data and to generate detailed reports on the findings.
  4. Amazon EventBridge routes the Macie findings reports events to Amazon Kinesis Data Firehose.
  5. Kinesis Data Firehose stores these reports in the dcp-glue bucket.
  6. S3 event notification triggers an AWS Lambda function whenever an object is created in the dcp-glue bucket.
  7. The Lambda function named Start Glue Workflow starts an AWS Glue workflow (a minimal sketch of such a function follows this list).
  8. The Glue workflow transforms the data from JSONL to the Apache Parquet file format and places it in the dcp-athena bucket. Parquet’s binary, columnar format provides better query performance and more efficient storage.
  9. Athena is used to query and visualize data generated by Macie.

Note: For better readability, the S3 bucket nomenclature omits the suffix containing the AWS Region and AWS account ID used to meet the global uniqueness naming requirement (for example, dcp-athena-us-east-1-123456789012).
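
For illustration, the following is a minimal sketch of the kind of Lambda function described in step 7. The environment variable name is an assumption for this sketch, and the function deployed by the CloudFormation template may differ.

import os

import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Name of the Glue workflow to start; an assumed environment variable,
    # not necessarily the one used by the template's actual function.
    workflow_name = os.environ["GLUE_WORKFLOW_NAME"]
    run = glue.start_workflow_run(Name=workflow_name)
    print(f"Started Glue workflow {workflow_name}, run ID {run['RunId']}")
    return run["RunId"]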

The Sakila database schema includes the following tables:

  • actor
  • address
  • category
  • city
  • country
  • customer

Building the solution

Prerequisites

Before configuring the solution, the AWS Identity and Access Management (IAM) user must have appropriate access granted for the services used in this solution.

You can find an IAM policy with the required permissions here.

Step 1 – Deploying the CloudFormation template

You’ll use CloudFormation to quickly and consistently provision the AWS resources illustrated in Figure 1. Using a pre-built template file, it creates the infrastructure with an infrastructure-as-code (IaC) approach.

  1. Download the CloudFormation template.
  2. Go to the CloudFormation console.
  3. Select the Stacks option in the left menu.
  4. Select Create stack and choose With new resources (standard).
  5. On Step 1 – Specify template, choose Upload a template file, select Choose file, and select the file template.yaml downloaded previously.
  6. On Step 2 – Specify stack details, enter a name of your preference for Stack name. You might also adjust the parameters as needed, like the parameter CreateRDSServiceRole to create a service role for RDS if it does not exist in the current account.
  7. On Step 3 – Configure stack options, select Next.
  8. On Step 4 – Review, check the box for I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  9. Wait for the stack to show status CREATE_COMPLETE.

Note: It is expected that provisioning will take around 10 minutes to complete.

Step 2 – Running an AWS DMS task

To extract the data from the Amazon RDS instance, you need to run an AWS DMS task. This makes the data available for Amazon Macie in an S3 bucket in Parquet format.
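
If you prefer to script this step instead of using the console, the following boto3 sketch starts the same task. The task ARN is a placeholder, and the reload-target type restarts the task with a full reload of the target, which matches the console's Restart option.

import boto3

dms = boto3.client("dms")

# Placeholder ARN for the rdstos3task replication task created by the stack.
task_arn = "arn:aws:dms:us-east-1:111111111111:task:EXAMPLETASKID"

response = dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    # "reload-target" restarts the task with a full reload of the target bucket.
    StartReplicationTaskType="reload-target",
)
print(response["ReplicationTask"]["Status"])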

  1. Go to the AWS DMS console.
  2. In the left menu, select Database migration tasks.
  3. Select the task Identifier named rdstos3task.
  4. Select Actions.
  5. Select the option Restart/Resume.

When the Status changes to Load complete, the task has finished and you will be able to see the migrated data in your target bucket (dcp-macie).

Inside each folder, you can see Parquet files with names similar to LOAD00000001.parquet. Now you can use Macie to discover whether the database contents exported to S3 contain sensitive data.

Step 3 – Running a classification job with Amazon Macie

Now you need to create a data classification job so you can assess the contents of your S3 bucket. The job you create will run once and evaluate the complete contents of your S3 bucket to determine whether it can identify PII among the data. As mentioned earlier, this job only uses the managed identifiers available with Macie – you could also add your own custom identifiers.
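
If you prefer to create the job programmatically, the following boto3 sketch creates an equivalent one-time job with 100 percent sampling. The job name, account ID, and bucket name are placeholders to adjust for your environment.

import uuid

import boto3

macie = boto3.client("macie2")

response = macie.create_classification_job(
    clientToken=str(uuid.uuid4()),
    jobType="ONE_TIME",
    name="dcp-rds-classification",              # illustrative job name
    samplingPercentage=100,
    s3JobDefinition={
        "bucketDefinitions": [
            {
                "accountId": "123456789012",    # placeholder account ID
                "buckets": ["dcp-macie-us-east-1-123456789012"],
            }
        ]
    },
)
print(response["jobId"])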

  1. Go to the Macie console.
  2. Select the S3 buckets option in the left menu.
  3. Choose the S3 bucket dcp-macie containing the output data from the DMS task. You may need to wait a minute and select the Refresh icon if the bucket names do not display.

  4. Select Create job.
  5. Select Next to continue.
  6. Create a job with the following scope.
    1. Sensitive data Discovery options: One-time job
    2. Sampling Depth: 100%
    3. Leave all other settings with their default values
  7. Select Next to continue.
  8. Select Next again to skip past the Custom data identifiers section.
  9. Give the job a name and description.
  10. Select Next to continue.
  11. Verify the details of the job you have created and select Submit to continue.

You will see a green banner stating that The Job was successfully created. The job can take up to 15 minutes to conclude and the Status will change from Active to Complete. To open the findings from the job, select the job’s check box, choose Show results, and select Show findings.
 

Figure 2: Macie Findings screen

Note: You can navigate in the findings and select each checkbox to see the details.

Step 4 – Enabling querying on classification job results with Amazon Athena

  1. Go to the Athena console and open the Query editor.
  2. If it’s your first time using Athena, you will see the message Before you run your first query, you need to set up a query result location in Amazon S3. Learn more. Select the link presented with this message.
  3. In the Settings window, choose Select and then choose the bucket dcp-assets to store the Athena query results.
  4. (Optional) To store the query results encrypted, check the box for Encrypt query results and select your preferred encryption type. To learn more about Amazon S3 encryption types, see Protecting data using encryption.
  5. Select Save.

Step 5 – Query Amazon Macie results with Amazon Athena

It might take a few minutes for the data to complete the flow between Amazon Macie and AWS Glue. After it’s finished, you’ll be able to see the table dcp_athena within the database dcp in the Athena console.

Then, select the three dots next to the table dcp_athena and select the Preview table option to see a data preview, or run your own custom queries.
 

Figure 3: Athena table preview

As your environment grows, the blog post Top 10 Performance Tuning Tips for Amazon Athena can help you partition your data and consolidate it into larger files if needed.
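
You can also run queries programmatically instead of through the console. The following boto3 sketch runs an illustrative query against the dcp_athena table; the output location is a placeholder built from the dcp-assets bucket naming described earlier.

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    # Illustrative query; adjust columns to match your table schema.
    QueryString='SELECT * FROM "dcp"."dcp_athena" LIMIT 10',
    QueryExecutionContext={"Database": "dcp"},
    ResultConfiguration={
        # Placeholder output location in the dcp-assets bucket.
        "OutputLocation": "s3://dcp-assets-us-east-1-123456789012/athena-results/"
    },
)
print(response["QueryExecutionId"])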

Clean up

After you finish, to clean up the solution and avoid unnecessary expenses, complete the following steps:

  1. Go to the Amazon S3 console.
  2. Navigate to each of the following buckets and delete all of their objects:
    • dcp-assets
    • dcp-athena
    • dcp-glue
    • dcp-macie
  3. Go to the CloudFormation console.
  4. Select the Stacks option in the left menu.
  5. Choose the stack you created in Step 1 – Deploying the CloudFormation template.
  6. Select Delete and then select Delete Stack in the pop-up window.

Conclusion

In this blog post, we showed how you can use Macie’s managed data identifiers to find personally identifiable information (PII) and other data defined as sensitive in an RDS-hosted MySQL database. You can use this solution with other relational databases such as PostgreSQL, SQL Server, or Oracle, whether hosted on Amazon RDS or Amazon EC2. If you’re using Amazon DynamoDB, you may also find the blog post Detecting sensitive data in DynamoDB with Macie useful.

By classifying your data, you can inform your management of appropriate data protection and handling controls for the use of that data.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Bruno Silveira

Bruno is a Solutions Architect Manager in the Public Sector team with a focus on educational institutions in Brazil. His previous career was in government, financial services, utilities, and nonprofit institutions. Bruno is an enthusiast of cloud security and an appreciator of good rock’n roll with a good beer.

Author

Thiago Pádua

Thiago is a Solutions Architect on the AWS Worldwide Public Sector team, working on the development and support of partners. He is experienced in software development and systems integration, mainly in the telecommunications industry. He has a special interest in microservices, serverless, and containers.

How to set up a two-way integration between AWS Security Hub and Jira Service Management

Post Syndicated from Ramesh Venkataraman original https://aws.amazon.com/blogs/security/how-to-set-up-a-two-way-integration-between-aws-security-hub-and-jira-service-management/

If you use both AWS Security Hub and Jira Service Management, you can use the new AWS Service Management Connector for Jira Service Management to create an automated, bidirectional integration between these two products that keeps your Security Hub findings and Jira issues in sync. In this blog post, I’ll show you how to set up this integration.

As a Jira administrator, you’ll then be able to create Jira issues from Security Hub findings automatically, and when you update those issues in Jira, the changes are automatically replicated back into the original Security Hub findings. For example, if you resolve an issue in Jira, the workflow status of the finding in Security Hub will also be resolved. This way, Security Hub always has up-to-date status about your security posture.
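
For example, after you resolve an issue in Jira, you can verify the replicated status directly against the Security Hub API. The following is a minimal boto3 sketch, independent of the connector itself, that lists active findings whose workflow status is RESOLVED.

import boto3

securityhub = boto3.client("securityhub")

response = securityhub.get_findings(
    Filters={
        "WorkflowStatus": [{"Value": "RESOLVED", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=10,
)
for finding in response["Findings"]:
    print(finding["Id"], finding["Workflow"]["Status"])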

Watch a demonstration of the integration.

Prerequisites

To complete this walkthrough, you’ll need a Jira instance with the connector configured. For more information on how to set this up, see AWS Service Management Connector for Jira Service Management in the AWS Service Catalog Administrator Guide. At the moment, this connector can be used with Atlassian Data Center.

On the AWS side, you need Security Hub enabled in your AWS account. For more information, see Enabling Security Hub manually.

This walkthrough uses an AWS CloudFormation template to create the necessary AWS resources for this integration. In this template, I use the AWS Region us-east-1, but you can use any of the supported Regions for Security Hub.

Deploy the solution

In this solution, you will first deploy an AWS CloudFormation stack that sets up the necessary AWS resources that are needed to set up the integration in Jira.

To download and run the CloudFormation template

  1. Download the sample template for this walkthrough.
  2. In the AWS CloudFormation console, choose Create stack, choose With new resources (standard), and then select Template is ready.
  3. For Specify template, choose Upload a template file and select the template that you downloaded in step 1.

To create the CloudFormation stack

  1. In the CloudFormation console, choose Specify stack details, and enter a Stack name (in the example, I named mine SecurityHub-Jira-Integration).
  2. Keep the other default values as shown in Figure 1, and then choose Next.
     
    Figure 1: Creating a CloudFormation stack

  3. On the Configure stack options page, choose Next.
  4. On the Review page, select the check box I acknowledge that AWS CloudFormation might create IAM resources with custom names. (Optional) If you would like more information about this acknowledgement, choose Learn more.
  5. Choose Create stack.
     
    Figure 2: Acknowledge creation of IAM resources

  6. After the CloudFormation stack status is CREATE_COMPLETE, you can see the list of resources that are created, as shown in Figure 3.
     
    Figure 3: Resources created from the CloudFormation template

Next, you’ll integrate Jira with Security Hub.

To integrate Jira with Security Hub

  1. In the Jira dashboard, choose the gear icon to open the JIRA ADMINISTRATION menu, and then choose Manage apps
      
    Figure 4: Jira Manage apps

  2. On the Administration screen, under AWS SERVICE MANAGEMENT CONNECTOR in the left navigation menu, choose AWS accounts
     
    Figure 5: Choose AWS accounts

  3. Choose Connect new account to open a page where you can configure Jira to access an AWS account. 
     
    Figure 6: Connect new account

  4. Enter values for the account alias and user credentials. For the account alias, I’ve named my account SHJiraIntegrationAccount. In the SecurityHub-Jira-Integration CloudFormation stack that you created previously, see the Outputs section to get the values for SCSyncUserAccessKey, SCSyncUserSecretAccessKey, SCEndUserAccessKey, and SCEndUserSecretAccessKey, as shown in Figure 7.
     
    Figure 7: CloudFormation Outputs details

    Important: Because this is an example walkthrough, I show the access key and secret key generated as CloudFormation outputs. However, if you’re using the AWS Service Management Connector for Jira in a production workload, see How do I create an AWS access key? to learn how to create an IAM user and its access key and secret key. For the permissions that are required for the IAM user, you can review the permissions and policies outlined in the template.

  5. In Jira, on the Connect new account page, enter all the values from the CloudFormation Outputs that you saw in step 4, and choose the Region you used to launch your CloudFormation resources. I chose the US East (N. Virginia) Region (us-east-1).
  6. Choose Connect, and you should see a success message for the test connection. You can also choose Test connectivity after connecting the account, as shown in Figure 8.
     
    Figure 8: Test connectivity

The connector is preconfigured to automatically create Jira incidents for Security Hub findings. The findings will have the same information in both the AWS Security Hub console and the Jira console.

Test the integration

Finally, you can test the integration between Security Hub and Jira Service Management.

To test the integration

  1. For this walkthrough, I’ve created a new project from the Projects console in Jira. If you have an existing project, you can link the AWS account to the project.
  2. In the left navigation menu, under AWS SERVICE MANAGEMENT CONNECTOR, choose Connector settings.
  3. On the AWS Service Management Connector settings page, under Projects enabled for Connector, choose Add Jira project, and select the project you want to connect to the AWS account. 
     
    Figure 9: Add the Jira project

  4. On the same page, under OpsCenter Configuration, choose the project to associate with the AWS accounts. Under Security Hub Configuration, associate the Jira project with the AWS account. Choose Save after you’ve configured the project.
  5. On the AWS accounts page, choose Sync now
     
    Figure 10: Sync now

  6. In the top pane, under Issues, choose Search for issues.
  7. Choose the project that you added in step 3. You will see a screen like the one shown in Figure 11.

    To further filter just the Security Hub findings, you can also choose AWS Security Hub Finding under Type
     

    Figure 11: A Security Hub finding in the Jira console

  8. You can review the same finding from Security Hub in the AWS console, as shown in Figure 12, to verify that it’s the same as the finding you saw in step 7. 
     
    Figure 12: A Security Hub finding in the AWS console

  9. On the Jira page for the Security Hub finding (the same page discussed in step 7), you can update the workflow status to Notified, after which the issue status changes to NOTIFIED, as shown in Figure 13. 
     
    Figure 13: Update the finding status to NOTIFIED

    You can navigate to the AWS Security Hub console and look at the finding’s workflow, as shown in Figure 14. The workflow should say NOTIFIED, as you updated it in the Jira console.
     

    Figure 14: The Security Hub finding workflow updated to NOTIFIED

  10. You can now fix the issue from the Security Hub console. When you resolve the finding from Security Hub, it will also show up as resolved in the Jira console.
  11. (Optional) In the AWS Service Management Connector in the Jira console, you can configure several settings, such as Sync Interval, SQS Queue Name, and Number of messages to pull from SQS, as shown in Figure 15. You can also synchronize Security Hub findings according to their Severity value.
     
    Figure 15: Jira settings for Security Hub

Conclusion

In this blog post, I showed you how to set up the new two-way integration of AWS Security Hub and Jira by using the AWS Service Management Connector for Jira Service Management. To learn more about Jira’s integration with Security Hub, watch the video AWS Security Hub – Bidirectional integration with Jira Service Management Center, and see AWS Service Management Connector for Jira Service Management in the AWS Service Catalog Administrator Guide. To download the free AWS Service Management Connector for Jira, see the Atlassian Marketplace. If you have additional questions, you can post them to the AWS Security Hub forum.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ramesh Venkataraman

Ramesh is a Solutions Architect who enjoys working with customers to solve their technical challenges using AWS services. Outside of work, Ramesh enjoys following Stack Overflow questions and answering them in any way he can.

Enable Security Hub PCI DSS standard across your organization and disable specific controls

Post Syndicated from Pablo Pagani original https://aws.amazon.com/blogs/security/enable-security-hub-pci-dss-standard-across-your-organization-and-disable-specific-controls/

At this time, enabling the PCI DSS standard from within AWS Security Hub enables this compliance framework only within the Amazon Web Services (AWS) account you are presently administering.

This blog post showcases a solution that you can use to customize the configuration and deployment of the PCI DSS compliance standard in AWS Security Hub across multiple AWS accounts and AWS Regions managed by AWS Organizations. It also demonstrates how to disable specific standards or controls that aren’t required by your organization to meet its compliance requirements. This solution can be used as a baseline for implementation when creating new AWS accounts through the use of AWS CloudFormation StackSets.

Solution overview

Figure 1 shows a sample account setup that uses the automated solution in this blog post to enable PCI DSS monitoring and reporting across multiple AWS accounts using AWS Organizations. The hierarchy depicts one management account that monitors two member accounts with infrastructure spanning multiple Regions. Member accounts are configured to send their Security Hub findings to the designated Security Hub management account for centralized compliance management.

Figure 1: Security Hub deployment using AWS Organizations

Prerequisites

The following prerequisites must be in place in order to enable the PCI DSS standard:

  1. A designated administrator account for Security Hub.
  2. Security Hub enabled in all the desired accounts and Regions.
  3. Access to the management account for the organization. The account must have the required permissions for stack set operations.
  4. Choose the deployment targets (accounts and Regions) where you want to enable the PCI DSS standard. Typically, these are the accounts where Security Hub is already enabled, or the accounts where PCI workloads reside.
  5. (Optional) If you find standards or controls that aren’t applicable to your organization, get the Amazon Resource Names (ARNs) of the desired standards or controls to disable.

Solution Resources

The CloudFormation template that you use in the following steps contains the resources needed to deploy this solution.

Solution deployment

To set up this solution for automated deployment, stage the following CloudFormation StackSet template for rollout through the AWS CloudFormation service. The stack set runs across the organization at the root or organizational unit (OU) level of your choice. You can choose which Regions to run this solution against, and also choose to run it each time a new AWS account is created.
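
At the heart of the template is a Lambda function that calls the Security Hub API. As a rough, standalone sketch (not the template's exact code), the enabling call looks like the following; the standard ARN format is the same one shown later in this post.

import os

import boto3

region = os.environ["AWS_REGION"]  # set automatically in the Lambda runtime
securityhub = boto3.client("securityhub", region_name=region)

response = securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[
        {"StandardsArn": f"arn:aws:securityhub:{region}::standards/pci-dss/v/3.2.1"}
    ]
)
print(response["StandardsSubscriptions"])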

To deploy the solution

  1. Open the AWS Management Console.
  2. Download the sh-pci-enabler.yaml template and save it to an Amazon Simple Storage Service (Amazon S3) bucket on the management account. Make a note of the path to use later.
  3. Navigate to CloudFormation service on the management account. Select StackSets from the menu on the left, and then choose Create StackSet.
     
    Figure 2: CloudFormation – Create StackSet

  4. On the Choose a template page, go to Specify template, select Amazon S3 URL, and enter the path to the sh-pci-enabler.yaml template that you saved in step 2. Choose Next.
     
    Figure 3: CloudFormation – Choose a template

  5. Enter a name and (optional) description for the StackSet. Choose Next.
     
    Figure 4: CloudFormation – enter StackSet details

  6. (Optional) On the Configure StackSet options page, go to Tags and add tags to identify and organize your stack set.
     
    Figure 5: CloudFormation – Configure StackSet options

  7. Choose Next.
  8. On the Set deployment options page, select the desired Regions, and then choose Next.

    Figure 6: CloudFormation – Set deployment options

  9. Review the definition and select I acknowledge that AWS CloudFormation might create IAM resources. Choose Submit.
     
    Figure 7: CloudFormation – Review, acknowledge, and submit

  10. After you choose Submit, you can monitor the creation of the StackSet from the Operations tab to ensure that deployment is successful.
     
    Figure 8: CloudFormation – Monitor creation of the StackSet

Disable standards that don’t apply to your organization

To disable a standard that isn’t required by your organization, you can use the same template and steps as described above with a few changes as explained below.

To disable standards

  1. Start by opening the SH-PCI-enabler.yaml template and saving a copy under a new name.
  2. In the template, look for sh.batch_enable_standards. Change it to sh.batch_disable_standards.
  3. Locate standardArn=f"arn:aws:securityhub:{region}::standards/pci-dss/v/3.2.1" and change it to the desired ARN. To find the correct standard ARN, you can use the AWS Command Line Interface (AWS CLI) or AWS CloudShell to run the command aws securityhub describe-standards.
Figure 9: Describe Security Hub standards using CLI

Note: Be sure to keep the f before the quotation marks and replace any Region you might get from the command with the {region} variable. If the CIS standard doesn’t have the Region defined, remove the variable.
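
For reference, the batch_disable_standards API operates on the standards subscription ARN rather than the standard ARN. The following standalone boto3 sketch (separate from the template) shows how you can look up the subscription ARN for the enabled PCI DSS standard with get_enabled_standards and disable it directly.

import boto3

securityhub = boto3.client("securityhub")

# Find the subscription ARN of the enabled PCI DSS standard, then disable it.
subscriptions = securityhub.get_enabled_standards()["StandardsSubscriptions"]
for subscription in subscriptions:
    if "standards/pci-dss" in subscription["StandardsArn"]:
        securityhub.batch_disable_standards(
            StandardsSubscriptionArns=[subscription["StandardsSubscriptionArn"]]
        )
        print(f"Disabled {subscription['StandardsArn']}")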

Disable controls that don’t apply to your organization

When you enable a standard, all of the controls for that standard are enabled by default. If necessary, you can disable specific controls within an enabled standard.

When you disable a control, the check for the control is no longer performed, no additional findings are generated for that control, and the related AWS Config rules that Security Hub created are removed.

Security Hub is a regional service. When you disable or enable a control, the change is applied in the Region that you specify in the API request. Also, when you disable an entire standard, Security Hub doesn’t track which controls were disabled. If you enable the standard again later, all of the controls in that standard will be enabled.
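
Disabling a single control comes down to one Security Hub API call per control and Region. The following boto3 sketch uses a placeholder control ARN and an example reason; the SH-disable-controls.yaml template described in the next procedure automates the same kind of call across your chosen accounts and Regions.

import boto3

securityhub = boto3.client("securityhub")

# Placeholder control ARN; use the StandardsControlArn values from your own list.
control_arn = (
    "arn:aws:securityhub:us-east-1:123456789012:control/"
    "aws-foundational-security-best-practices/v/1.0.0/ACM.1"
)

securityhub.update_standards_control(
    StandardsControlArn=control_arn,
    ControlStatus="DISABLED",
    DisabledReason="Not applicable to our environment",  # example reason text
)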

To disable a list of controls

  1. Open the Security Hub console and select Security standards from the left menu. For each check you want to disable, select Finding JSON and make a note of each StandardsControlArn to add to your list.

    Note: Another option is to use the DescribeStandardsControls API to create a list of StandardsControlArn to be disabled.

     

    Figure 10: Security Hub console – finding JSON download option

  2. Download the StackSet SH-disable-controls.yaml template to your computer.
  3. Use a text editor to open the template file.
  4. Locate the list of controls to disable, and edit the template to replace the provided list of StandardsControlArn with your own list of controls to disable, as shown in the following example. Use a comma as the delimiter for each ARN.
    controls=f"arn:aws:securityhub:{region}:{account_id}:control/aws-foundational-security-best-practices/v/1.0.0/ACM.1, arn:aws:securityhub:{region}:{account_id}:control/aws-foundational-security-best-practices/v/1.0.0/APIGateway.1, arn:aws:securityhub:{region}:{account_id}:control/aws-foundational-security-best-practices/v/1.0.0/APIGateway.2"
    

  5. Save your changes to the template.
  6. Follow the same steps you used to deploy the PCI DSS standard, but use your edited template.

Note: The region and account_id are set as variables, so you decide in which accounts and Regions to disable the controls from the StackSet deployment options (step 8 in Deploy the solution).

Troubleshooting

The following are issues you might encounter when you deploy this solution:

  1. StackSets deployment errors: Review the troubleshooting guide for CloudFormation StackSets.
  2. Dependencies issues: To modify the status of any standard or control, Security Hub must be enabled first. If it’s not enabled, the operation will fail. Make sure you meet the prerequisites listed earlier in this blog post. Use CloudWatch logs to analyze possible errors from the Lambda function to help identify the cause.
  3. StackSets race condition error: When creating new accounts, the Organizations service enables Security Hub in the account, and invokes the stack sets during account creation. If the stack set runs before the Security Hub service is enabled, the stack set can’t enable the PCI standard. If this happens, you can fix it by adding the Amazon EventBridge rule as shown in SH-EventRule-PCI-enabler.yaml. The EventBridge rule invokes the SHLambdaFunctionEB Lambda function after Security Hub is enabled.

Conclusion

The AWS Security Hub PCI DSS standard is fundamental for any company involved with storing, processing, or transmitting cardholder data. In this post, you learned how to enable or disable a standard or specific controls in all your accounts throughout the organization to proactively monitor your AWS resources. Frequently reviewing failed security checks, prioritizing their remediation, and aiming for a Security Hub score of 100 percent can help improve your security posture.

Further reading

If you have feedback about this post, submit comments in the Comments section below. If you have questions, please start a new thread on the Security Hub forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Pablo Pagani

Pablo is the Latam Security Manager for AWS Professional Services based in Buenos Aires, Argentina. He developed his passion for computers while writing his first lines of code in BASIC using a Talent MSX.

Author

Rogerio Kasa

Rogerio is a Senior SRC Consultant based in Sao Paulo, Brazil. He has more than 20 years of experience in information security, including 11 years in financial services as a local information security officer. As a security consultant, he helps customers improve their security posture by understanding business goals and creating controls aligned with their risk strategy.

Validate IAM policies in CloudFormation templates using IAM Access Analyzer

Post Syndicated from Matt Luttrell original https://aws.amazon.com/blogs/security/validate-iam-policies-in-cloudformation-templates-using-iam-access-analyzer/

In this blog post, I introduce IAM Policy Validator for AWS CloudFormation (cfn-policy-validator), an open source tool that extracts AWS Identity and Access Management (IAM) policies from an AWS CloudFormation template, and allows you to run existing IAM Access Analyzer policy validation APIs against the template. I also show you how to run the tool in a continuous integration and continuous delivery (CI/CD) pipeline to validate IAM policies in a CloudFormation template before they are deployed to your AWS environment.

Embedding this validation in a CI/CD pipeline can help prevent IAM policies that have IAM Access Analyzer findings from being deployed to your AWS environment. This tool acts as a guardrail that can allow you to delegate the creation of IAM policies to the developers in your organization. You can also use the tool to provide additional confidence in your existing policy authoring process, enabling you to catch mistakes prior to IAM policy deployment.

What is IAM Access Analyzer?

IAM Access Analyzer mathematically analyzes access control policies that are attached to resources, and determines which resources can be accessed publicly or from other accounts. IAM Access Analyzer can also validate both identity and resource policies against over 100 checks, each designed to improve your security posture and to help you to simplify policy management at scale.

The IAM Policy Validator for AWS CloudFormation tool

IAM Policy Validator for AWS CloudFormation (cfn-policy-validator) is a new command-line tool that parses resource-based and identity-based IAM policies from your CloudFormation template, and runs the policies through IAM Access Analyzer checks. The tool is designed to run in the CI/CD pipeline that deploys your CloudFormation templates, and to prevent a deployment when an IAM Access Analyzer finding is detected. This ensures that changes made to IAM policies are validated before they can be deployed.

The cfn-policy-validator tool looks for all identity-based policies, and a subset of resource-based policies, from your templates. For the full list of supported resource-based policies, see the cfn-policy-validator GitHub repository.

Parsing IAM policies from a CloudFormation template

One of the challenges you can face when parsing IAM policies from a CloudFormation template is that these policies often contain CloudFormation intrinsic functions (such as Ref and Fn::GetAtt) and pseudo parameters (such as AWS::AccountId and AWS::Region). As an example, it’s common for least privileged IAM policies to reference the Amazon Resource Name (ARN) of another CloudFormation resource. Take a look at the following example CloudFormation resources that create an Amazon Simple Queue Service (Amazon SQS) queue, and an IAM role with a policy that grants access to perform the sqs:SendMessage action on the SQS queue.
 

Figure 1: Example policy in CloudFormation template

As you can see in Figure 1, line 21 uses the function Fn::Sub to restrict this policy to MySQSQueue created earlier in the template.

In this example, if you were to pass the root policy (lines 15-21) as written to IAM Access Analyzer, you would get an error because !Sub ${MySQSQueue.Arn} is syntax that is specific to CloudFormation. The cfn-policy-validator tool takes the policy and translates the CloudFormation syntax to valid IAM policy syntax that Access Analyzer can parse.

The cfn-policy-validator tool recognizes when an intrinsic function such as !Sub ${MySQSQueue.Arn} evaluates to a resource’s ARN, and generates a valid ARN for the resource. The tool creates the ARN by mapping the type of CloudFormation resource (in this example AWS::SQS::Queue) to a pattern that represents what the ARN of the resource will look like when it is deployed. For example, the following is what the mapping looks like for the SQS queue referenced previously:

AWS::SQS::Queue.Arn -> arn:${Partition}:sqs:${Region}:${Account}:${QueueName}

For some CloudFormation resources, the Ref intrinsic function also returns an ARN. The cfn-policy-validator tool handles these cases as well.

Cfn-policy-validator walks through each of the six parts of an ARN and substitutes values for variables in the ARN pattern (any text contained within ${}). The values of ${Partition} and ${Account} are taken from the identity of the role that runs the cfn-policy-validator tool, and the value for ${Region} is provided as an input flag. The cfn-policy-validator tool performs a best-effort resolution of the QueueName, but typically defaults it to the name of the CloudFormation resource (in the previous example, MySQSQueue). Validation of policies with IAM Access Analyzer does not rely on the name of the resource, so the cfn-policy-validator tool is able to substitute a replacement name without affecting the policy checks.

The final ARN generated for the MySQSQueue resource looks like the following (for an account with ID of 111111111111):

arn:aws:sqs:us-east-1:111111111111:MySQSQueue

The cfn-policy-validator tool substitutes this generated ARN for !Sub ${MySQSQueue.Arn}, which allows the cfn-policy-validator tool to parse a policy from the template that can be fed into IAM Access Analyzer for validation. The cfn-policy-validator tool walks through your entire CloudFormation template and performs this ARN substitution until it has generated ARNs for all policies in your template.
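
Conceptually, the substitution works like the following sketch. This is an illustration of the idea only, not the tool's actual implementation, and the concrete values are placeholders.

import re

# ARN pattern for AWS::SQS::Queue.Arn, with variables inside ${}.
pattern = "arn:${Partition}:sqs:${Region}:${Account}:${QueueName}"

values = {
    "Partition": "aws",
    "Region": "us-east-1",       # provided to the tool as an input flag
    "Account": "111111111111",   # taken from the identity running the tool
    "QueueName": "MySQSQueue",   # defaults to the CloudFormation resource name
}

arn = re.sub(r"\$\{(\w+)\}", lambda match: values[match.group(1)], pattern)
print(arn)  # arn:aws:sqs:us-east-1:111111111111:MySQSQueue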

Validating the policies with IAM Access Analyzer

After the cfn-policy-validator tool has your IAM policies in a valid format (with no CloudFormation intrinsic functions or pseudo parameters), it can take those policies and feed them into IAM Access Analyzer for validation. The cfn-policy-validator tool runs resource-based and identity-based policies in your CloudFormation template through the ValidatePolicy action of the IAM Access Analyzer. ValidatePolicy is what ensures that your policies have correct grammar and follow IAM policy best practices (for example, not allowing iam:PassRole to all resources). The cfn-policy-validator tool also makes a call to the CreateAccessPreview action for supported resource policies to determine if the policy would grant unintended public or cross-account access to your resource.
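
For reference, the following is a minimal sketch of the kind of ValidatePolicy call the tool makes for each parsed policy. The policy document shown here is a simplified stand-in containing the misspelled action, not the exact policy parsed from the template.

import json

import boto3

access_analyzer = boto3.client("accessanalyzer")

# Simplified identity-based policy containing the misspelled action.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sqs:ReceiveMessages",
            "Resource": "arn:aws:sqs:us-east-1:111111111111:MySQSQueue",
        }
    ],
}

response = access_analyzer.validate_policy(
    policyDocument=json.dumps(policy),
    policyType="IDENTITY_POLICY",
)
for finding in response["findings"]:
    print(finding["findingType"], finding["issueCode"], finding["findingDetails"])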

The cfn-policy-validator tool categorizes findings from IAM Access Analyzer as either blocking or non-blocking. Findings categorized as blocking cause the tool to exit with a non-zero exit code, thereby causing your deployment to fail and preventing your CI/CD pipeline from continuing. If there are no findings, or only non-blocking findings are detected, the tool exits with an exit code of zero (0) and your pipeline can continue to the next stage. For more information about how the cfn-policy-validator tool decides which findings to categorize as blocking and non-blocking, as well as how to customize the categorization, see the cfn-policy-validator GitHub repository.

Example of running the cfn-policy-validator tool

This section guides you through an example of what happens when you run a CloudFormation template that has some policy violations through the cfn-policy-validator tool.

The following template has two CloudFormation resources with policy findings: an SQS queue policy that grants account 111122223333 access to the SQS queue, and an IAM role with a policy that allows the role to perform a misspelled sqs:ReceiveMessages action. These issues are highlighted in the policy below.

Important: The policy in Figure 2 is written to illustrate a CloudFormation template with potentially undesirable IAM policies. You should be careful when setting the Principal element to an account that is not your own.

Figure 2: CloudFormation template with undesirable IAM policies

When you pass this template as a parameter to the cfn-policy-validator tool, you specify the AWS Region that you want to deploy the template to, as follows:

cfn-policy-validator validate --template-path ./template.json --region us-east-1

After the cfn-policy-validator tool runs, it returns the validation results, which includes the actual response from IAM Access Analyzer:

{
    "BlockingFindings": [
        {
            "findingType": "ERROR",
            "code": "INVALID_ACTION",
            "message": "The action sqs:ReceiveMessages does not exist.",
            "resourceName": "MyRole",
            "policyName": "root",
            "details": …
        },
        {
            "findingType": "SECURITY_WARNING",
            "code": "EXTERNAL_PRINCIPAL",
            "message": "Resource policy allows access from external principals.",
            "resourceName": "MyQueue",
            "policyName": "QueuePolicy",
            "details": …
        }
    ],
    "NonBlockingFindings": []
}

The output from the cfn-policy-validator tool includes the type of finding, the code for the finding, and a message that explains the finding. It also includes the resource and policy name from the CloudFormation template, to allow you to quickly track down and resolve the finding in your template. In the previous example, you can see that IAM Access Analyzer has detected two findings, one security warning and one error, which the cfn-policy-validator tool has classified as blocking. The actual response from IAM Access Analyzer is returned under details, but is excluded above for brevity.

If account 111122223333 is an account that you trust and you are certain that it should have access to the SQS queue, then you can suppress the finding for external access from the 111122223333 account in this example. Modify the call to the cfn-policy-validator tool to ignore this specific finding by using the --allow-external-principals flag, as follows:

cfn-policy-validator validate --template-path ./template.json --region us-east-1 --allow-external-principals 111122223333

When you look at the output that follows, you’re left with only the blocking finding that states that sqs:ReceiveMessages does not exist.

{
    "BlockingFindings": [
        {
            "findingType": "ERROR",
            "code": "INVALID_ACTION",
            "message": "The action sqs:ReceiveMessages does not exist.",
            "resourceName": "MyRole",
            "policyName": "root",
            "details": …
    ],
    "NonBlockingFindings": []
}

To resolve this finding, update the template to have the correct spelling, sqs:ReceiveMessage (without trailing s).

For the full list of available flags and commands supported, see the cfn-policy-validator GitHub repository.

Now that you’ve seen an example of how you can run the cfn-policy-validator tool to validate policies that are on your local machine, you will take it a step further and see how you can embed the cfn-policy-validator tool in a CI/CD pipeline. By embedding the cfn-policy-validator tool in your CI/CD pipeline, you can ensure that your IAM policies are validated each time a commit is made to your repository.

Embedding the cfn-policy-validator tool in a CI/CD pipeline

The CI/CD pipeline you will create in this post uses AWS CodePipeline and AWS CodeBuild. AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software. AWS CodeBuild is a fully-managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. You will also use AWS CodeCommit as the source repository where your CloudFormation template is stored. AWS CodeCommit is a fully managed source-control service that hosts secure Git-based repositories.

To deploy the pipeline

  1. To deploy the CloudFormation template that builds the source repository and AWS CodePipeline pipeline, select the following Launch Stack button.
    Select the Launch Stack button to launch the template
  2. In the CloudFormation console, choose Next until you reach the Review page.
  3. Select I acknowledge that AWS CloudFormation might create IAM resources and choose Create stack.
  4. Open the AWS CodeCommit console and choose Repositories.
  5. Select cfn-policy-validator-source-repository.
  6. Download the template.json and template-configuration.json files to your machine.
  7. In the cfn-policy-validator-source-repository, on the right side, select Add file and choose Upload file.
  8. Choose Choose File and select the template.json file that you downloaded previously.
  9. Enter an Author name and an E-mail address and choose Commit changes.
  10. In the cfn-policy-validator-source-repository, repeat steps 7-9 for the template-configuration.json file.

To view validation in the pipeline

  1. In the AWS CodePipeline console choose the IAMPolicyValidatorPipeline.
  2. Watch as your commit travels through the pipeline. If you followed the previous instructions and made two separate commits, you can ignore the failed results of the first pipeline execution. As shown in Figure 3, you will see that the pipeline fails in the Validation stage on the CfnPolicyValidator action, because it detected a blocking finding in the template you committed, which prevents the invalid policy from reaching your AWS environment.
     
    Figure 3: Validation failed on the CfnPolicyValidator action

  3. Under CfnPolicyValidator, choose Details, as shown in Figure 3.
  4. In the Action execution failed pop-up, choose Link to execution details to view the cfn-policy-validator tool output in AWS CodeBuild.

Architectural overview of deploying cfn-policy-validator in your pipeline

You can see the architecture diagram for the CI/CD pipeline you deployed in Figure 4.
 
Figure 4: CI/CD pipeline that performs IAM policy validation using the AWS CloudFormation Policy Validator and IAM Access Analyzer

Figure 4 shows the following steps, starting with the CodeCommit source action on the left:

  1. The pipeline starts when you commit to your AWS CodeCommit source code repository. The AWS CodeCommit repository is what contains the CloudFormation template that has the IAM policies that you would like to deploy to your AWS environment.
  2. AWS CodePipeline detects the change in your source code repository and begins the Validation stage. The first step it takes is to start an AWS CodeBuild project that runs the CloudFormation template through the AWS CloudFormation Linter (cfn-lint). The cfn-lint tool validates your template against the CloudFormation resource specification. Taking this initial step ensures that you have a valid CloudFormation template before validating your IAM policies. This is an optional step, but a recommended one. Early schema validation provides fast feedback for any typos or mistakes in your template. There’s little benefit to running additional static analysis tools if your template has an invalid schema.
  3. If the cfn-lint tool completes successfully, you then call a separate AWS CodeBuild project that invokes the IAM Policy Validator for AWS CloudFormation (cfn-policy-validator). The cfn-policy-validator tool then extracts the identity-based and resource-based policies from your template, as described earlier, and runs the policies through IAM Access Analyzer.

    Note: If your template has parameters, then you need to provide them to the cfn-policy-validator tool. You can provide parameters as command-line arguments, or use a template configuration file. It is recommended to use a template configuration file when running validation with an AWS CodePipeline pipeline. The same file can also be used in the deploy stage to deploy the CloudFormation template. By using the template configuration file, you can ensure that you use the same parameters to validate and deploy your template. The CloudFormation template for the pipeline provided with this blog post defaults to using a template configuration file.

    If there are no blocking findings found in the policy validation, the cfn-policy-validator tool exits with an exit code of zero (0) and the pipeline moves to the next stage. If any blocking findings are detected, the cfn-policy-validator tool will exit with a non-zero exit code and the pipeline stops, to prevent the deployment of undesired IAM policies.

  4. The final stage of the pipeline uses the AWS CloudFormation action in AWS CodePipeline to deploy the template to your environment. Your template will only make it to this stage if it passes all static analysis checks run in the Validation Stage.

Cleaning Up

To avoid incurring future charges, in the AWS CloudFormation console delete the validate-iam-policy-pipeline stack. This will remove the validation pipeline from your AWS account.

Summary

In this blog post, I introduced the IAM Policy Validator for AWS CloudFormation (cfn-policy-validator). The cfn-policy-validator tool automates the parsing of identity-based and resource-based IAM policies from your CloudFormation templates and runs those policies through IAM Access Analyzer. This enables you to validate that the policies in your templates follow IAM best practices and do not allow unintended external access to your AWS resources.

I showed you how the IAM Policy Validator for AWS CloudFormation can be included in a CI/CD pipeline. This allows you to run validation on your IAM policies on every commit to your repository and only deploy the template if validation succeeds.

For more information, or to provide feedback and feature requests, see the cfn-policy-validator GitHub repository.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Matt Luttrell

Matt is a Sr. Solutions Architect on the AWS Identity Solutions team. When he’s not spending time chasing his kids around, he enjoys skiing, cycling, and the occasional video game.

Securely extend and access on-premises Active Directory domain controllers in AWS

Post Syndicated from Mangesh Budkule original https://aws.amazon.com/blogs/security/securely-extend-and-access-on-premises-active-directory-domain-controllers-in-aws/

If you have an on-premises Windows Server Active Directory infrastructure, it’s important to plan carefully how to extend it into Amazon Web Services (AWS) when you’re migrating or implementing cloud-based applications. In this scenario, existing applications require Active Directory for authentication and identity management. When you migrate these applications to the cloud, having a locally accessible Active Directory domain controller is an important factor in achieving fast, reliable, and secure Active Directory authentication.

In this blog post, I’ll provide guidance on how to securely extend your existing Active Directory domain to AWS and optimize your infrastructure for maximum performance. I’ll also show you a best practice that implements a remote desktop gateway solution to access your domain controllers securely while using the minimum required ports. Additionally, you will learn how AWS Systems Manager Session Manager port forwarding provides a secure and simple way to manage your domain resources remotely, without the need to open inbound ports and maintain remote desktop gateway (RDGW) hosts.
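
As an illustration of the Session Manager approach, the following sketch wraps the AWS CLI start-session command to forward a local port to RDP (port 3389) on a domain controller. The instance ID and local port are placeholders, and the Session Manager plugin must be installed on your workstation.

import json
import subprocess

# Placeholder instance ID for one of your EC2-hosted domain controllers.
instance_id = "i-0123456789abcdef0"

subprocess.run(
    [
        "aws", "ssm", "start-session",
        "--target", instance_id,
        "--document-name", "AWS-StartPortForwardingSession",
        # Forward local port 56788 to RDP (3389) on the domain controller.
        "--parameters", json.dumps({"portNumber": ["3389"], "localPortNumber": ["56788"]}),
    ],
    check=True,
)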

Administrators can use this blog post as guidance to design Active Directory on Amazon Elastic Compute Cloud (Amazon EC2) domain controllers. This post can also be used to determine which ports and protocols are required for domain controller infrastructure communication in a segmented network.

Design and guidelines for EC2-hosted domain controllers

This section provides a set of best practices for designing and deploying EC2-hosted domain controllers in AWS.

AWS has multiple options for hosting Active Directory on AWS, which are discussed in detail in the Active Directory Domain Services on AWS Design and Planning Guide. One option is to use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD). AWS Managed Microsoft AD provides you with a complete new forest and domain to start your Active Directory deployment on AWS. However, if you prefer to extend your existing Active Directory domain infrastructure to AWS and manage it yourself, you have the option of running Active Directory on EC2-hosted domain controllers. See our Quick Start guide for instructions on how to deploy both of these options (AWS Managed Microsoft AD or EC2-hosted domain controllers on AWS).

If you’re operating in more than one AWS Region and require Active Directory to be available in all these Regions, use the best practices in the Design and Planning Guide for a multi-Region deployment strategy. Within each of the Regions, follow the guidelines and best practices described in this blog post.

Figure 1 shows an example of how to deploy Active Directory on EC2 instances in multiple Regions with multiple virtual private clouds (VPCs). In this example, I’m showing the Active Directory design in multiple Regions that interconnect to each other by using AWS Transit Gateway.
 

Figure 1: Extended EC2 domain controllers architecture

In order to extend your existing Active Directory deployment from on-premises to AWS as shown in the example, you do two things. First, you add additional domain controllers (running on Amazon EC2) to your existing domain. Second, you place the domain controllers in multiple Availability Zones (AZs) within your VPC, in multiple Regions, by keeping the same forest (Example.com) and domain structure.

Consider these best practices when you deploy or extend Active Directory on EC2 instances:

  1. We recommend deploying at least two domain controllers (DCs) in each Region and configuring a minimum of two AZs, to provide high availability.
  2. If you require additional domain controllers to achieve your performance goals, add more domain controllers to existing AZs or deploy to another available AZ.
  3. It’s important to define Active Directory sites and subnets correctly to prevent clients from using domain controllers that are located in different Regions, which causes increased latency.
  4. Configure the VPC in a Region as a single Active Directory site and configure Active Directory subnets accordingly in the AD Sites and Services console. This configuration helps ensure that your clients correctly select the closest available domain controller.
  5. If you have multiple VPCs, centralize the Active Directory services in one of your existing VPCs or create a shared services VPC to centralize the domain controllers.
  6. Make sure that robust inter-Region connectivity exists between all of the Regions. Within AWS, you can leverage cross-Region VPC peering to achieve highly available private connectivity between Regions. You can also use the Transit Gateway VPC solution, as shown in Figure 1, to interconnect multiple Regions.
  7. Make sure that you’re deploying your domain controllers in a private subnet without internet access.
  8. Keep your security patches up to date on EC2 domain controllers. You can use AWS Systems Manager to patch your domain controllers in a controlled manner.
  9. Have a dedicated AWS account for directory services and don’t share the account with other general services and applications. This helps you to control access to the AWS account and add domain controller–specific automation.
  10. If your users need to manage AWS services and access AWS applications with their Active Directory credentials, we recommend integrating your identity service with the management account in AWS Organizations. You can configure the AWS Single Sign-On (AWS SSO) service to use AD Connector in a primary account VPC to connect to self-managed Active Directory domain controllers that are hosted in a Shared Services account.

    Alternatively, you can deploy AWS Managed Microsoft AD in the management account, with trust to your EC2 Active Directory domain, to allow users from any trusted domain to access AWS applications. However, you could host these EC2 domain controllers in the primary account, similar to the AWS Managed AD option.

  11. Build domain controllers with end-to-end automation using version control (for example, Git and AWS CodeCommit) and Desired State Configuration (DSC)/PowerShell.

Security considerations for EC2-hosted domains

This section explains how you can maximize the security of your extended EC2-hosted domain controller infrastructure, and use AWS services to help achieve security compliance. You should also refer to your organization’s IT system security policies to determine the most relevant recommendations to implement.

AWS operates under a shared security responsibility model, where AWS is responsible for the security of the underlying cloud infrastructure and you are responsible for securing workloads you deploy in AWS.

Our recommendations for security for EC2-hosted domains are as follows:

  1. We recommend that you place EC2-hosted domain controllers in a single dedicated AWS account or deploy them in your AWS Organizations management account. This makes it possible for you to use your Active Directory credentials for authentication to access the AWS Management Console and other AWS applications.
  2. Use tag-based policies to restrict access to domain controllers if you’re using the Shared Services account for hosting domain controllers.
  3. Take advantage of the EC2 Image Builder service to deploy a domain controller that uses a CIS standard base image. By doing this, you can avoid manual deployment by setting up an image pipeline.
  4. Secure the AWS account where the domain controllers are running by following the principle of least privilege and by using role-based access control.
  5. Take advantage of these AWS services to help secure your workloads and applications (a short sketch follows this list):
    • AWS Landing Zone–A solution that helps you more quickly set up a secure, multi-account AWS environment, based on AWS best practices.
    • AWS Organizations–A service that helps you centrally manage and govern your environment as you grow and scale your AWS resources.
    • Amazon GuardDuty–An automated threat detection service that continuously monitors for suspicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data that are stored in Amazon Simple Storage Service (Amazon S3).
    • Amazon Detective–A service that can analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.
    • Amazon Inspector–An automated security assessment service that helps improve the security and compliance of applications that are deployed on AWS.
    • AWS Security Hub–A service that provides customers with a comprehensive view of their security and compliance status across their AWS accounts. You can import critical patch compliance findings into Security Hub for easy reference.
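
The following Boto3 sketch shows one way you might enable two of these services, Amazon GuardDuty and AWS Security Hub, in the account and Region that host your domain controllers. This is illustrative only; in a multi-account environment you would typically enable these services centrally through a delegated administrator account, and the call to create a detector fails if one already exists.

import boto3

# Enable GuardDuty in the current Region (fails if a detector already exists)
guardduty = boto3.client('guardduty')
detector = guardduty.create_detector(Enable=True)
print('GuardDuty detector:', detector['DetectorId'])

# Enable Security Hub with the default security standards
securityhub = boto3.client('securityhub')
securityhub.enable_security_hub(EnableDefaultStandards=True)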

Use data encryption

AWS offers you the ability to add a layer of security to your data at rest in the cloud, providing scalable and efficient encryption features. These are some best practices for data encryption:

  1. Encrypt the Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the domain controllers, and keep the customer master key (CMK) safe with AWS Key Management Service (AWS KMS) or AWS CloudHSM, according to your security team’s guidance and policies. (A short sketch follows this list.)
  2. Consider using a separate CMK for the Active Directory and restrict access to the CMK to a specific team.
  3. Enable LDAP over SSL (LDAPS) on all domain controllers, for secure authentication, if your application supports LDAPS authentication.
  4. Deploy and manage a public key infrastructure (PKI) on AWS. For more information, see the Microsoft PKI Quick Start guide.
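
As a minimal sketch of the first two recommendations, the following Boto3 calls turn on EBS encryption by default for the Region and point the default at a dedicated key. The key alias alias/ad-domain-controllers is a placeholder; substitute the CMK that your security team designates.

import boto3

ec2 = boto3.client('ec2')

# Encrypt all new EBS volumes in this Region by default
ec2.enable_ebs_encryption_by_default()

# Use a dedicated customer managed key for domain controller volumes
# (the alias below is a placeholder for your own key)
ec2.modify_ebs_default_kms_key_id(KmsKeyId='alias/ad-domain-controllers')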

Restrict account and instance access

Provide management access for directory service accounts and domain controller instances only to the specific team that manages the Active Directory. To do this, follow these guidelines:

  1. Restrict access to an EC2 domain controller’s start, stop, and terminate behavior by using AWS Identity and Access Management (IAM) policies and resource tags (see the sketch after this list). Example: Restrict-ec2-iam
  2. Restrict access to Amazon EBS volumes and snapshots.
  3. Restrict account root access and implement multi-factor authentication (MFA) for this access.
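
The following Boto3 sketch illustrates the first recommendation with a managed policy that denies start, stop, and terminate actions on instances that carry a domain controller tag. The policy name, tag key Role, and tag value DomainController are placeholders; you would attach a policy like this to principals who should not manage the domain controllers.

import json
import boto3

iam = boto3.client('iam')

# Deny lifecycle actions on EC2 instances tagged as domain controllers
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances"
            ],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Role": "DomainController"}
            }
        }
    ]
}

iam.create_policy(
    PolicyName='deny-dc-lifecycle-actions',
    PolicyDocument=json.dumps(policy_document)
)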

Network access control for domain controllers

Whenever possible, block all unnecessary traffic to and from your domain controllers to limit the communication so that only the necessary ports are opened between a domain controller and another computer. Use these best practices:

  1. Allow only the required network ports between the client and domain controllers, and between domain controllers.
  2. Use a security group to narrow down access to domain controllers (see the sketch after this list).
  3. Use network access control lists (network ACLs) to filter Active Directory ports as this gives you better control than using ephemeral ports.
  4. Deploy domain controllers in private subnets.
  5. Route only the required subnets into the VPC that contains the domain controllers.
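
As a sketch of the security group recommendation, the following Boto3 calls allow a small subset of the Active Directory ports listed later in this post from a single client CIDR range. The VPC ID and CIDR block are placeholders, and a real deployment would cover the full port list for your environment.

import boto3

ec2 = boto3.client('ec2')

# Placeholder VPC and client network range
sg = ec2.create_security_group(
    GroupName='domain-controller-access',
    Description='Restrict client access to domain controllers',
    VpcId='vpc-0123456789abcdef0'
)

ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpPermissions=[
        {'IpProtocol': 'tcp', 'FromPort': 88, 'ToPort': 88,
         'IpRanges': [{'CidrIp': '10.0.0.0/16', 'Description': 'Kerberos'}]},
        {'IpProtocol': 'tcp', 'FromPort': 389, 'ToPort': 389,
         'IpRanges': [{'CidrIp': '10.0.0.0/16', 'Description': 'LDAP'}]},
        {'IpProtocol': 'tcp', 'FromPort': 445, 'ToPort': 445,
         'IpRanges': [{'CidrIp': '10.0.0.0/16', 'Description': 'SMB'}]}
    ]
)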

Secure administration

AWS provides services that continuously monitor your data, accounts, and workloads to help protect them from unauthorized access. We recommend that you take advantage of the following services to securely administer your domain controller’s deployment:

  1. Use AWS Systems Manager Session Manager or Run Command to manage your instances remotely. The command actions are sent to Amazon CloudWatch Logs or Amazon S3 for auditing purposes. Leaving inbound Remote Desktop Protocol (RDP), WinRM ports, and remote PowerShell ports open on your instances greatly increases the risk of entities running unauthorized or malicious commands on the instances. Session Manager helps you improve your security posture by letting you close these inbound ports, which frees you from managing SSH keys and certificates, bastion hosts, and jump boxes. (A Run Command sketch follows this list.)
  2. Use Amazon EventBridge to set up rules to detect when changes happen to your domain controller EC2 instances and to send notifications by using Amazon Simple Notification Service (Amazon SNS) when a command is run.
  3. Manage configuration drift on EC2 instances. Systems Manager State Manager helps you automate the process of keeping your domain controller EC2 instances in the desired state and integrates with Systems Manager Compliance.
  4. Avoid any manual interventions while you build and manage domain controllers. Automate the domain join process for Amazon EC2 instances from multiple AWS accounts and Regions.
  5. For developing your applications with domain controllers, use the Windows DC locator service or use the Dynamic DNS (DDNS) service of your AWS Managed Microsoft AD to locate domain controllers. Do not hard-code applications with the address of a domain controller.
  6. Use AWS Config to manage your domain controller configuration.
  7. Use Systems Manager Parameter Store or Secrets Manager to store all secrets, as well as configurations for your domain controller automation.
  8. Use version control to update the domain controller source code with pipeline approvals to avoid any misconfigurations and faulty deployments.
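
For example, the first recommendation could look like the following Boto3 sketch, which uses Run Command to check the directory service on every instance that carries a hypothetical Role=DomainController tag and sends the output to CloudWatch Logs for auditing. The tag key, tag value, and log group name are assumptions.

import boto3

ssm = boto3.client('ssm')

# Run a PowerShell health check on all tagged domain controllers
response = ssm.send_command(
    Targets=[{'Key': 'tag:Role', 'Values': ['DomainController']}],
    DocumentName='AWS-RunPowerShellScript',
    Parameters={'commands': ["Get-Service -Name 'NTDS' | Select-Object Status"]},
    CloudWatchOutputConfig={
        'CloudWatchOutputEnabled': True,
        'CloudWatchLogGroupName': '/dc/run-command'  # placeholder log group
    }
)
print('Command ID:', response['Command']['CommandId'])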

Logging and monitoring

AWS provides tools and features that you can use to see what’s happening in your AWS environment. We recommend that you use these logging and monitoring practices for your EC2-hosted domain controllers:

  1. Enable VPC Flow Logs in each account that hosts domain controllers, to monitor the traffic that’s reaching your domain controller instances (see the sketch after this list).
  2. Log Windows and Active Directory events in Amazon CloudWatch Logs for increased visibility.
  3. Consider setting up alerts and notifications for key security events for EC2 domain controllers, in real time. These alerts can be sent to your Red and Blue security response teams for further analysis.
  4. Deploy the CloudWatch agent or the Amazon Kinesis Agent for Windows on EC2 for detailed monitoring and alerting at the domain controller operating system level.
  5. Log Systems Manager API calls with AWS CloudTrail.
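
The following Boto3 sketch illustrates the first recommendation by enabling VPC Flow Logs for the VPC that hosts your domain controllers and delivering them to CloudWatch Logs. The VPC ID, log group name, and IAM role ARN are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Capture all traffic for the VPC that hosts the domain controllers
ec2.create_flow_logs(
    ResourceIds=['vpc-0123456789abcdef0'],          # placeholder VPC ID
    ResourceType='VPC',
    TrafficType='ALL',
    LogDestinationType='cloud-watch-logs',
    LogGroupName='/vpc/domain-controllers/flow-logs',
    DeliverLogsPermissionArn='arn:aws:iam::111122223333:role/vpc-flow-logs-role'
)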

Other security considerations

As a best practice, implement domain controller security at the operating system level, according to your security team’s recommendations. We recommend these options:

  1. Block executables from running on domain controllers.
  2. Prevent web browsing from domain controllers.
  3. Configure a Windows Server Core base image for domain controllers.
  4. Integrate bastion hosts with Systems Manager Session Manager and use MFA to manage domain controllers remotely.
  5. Perform regular system state backups of your Active Directory environments. Encrypt these backups.
  6. Perform Active Directory administrative management from a remote server, and avoid logging in to domain controllers interactively unless needed.
  7. For FSMO roles, you can follow the same recommendations you would follow for your on-premises deployment to determine FSMO roles on domain controllers. For more information, see these best practices from Microsoft. In the case of AWS Managed Microsoft AD, all domain controllers and FSMO role assignments are managed by AWS and don’t require you to manage or change them.

Domain controller ports

In this section, I’m going to cover the network ports and protocols that are needed to deploy domain services securely. Understanding how traffic flows and is processed by a network firewall is essential when someone requests or implements firewall rules, to avoid any connectivity issues.

Here are some common problems that you might observe because of network port blockage:

  • The RPC server is unavailable
  • Name resolution issues
  • A connectivity issue is detected with LDAP, LDAPS, and Kerberos
  • Domain replication issues
  • Domain authentication issues
  • Domain trust issues between on-premises Active Directory and AWS Managed Microsoft AD
  • AD Connector connectivity issues
  • Issues with domain join, password reset, and more

Understand Active Directory firewall ports

You must allow traffic from your on-premises network to the VPC that contains your extended domain controllers. To do this, make sure that both the firewall rules on the VPC subnets that host your EC2-hosted domain controllers and the security group rules that are configured on your domain controllers allow the network traffic that is needed to support domain trusts.

Domain controller to domain controller core ports requirements

The following table lists the port requirements for establishing DC-to-DC communication in all versions of Windows Server.

Source Destination Protocol Port Type Active Directory usage Type of traffic
Any domain controller Any domain controller TCP and UDP 53 Bi-directional User and computer authentication, name resolution, trusts DNS
TCP and UDP 88 Bi-directional User and computer authentication, forest level trusts Kerberos
UDP 123 Bi-directional Windows Time, trusts Windows Time
TCP 135 Bi-directional Replication RPC, Endpoint Mapper (EPM)
UDP 137 Bi-directional User and computer authentication NetLogon, NetBIOS name resolution
UDP 138 Bi-directional Distributed File System (DFS), Group Policy DFSN, NetLogon, NetBIOS Datagram Service
TCP 139 Bi-directional User and computer authentication, replication DFSN, NetBIOS Session Service, NetLogon
TCP and UDP 389 Bi-directional Directory, replication, user, and computer authentication, Group Policy, trusts LDAP
TCP and UDP 445 Bi-directional Replication, user, and computer authentication, Group Policy, trusts SMB, CIFS, SMB2, DFSN, LSARPC, NetLogonR, SamR, SrvSvc
TCP and UDP 464 Bi-directional Replication, user, and computer authentication, trusts Kerberos change/set password
TCP 636 Bi-directional Directory, replication, user, and computer authentication, Group Policy, trusts LDAP SSL (required only if LDAP over SSL is configured)
TCP 3268 Bi-directional Directory, replication, user, and computer authentication, Group Policy, trusts LDAP Global Catalog (GC)
TCP 3269 Bi-directional Directory, replication, user, and computer authentication, Group Policy, trusts LDAP GC SSL (required only if LDAP over SSL is configured)
TCP 5722 Bi-directional File replication RPC, DFSR (SYSVOL)
TCP 9389 Bi-directional AD DS web services SOAP
TCP Dynamic 49152–65535 Bi-directional Replication, user, and computer authentication, Group Policy, trusts RPC, DCOM, EPM, DRSUAPI, NetLogonR, SamR, File Replication Service (FRS)
UDP Dynamic 49152–65535 Bi-directional Group Policy DCOM, RPC, EPM

Note: There is no need to open a DNS port on domain controllers if you are not using a domain controller as a DNS server, or if you’re using any third-party DNS solutions.

Client to domain controller core ports requirements

The following table lists the port requirements for establishing client to domain controller communication for Active Directory.

Source Destination Protocol Port Type Usage Type of traffic
All internal company client network IP subnets Any domain controller TCP 53 Uni-directional DNS DNS
UDP 53 Uni-directional DNS DNS
TCP 88 Uni-directional Kerberos Auth Kerberos
UDP 88 Uni-directional Kerberos Auth Kerberos
UDP 123 Uni-directional Windows Time Windows Time
TCP 135 Uni-directional RPC, EPM RPC, EPM
UDP 137 Uni-directional NetLogon, NetBIOS name User and computer authentication
UDP 138 Uni-directional DFSN, NetLogon, NetBIOS datagram service DFS, Group Policy, NetBIOS, NetLogon, browsing
TCP 389 Uni-directional LDAP Directory, replication, user, and computer authentication, Group Policy, trust
UDP 389 Uni-directional LDAP Directory, replication, user, and computer authentication, Group Policy, trust
TCP 445 Uni-directional SMB, CIFS, SMB3, DFSN, LSARPC, NetLogonR, SamR, SrvSvc Replication, user, and computer authentication, Group Policy, trust
TCP 464 Uni-directional Kerberos change/set password Replication, user, and computer authentication, trust
UDP 464 Uni-directional Kerberos change/set password Replication, user, and computer authentication, trust
TCP 636 Uni-directional LDAP SSL Directory, replication, user, and computer authentication, Group Policy, trust
TCP 3268 Uni-directional LDAP GC Directory, replication, user, and computer authentication, Group Policy, trust
TCP 3269 Uni-directional LDAP GC SSL Directory, replication, user, and computer authentication, Group Policy, trust
TCP 9389 Uni-directional SOAP AD DS web service
TCP 49152–65535 Uni-directional DCOM, RPC, EPM Group Policy

Note:

  • You must allow network traffic communication from your on-premises network to the VPC that contains your AWS-hosted EC2 domain controllers.
  • You also can restrict DC-to-DC replication traffic and DC-to-client communications to specific ports.
  • Packet fragmentation can cause issues with services such as Kerberos. You should make sure that maximum transmission unit (MTU) sizes match your network devices.
  • Additionally, unless a tunneling protocol is used to encapsulate traffic to Active Directory, the ephemeral TCP port range 49152 through 65535 is required. Ephemeral ports are also known as service response ports. These ports are created dynamically for session responses for each client that establishes a session. They are required not only for Windows, but also for Linux and UNIX.

Manage domain controllers securely using a bastion host and RDGW

We recommend that you restrict the domain controller’s management by using a secure, highly available, and scalable Microsoft Remote Desktop Gateway (RDGW) solution in conjunction with bastion hosts. A bastion host that is designed to work with a specific part of the infrastructure should work with that unit only, and nothing else. Limiting the use of a bastion host to a specific set of instances (for example, the domain controllers) can help improve your security posture.

The reference architecture shown in Figure 2 restricts management access to your domain controllers, allowing access only through the RDGW over port 443. The bastion hosts in the diagram are configured to allow RDP connections only from the RDGW.
 

For additional security, follow these best practices:

  • Configure RDGW and bastion hosts to use MFA for logins.
  • Implement login restrictions by using a Group Policy Object (GPO), so that only required administrators log in to RDGW and the bastion host, based on their group membership.

Bastion host to domain controllers ports requirements

The following table lists the port requirements for establishing bastion host-to-DC communication in all versions of Windows Server.

Source Destination Protocol Ports Type Usage Type of traffic
Bastion host Any domain controller TCP 443 Uni-directional HTTPS Remote Desktop Gateway access
UDP 3389 Uni-directional RDP Remote Desktop Protocol
TCP 3389 Uni-directional RDP Remote Desktop Protocol
TCP 5985 Bi-directional WS-Man Windows Remote Management (WinRM over HTTP)
TCP 5986 Bi-directional WS-Man Windows Remote Management (WinRM over HTTPS)

You can also take advantage of Systems Manager Session Manager to manage domain joined resources instead of using bastion hosts for management. This option eliminates the need to manage bastion infrastructure and open any inbound rules. It also integrates natively with IAM and AWS CloudTrail, two services that enhance your security and audit posture. In the next section, I’ll discuss Session Manager and how it is useful in this context.

Session Manager port forwarding

Active Directory administrators are accustomed to managing domain resources by using Remote Server Administration Tools (RSAT) that are installed on either their workstations or a member server in the domain (for example, over RDP to a bastion host). Although RDP is effective, using RDP requires more management, such as managing inbound rules for port 3389. In some cases, having this port exposed to the internet might put your systems at risk; for example, systems can be susceptible to brute force or dictionary attacks. Instead of using an RDGW host and opening inbound RDP ports, we recommend using Session Manager, which provides port-forwarding ability without opening inbound ports.

Port forwarding provides the ability to forward traffic between your clients to open ports on your EC2 instance. After you configure port forwarding, you can connect to the local port and access the server application that is running inside the instance, as shown in Figure 3. To configure the port-forwarding feature in Session Manager, you can use IAM policies and the AWS-StartPortForwardingSession document.
 

Figure 3: Session Manager tunnel

To start a session using the AWS Command Line Interface (AWS CLI), run the following command.

aws ssm start-session --target "instance-id" --document-name AWS-StartPortForwardingSession --parameters portNumber="3389",localPortNumber="9999"

Note: You can use any available ephemeral port. 9999 is just an example. Install and configure the AWS CLI, if you haven’t already.

You can also start a session by using an IAM policy like the one shown in the following example. To learn more about creating IAM policies for Session Manager, see the topic Quickstart default IAM policies for Session Manager.

In this policy example, I created the policy for Systems Manager for both AWS-StartPortForwardingSession and AWS-StartSSHSession (for Linux SSH environments), for your reference and guidance.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:*:<AccountID>:instance/*",
                "arn:aws:ssm:*:<AccountID>:document/SSM-SessionManagerRunShell"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/TAGKEY": [
                        "TAGVALUE"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:document/AWS-StartSSHSession",
                "arn:aws:ssm:*:*:document/AWS-StartPortForwardingSession"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceInformation",
                "ssm:DescribeInstanceProperties",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:username}-*"
            ]
        }
    ]
}

When you use the port-forwarding feature in Session Manager, you have the option to use an auditing service like AWS CloudTrail to provide a record of the connections made to your instances. You can also monitor the session by using Amazon CloudWatch Events with Amazon SNS to receive notifications when a user starts or ends session activity.
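
For example, the following Boto3 sketch creates an EventBridge rule that matches Session Manager StartSession and TerminateSession API calls recorded by CloudTrail and publishes them to an SNS topic. The rule name and topic ARN are placeholders, and the topic’s access policy must allow EventBridge to publish to it.

import json
import boto3

events = boto3.client('events')

session_activity_pattern = {
    "source": ["aws.ssm"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ssm.amazonaws.com"],
        "eventName": ["StartSession", "TerminateSession"]
    }
}

events.put_rule(
    Name='notify-session-manager-activity',
    EventPattern=json.dumps(session_activity_pattern),
    State='ENABLED'
)

events.put_targets(
    Rule='notify-session-manager-activity',
    Targets=[{
        'Id': 'session-activity-sns',
        'Arn': 'arn:aws:sns:us-east-1:111122223333:session-activity'  # placeholder topic
    }]
)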

There is no additional charge for accessing EC2 instances by using Session Manager port forwarding. Port forwarding is available today in all AWS Regions where Systems Manager is available. You will be charged for the outgoing bandwidth from the NAT gateway or your VPC endpoints (AWS PrivateLink).

Bastion host architecture using Session Manager

In this section, I discuss how to use a bastion host with Session Manager. Session Manager uses the Systems Manager infrastructure to create an SSH-like session with an instance. Session Manager tunnels real SSH connections, which allows you to tunnel to another resource within your VPC directly from your local machine. A managed instance that you create acts as a bastion host, or gateway, to your AWS resources. The benefits of this configuration are:

  • Increased security: This configuration uses only one EC2 instance (the bastion host), and connects outbound port 443 to Systems Manager infrastructure. This allows you to use Session Manager without any inbound connections. The local resource must allow inbound traffic only from the instance that is acting as bastion host. Therefore, there is no need to open any inbound rule publicly.
  • Ease of use: You can access resources in your private VPC directly from your local machine.

In the example shown in Figure 4, the EC2 instance is acting as a domain controller that must be accessed securely by an Active Directory administrator who is working remotely via bastion host. To support this use case, I’ve chosen to use an interface VPC endpoint for Systems Manager, in order to facilitate private connectivity between Systems Manager Agent (SSM Agent) on the EC2 instance that is acting as a bastion host, and the Systems Manager service endpoints. You can configure Session Manager to enable port forwarding between the administrator’s local workstation and the private EC2 bastion instances, so that they can securely access the bastion host from the internet. This architecture helps you to eliminate RDGW infrastructure setup and reduce management efforts. You can add MFA at the bastion host level to enhance security.
 

Note:

  • If you want to use the AWS CLI to start and end sessions that connect you to your managed instances, you must first install the Session Manager plugin on your local machine.
  • Make sure that the bastion host has SSM Agent installed, because Session Manager only works with Systems Manager managed instances.
  • Follow the steps in Creating an interface endpoint to create the following interface endpoints (a sketch follows this list):
    • com.amazonaws.<region>.ssm – The endpoint for the Systems Manager service.
    • com.amazonaws.<region>.ec2messages – Systems Manager uses this endpoint to make calls from the SSM Agent to the Systems Manager service.
    • com.amazonaws.<region>.ec2 – The endpoint to the EC2 service. If you’re using Systems Manager to create VSS-enabled snapshots, you must ensure that you have this endpoint. Without the EC2 endpoint defined, a call to enumerate attached EBS volumes fails. This causes the Systems Manager command to fail.
    • com.amazonaws.<region>.ssmmessages – This endpoint is required for connecting to your instances through a secure data channel by using Session Manager, in this case the port-forwarding requirement.
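
A Boto3 sketch of the endpoint creation might look like the following; it loops over the four service names from the list above. The Region, VPC ID, subnet ID, and security group ID are placeholders.

import boto3

ec2 = boto3.client('ec2')
region = 'us-east-1'  # placeholder Region

for service in ('ssm', 'ec2messages', 'ec2', 'ssmmessages'):
    ec2.create_vpc_endpoint(
        VpcEndpointType='Interface',
        VpcId='vpc-0123456789abcdef0',
        ServiceName=f'com.amazonaws.{region}.{service}',
        SubnetIds=['subnet-0123456789abcdef0'],
        SecurityGroupIds=['sg-0123456789abcdef0'],
        PrivateDnsEnabled=True
    )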

Support for domain controllers in Session Manager

You can also use Session Manager to connect to EC2 domain controllers directly. To initiate a connection with either the default Session Manager connection or the port-forwarding feature discussed in this post, complete these steps.

To initiate a connection

  1. Create the ssm-user in your domain.
  2. Add the ssm-user to the domain groups that grant the user local access to the domain controller. One example is to add the user to the Domain Admins group.

IMPORTANT: Follow your organization’s security best practices when you grant the ssm-user access to the domain.

Conclusion

In this blog post, I described best practices for deploying domain controllers on EC2 instances and extending on-premises Active Directory to AWS for your guidance and quick reference. I also covered how you can maximize security for your extended EC2-hosted domain controller infrastructure by using AWS services. In addition, you learned about how AWS Systems Manager Session Manager port forwarding to RDP provides a simple and secure way to manage your domain resources remotely, without the need to open inbound ports and maintain RDGW hosts. Port forwarding works for Windows and Linux instances. It’s available today in all AWS Regions where Systems Manager is available. Depending on your use case, you should consider additional protection mechanisms per your organization’s security best practices.

To learn more about migrating Windows Server or SQL Server, visit Windows on AWS. For more information about how AWS can help you modernize your legacy Windows applications, see Modernize Windows Workloads with AWS. Contact us to start your modernization journey today.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mangesh Budkule

Mangesh is a Microsoft Specialist Solutions Architect at AWS. He works with customers to provide architectural guidance and technical assistance on AWS services, improving the value of their solutions when using AWS.

Introducing the Ransomware Risk Management on AWS Whitepaper

Post Syndicated from Temi Adebambo original https://aws.amazon.com/blogs/security/introducing-the-ransomware-risk-management-on-aws-whitepaper/

AWS recently released the Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF) whitepaper. This whitepaper aligns the National Institute of Standards and Technology (NIST) recommendations for security controls related to ransomware risk management with AWS services and guidance for workloads built on AWS. The whitepaper maps the technical capabilities to AWS services and implementation guidance. While this whitepaper is primarily focused on managing the risks associated with ransomware, the security controls and AWS services outlined are consistent with general security best practices.

The National Cybersecurity Center of Excellence (NCCoE) at NIST has published Practice Guides (NIST 1800-11, 1800-25, and 1800-26) to demonstrate how organizations can develop and implement security controls to combat the data integrity challenges posed by ransomware and other destructive events. Each of the Practice Guides includes a detailed set of goals that are designed to help organizations establish the ability to identify, protect, detect, respond to, and recover from ransomware events.

The Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF) whitepaper helps AWS customers confidently meet the goals of the Practice Guides in the following categories:

Identify and protect

  • Identify systems, users, data, applications, and entities on the network.
  • Identify vulnerabilities in enterprise components and clients.
  • Create a baseline for the integrity and activity of enterprise systems in preparation for an unexpected event.
  • Create backups of enterprise data in advance of an unexpected event.
  • Protect these backups and other potentially important data against alteration.
  • Manage enterprise health by assessing machine posture.

Detect and respond

  • Detect malicious and suspicious activity generated on the network by users, or from applications that could indicate a data integrity event.
  • Mitigate and contain the effects of events that can cause a loss of data integrity.
  • Monitor the integrity of the enterprise for detection of events and after-the-fact analysis.
  • Use logging and reporting features to speed response time for data integrity events.
  • Analyze data integrity events for the scope of their impact on the network, enterprise devices, and enterprise data.
  • Analyze data integrity events to inform and improve the enterprise’s defenses against future attacks.

Recover

  • Restore data to its last known good configuration.
  • Identify the correct backup version (free of malicious code and data for data restoration).
  • Identify altered data, as well as the date and time of alteration.
  • Determine the identity/identities of those who altered data.

To achieve the above goals, the Practice Guides outline a set of technical capabilities that should be established, and provide a mapping between the generic application term and the security controls that the capability provides.

AWS services can be mapped to these technical capabilities as outlined in the Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF) whitepaper. AWS offers a comprehensive set of services that customers can implement to establish the necessary technical capabilities to manage the risks associated with ransomware. By following the mapping in the whitepaper, AWS customers can identify which services, features, and functionality can help their organization identify, protect against, detect, respond to, and recover from ransomware events. If you’d like additional information about cloud security at AWS, please contact us.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Temi Adebambo

Temi is the Senior Manager for the America’s Security and Network Solutions Architect team. His team is focused on working with customers on cloud migration and modernization, cybersecurity strategy, architecture best practices, and innovation in the cloud. Before AWS, he spent over 14 years as a consultant, advising CISOs and security leaders.

Manage your AWS Directory Service credentials using AWS Secrets Manager

Post Syndicated from Ashwin Bhargava original https://aws.amazon.com/blogs/security/manage-your-aws-directory-service-credentials-using-aws-secrets-manager/

AWS Secrets Manager helps you protect the secrets that are needed to access your applications, services, and IT resources. With this service, you can rotate, manage, and retrieve database credentials, API keys, OAuth tokens, and other secrets throughout their lifecycle. The secret value rotation feature has built-in integration for services like Amazon Relational Database Service (Amazon RDS), whose credentials can be rotated. The same integration functionality can also be extended to other types of secrets, including API keys and OAuth tokens, with the help of AWS Lambda functions.

This blog post provides details on how Secrets Manager can be used to store and rotate the admin password of AWS Directory Service at a specified frequency. Customers who use directory services in AWS can deploy the solution in this blog post to minimize the effort their operations team spends manually rotating the password (which is one of the best practices of password management). These customers can also benefit from the secure API access of Secrets Manager to allow access by applications that use Active Directory–specific accounts. A good example is an application that resets passwords for AD users, which can be done through the API.

Solution overview

When you configure AWS Directory Service, one of the inputs the service expects is the password for the admin user (administrator). By using an AWS Lambda function and Secrets Manager, you can store the password and rotate it periodically.

Figure 1 shows the architecture diagram for this solution.
 

Figure 1: Architecture diagram

The workflow is as follows:

  1. During initial setup (which can be performed either manually or through a CloudFormation template), the password of the admin user is stored as a secret in Secrets Manager. The secret is in JSON format and contains three fields: Directory ID, UserName, and Password. The secret is encrypted with an AWS KMS key to provide an added layer of security.
  2. This secret is attached to a Lambda function that controls rotation.
  3. This rotation Lambda function generates a new password, updates Active Directory, and then updates the secret. The function can be invoked on an as-needed basis or at a desired interval. The CloudFormation template we provide in this post schedules the rotation at a 30-day interval.
  4. Applications can securely fetch the new secret value from Secrets Manager (see the sketch after this list).
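
For example, an application could retrieve the current admin credentials with a short Boto3 call like the following. The secret name DSAdminPswd matches the secret that the CloudFormation template in this post creates; the JSON key names shown here are assumptions based on the field descriptions above.

import json
import boto3

secretsmanager = boto3.client('secretsmanager')

# Fetch the latest (AWSCURRENT) version of the directory admin secret
response = secretsmanager.get_secret_value(SecretId='DSAdminPswd')
secret = json.loads(response['SecretString'])

# Key names are assumed from the Directory ID, UserName, and Password fields
username = secret.get('UserName')
password = secret.get('Password')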

Prerequisites and assumptions

To implement this solution, you need an AWS account to test the solution and access AWS services.

Also be aware of the following:

  1. In this solution, you will configure all the (supported) services in the same virtual private cloud (VPC) to simplify networking considerations.
  2. The predefined admin user name for Simple Active Directory is Administrator.
  3. The predefined password is a random 12-character string.

Important: The AWS CloudFormation template that we provide deploys a Simple Active Directory. This is for testing and demonstration purposes; you can modify or reuse the solution for other types of Active Directory solutions.

Deploy the solution

To deploy the solution, you first provision the baseline networking and other resources by using a CloudFormation stack.

The resource provisioning in this step creates these resources:

  • An Amazon Virtual Private Cloud (Amazon VPC) with two private subnets
  • AWS Directory Service installed and configured in the VPC
  • A Secrets Manager secret with rotation enabled
  • A Lambda function inside the VPC
  • These AWS Identity and Access Management (IAM) roles and permissions:
    • Secrets Manager has permission to invoke Lambda functions
    • The Lambda function has permission to update the secret in Secrets Manager
    • The Lambda function has permission to update the password for Directory Service

To deploy the solution by using the CloudFormation template

  1. You can use this downloadable template to set up the resources. To launch directly through the console, choose the following Launch Stack button, which creates the stack in the us-east-1 AWS Region.
    Select the Launch Stack button to launch the template
  2. Choose Next to go to the Specify stack details page.
  3. The bucket hosting the Lambda function code is predefined for ease of implementation, but you can edit the bucket name if necessary. Specify any other template details as needed, and then choose Next.
  4. (Optional) On the Configure Stack Options page, enter any tags, and then choose Next.
  5. On the Review page, select the check box for I acknowledge that AWS CloudFormation might create IAM resources with custom names, and choose Create stack.

It takes approximately 20–25 minutes for the provisioning to complete. When the stack status shows Create Complete, review the outputs that were created by navigating to the Outputs tab, as shown in Figure 2.
 

Figure 2: Outputs created by the CloudFormation template

Now that the stack creation has completed successfully, you should validate the resources that were created.

To validate the resources

  1. Navigate to the AWS Directory Service console. You should see a new directory service that has the corp.com directory set up.
  2. Navigate to the AWS Secrets Manager console and review the secret that was created, called DSAdminPswd. Choose the secret, and then choose Retrieve secret value to reveal the secret value.
     
    Figure 3: Checking the secret value in the Secrets Manager console

  3. As you might have noticed, the secret value changed from what was initially generated in the template. The Lambda function was invoked when it was attached to the secret, which caused the secret to rotate. To verify that the secret value changed, navigate to the Amazon CloudWatch console, and then navigate to Log groups.
  4. In the search bar, type the Lambda function name dj-rotate-lambda to filter on the log group name.
     
    Figure 4: CloudWatch log groups

  5. Choose the log group /aws/lambda/dj-rotate-lambda to open the detailed log streams.
  6. Look at the Log streams and open the recent log stream to view the series of rotation events.
     
    Figure 5: The log data for a complete rotation

    You should see that each of the four stages of rotation (create, set, test, and finish) is called in the right sequence. A Success message in the finishSecret stage confirms the successful rotation of the secret value.

The next step is to rotate the secret manually or set a policy for rotation.

To rotate the secret

The CloudFormation automation has set the rotation configuration to rotate the secret every 30 days. You can alternatively initiate another rotation by choosing Rotate secret immediately, as shown in Figure 6. You will observe the log stream (in CloudWatch Logs) changing, followed by the new secret value.
 

Figure 6: Manual rotation of the secret

You can also edit the rotation configuration by choosing Edit rotation and configuring the rotation policy that suits your organizational standards, as shown in Figure 7.
 

Figure 7: Editing the rotation configuration
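
If you prefer to manage rotation programmatically instead of through the console, a Boto3 sketch like the following triggers an immediate rotation or updates the schedule. The account ID and Lambda ARN are placeholders that stand in for the dj-rotate-lambda rotation function the CloudFormation stack created.

import boto3

secretsmanager = boto3.client('secretsmanager')

# Rotate the directory admin secret immediately
secretsmanager.rotate_secret(SecretId='DSAdminPswd')

# Or change the rotation schedule (placeholder ARN for the rotation function)
secretsmanager.rotate_secret(
    SecretId='DSAdminPswd',
    RotationLambdaARN='arn:aws:lambda:us-east-1:111122223333:function:dj-rotate-lambda',
    RotationRules={'AutomaticallyAfterDays': 60}
)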

Code walkthrough

The rotation Lambda function works in four stages (a skeleton sketch follows this list):

  1. CreateSecret – In this stage, the Lambda function creates a new password for the administrator user and sets up the staging label AWSPENDING for the secret’s new value.
  2. SetSecret – In this stage, the Lambda function fetches the newly generated password by using the label AWSPENDING and sets it as the password to the Active Directory administrator user.
  3. TestSecret – In this stage, the Lambda function verifies that the password is working by using the kinit command and the necessary dependent libraries of the Linux OS (the base OS for Lambda functions). If successful, the function continues to the next stage. In the case of failure, the catch block reverts the password of the Active Directory administrator user to the value in the AWSCURRENT label.
  4. FinishSecret – This is the final stage, where the Lambda function moves the AWSCURRENT label from the current version of the secret to the new version. At the same time, the old version of the secret is given the AWSPREVIOUS label.
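
The following skeleton sketches how a rotation handler of this kind is typically structured; Secrets Manager passes the secret ARN, a request token, and the current step in the invocation event. The four stage functions are placeholders that stand in for the logic described above.

import boto3

def lambda_handler(event, context):
    """Dispatch the rotation stage that Secrets Manager is requesting."""
    arn = event['SecretId']                # ARN of the secret being rotated
    token = event['ClientRequestToken']    # version ID for the new secret value
    step = event['Step']                   # createSecret, setSecret, testSecret, or finishSecret

    service_client = boto3.client('secretsmanager')

    steps = {
        'createSecret': create_secret,
        'setSecret': set_secret,
        'testSecret': test_secret,
        'finishSecret': finish_secret,
    }
    if step not in steps:
        raise ValueError('Unknown rotation step: ' + step)
    steps[step](service_client, arn, token)

# Placeholder stage functions; each would implement the behavior described above.
def create_secret(service_client, arn, token):
    pass  # generate a new password and stage it under the AWSPENDING label

def set_secret(service_client, arn, token):
    pass  # apply the AWSPENDING password to the Active Directory administrator

def test_secret(service_client, arn, token):
    pass  # verify the new password works (for example, by using kinit)

def finish_secret(service_client, arn, token):
    pass  # move AWSCURRENT to the new version; the old version receives AWSPREVIOUS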

The Lambda function is written for the Python 3.7 runtime and uses AWS SDK for Python (Boto3) API calls to interact with Secrets Manager and Directory Service.

The directory ID and Secrets Manager endpoint are supplied as environment variables to the Lambda function, as shown in Figure 8. The secret ID is fetched from the event context.
 

Figure 8: Environment variables setup

You can download the Lambda code that is used for the rotation logic and modify it to suit your organizational needs. For instance, the random password is configured to have a length of 12 characters, excluding certain special characters and punctuation, as shown in the following code snippet. You can modify this configuration as needed.

newpasswd = service_client.get_random_password(PasswordLength=12,ExcludeCharacters='/@"\'\\',ExcludePunctuation=True)

As mentioned in the Prerequisites section, make sure that you do proper testing in development or test environments before proceeding to deploy the solution in production environments.

Cleanup

After you complete and test this solution, clean up the resources by deleting the AWS CloudFormation stack called aws-ds-creds-manager. For more information on deleting the stacks, see Deleting a stack on the AWS CloudFormation console.

Conclusion

In this post, we demonstrated how to use AWS Secrets Manager to store and rotate the AWS Directory Service Simple Active Directory admin password. You can also adapt this solution to rotate the admin password for an AWS Managed Microsoft AD directory.

There are many other code samples listed in the AWS Code Sample Catalog that show how to rotate the passwords for other database services that are supported by this service.

You can find additional rotation Lambda function examples in the open source AWS library for Secrets Manager.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Secrets Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ashwin Bhargava

Ashwin is a DevOps Consultant at AWS working in Professional Services Canada. He is a DevOps expert and a security enthusiast with more than 13 years of development and consulting experience.

Author

Satya Vajrapu

Satya is a Senior DevOps Consultant with AWS. He works with customers to help design, architect, and develop various practices and tools in the DevOps and cloud toolchain.

AWS achieves FedRAMP P-ATO for 18 additional services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Alexis Robinson original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-18-additional-services-in-the-aws-us-east-west-and-aws-govcloud-us-regions/

We’re pleased to announce that 18 additional AWS services have achieved Provisional Authority to Operate (P-ATO) by the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB). The following are the 18 additional services with FedRAMP authorization for the US federal government, and organizations with regulated workloads:

  • Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily.
  • Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning to extract health data from medical text–no machine learning experience is required.
  • Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service that gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises.
  • Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service.
  • Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud that lets you easily create and publish interactive BI dashboards that include Machine Learning-powered insights.
  • Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application.
  • Amazon Textract is a machine learning service that automatically extracts text, handwriting, and other data from scanned documents that goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
  • AWS Backup enables you to centralize and automate data protection across AWS services.
  • AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.
  • AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
  • AWS Ground Station is a fully managed service that lets you control satellite communications, process data, and scale your operations without having to worry about building or managing your own ground station infrastructure.
  • AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise. AWS OpsWorks for Chef Automate provides a fully managed Chef Automate server and suite of automation tools that give you workflow automation for continuous deployment, automated testing for compliance and security, and a user interface that gives you visibility into your nodes and node statuses. AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management.
  • AWS Personal Health Dashboard provides alerts and guidance for AWS events that might affect your environment, and provides proactive and transparent notifications about your specific AWS environment.
  • AWS Resource Groups grants you the ability to organize your AWS resources, and manage and automate tasks on large numbers of resources at one time.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
  • AWS Storage Gateway is a set of hybrid cloud storage services that gives you on-premises access to virtually unlimited cloud storage.
  • AWS Systems Manager provides a unified user interface so you can track and resolve operational issues across your AWS applications and resources from a central place.
  • AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture.

The following services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

Service authorizations by Region

Service FedRAMP Moderate in AWS US East/West FedRAMP High in AWS GovCloud (US)
Amazon Cognito  
Amazon Comprehend Medical
Amazon Elastic Kubernetes Service (Amazon EKS)  
Amazon Pinpoint  
Amazon QuickSight  
Amazon Simple Email Service (Amazon SES)  
Amazon Textract
AWS Backup
AWS CloudHSM  
AWS CodePipeline
AWS Ground Station  

AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise  
AWS Personal Health Dashboard
AWS Resource Groups  
AWS Security Hub  
AWS Storage Gateway
AWS Systems Manager
AWS X-Ray

 
AWS is continually expanding the scope of our compliance programs to help customers use authorized services for sensitive and regulated workloads. Today, AWS offers 100 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 90 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

To learn what other public sector customers are doing on AWS, see our Customer Success Stories page. For up-to-date information when new services are added, see our AWS Services in Scope by Compliance Program page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Alexis Robinson

Alexis is the Head of the U.S. Government Security & Compliance Program for AWS. For over 10 years, she has served federal government clients advising on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment including cloud services applicable to AWS East/West and AWS GovCloud (US) Regions.

137 AWS services achieve HITRUST certification

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/137-aws-services-achieve-hitrust-certification/

We’re excited to announce that 137 Amazon Web Services (AWS) services are certified for the Health Information Trust Alliance (HITRUST) Common Security Framework (CSF) for the 2021 cycle.

The full list of AWS services that were audited by a third-party auditor and certified under HITRUST CSF is available on our Services in Scope by Compliance Program page. You can view and download our HITRUST CSF certification on demand through AWS Artifact.

AWS HITRUST CSF certification is available for customer inheritance

You don’t have to assess inherited controls for your HITRUST validated assessment, because AWS already has! You can deploy business solutions into AWS and inherit our HITRUST CSF certification, provided that you use only in-scope services and apply the controls detailed on the HITRUST website that you are responsible for implementing.

With the HITRUST certification, you, as an AWS customer, can tailor your security control baselines to a variety of factors—including, but not limited to, regulatory requirements and organization type. The HITRUST CSF is widely adopted by leading organizations in a variety of industries as part of their approach to security and privacy. Visit the HITRUST website for more information.

As always, we value your feedback and questions and are committed to helping you achieve and maintain the highest standard of security and compliance. Feel free to contact the team through AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonali Vaidya

Sonali is a Security Assurance Manager at AWS. She leads the global HITRUST assurance program within AWS. Sonali considers herself a perpetual student of information security, and holds multiple certifications like CISSP, PCIP, CCSK, CEH, CISA, ISO 27001 Lead Auditor, ISO 22301 Lead Auditor, C-GDPR Practitioner, and ITIL.

AWS achieves GSMA security certification for US East (Ohio) Region

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-gsma-security-certification-for-us-east-ohio-region/

We continue to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that our US East (Ohio) Region (us-east-2) is now certified by the GSM Association (GSMA) under its Security Accreditation Scheme Subscription Management (SAS-SM) with scope Data Center Operations and Management (DCOM). This alignment with GSMA requirements demonstrates our continuous commitment to adhere to the heightened expectations for cloud service providers. AWS customers who provide embedded Universal Integrated Circuit Card (eUICC) for mobile devices can run their remote provisioning applications with confidence in the AWS Cloud in the GSMA-certified US East (Ohio) Region.

As of this writing, 128 services offered in the US East (Ohio) Region are in scope of this certification. For up-to-date information, including when additional services are added, see the AWS Services in Scope by Compliance Program and choose GSMA.

AWS was evaluated by independent third-party auditors chosen by GSMA. The Certificate of Compliance illustrating the AWS GSMA compliance status is available on the GSMA website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a Security Audit Program Manager at AWS, based in New York. She leads various security audit programs across Europe. She previously worked in security assurance and technology risk management in the financial industry for 10 years.

Author

Karthik Amrutesh

Karthik is a Senior Manager, Security Assurance at AWS, based in New York. He leads a team responsible for audits, attestations, and certifications across the European Union. Karthik has previously worked in risk management, security assurance, and technology audits for over 18 years.

How to automate incident response to security events with AWS Systems Manager Incident Manager

Post Syndicated from Sumit Patel original https://aws.amazon.com/blogs/security/how-to-automate-incident-response-to-security-events-with-aws-systems-manager-incident-manager/

Incident response is a core security capability for organizations to develop, and a core element in the AWS Cloud Adoption Framework (AWS CAF). Responding to security incidents quickly is important to minimize their impacts. Automating incident response helps you scale your capabilities, rapidly reduce the scope of compromised resources, and reduce repetitive work by your security team.

In this post, I show you how to use Incident Manager, a capability of AWS Systems Manager, to build an effective automated incident management and response solution to security events.

You’ll walk through three common security-related events and how you can use Incident Manager to automate your response.

  • AWS account root user activity: An Amazon Web Services (AWS) account root user has full access to all your resources for all AWS services, including billing information. It's therefore essential to follow the best practice of using the root user only to create your first IAM user, to securely lock away the root user credentials, and to use them only for the few account and service management tasks that require them. It's also critical to be aware when root user activity occurs in your AWS account.
  • Amazon GuardDuty high severity findings: Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help protect your AWS accounts and workloads. In this blog post, you’ll learn how to initiate an incident response plan whenever a high severity finding is discovered.
  • AWS Config rule change and S3 bucket allowing public access: AWS Config enables continuous monitoring of your AWS resources, making it simple to assess, audit, and record resource configurations and changes. You will use AWS Config to monitor your Amazon Simple Storage Service (S3) bucket ACLs and policies for settings that allow public read or public write access.

Prerequisites

If this is your first time using Incident Manager, follow the initial onboarding steps in Getting prepared with Incident Manager.

Incident Manager can start managing incidents automatically using Amazon CloudWatch or Amazon EventBridge. For the solution in this blog post, you will use EventBridge to capture events and start an incident.

To complete the steps in this walkthrough, you need the following:

Create an Incident Manager response plan

A response plan ties together the contacts, escalation plan, and runbook. When an incident occurs, a response plan defines who to engage, how to engage, which runbook to initiate, and which metrics to monitor. By creating a well-defined response plan, you can save your security team time down the road.

Add contacts

Your contacts should include everyone who might be involved in the incident. Follow these steps to add a contact.

To add contacts

  1. Open the AWS Management Console, go to AWS Systems Manager, expand Operations Management, and then expand Incident Manager.
  2. Choose Contacts, and then choose Create contact.

    Figure 1: Adding contact details

    Figure 1: Adding contact details

  3. On Contact information, enter names and define contact channels for your contacts.
  4. Under Contact channel, you can select Email, SMS, or Voice. You can also add multiple contact channels.
  5. In Engagement plan, specify how fast to engage your responders. In the example illustrated below, the incident responder will be engaged through email immediately (0 minutes) when an incident is detected and then through SMS 10 minutes into an incident. Complete the fields and then choose Create.

    Figure 2: Engagement plan

    Figure 2: Engagement plan
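
If you prefer to manage contacts as code, the following boto3 sketch creates a contact with an email channel and a simple engagement plan. This is a minimal sketch: the alias, display name, and email address are placeholders, and you should confirm the exact parameter shapes against the current SSM Contacts SDK documentation. Note that a new contact channel must be activated with a verification code before it can be engaged.

    import boto3

    contacts = boto3.client("ssm-contacts")

    # Create the contact with an empty plan; stages are added after the
    # contact channel exists, because each stage references a channel ARN.
    contact_arn = contacts.create_contact(
        Alias="security-responder",             # placeholder alias
        DisplayName="Security Team Responder",  # placeholder name
        Type="PERSONAL",
        Plan={"Stages": []},
    )["ContactArn"]

    # Add an email contact channel for the responder (placeholder address).
    channel_arn = contacts.create_contact_channel(
        ContactId=contact_arn,
        Name="work-email",
        Type="EMAIL",
        DeliveryAddress={"SimpleAddress": "responder@example.com"},
    )["ContactChannelArn"]

    # Engage the email channel at the start of the first stage; the stage
    # duration is how long Incident Manager waits before any later stage.
    contacts.update_contact(
        ContactId=contact_arn,
        Plan={
            "Stages": [
                {
                    "DurationInMinutes": 10,
                    "Targets": [
                        {
                            "ChannelTargetInfo": {
                                "ContactChannelId": channel_arn,
                                "RetryIntervalInMinutes": 1,
                            }
                        }
                    ],
                }
            ]
        },
    )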

Create a response plan

Once you’ve created your contacts, you can create a response plan to define how to respond to incidents. Refer to the Best Practices for Response Plans.

Note: (Optional) You can also create an escalation plan that lets you further define the escalation path for your contacts. You can learn more in Create an escalation plan.

To create a response plan

  1. Open the Incident Manager console, and choose Response plans in the left navigation pane.
  2. Choose Create response plan.
  3. Enter a unique and identifiable name for your response plan.
  4. Enter an incident title. The incident title helps to identify an incident on the incidents home page.
  5. Select an appropriate Impact based on the potential scope of the incident.

    Figure 3: Selecting your impact level

    Figure 3: Selecting your impact level

  6. (Optional) Choose a chat channel for the incident responders to interact in during an incident. For more information about chat channels, see Chat channels.
  7. (Optional) For Engagement, you can choose any number of contacts and escalation plans. For this solution, select the security team responder that you created earlier as one of your contacts.

    Figure 4: Adding engagements

    Figure 4: Adding engagements

  8. (Optional) You can also create a runbook that can drive the incident mitigation and response. For further information, refer to Runbooks and automation.
  9. Under Execution permissions, choose Create an IAM role using a template. Under Role name, select the IAM role you created in the prerequisites that allows Incident Manager to run SSM automation documents, and then choose Create response plan.
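
If you manage Incident Manager resources with the AWS SDK rather than the console, a response plan similar to the one above can be created with a call like the following. This is a sketch only: the name, title, impact value, and contact ARN are placeholders, and optional settings such as chat channels and runbook actions are omitted.

    import boto3

    incidents = boto3.client("ssm-incidents")

    response = incidents.create_response_plan(
        name="SecurityEventResponsePlan",
        displayName="Security event response plan",
        incidentTemplate={
            "title": "Security event detected",
            "impact": 3,  # adjust to your organization's impact scale
        },
        # Contact or escalation plan ARNs created earlier (placeholder ARN).
        engagements=[
            "arn:aws:ssm-contacts:us-east-1:111122223333:contact/security-responder"
        ],
    )
    print(response["arn"])  # response plan ARN, used later as an EventBridge target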

Monitor AWS account root activity

When you first create an AWS account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the root user and is accessed by signing in with the email address and password that you used to create the account.

An AWS account root user has full access to all your resources for all AWS services, including billing information. It is critical to prevent root user access from unauthorized use and to be aware whenever root user activity occurs in your AWS account. For more information about AWS recommendations, see Security best practices in IAM.

To be certain that all root user activity is authorized and expected, it’s important to monitor root API calls to a given AWS account and to be notified when root user activity is detected.

Create an EventBridge rule

Create and validate an EventBridge rule to capture AWS account root activity.

To create an EventBridge rule

  1. Open the EventBridge console.
  2. In the navigation pane, choose Rules, and then choose Create rule.
  3. Enter a name and description for the rule.
  4. For Define pattern, choose Event pattern.
  5. Choose Custom pattern.
  6. Enter the following event pattern:
    {
      "detail-type": [
        "AWS API Call via CloudTrail",
        "AWS Console Sign In via CloudTrail"
      ],
      "detail": {
        "userIdentity": {
          "type": [
            "Root"
          ]
        }
      }
    }
    

  7. For Select targets, choose Incident Manager response plan.
  8. For Response plan, choose SecurityEventResponsePlan, which you created when you set up Incident Manager.
  9. To create an IAM role automatically, choose Create a new role for this specific resource. To use an existing IAM role, choose Use existing role.
  10. (Optional) Enter one or more tags for the rule.
  11. Choose Create.
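
As an alternative to the console, the same rule can be created with the EventBridge API. In this sketch the rule name is arbitrary, and the response plan ARN and the IAM role ARN (which must allow EventBridge to start incidents) are placeholders.

    import boto3
    import json

    events = boto3.client("events")

    root_activity_pattern = {
        "detail-type": [
            "AWS API Call via CloudTrail",
            "AWS Console Sign In via CloudTrail",
        ],
        "detail": {"userIdentity": {"type": ["Root"]}},
    }

    # Create the rule with the root user event pattern.
    events.put_rule(
        Name="root-user-activity",
        Description="Start an incident when root user activity is detected",
        EventPattern=json.dumps(root_activity_pattern),
        State="ENABLED",
    )

    # Target the Incident Manager response plan (placeholder ARNs).
    events.put_targets(
        Rule="root-user-activity",
        Targets=[
            {
                "Id": "incident-manager-response-plan",
                "Arn": "arn:aws:ssm-incidents::111122223333:response-plan/SecurityEventResponsePlan",
                "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeIncidentManagerRole",
            }
        ],
    )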

To validate the rule

  1. Sign in using root credentials.
  2. This console sign-in activity by the root user should invoke the Incident Manager response plan and show an open incident, as shown in Figure 5. The contact channels that you defined earlier in your engagement plan will be engaged.
Figure 5: Incident Manager open incidents

Figure 5: Incident Manager open incidents

Watch for GuardDuty high severity findings

GuardDuty is a monitoring service that analyzes AWS CloudTrail management and Amazon S3 data events, Amazon Virtual Private Cloud (Amazon VPC) flow logs, and Amazon Route 53 DNS logs to generate security findings for your account. Once GuardDuty is enabled, it immediately starts monitoring your environment.

GuardDuty integrates with EventBridge, which can be used to send findings data to other applications and services for processing. With EventBridge, you can invoke automatic responses to GuardDuty findings by connecting finding events to targets such as an Incident Manager response plan.

Create an EventBridge rule

You’ll use an EventBridge rule to capture GuardDuty high severity findings.

To create an EventBridge rule

  1. Open the EventBridge console.
  2. In the navigation pane, select Rules, and then choose Create rule.
  3. Enter a name and description for the rule.
  4. For Define pattern, choose Event pattern.
  5. Choose Custom pattern.
  6. Enter the following event pattern, which filters on GuardDuty high severity findings:
    {
      "source": ["aws.guardduty"],
      "detail-type": ["GuardDuty Finding"],
      "detail": {
        "severity": [
          7.0,
          7.1,
          7.2,
          7.3,
          7.4,
          7.5,
          7.6,
          7.7,
          7.8,
          7.9,
          8,
          8.0,
          8.1,
          8.2,
          8.3,
          8.4,
          8.5,
          8.6,
          8.7,
          8.8,
          8.9
        ]
      }
    } 
    

  7. For Select targets, choose Incident Manager response plan.
  8. For Response plan, select SecurityEventResponsePlan, which you created when you set up Incident Manager.
  9. To create an IAM role automatically, choose Create a new role for this specific resource. To use an IAM role that you created before, choose Use existing role.
  10. (Optional) Enter one or more tags for the rule.
  11. Choose Create.

To validate the rule

To test and validate whether the above rule is now functional, you can generate sample findings within the GuardDuty console.

  1. Open the GuardDuty console.
  2. In the navigation pane, choose Settings.
  3. On the Settings page, under Sample findings, choose Generate sample findings.
  4. In the navigation pane, choose Findings. The sample findings are displayed on the Current findings page with the prefix [SAMPLE].
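
You can also generate the sample findings programmatically, which is handy for repeatable testing. This sketch assumes GuardDuty is already enabled in the Region and simply uses the first detector it finds; the finding type shown is one example of a high severity type.

    import boto3

    guardduty = boto3.client("guardduty")

    # There is one GuardDuty detector per account per Region.
    detector_id = guardduty.list_detectors()["DetectorIds"][0]

    # Generate a sample high severity finding to exercise the EventBridge rule.
    guardduty.create_sample_findings(
        DetectorId=detector_id,
        FindingTypes=["Backdoor:EC2/C&CActivity.B!DNS"],
    )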

Once you have generated sample findings, your Incident Manager response plan will be invoked almost immediately and the engagement plan with your contacts will begin.

You can select an open incident in the Incident Manager console to see additional details from the GuardDuty finding. Figure 6 shows a high severity finding.

Figure 6: Incident Manager open incident for GuardDuty high severity finding

Figure 6: Incident Manager open incident for GuardDuty high severity finding

Monitor S3 bucket settings for public access

AWS Config enables continuous monitoring of your AWS resources, making it easier to assess, audit, and record resource configurations and changes. AWS Config does this through rules that define the desired configuration state of your AWS resources. AWS Config provides a number of AWS managed rules that address a wide range of security concerns such as checking that your Amazon Elastic Block Store (Amazon EBS) volumes are encrypted, your resources are tagged appropriately, and multi-factor authentication (MFA) is enabled for root accounts.

Set up AWS Config and EventBridge

You will use AWS Config to monitor your S3 bucket ACLs and policies for violations that could allow public read or public write access. If AWS Config finds a violation, it will initiate an Amazon EventBridge rule that invokes your Incident Manager response plan.

To create the AWS Config rule to capture S3 bucket public access

  1. Sign in to the AWS Config console.
  2. If this is your first time in the AWS Config console, refer to the Getting Started guide for more information.
  3. Select Rules from the menu and choose Add Rule.
  4. On the AWS Config rules page, enter S3 in the search box and select the s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited rules, and then choose Next.

    Figure 7: AWS Config rules

    Figure 7: AWS Config rules

  5. Leave the Configure rules page as default and select Next.
  6. On the Review page, select Add Rule. AWS Config is now analyzing your S3 buckets, capturing their current configurations, and evaluating the configurations against the rules you selected.
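
The same managed rules can also be deployed with the AWS Config API, which is useful if you manage rules as code. This sketch assumes an AWS Config configuration recorder and delivery channel are already set up in the account.

    import boto3

    config = boto3.client("config")

    # AWS managed rule identifiers for the public read and public write checks.
    for rule_name, source_identifier in [
        ("s3-bucket-public-read-prohibited", "S3_BUCKET_PUBLIC_READ_PROHIBITED"),
        ("s3-bucket-public-write-prohibited", "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"),
    ]:
        config.put_config_rule(
            ConfigRule={
                "ConfigRuleName": rule_name,
                "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
                "Source": {"Owner": "AWS", "SourceIdentifier": source_identifier},
            }
        )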

To create the EventBridge rule

  1. Open the Amazon EventBridge console
  2. In the navigation pane, choose Rules, and then choose Create rule.
  3. Enter a name and description for the rule.
  4. For Define pattern, choose Event pattern.
  5. Choose Custom pattern.
  6. Enter the following event pattern, which filters on the AWS Config rule s3-bucket-public-write-prohibited becoming noncompliant:
    {
      "source": ["aws.config"],
      "detail-type": ["Config Rules Compliance Change"],
      "detail": {
        "messageType": ["ComplianceChangeNotification"],
        "configRuleName": ["s3-bucket-public-write-prohibited", ""],
        "newEvaluationResult": {
          "complianceType": [
            "NON_COMPLIANT"
          ]
        }
      }
    }
    

  7. For Select targets, choose Incident Manager response plan.
  8. For Response plan, choose SecurityEventResponsePlan, which you created earlier when setting up Incident Manager.
  9. To create an IAM role automatically, choose Create a new role for this specific resource. To use an existing IAM role, choose Use existing role.
  10. (Optional) Enter one or more tags for the rule.
  11. Choose Create.

To validate the rule

  1. Create a compliant test S3 bucket with no public read or write access through either an ACL or a policy.
  2. Change the ACL of the bucket to allow public listing of objects so that the bucket is non-compliant.

    Figure 8: Amazon S3 console

    Figure 8: Amazon S3 console

  3. After a few minutes, you should see the AWS Config rule evaluate the bucket as noncompliant, which invokes the EventBridge rule and, in turn, your Incident Manager response plan.
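
If you want to script this validation, the following sketch makes a test bucket noncompliant by loosening its bucket-level Block Public Access settings and applying a public ACL. The bucket name is a placeholder, the bucket's object ownership setting must permit ACLs, and account-level Block Public Access must not override the bucket-level settings. Remember to revert or delete the test bucket afterwards.

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-dr-test-bucket"  # placeholder; use an existing test bucket

    # Allow ACL changes on the bucket (bucket-level Block Public Access).
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": False,
            "IgnorePublicAcls": False,
            "BlockPublicPolicy": False,
            "RestrictPublicBuckets": False,
        },
    )

    # Grant public write through the ACL so that
    # s3-bucket-public-write-prohibited evaluates as NON_COMPLIANT.
    s3.put_bucket_acl(Bucket=bucket, ACL="public-read-write")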

Summary

In this post, I showed you how to use Incident Manager to monitor for security events and invoke a response plan through Amazon EventBridge, using AWS CloudTrail API activity (for root user sign-in), Amazon GuardDuty (for high severity findings), and AWS Config (to enforce policies such as prohibiting public write access to an S3 bucket) as event sources. I demonstrated how you can create an incident management and response plan that uses automation to respond to and mitigate security incidents in a timely manner. To learn more about Incident Manager, see What Is AWS Systems Manager Incident Manager in the AWS documentation.

If you have feedback about this post, submit comments in the comments section below. If you have questions about this post, start a new thread on the Systems Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sumit Patel

As a Senior Solutions Architect at AWS, Sumit works with large enterprise customers helping them create innovative solutions to address their cloud challenges. Sumit uses his more than 15 years of enterprise experience to help customers navigate their cloud transformation journey and shape the right dynamics between technology and business.

New Standard Contractual Clauses now part of the AWS GDPR Data Processing Addendum for customers

Post Syndicated from Stéphane Ducable original https://aws.amazon.com/blogs/security/new-standard-contractual-clauses-now-part-of-the-aws-gdpr-data-processing-addendum-for-customers/

Today, we’re happy to announce an update to our online AWS GDPR Data Processing Addendum (AWS GDPR DPA) and our online Service Terms to include the new Standard Contractual Clauses (SCCs) that the European Commission (EC) adopted in June 2021. The EC-approved SCCs give our customers the ability to comply with the General Data Protection Regulation (GDPR) when they transfer personal data subject to GDPR to countries outside the European Economic Area (EEA) that haven’t received an EC adequacy decision (third countries). The new SCCs are now better adapted to how our customers operate their applications or run their workloads in the cloud, because they cover different transfer scenarios, and also provide enhanced safeguards for data transfers.

Achieving compliance with GDPR is critical for hundreds of thousands of AWS customers and AWS. The new SCCs allow all AWS customers that are controllers or processors under GDPR to continue to transfer personal data from their AWS accounts in compliance with GDPR. As part of the online Service Terms, the new SCCs will apply automatically whenever an AWS customer uses AWS services to transfer customer data to third countries.

The updated AWS GDPR DPA incorporating the new SCCs supplements our announcement in February 2021 of strengthened commitments to protect customer data, such as challenging law enforcement requests that conflict with EU law. We have also published the blog post How AWS is helping EU customers navigate the new normal for data protection, and the whitepaper Navigating Compliance with EU Data Transfer Requirements to help AWS customers conduct their data transfer assessments and comply with GDPR, the Schrems II ruling, and the recommendations issued by the European Data Protection Board. AWS is constantly working to ensure that our customers can enjoy the benefits of AWS everywhere they operate, and we welcome the new SCCs because they enable our customers to continue using AWS services in compliance with GDPR. If you have questions or need more information, visit our EU Data Protection page and our GDPR Center.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Stéphane Ducable

Stéphane is Vice President of Public Policy – EMEA at AWS. He is focused on increasing awareness of the benefits of adopting cloud computing technology across the EMEA region.

Disaster recovery compliance in the cloud, part 2: A structured approach

Post Syndicated from Dan MacKay original https://aws.amazon.com/blogs/security/disaster-recovery-compliance-in-the-cloud-part-2-a-structured-approach/

Compliance in the cloud is fraught with myths and misconceptions. This is particularly true when it comes to something as broad as disaster recovery (DR) compliance where the requirements are rarely prescriptive and often based on legacy risk-mitigation techniques that don’t account for the exceptional resilience of modern cloud-based architectures. For regulated entities subject to principles-based supervision such as many financial institutions (FIs), the responsibility lies with the FI to determine what’s necessary to adequately recover from a disaster event. Without clear instructions, FIs are susceptible to making incorrect assumptions regarding their compliance requirements for DR.

In Part 1 of this two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. In Part 2, I outline five steps you can take to avoid these misconceptions when architecting DR-compliant workloads for deployment on Amazon Web Services (AWS).

1. Identify workloads planned for deployment

It’s common for FIs to have a portfolio of workloads they are considering deploying to the cloud and often want to know that they can be compliant across the board. But compliance isn’t a one-size-fits-all domain—it’s based on the characteristics of each workload. For example, does the workload contain personally identifiable information (PII)? Will it be used to store, process, or transmit credit card information? Compliance is dependent on the answers to questions such as these and must be assessed on a case-by-case basis. Therefore, the first step in architecting for compliance is to identify the specific workloads you plan to deploy to the cloud. This way, you can assess the requirements of these specific workloads and not be distracted by aspects of compliance that might not be relevant.

2. Define the workload’s resiliency requirements

Resiliency is the ability of a workload to recover from infrastructure or service disruptions. DR is an important part of your resiliency strategy and concerns how your workload responds to a disaster event. DR strategies on AWS range from simple, low cost options such as backup and restore, to more complex options such as multi-site active-active, as shown in Figure 1.
 

For more information, I encourage you to read Seth Eliot’s blog series on DR Architecture on AWS as well as the AWS whitepaper Disaster Recovery of Workloads on AWS: Recovery in the Cloud.

The DR strategy you choose for a particular workload is dependent on your organization's requirements for avoiding loss of data—known as the recovery point objective (RPO)—and reducing downtime where the workload isn't available—known as the recovery time objective (RTO). RPO and RTO are key factors for determining the minimum architectural specifications necessary to meet the workload's resiliency requirements. For example, can the workload's RPO and RTO be achieved using a multi-AZ architecture in a single AWS Region, or do the resiliency requirements necessitate deploying the workload across multiple AWS Regions? Even if your workload is not subject to explicit compliance requirements for resiliency, understanding these requirements is necessary for assessing other aspects of DR compliance, including data residency and geodiversity.

3. Confirm the workload’s data residency requirements

As I mentioned in Part 1, data residency requirements might restrict which AWS Region or Regions you can deploy your workload to. Therefore, you need to confirm whether the workload is subject to any data residency requirements within applicable laws and regulations, corporate policies, or contractual obligations.

In order to properly assess these requirements, you must review the explicit language of the requirements so as to understand the specific constraints they impose. You should also consult legal, privacy, and compliance subject-matter specialists to help you interpret these requirements based on the characteristics of the workload. For example, do the requirements specifically state that the data cannot leave the country, or can the requirement be met so long as the data can be accessed from that country? Does the requirement restrict you from storing a copy of the data in another country—for example, for backup and recovery purposes? What if the data is encrypted and can only be read using decryption keys kept within the home country? Consulting subject-matter specialists to help interpret these requirements can help you avoid making overly restrictive assumptions and imposing unnecessary constraints on the workload’s architecture.

4. Confirm the workload’s geodiversity requirements

A single Region, multiple-AZ architecture is often sufficient to meet a workload’s resiliency requirements. However, if the workload is subject to geodiversity requirements, the distance between the AZs in an AWS Region might not conform to the minimum distance between individual data centers specified by the requirements. Therefore, it’s critical to confirm whether any geodiversity requirements apply to the workload.

Like data residency, it’s important to assess the explicit language of geodiversity requirements. Are they written down in a regulation or corporate policy, or are they just a recommended practice? Can the requirements be met if the workload is deployed across three or more AZs even if the minimum distance between those AZs is less than the specified minimum distance between the primary and backup data centers? If it’s a corporate policy, does it allow for exceptions if an alternative method provides equal or greater resiliency than asynchronous replication between two geographically distant data centers? Or perhaps the corporate policy is outdated and should be revised to reflect modern risk mitigation techniques. Understanding these parameters can help you avoid unnecessary constraints as you assess architectural options for your workloads.

5. Assess architectural options to meet the workload’s requirements

Now that you understand the workload’s requirements for resiliency, data residency, and geodiversity, you can assess the architectural options that meet these requirements in the cloud.

As per AWS Well-Architected best practices, you should strive for the simplest architecture necessary to meet your requirements. This includes assessing whether the workload can be accommodated within a single AWS Region. If the workload is constrained by explicit geographic diversity requirements or has resiliency requirements that cannot be accommodated by a single AWS Region, then you might need to architect the workload for deployment across multiple AWS Regions. If the workload is also constrained by explicit data residency requirements, then it might not be possible to deploy to multiple AWS Regions. In cases such as these, you can work with our AWS Solution Architects to assess hybrid options that might meet your compliance requirements, such as using AWS Outposts, Amazon Elastic Container Service (Amazon ECS) Anywhere, or Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Another option may be to consider a DR solution in which your on-premises infrastructure is used as a backup for a workload running on AWS. In some cases, this might be a long-term solution. In others, it might be an interim solution until certain constraints can be removed—for example, a change to corporate policy or the introduction of additional AWS Regions in a particular country.

Conclusion

Let’s recap by summarizing some guiding principles for architecting compliant DR workloads as outlined in this two-part series:

  • Avoid assumptions; confirm the facts. If it’s not written down, it’s unlikely to be considered a mandatory compliance requirement.
  • Consult the experts. Legal, privacy, and compliance, as well as AWS Solution Architects, AWS security and compliance specialists, and other subject-matter specialists.
  • Avoid generalities; focus on the specifics. There is no one-size-fits-all approach.
  • Strive for simplicity, not zero risk. Don’t use multiple AWS Regions when one will suffice.
  • Don’t get distracted by exceptions. Focus on your current requirements, not workloads you’re not yet prepared to deploy to the cloud.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dan MacKay

Dan is the Financial Services Compliance Specialist for AWS Canada. As a member of the Worldwide Financial Services Security & Compliance team, Dan advises financial services customers on best practices and practical solutions for cloud-related governance, risk, and compliance. He specializes in helping AWS customers navigate financial services and privacy regulations applicable to the use of cloud technology in Canada.

Disaster recovery compliance in the cloud, part 1: Common misconceptions

Post Syndicated from Dan MacKay original https://aws.amazon.com/blogs/security/disaster-recovery-compliance-in-the-cloud-part-1-common-misconceptions/

Compliance in the cloud can seem challenging, especially for organizations in heavily regulated sectors such as financial services. Regulated financial institutions (FIs) must comply with laws and regulations (often in multiple jurisdictions), global security standards, their own corporate policies, and even contractual obligations with their customers and counterparties. These various compliance requirements may impose constraints on how their workloads can be architected for the cloud, and may require interpretation on what FIs must do in order to be compliant. It’s common for FIs to make assumptions regarding their compliance requirements, which can result in unnecessary costs and increased complexity, and might not align with their strategic objectives. A modern, rationalized approach to compliance can help FIs avoid imposing unnecessary constraints while meeting their mandatory requirements.

In my role as an Amazon Web Services (AWS) Compliance Specialist, I work with our financial services customers to identify, assess, and determine solutions to address their compliance requirements as they move to the cloud. One of the most common challenges customers ask me about is how to comply with disaster recovery (DR) requirements for workloads they plan to run in the cloud. In this blog post, I share some of the typical misconceptions FIs have about DR compliance in the cloud. In Part 2, I outline a structured approach to designing compliant architectures for your DR workloads. As my primary market is Canada, the examples in this blog post largely pertain to FIs operating in Canada, but the principles and best practices are relevant to regulated organizations in any country.

“Why isn’t there a checklist for compliance in the cloud?”

Compliance requirements are sometimes prescriptive: “if X, then you must do Y.” When requirements are prescriptive, it’s usually clear what you must do in order to be compliant. For example, the Payment Card Industry Data Security Standard (PCI DSS) requirement 8.2.4 obliges companies that process, store, or transmit credit card information to “change user passwords/passphrases at least once every 90 days.” But in the financial services sector, compliance requirements for managing operational risks can be subjective. When regulators take what is known as a principles-based approach to setting regulatory expectations, each FI is required to assess their specific risks and determine the mitigating controls necessary to conform with the organization’s tolerance for operational risk. Because the rules aren’t prescriptive, there is no “checklist for achieving compliance.” Instead, principles-based requirements are guidelines that FIs are expected to consider as they design and implement technology solutions. They are, by definition, subject to interpretation and can be prone to myths and misconceptions among FIs and their service providers. To illustrate this, let’s look at two aspects of DR that are frequently misunderstood within the Canadian financial services industry: data residency and geodiversity.

“My data has to stay in country X”

Data residency or data localization is a requirement for specific data-sets processed and stored in an IT system to remain within a specific jurisdiction (for example, a country). As discussed in our Policy Perspectives whitepaper, contrary to historical perspectives, data residency doesn’t provide better security. Most cyber-attacks are perpetrated remotely and attackers aren’t deterred by the physical location of their victims. In fact, data residency can run counter to an organization’s objectives for security and resilience. For example, data residency requirements can limit the options our customers have when choosing the AWS Region or Regions in which to run their production workloads. This is especially challenging for customers who want to use multiple Regions for backup and recovery purposes.

It’s common for FIs operating in Canada to assume that they’re required to keep their data—particularly customer data—in Canada. In reality, there’s very little from a statutory perspective that imposes such a constraint. None of the private sector privacy laws include data residency requirements, nor do any of the financial services regulatory guidelines. There are some place of records requirements in Canadian federal financial services legislation such as The Bank Act and The Insurance Companies Act, but these are relatively narrow in scope and apply primarily to corporate records. For most Canadian FIs, their requirements are more often a result of their own corporate policies or contractual obligations, not externally imposed by public policies or regulations.

“My data centers have to be X kilometers apart”

Geodiversity—short for geographic diversity—is the concept of maintaining a minimum distance between primary and backup data processing sites. Geodiversity is based on the principle that requiring a certain distance between data centers mitigates the risk of location-based disruptions such as natural disasters. The principle is still relevant in a cloud computing context, but is not the only consideration when it comes to planning for DR. The cloud allows FIs to define operational resilience requirements instead of limiting themselves to antiquated business continuity planning and DR concepts like physical data center implementation requirements. Legacy disaster recovery solutions and architectures, and lifting and shifting such DR strategies into the cloud, can diminish the potential benefits of using the cloud to improve operational resilience. Modernizing your information technology also means modernizing your organization’s approach to DR.

In the cloud, vast physical distance separation is an anti-pattern—it’s an arbitrary metric that does little to help organizations achieve availability and recovery objectives. At AWS, we design our global infrastructure so that there’s a meaningful distance between the Availability Zones (AZs) within an AWS Region to support high availability, but close enough to facilitate synchronous replication across those AZs (an AZ being a cluster of data centers). Figure 1 shows the relationship between Regions, AZs, and data centers.
 

Synchronous replication across multiple AZs enables you to minimize data loss (defined as the recovery point objective or RPO) and reduce the amount of time that workloads are unavailable (defined as the recovery time objective or RTO). However, the low latency required for synchronous replication becomes less achievable as the distance between data centers increases. Therefore, a geodiversity requirement that mandates a minimum distance between data centers that’s too far for synchronous replication might prohibit you from taking advantage of AWS’s multiple-AZ architecture. A multiple-AZ architecture can achieve RTOs and RPOs that aren’t possible with a simple geodiversity mitigation strategy. For more information, refer to the AWS whitepaper Disaster Recovery of Workloads on AWS: Recovery in the Cloud.

Again, it’s a common perception among Canadian FIs that the disaster recovery architecture for their production workloads must comply with specific geodiversity requirements. However, there are no statutory requirements applicable to FIs operating in Canada that mandate a minimum distance between data centers. Some FIs might have corporate policies or contractual obligations that impose geodiversity requirements, but for most FIs I’ve worked with, geodiversity is usually a recommended practice rather than a formal policy. Informal corporate guidelines can have some value, but they aren’t absolute rules and shouldn’t be treated the same as mandatory compliance requirements. Otherwise, you might be unintentionally restricting yourself from taking advantage of more effective risk management techniques.

“But if it is a compliance requirement, doesn’t that mean I have no choice?”

Both of the previous examples illustrate the importance of not only confirming your compliance requirements, but also recognizing the source of those requirements. It might be infeasible to obtain an exception to an externally-imposed obligation such as a regulatory requirement, but exceptions or even revisions to corporate policies aren’t out of the question if you can demonstrate that modern approaches provide equal or greater protection against a particular risk—for example, the high availability and rapid recoverability supported by a multiple-AZ architecture. Consider whether your compliance requirements provide for some level of flexibility in their application.

Also, because many of these requirements are principles-based, they might be subject to interpretation. You have to consider the specific language of the requirement in the context of the workload. For example, a data residency requirement might not explicitly prohibit you from storing a copy of the content in another country for backup and recovery purposes. For this reason, I recommend that you consult applicable specialists from your legal, privacy, and compliance teams to aid in the interpretation of compliance requirements. Once you understand the legal boundaries of your compliance requirements, AWS Solutions Architects and other financial services industry specialists such as myself can help you assess viable options to meet your needs.

Conclusion

In this first part of a two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. The key is to avoid making assumptions that might impose greater constraints on your architecture than are necessary. In Part 2, I show you a structured approach for architecting compliant DR workloads that can help you to avoid these preventable missteps.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dan MacKay

Dan is the Financial Services Compliance Specialist for AWS Canada. As a member of the Worldwide Financial Services Security & Compliance team, Dan advises financial services customers on best practices and practical solutions for cloud-related governance, risk, and compliance. He specializes in helping AWS customers navigate financial services and privacy regulations applicable to the use of cloud technology in Canada.

Protect your remote workforce by using a managed DNS firewall and network firewall

Post Syndicated from Patrick Duffy original https://aws.amazon.com/blogs/security/protect-your-remote-workforce-by-using-a-managed-dns-firewall-and-network-firewall/

More of our customers are adopting flexible work-from-home and remote work strategies that use virtual desktop solutions, such as Amazon WorkSpaces and Amazon AppStream 2.0, to deliver their user applications. Securing these workloads benefits from a layered approach, and this post focuses on protecting your users at the network level. Customers can now apply these security measures by using Route 53 Resolver DNS Firewall and AWS Network Firewall, two managed services that provide layered protection for the customer’s virtual private cloud (VPC). This blog post provides recommendations for how you can build network protection for your remote workforce by using DNS Firewall and Network Firewall.

Overview

DNS Firewall helps you block DNS queries that are made for known malicious domains, while allowing DNS queries to trusted domains. DNS Firewall has a simple deployment model that makes it straightforward for you to start protecting your VPCs by using managed domain lists, as well as custom domain lists. With DNS Firewall, you can filter and regulate outbound DNS requests. The service inspects DNS requests that are handled by Route 53 Resolver and applies actions that you define to allow or block requests.

DNS Firewall consists of domain lists and rule groups. Domain lists include custom domain lists that you create and AWS managed domain lists. Rule groups are associated with VPCs and control the response for domain lists that you choose. You can configure rule groups at scale by using AWS Firewall Manager. Rule groups process in priority order and stop processing after a rule is matched.

Network Firewall helps customers protect their VPCs by protecting the workload at the network layer. Network Firewall is an automatically scaling, highly available service that simplifies deployment and management for network administrators. With Network Firewall, you can perform inspection for inbound traffic, outbound traffic, traffic between VPCs, and traffic between VPCs and AWS Direct Connect or AWS VPN traffic. You can deploy stateless rules to allow or deny traffic based on the protocol, source and destination ports, and source and destination IP addresses. Additionally, you can deploy stateful rules that allow or block traffic based on domain lists, standard rule groups, or Suricata compatible intrusion prevention system (IPS) rules.

To configure Network Firewall, you need to create Network Firewall rule groups, a Network Firewall policy, and finally, a network firewall. Rule groups consist of stateless and stateful rule groups. For both types of rule groups, you need to estimate the capacity when you create the rule group. See the Network Firewall Developer Guide to learn how to estimate the capacity that is needed for the stateless and stateful rule engines.

This post shows you how to configure DNS Firewall and Network Firewall to protect your workload. You will learn how to create rules that prevent DNS queries to unapproved DNS servers, and that block resources by protocol, domain, and IP address. For the purposes of this post, we’ll show you how to protect a workload consisting of two Microsoft Active Directory domain controllers, an application server running QuickBooks, and Amazon WorkSpaces to deliver the QuickBooks application to end users, as shown in Figure 1.
 

Figure 1: An example architecture that includes domain controllers and QuickBooks hosted on EC2 and Amazon WorkSpaces for user virtual desktops

Figure 1: An example architecture that includes domain controllers and QuickBooks hosted on EC2 and Amazon WorkSpaces for user virtual desktops

Configure DNS Firewall

DNS Firewall domain lists currently include two managed lists to block malware and botnet command-and-control networks, and you can also bring your own list. Your list can include any domain names that you have found to be malicious and any domains that you don’t want your workloads connecting to.

To configure DNS Firewall domain lists (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under DNS Firewall, choose Domain lists.
  3. Choose Add domain list to configure a customer-owned domain list.
  4. In the domain list builder dialog box, do the following.
    1. Under Domain list name, enter a name.
    2. In the second dialog box, enter the list of domains you want to allow or block.
    3. Choose Add domain list.

When you create a domain list, you can enter a list of domains you want to block or allow. You also have the option to upload your domains by using a bulk upload. You can use wildcards when you add domains for DNS Firewall. Figure 2 shows an example of a custom domain list that matches the root domain and any subdomain of box.com, dropbox.com, and sharefile.com, to prevent users from using these file sharing platforms.
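
The domain list in Figure 2 can also be created with the Route 53 Resolver API. This is a sketch: the list name is arbitrary, and CreatorRequestId is simply an idempotency token you choose.

    import boto3
    import uuid

    resolver = boto3.client("route53resolver")

    # Create a customer-owned domain list.
    domain_list_id = resolver.create_firewall_domain_list(
        CreatorRequestId=str(uuid.uuid4()),
        Name="blocked-file-sharing-domains",
    )["FirewallDomainList"]["Id"]

    # Add the root domains and a wildcard for their subdomains.
    resolver.update_firewall_domains(
        FirewallDomainListId=domain_list_id,
        Operation="ADD",
        Domains=[
            "box.com", "*.box.com",
            "dropbox.com", "*.dropbox.com",
            "sharefile.com", "*.sharefile.com",
        ],
    )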
 

Figure 2: Domains added to a customer-owned domain list

Figure 2: Domains added to a customer-owned domain list

To configure DNS Firewall rule groups (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under DNS Firewall, choose Rule group.
  3. Choose Create rule group to apply actions to domain lists.
  4. Enter a rule group name and optional description.
  5. Choose Add rule to add a managed or customer-owned domain list, and do the following.
    1. Enter a rule name and optional description.
    2. Choose Add my own domain list or Add AWS managed domain list.
    3. Select the desired domain list.
    4. Choose an action, and then choose Next.
  6. (Optional) Change the rule priority.
  7. (Optional) Add tags.
  8. Choose Create rule group.

When you create your rule group, you attach rules and set an action and priority for the rule. You can set rule actions to Allow, Block, or Alert. When you set the action to Block, you can return the following responses:

  • NODATA – Returns no response.
  • NXDOMAIN – Returns an unknown domain response.
  • OVERRIDE – Returns a custom CNAME response.

Figure 3 shows rules attached to the DNS firewall.
 

Figure 3: DNS Firewall rules

Figure 3: DNS Firewall rules

To associate your rule group to a VPC (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under DNS Firewall, choose Rule group.
  3. Select the desired rule group.
  4. Choose Associated VPCs, and then choose Associate VPC.
  5. Select one or more VPCs, and then choose Associate.

The rule group will filter the DNS requests that your VPC sends to the Route 53 Resolver. Set your DNS servers' forwarders to use the Route 53 Resolver.
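
For completeness, here's a sketch of the same rule group workflow using the API: create a rule group, attach a block rule that returns NXDOMAIN for the domain list created earlier, and associate the rule group with a VPC. The IDs are placeholders, and the association priority just needs to be unique among the rule groups on that VPC (values greater than 100 are typical).

    import boto3
    import uuid

    resolver = boto3.client("route53resolver")

    rule_group_id = resolver.create_firewall_rule_group(
        CreatorRequestId=str(uuid.uuid4()),
        Name="workspace-dns-firewall",
    )["FirewallRuleGroup"]["Id"]

    # Block queries for the custom domain list and return NXDOMAIN.
    resolver.create_firewall_rule(
        CreatorRequestId=str(uuid.uuid4()),
        FirewallRuleGroupId=rule_group_id,
        FirewallDomainListId="rslvr-fdl-EXAMPLE11111",  # placeholder list ID
        Name="block-file-sharing",
        Priority=10,
        Action="BLOCK",
        BlockResponse="NXDOMAIN",
    )

    # Associate the rule group with the workload VPC (placeholder VPC ID).
    resolver.associate_firewall_rule_group(
        CreatorRequestId=str(uuid.uuid4()),
        FirewallRuleGroupId=rule_group_id,
        VpcId="vpc-0123456789abcdef0",
        Priority=101,
        Name="workspace-vpc-association",
    )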

To configure logging for your firewall’s activity, navigate to the Route 53 console and select your VPC under the Resolver section. You can configure multiple logging options, if required. You can choose to log to Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), or Amazon Kinesis Data Firehose. Select the VPC that you want to log queries for and add any tags that you require.

Configure Network Firewall

In this section, you’ll learn how to create Network Firewall rule groups, a firewall policy, and a network firewall.

Configure rule groups

Stateless rule groups are straightforward evaluations of a source and destination IP address, protocol, and port. It’s important to note that stateless rules don’t perform any deep inspection of network traffic.

Stateless rules have three options:

  • Pass – Pass the packet without further inspection.
  • Drop – Drop the packet.
  • Forward – Forward the packet to stateful rule groups.

Stateless rules inspect each packet in isolation in the order of priority and stop processing when a rule has been matched. This example doesn’t use a stateless rule, and simply uses the default firewall action to forward all traffic to stateful rule groups.

Stateful rule groups support deep packet inspection, traffic logging, and more complex rules. Stateful rule groups evaluate traffic based on standard rules, domain rules or Suricata rules. Depending on the type of rule that you use, you can pass, drop, or create alerts on the traffic that is inspected.

To create a rule group (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under AWS Network Firewall, choose Network Firewall rule groups.
  3. Choose Create Network Firewall rule group.
  4. Choose Stateful rule group or Stateless rule group.
  5. Enter the desired settings.
  6. Choose Create stateful rule group.

The example in Figure 4 uses standard rules to block outbound and inbound Server Message Block (SMB), Secure Shell (SSH), Network Time Protocol (NTP), DNS, and Kerberos traffic, which are common protocols used in our example workload. Network Firewall doesn’t inspect traffic between subnets within the same VPC or over VPC peering, so these rules won’t block local traffic. You can add rules with the Pass action to allow traffic to and from trusted networks.
 

Figure 4: Standard rules created to block unauthorized SMB, SSH, NTP, DNS, and Kerberos traffic

Figure 4: Standard rules created to block unauthorized SMB, SSH, NTP, DNS, and Kerberos traffic

Blocking outbound DNS requests is a common strategy to verify that DNS traffic resolves only from local resolvers, such as your DNS server or the Route 53 Resolver. You can also use these rules to prevent inbound traffic to your VPC-hosted resources, as an additional layer of security beyond security groups. If a security group erroneously allows SMB access to a file server from external sources, Network Firewall will drop this traffic based on these rules.

Even though the DNS Firewall policy described in this blog post will block DNS queries for unauthorized sharing platforms, some users might attempt to bypass this block by modifying the HOSTS file on their Amazon WorkSpace. To counter this risk, you can add a domain rule to your firewall policy to block the box.com, dropbox.com, and sharefile.com domains, as shown in Figure 5.
 

Figure 5: A domain list rule to block box.com, dropbox.com, and sharefile.com

Figure 5: A domain list rule to block box.com, dropbox.com, and sharefile.com
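
The domain rule in Figure 5 can be expressed with the Network Firewall API as a stateful rule group that uses a domain deny list. This is a sketch: the capacity value is an assumption that you should size for your own rules, and HOME_NET defaults to the VPC CIDR unless you override it.

    import boto3

    nfw = boto3.client("network-firewall")

    response = nfw.create_rule_group(
        RuleGroupName="block-file-sharing-domains",
        Type="STATEFUL",
        Capacity=100,  # assumed sizing; estimate capacity for your own rules
        Description="Deny list for unapproved file sharing domains",
        RuleGroup={
            "RulesSource": {
                "RulesSourceList": {
                    # A leading dot matches the domain and its subdomains.
                    "Targets": [".box.com", ".dropbox.com", ".sharefile.com"],
                    "TargetTypes": ["HTTP_HOST", "TLS_SNI"],
                    "GeneratedRulesType": "DENYLIST",
                }
            }
        },
    )
    domain_rule_group_arn = response["RuleGroupResponse"]["RuleGroupArn"]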

Configure firewall policy

You can use firewall policies to attach stateless and stateful rule groups to a single policy that is used by one or more network firewalls. Attach your rule groups to this policy and set your preferred default stateless actions. The default stateless actions will apply to any packets that don’t match a stateless rule group within the policy. You can choose separate actions for full packets and fragmented packets, depending on your needs, as shown in Figure 6.
 

Figure 6: Stateful rule groups attached to a firewall policy

Figure 6: Stateful rule groups attached to a firewall policy

You can choose to forward the traffic to be processed by any stateful rule groups that you have attached to your firewall policy. To bypass any stateful rule groups, you can select the Pass option.

To create a firewall policy (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under AWS Network Firewall, choose Firewall policies.
  3. Choose Create firewall policy.
  4. Enter a name and description for the policy.
  5. Choose Add rule groups.
    1. Select the stateless default actions you want to use.
    2. For any stateless or stateful rule groups, choose Add rule groups to add any rule groups that you want to use.
  6. (Optional) Add tags.
  7. Choose Create firewall policy.
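
A firewall policy like the one described above can be created with a single API call. In this sketch the stateless default actions forward all traffic to the stateful engine, and the rule group ARN is a placeholder for the stateful rule groups you created earlier.

    import boto3

    nfw = boto3.client("network-firewall")

    policy_arn = nfw.create_firewall_policy(
        FirewallPolicyName="remote-workforce-policy",
        Description="Forward all traffic to stateful rule groups",
        FirewallPolicy={
            # Forward both full and fragmented packets to the stateful engine.
            "StatelessDefaultActions": ["aws:forward_to_sfe"],
            "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
            "StatefulRuleGroupReferences": [
                # Placeholder ARN for the stateful rule group created earlier.
                {"ResourceArn": "arn:aws:network-firewall:us-east-1:111122223333:stateful-rulegroup/block-file-sharing-domains"}
            ],
        },
    )["FirewallPolicyResponse"]["FirewallPolicyArn"]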

Configure a network firewall

Configuring the network firewall requires you to attach the firewall to a VPC and select at least one subnet.

To create a network firewall (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under AWS Network Firewall, choose Firewalls.
  3. Choose Create firewall.
  4. Under Firewall details, do the following:
    1. Enter a name for the firewall.
    2. Select the VPC.
    3. Select one or more Availability Zones and subnets, as needed.
  5. Under Associated firewall policy, do the following:
    1. Choose Associate an existing firewall policy.
    2. Select the firewall policy.
  6. (Optional) Add tags.
  7. Choose Create firewall.

Two subnets in separate Availability Zones are used for the network firewall example shown in Figure 7, to provide high availability.
 

Figure 7: A network firewall configuration that includes multiple subnets

Figure 7: A network firewall configuration that includes multiple subnets

After the firewall is in the ready state, you’ll be able to see the endpoint IDs of the firewall endpoints, as shown in Figure 8. The endpoint IDs are needed when you update VPC route tables.
 

Figure 8: Firewall endpoint IDs

Figure 8: Firewall endpoint IDs
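
The firewall itself can be created and its endpoint IDs retrieved with the API, which is convenient when you script the route table updates described in the next section. The VPC ID, subnet IDs, and policy ARN are placeholders, and the firewall takes several minutes to reach the READY state before endpoint IDs are available.

    import boto3

    nfw = boto3.client("network-firewall")

    nfw.create_firewall(
        FirewallName="remote-workforce-firewall",
        FirewallPolicyArn="arn:aws:network-firewall:us-east-1:111122223333:firewall-policy/remote-workforce-policy",  # placeholder
        VpcId="vpc-0123456789abcdef0",                 # placeholder
        SubnetMappings=[
            {"SubnetId": "subnet-0aaaaaaaaaaaaaaa1"},  # firewall subnet in AZ a
            {"SubnetId": "subnet-0bbbbbbbbbbbbbbb2"},  # firewall subnet in AZ b
        ],
    )

    # After the firewall reaches READY, read the endpoint ID for each AZ.
    status = nfw.describe_firewall(FirewallName="remote-workforce-firewall")["FirewallStatus"]
    for az, sync_state in status["SyncStates"].items():
        print(az, sync_state["Attachment"]["EndpointId"])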

You can configure alert logs, flow logs, or both to be sent to Amazon S3, CloudWatch log groups, or Kinesis Data Firehose. Administrators configure alert logging to build proactive alerting and flow logging to use in troubleshooting and analysis.

Finalize the setup

After the firewall is created and ready, the last step to complete setup is to update the VPC route tables so that traffic flows through the new network firewall endpoints. Update each public subnet's route table to direct outbound traffic to the firewall endpoint in the same Availability Zone, and update the internet gateway route table to direct return traffic to the firewall endpoint in the Availability Zone that matches each public subnet. These routes are shown in Figure 9.
 

Figure 9: Network diagram of the firewall solution

Figure 9: Network diagram of the firewall solution
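
If you script the routing changes, the sketch below adds a default route from a public subnet's route table to the firewall endpoint in the same Availability Zone, and an internet gateway (edge) route table that returns traffic destined for that subnet to the same endpoint. All IDs and CIDRs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    firewall_endpoint = "vpce-0123456789abcdef0"  # placeholder firewall endpoint ID
    public_subnet_rtb = "rtb-0aaaaaaaaaaaaaaa1"   # placeholder public subnet route table
    igw_id = "igw-0123456789abcdef0"              # placeholder internet gateway
    vpc_id = "vpc-0123456789abcdef0"              # placeholder VPC
    public_subnet_cidr = "10.0.1.0/24"            # placeholder public subnet CIDR

    # Send outbound traffic from the public subnet through the firewall endpoint.
    ec2.create_route(
        RouteTableId=public_subnet_rtb,
        DestinationCidrBlock="0.0.0.0/0",
        VpcEndpointId=firewall_endpoint,
    )

    # Create an ingress (edge) route table on the internet gateway so that
    # return traffic for the public subnet also passes through the firewall.
    ingress_rtb = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.associate_route_table(RouteTableId=ingress_rtb, GatewayId=igw_id)
    ec2.create_route(
        RouteTableId=ingress_rtb,
        DestinationCidrBlock=public_subnet_cidr,
        VpcEndpointId=firewall_endpoint,
    )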

In this example architecture, Amazon WorkSpaces users are able to connect directly between private subnet 1 and private subnet 2 to access local resources. Security groups and Windows authentication control access from WorkSpaces to EC2-hosted workloads such as Active Directory, file servers, and SQL applications. For example, Microsoft Active Directory domain controllers are added to a security group that allows inbound ports 53, 389, and 445, as shown in Figure 10.
 

Figure 10: Domain controller security group inbound rules

Figure 10: Domain controller security group inbound rules

Traffic from WorkSpaces will first resolve DNS requests by using the Active Directory domain controller. The domain controller uses the local Route 53 Resolver as a DNS forwarder, which DNS Firewall protects. Network traffic then flows from the private subnet to the NAT gateway, through the network firewall to the internet gateway. Response traffic flows back from the internet gateway to the network firewall, then to the NAT gateway, and finally to the user WorkSpace. This workflow is shown in Figure 11.
 

Figure 11: Traffic flow for allowed traffic

Figure 11: Traffic flow for allowed traffic

If a user attempts to connect to blocked internet resources, such as box.com, a botnet, or a malware domain, this will result in a NXDOMAIN response from DNS Firewall, and the connection will not proceed any further. This blocked traffic flow is shown in Figure 12.
  

Figure 12: Traffic flow when blocked by DNS Firewall

Figure 12: Traffic flow when blocked by DNS Firewall

If a user attempts to initiate a DNS request to a public DNS server or attempts to access a public file server, this will result in a dropped connection by Network Firewall. The traffic will flow as expected from the user WorkSpace to the NAT gateway and from the NAT gateway to the network firewall, which inspects the traffic. The network firewall then drops the traffic when it matches a rule with the drop or block action, as shown in Figure 13. This configuration helps to ensure that your private resources only use approved DNS servers and internet resources. Network Firewall blocks unapproved domains by using the domain list rule and blocks restricted protocols by using the standard rules you configured.
 

Figure 13: Traffic flow when blocked by Network Firewall

Figure 13: Traffic flow when blocked by Network Firewall

Take extra care to associate a route table with your internet gateway to route private subnet traffic to your firewall endpoints; otherwise, response traffic won’t make it back to your private subnets. Traffic will route from the private subnet up through the NAT gateway in its Availability Zone. The NAT gateway will pass the traffic to the network firewall endpoint in the same Availability Zone, which will process the rules and send allowed traffic to the internet gateway for the VPC. By using this method, you can block outbound network traffic with criteria that are more advanced than what is allowed by network ACLs.

Conclusion

Amazon Route 53 Resolver DNS Firewall and AWS Network Firewall help you protect your VPC workloads by inspecting network traffic and applying deep packet inspection rules to block unwanted traffic. This post focused on implementing Network Firewall in a virtual desktop workload that spans multiple Availability Zones. You’ve seen how to deploy a network firewall and update your VPC route tables. This solution can help increase the security of your workloads in AWS. If you have multiple VPCs to protect, consider enforcing your policies at scale by using AWS Firewall Manager, as outlined in this blog post.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Network Firewall forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Patrick Duffy

Patrick is a Solutions Architect in the Small Medium Business (SMB) segment at AWS. He is passionate about raising awareness and increasing security of AWS workloads. Outside work, he loves to travel and try new cuisines and enjoys a match in Magic Arena or Overwatch.

How US federal agencies can use AWS to encrypt data at rest and in transit

Post Syndicated from Robert George original https://aws.amazon.com/blogs/security/how-us-federal-agencies-can-use-aws-to-encrypt-data-at-rest-and-in-transit/

This post is part of a series about how Amazon Web Services (AWS) can help your US federal agency meet the requirements of the President’s Executive Order on Improving the Nation’s Cybersecurity. You will learn how you can use AWS information security practices to meet the requirement to encrypt your data at rest and in transit, to the maximum extent possible.

Encrypt your data at rest in AWS

Data at rest represents any data that you persist in non-volatile storage for any duration in your workload. This includes block storage, object storage, databases, archives, IoT devices, and any other storage medium on which data is persisted. Protecting your data at rest reduces the risk of unauthorized access when encryption and appropriate access controls are implemented.

AWS KMS provides a streamlined way to manage keys used for at-rest encryption. It integrates with AWS services to simplify using your keys to encrypt data across your AWS workloads. It uses hardware security modules that have been validated under FIPS 140-2 to protect your keys. You choose the level of access control that you need, including the ability to share encrypted resources between accounts and services. AWS KMS logs key usage to AWS CloudTrail to provide an independent view of who accessed encrypted data, including AWS services that are using keys on your behalf. As of this writing, AWS KMS integrates with 81 different AWS services. Here are details on recommended encryption for workloads using some key services:

You can also use AWS KMS to encrypt other data types, including application data, with client-side encryption. A client-side application (for example, a browser-based JavaScript client) encrypts data before uploading it to Amazon S3 or other storage resources, so the uploaded data is protected both in transit and at rest. Client-side encryption options include the AWS SDKs with AWS KMS, the AWS Encryption SDK, and third-party encryption tools.
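
A minimal sketch of this pattern follows, assuming a small payload (the KMS Encrypt API accepts up to 4 KB; for larger data, the AWS Encryption SDK performs envelope encryption instead) and placeholder key alias, bucket, and object names:

    import boto3

    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    plaintext = b"small configuration secret"  # KMS Encrypt accepts up to 4 KB

    # Encrypt on the client before the data ever leaves the application.
    ciphertext = kms.encrypt(
        KeyId="alias/workload-key",  # placeholder key alias
        Plaintext=plaintext,
    )["CiphertextBlob"]

    # Store only the ciphertext; Amazon S3 never receives the plaintext.
    s3.put_object(
        Bucket="example-agency-bucket",  # placeholder bucket name
        Key="config/secret.bin",
        Body=ciphertext,
    )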

You can also use AWS Secrets Manager to encrypt application passwords, connection strings, and other secrets. Database credentials, resource names, and other sensitive data used in AWS Lambda functions can be encrypted and accessed at run time. This increases the security of these secrets and allows for easier credential rotation.
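
For example, a Lambda function might retrieve a database credential at run time rather than hard-coding it or storing it in an environment variable. In this sketch, the secret name is a placeholder and the secret value is assumed to be a JSON string:

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    def lambda_handler(event, context):
        # Retrieve the credential at run time; rotation happens in Secrets Manager
        # without redeploying the function.
        secret = secrets.get_secret_value(
            SecretId="prod/app/db-credentials"  # placeholder secret name
        )
        credentials = json.loads(secret["SecretString"])
        # ... connect to the database using credentials["username"] and
        # credentials["password"] ...
        return {"statusCode": 200}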

AWS KMS HSMs are FIPS 140-2 validated and accessible through FIPS-validated endpoints. Agencies with additional requirements that call for a FIPS 140-2 Level 3 validated hardware security module (HSM) (for example, for securing third-party secrets managers) can use AWS CloudHSM.

For more information about AWS KMS and key management best practices, see the AWS KMS documentation and the AWS Key Management Service Best Practices whitepaper.

Encrypt your data in transit in AWS

In addition to encrypting data at rest, agencies must also encrypt data in transit. AWS provides a variety of solutions to help agencies encrypt data in transit and enforce this requirement.

First, all network traffic between AWS data centers is transparently encrypted at the physical layer. This data-link layer encryption includes traffic within an AWS Region as well as between Regions. Additionally, all traffic within a virtual private cloud (VPC) and between peered VPCs is transparently encrypted at the network layer when you are using supported Amazon EC2 instance types. Customers can choose to enable Transport Layer Security (TLS) for the applications they build on AWS using a variety of services. All AWS service endpoints support TLS to create a secure HTTPS connection to make API requests.
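
One common way to enforce TLS at the application layer is to deny requests that don't use it. As a sketch (with a placeholder bucket name), the following boto3 example applies an S3 bucket policy that rejects any request made over an unencrypted connection:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Deny any request to the bucket that does not arrive over TLS.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::example-agency-bucket",
                    "arn:aws:s3:::example-agency-bucket/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

    s3.put_bucket_policy(
        Bucket="example-agency-bucket",  # placeholder bucket name
        Policy=json.dumps(policy),
    )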

AWS offers several options for agency-managed infrastructure within the AWS Cloud that needs to terminate TLS. These options include load balancing services (for example, Elastic Load Balancing, including Network Load Balancer and Application Load Balancer), Amazon CloudFront (a content delivery network), and Amazon API Gateway. Each of these endpoint services enables customers to upload their digital certificates for the TLS connection. Digital certificates then need to be managed appropriately to account for expiration and rotation requirements. AWS Certificate Manager (ACM) simplifies generating, distributing, and rotating digital certificates. ACM offers publicly trusted certificates that can be used in AWS services that require certificates to terminate TLS connections to the internet. ACM also provides the ability to create a private certificate authority (CA) hierarchy that can integrate with existing on-premises CAs to automatically generate, distribute, and rotate certificates to secure internal communication among customer-managed infrastructure.
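
As a short sketch (the domain names are placeholders), requesting a DNS-validated public certificate with boto3 looks like the following; ACM then renews the certificate automatically as long as the DNS validation record remains in place:

    import boto3

    acm = boto3.client("acm")

    # Request a public certificate with DNS validation.
    response = acm.request_certificate(
        DomainName="app.example.gov",  # placeholder domain
        ValidationMethod="DNS",
        SubjectAlternativeNames=["www.app.example.gov"],
    )
    certificate_arn = response["CertificateArn"]
    # Attach certificate_arn to a load balancer listener, CloudFront
    # distribution, or API Gateway custom domain to terminate TLS.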

Finally, you can encrypt communications between your EC2 instances and other AWS resources that are connected to your VPC, such as Amazon Relational Database Service (Amazon RDS) databases, Amazon Elastic File System (Amazon EFS) file systems, Amazon S3, Amazon DynamoDB, Amazon Redshift, Amazon EMR, Amazon OpenSearch Service, Amazon ElastiCache for Redis, Amazon FSx for Windows File Server, AWS Direct Connect (DX) MACsec, and more.
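
For example, a client connecting to a PostgreSQL database on Amazon RDS can require TLS and verify the server certificate. This is a hedged sketch: the endpoint, database name, credentials, and the local path to the downloaded RDS certificate bundle are all placeholders, and in practice you would retrieve the password from Secrets Manager rather than embedding it:

    import psycopg2

    # Require TLS and verify the server certificate against the RDS CA bundle,
    # downloaded separately from AWS and stored locally.
    connection = psycopg2.connect(
        host="mydb.abc123.us-gov-west-1.rds.amazonaws.com",  # placeholder endpoint
        dbname="appdb",
        user="app_user",
        password="example-password",                 # retrieve from Secrets Manager in practice
        sslmode="verify-full",
        sslrootcert="/opt/certs/rds-ca-bundle.pem",  # placeholder path to CA bundle
    )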

Conclusion

This post has reviewed services that are used to encrypt data at rest and in transit, in support of the Executive Order on Improving the Nation’s Cybersecurity. I discussed using AWS KMS to manage keys for at-rest encryption, and using ACM to manage the certificates that protect data in transit.

Next steps

To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

Subscribe to the AWS Public Sector Blog newsletter to get the latest in AWS tools, solutions, and innovations from the public sector delivered to your inbox, or contact us.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Robert George

Robert is a Solutions Architect on the Worldwide Public Sector (WWPS) team who works with customers to design secure, high-performing, and cost-effective systems in the AWS Cloud. He has previously worked in cybersecurity roles focused on designing security architectures, securing enterprise systems, and leading incident response teams for highly regulated environments.

17 additional AWS services authorized for DoD workloads in the AWS GovCloud Regions

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/17-additional-aws-services-authorized-for-dod-workloads-in-the-aws-govcloud-regions/

I’m pleased to announce that the Defense Information Systems Agency (DISA) has authorized 17 additional Amazon Web Services (AWS) services and features in the AWS GovCloud (US) Regions, bringing the total to 105 services and major features that are authorized for use by the U.S. Department of Defense (DoD). AWS now offers additional services to DoD mission owners in these categories: business applications; computing; containers; cost management; developer tools; management and governance; media services; security, identity, and compliance; and storage.

Why does authorization matter?

DISA authorization of 17 new cloud services enables mission owners to build secure, innovative solutions, including systems that process unclassified national security data (for example, Impact Level 5). DISA’s authorization demonstrates that AWS effectively implemented more than 421 security controls by using applicable criteria from NIST SP 800-53 Revision 4, the US General Services Administration’s FedRAMP High baseline, and the DoD Cloud Computing Security Requirements Guide.

Recently authorized AWS services at DoD Impact Levels (IL) 4 and 5 include the following:

Business Applications

Compute

Containers

Cost Management

  • AWS Budgets – Set custom budgets to track your cost and usage, from the simplest to the most complex use cases
  • AWS Cost Explorer – An interface that lets you visualize, understand, and manage your AWS costs and usage over time
  • AWS Cost & Usage Report – Itemize usage at the account or organization level by product code, usage type, and operation

Developer Tools

  • AWS CodePipeline – Automate continuous delivery pipelines for fast and reliable updates
  • AWS X-Ray – Analyze and debug production and distributed applications, such as those built using a microservices architecture

Management & Governance

Media Services

  • Amazon Textract – Extract printed text, handwriting, and data from virtually any document

Security, Identity & Compliance

  • Amazon Cognito – Secure user sign-up, sign-in, and access control
  • AWS Security Hub – Centrally view and manage security alerts and automate security checks

Storage

  • AWS Backup – Centrally manage and automate backups across AWS services

Figure 1 shows the IL 4 and IL 5 AWS services that are now authorized for DoD workloads, broken out into functional categories.
 

Figure 1: The AWS services newly authorized by DISA

To learn more about AWS solutions for the DoD, see our AWS solution offerings. Follow the AWS Security Blog for updates on our Services in Scope by Compliance Program. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler is the DoD Compliance Program Manager for AWS Security Assurance. He has over 20 years of experience providing information security solutions to the federal civilian, DoD, and intelligence agencies.