Tag Archives: Compliance

Continuous Compliance Workflow for Infrastructure as Code: Part 2

Post Syndicated from Damodar Shenvi Wagle original https://aws.amazon.com/blogs/devops/continuous-compliance-workflow-for-infrastructure-as-code-part-2/

In the first post of this series, we introduced a continuous compliance workflow in which an enterprise security and compliance team can release guardrails in a continuous integration, continuous deployment (CI/CD) fashion in your organization.

In this post, we focus on the technical implementation of the continuous compliance workflow. We demonstrate how to use AWS Developer Tools to create a CI/CD pipeline that releases guardrails for Terraform application workloads.

We use the Terraform-Compliance framework to define the guardrails. Terraform-Compliance is a lightweight, security and compliance-focused test framework for Terraform to enable the negative testing capability for your infrastructure as code (IaC).

With this compliance framework, we can ensure that the implemented Terraform code follows security standards and your own custom standards. Currently, HashiCorp provides Sentinel (a policy-as-code framework) for enterprise products, and AWS provides CloudFormation Guard, an open-source policy-as-code evaluation tool for AWS CloudFormation templates. Terraform-Compliance provides similar functionality for Terraform, and is open source.
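To make the negative testing idea concrete, the following is a minimal sketch of a guardrail feature file and a local run of the framework against a Terraform plan. The file and directory names are illustrative, and the step wording follows the Terraform-Compliance documentation; the actual guardrails used in this post live in the security and compliance repo.

# Illustrative guardrail: every resource that supports tags must be tagged.
mkdir -p features
cat > features/tagging.feature <<'EOF'
Feature: Resources must carry tags

  Scenario: Ensure all resources have tags
    Given I have resource that supports tags defined
    Then it must contain tags
    And its value must not be null
EOF

# Run the guardrail against a Terraform plan (negative testing).
terraform plan -out=plan.out
terraform show -json plan.out > plan.out.json
terraform-compliance -f features/ -p plan.out.json

Because terraform-compliance exits with a non-zero status when a scenario fails, a CI/CD stage that runs this command can break the pipeline on a noncompliant workload.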

This post is from the perspective of a security and compliance engineer, and assumes that the engineer is familiar with the practices of IaC, CI/CD, behavior-driven development (BDD), and negative testing.

Solution overview

You start by building the following resources in the workload (application development team) account:

  • An AWS CodeCommit repository for the Terraform workload
  • A CI/CD pipeline built using AWS CodePipeline to deploy the workload
  • A cross-account AWS Identity and Access Management (IAM) role that gives the security and compliance account the permissions to pull the Terraform workload from the workload account repository for testing their guardrails in observation mode

Next, we build the resources in the security and compliance account:

  • A CodeCommit repository to hold the security and compliance standards (guardrails)
  • A CI/CD pipeline built using CodePipeline to release new guardrails
  • A cross-account role that gives the workload account the permissions to pull the activated guardrails from the main branch of the security and compliance account repository.

The following diagram shows our solution architecture.

solution architecture diagram

The architecture has two workflows: security and compliance (Steps 1–4) and application delivery (Steps 5–7).

  1. When a new security and compliance guardrail is introduced into the develop branch of the compliance repository, it triggers the security and compliance pipeline.
  2. The pipeline pulls the Terraform workload.
  3. The pipeline tests this compliance check guardrail against the Terraform workload in the workload account repository.
  4. If the workload is compliant, the guardrail is automatically merged into the main branch. This activates the guardrail by making it available for all Terraform application workload pipelines to consume. By doing this, we make sure that we don’t break the Terraform application deployment pipeline by introducing new guardrails. It also provides the security and compliance team visibility into the resources in the application workload that are noncompliant. The security and compliance team can then reach out to the application delivery team and suggest appropriate remediation before the new standards are activated. If the compliance check fails, the automatic merge to the main branch is stopped. The security and compliance team has an option to force merge the guardrail into the main branch if it’s deemed critical and they need to activate it immediately.
  5. The Terraform deployment pipeline in the workload account always pulls the latest security and compliance checks from the main branch of the compliance repository.
  6. Checks are run against the Terraform workload to ensure that it meets the organization’s security and compliance standards.
  7. Only secure and compliant workloads are deployed by the pipeline. If the workload is noncompliant, the security and compliance checks fail and break the pipeline, forcing the application delivery team to remediate the issue and check in the code again.

Prerequisites

Before proceeding any further, you need to identify and designate two AWS accounts required for the solution to work:

  • Security and Compliance – In which you create a CodeCommit repository to hold compliance standards that are written based on the Terraform-Compliance framework. You also create a CI/CD pipeline to release new compliance guardrails.
  • Workload – In which the Terraform workload resides. The pipeline to deploy the Terraform workload enforces the compliance guardrails prior to the deployment.

You also need to create two AWS account profiles in ~/.aws/credentials for the security and compliance and workload accounts, if you don’t already have them. These profiles need to have sufficient permissions to run an AWS Cloud Development Kit (AWS CDK) stack. They should be your private profiles and only be used during the course of this use case. Therefore, it should be fine if you want to use admin privileges. Don’t share the profile details, especially if they have admin privileges. I recommend removing the profiles when you’re finished with this walkthrough. For more information about creating an AWS account profile, see Configuring the AWS CLI.
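For example, you can create the two profiles with the AWS CLI; the profile names below are placeholders that you pass to the deploy scripts later in this walkthrough.

# Prompts for the access key, secret key, Region, and output format of each account.
aws configure --profile compliance-account
aws configure --profile workload-account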

In addition, you need to generate a cucumber-sandwich.jar file by following the steps in the cucumber-sandwich GitHub repo. The JAR file is needed to generate pretty HTML compliance reports. The security and compliance team can use these reports to make sure that the standards are met.

To implement our solution, we complete the following high-level steps:

  1. Create the security and compliance account stack.
  2. Create the workload account stack.
  3. Test the compliance workflow.

Create the security and compliance account stack

We create the following resources in the security and compliance account:

  • A CodeCommit repo to hold the security and compliance guardrails
  • A CI/CD pipeline to roll out the Terraform compliance guardrails
  • An IAM role that trusts the application workload account and allows it to pull compliance guardrails from its CodeCommit repo

In this section, we set up the properties for the pipeline and cross-account role stacks, and run the deployment scripts.

Set up properties for the pipeline stack

Clone the GitHub repo aws-continuous-compliance-for-terraform and navigate to the folder security-and-compliance-account/stacks. This contains the folder pipeline_stack/, which holds the code and properties for creating the pipeline stack.

The folder has a JSON file cdk-stack-param.json, which has the parameter TERRAFORM_APPLICATION_WORKLOADS, which represents the list of application workloads that the security and compliance pipeline pulls and runs tests against to make sure that the workloads are compliant. In the workload list, you have the following parameters:

  • GIT_REPO_URL – The HTTPS URL of the CodeCommit repository in the workload account against which the security and compliance check pipeline runs compliance guardrails.
  • CROSS_ACCOUNT_ROLE_ARN – The ARN for the cross-account role we create in the next section. This role gives the security and compliance account permissions to pull Terraform code from the workload account.

For CROSS_ACCOUNT_ROLE_ARN, replace <workload-account-id> with the account ID for your designated AWS workload account. For GIT_REPO_URL, replace <region> with the AWS Region where the repository resides.

security and compliance pipeline stack parameters
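As a rough sketch, and assuming the key nesting shown in the screenshot, the edited cdk-stack-param.json might look like the following. The repository and role names are placeholders for whatever the repo defines; keep the placeholders from the repo and replace them as described above.

cat > cdk-stack-param.json <<'EOF'
{
  "TERRAFORM_APPLICATION_WORKLOADS": [
    {
      "GIT_REPO_URL": "https://git-codecommit.<region>.amazonaws.com/v1/repos/<workload-repo-name>",
      "CROSS_ACCOUNT_ROLE_ARN": "arn:aws:iam::<workload-account-id>:role/<cross-account-role-name>"
    }
  ]
}
EOF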

Set up properties for the cross-account role stack

In the cloned GitHub repo aws-continuous-compliance-for-terraform from the previous step, navigate to the folder security-and-compliance-account/stacks. This contains the folder cross_account_role_stack/, which holds the code and properties for creating the cross-account role.

The folder has a JSON file cdk-stack-param.json, which has the parameter TERRAFORM_APPLICATION_WORKLOAD_ACCOUNTS, which represents the list of Terraform workload accounts that intend to integrate with the security and compliance account for running compliance checks. All these accounts are trusted by the security and compliance account and given permissions to pull compliance guardrails. Replace <workload-account-id> with the account ID for your designated AWS workload account.

security and compliance cross account role stack parameters

Run the deployment script

Run deploy.sh by passing the name of the AWS security and compliance account profile you created earlier. The script uses the AWS CDK CLI to bootstrap and deploy the two stacks we discussed. See the following code:

cd aws-continuous-compliance-for-terraform/security-and-compliance-account/
./deploy.sh "<AWS-COMPLIANCE-ACCOUNT-PROFILE-NAME>"

You should now see three stacks in the security and compliance account:

  • CDKToolkit – AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an Amazon Simple Storage Service (Amazon S3) bucket needed to hold deployment assets such as an AWS CloudFormation template and AWS Lambda code package.
  • cf-CrossAccountRoles – This stack creates the cross-account IAM role.
  • cf-SecurityAndCompliancePipeline – This stack creates the pipeline. On the Outputs tab of the stack, you can find the CodeCommit source repo URL from the key OutSourceRepoHttpUrl. Record the URL to use later.

security and compliance stack
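If you prefer the AWS CLI over the console, a query along the following lines retrieves the repo URL from the stack outputs (the output key is the one noted above).

aws cloudformation describe-stacks \
  --stack-name cf-SecurityAndCompliancePipeline \
  --query "Stacks[0].Outputs[?OutputKey=='OutSourceRepoHttpUrl'].OutputValue" \
  --output text \
  --profile "<AWS-COMPLIANCE-ACCOUNT-PROFILE-NAME>"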

Create a workload account stack

We create the following resources in the workload account:

  • A CodeCommit repo to hold the Terraform workload to be deployed
  • A CI/CD pipeline to deploy the Terraform workload
  • An IAM role that trusts the security and compliance account and allows it to pull Terraform code from its CodeCommit repo for testing

We follow similar steps as in the previous section to set up the properties for the pipeline stack and cross-account role stack, and then run the deployment script.

Set up properties for the pipeline stack

In the already cloned repo, navigate to the folder workload-account/stacks. This contains the folder pipeline_stack/, which holds the code and properties for creating the pipeline stack.

The folder has a JSON file cdk-stack-param.json, which has the parameter COMPLIANCE_CODE, which provides details on where to pull the compliance guardrails from. The pipeline pulls and runs compliance checks prior to deployment, to make sure that the application workload is compliant. You have the following parameters:

  • GIT_REPO_URL – The HTTPS URL of the CodeCommit repository in the security and compliance account, which contains compliance guardrails that the pipeline in the workload account pulls to carry out compliance checks.
  • CROSS_ACCOUNT_ROLE_ARN – The ARN for the cross-account role we created in the previous step in the security and compliance account. This role gives the workload account permissions to pull the Terraform compliance code from its respective security and compliance account.

For CROSS_ACCOUNT_ROLE_ARN, replace <compliance-account-id> with the account ID for your designated AWS security and compliance account. For GIT_REPO_URL, replace <region> with the Region where the repository resides.

workload pipeline stack config

Set up properties for the cross-account role stack

In the already cloned repo, navigate to the folder workload-account/stacks. This contains the folder cross_account_role_stack/, which holds the code and properties for creating the cross-account role stack.

The folder has a JSON file cdk-stack-param.json, which has the parameter COMPLIANCE_ACCOUNT, which represents the security and compliance account that intends to integrate with the workload account for running compliance checks. This account is trusted by the workload account and given permissions to pull compliance guardrails. Replace <compliance-account-id> with the account ID for your designated AWS security and compliance account.

workload cross account role stack config

Run the deployment script

Run deploy.sh by passing the name of the AWS workload account profile you created earlier. The script uses the AWS CDK CLI to bootstrap and deploy the two stacks we discussed. See the following code:

cd aws-continuous-compliance-for-terraform/workload-account/
./deploy.sh "<AWS-WORKLOAD-ACCOUNT-PROFILE-NAME>"

You should now see three stacks in the workload account:

  • CDKToolkit – AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an S3 bucket needed to hold deployment assets such as a CloudFormation template and Lambda code package.
  • cf-CrossAccountRoles – This stack creates the cross-account IAM role.
  • cf-TerraformWorkloadPipeline – This stack creates the pipeline. On the Outputs tab of the stack, you can find the CodeCommit source repo URL from the key OutSourceRepoHttpUrl. Record the URL to use later.

workload pipeline stack

Test the compliance workflow

In this section, we walk through the following steps to test our workflow:

  1. Push the application workload code into its repo.
  2. Push the security and compliance code into its repo and run its pipeline to release the compliance guardrails.
  3. Run the application workload pipeline to exercise the compliance guardrails.
  4. Review the generated reports.

Push the application workload code into its repo

Clone the empty CodeCommit repo from the workload account. You can find the URL from the variable OutSourceRepoHttpUrl on the Outputs tab of the cf-TerraformWorkloadPipeline stack we deployed in the previous section.
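If you haven’t cloned from CodeCommit over HTTPS before, a sketch of the clone step using the AWS CLI credential helper follows; the local folder name is arbitrary.

# One-time Git configuration for HTTPS access to CodeCommit.
# If the workload account is not your default AWS CLI profile, use:
#   '!aws --profile <workload-profile-name> codecommit credential-helper $@'
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone the empty repo using the URL recorded from the stack outputs.
git clone <OutSourceRepoHttpUrl-from-workload-stack> terraform-workload
cd terraform-workload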

  1. Create a new branch main and copy the workload code into it.
  2. Copy the cucumber-sandwich.jar file you generated in the prerequisites section into a new folder /lib.
  3. Create a directory called reports with an empty file dummy. The reports directory is where the Terraform-Compliance framework creates compliance reports.
  4. Push the code to the remote origin.

See the following sample script:

git checkout -b main
# Copy the workload code from the git repo location here
# Create the reports directory and a dummy file
mkdir reports
touch reports/dummy
git add .
git commit -m "Initial commit"
git push origin main

The folder structure of the workload code repo should match the structure shown in the following screenshot.

workload code folder structure

The first commit triggers the pipeline-workload-main pipeline, which fails in the stage RunComplianceCheck due to the security and compliance repo not being present (which we add in the next section).

Push the security and compliance code into its repo and run its pipeline

Clone the empty CodeCommit repo from the security and compliance account. You can find the URL from the variable OutSourceRepoHttpUrl on the Outputs tab of the cf-SecurityAndCompliancePipeline stack we deployed in the previous section.

  1. Create a new local branch main and push the empty branch to the remote origin so that the main branch is created in the remote origin. Skipping this step leads to failure in the code merge step of the pipeline due to the absence of the main branch.
  2. Create a new branch develop and copy the security and compliance code into it. This is required because the security and compliance pipeline is configured to be triggered from the develop branch for the purposes of this post.
  3. Copy the cucumber-sandwich.jar file you generated in the prerequisites section into a new folder /lib.

See the following sample script:

cd security-and-compliance-code
git checkout -b main
git add .
git commit --allow-empty -m "initial commit"
git push origin main
git checkout -b develop main
# Here copy the code from the git repo location
# Also copy cucumber-sandwich.jar into a new folder /lib
git add .
git commit -m "Initial commit"
git push origin develop

The folder structure of the security and compliance code repo should match the structure shown in the following screenshot.

security and compliance code folder structure

The code push to the develop branch of the security-and-compliance-code repo triggers the security and compliance pipeline. The pipeline pulls the code from the workload account repo, then runs the compliance guardrails against the Terraform workload to make sure that the workload is compliant. If the workload is compliant, the pipeline merges the compliance guardrails into the main branch. If the workload fails the compliance test, the pipeline fails. The following screenshot shows a sample run of the pipeline.

security and compliance pipeline

Run the application workload pipeline to exercise the compliance guardrails

After we set up the security and compliance repo and the pipeline runs successfully, the workload pipeline is ready to proceed (see the following screenshot of its progress).

workload pipeline

The application delivery teams are now subject to the security and compliance guardrails (the RunComplianceCheck stage), and their pipeline breaks if any resource is noncompliant.

Review the generated reports

CodeBuild supports viewing reports generated in cucumber JSON format. In our workflow, we generate reports in cucumber JSON and BDD XML formats, and we use this capability of CodeBuild to generate and view HTML reports. Our implementation also generates reports directly in HTML using the cucumber-sandwich library.

The following screenshot is a snippet of the script compliance-check.sh, which implements report generation.

compliance check script

The bug noted in the screenshot is in the radish-bdd library that Terraform-Compliance uses for the cucumber JSON format report generation. For more information, you can review the defect logged against radish-bdd for this issue.

After the script generates the reports, CodeBuild needs to be configured to access them to generate HTML reports. The following screenshot shows a snippet from buildspec-compliance-check.yml, which illustrates how the reports section is set up for report generation:

buildspec compliance check

For more details on how to set up a buildspec file for CodeBuild to generate reports, see Create a test report.
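For orientation, a reports section along these lines is what the buildspec needs. The report group name, file pattern, and example file name here are illustrative; the real values come from the repo’s buildspec-compliance-check.yml.

# Written to an example file here purely to show the YAML structure.
cat > buildspec-reports-example.yml <<'EOF'
reports:
  compliance-report:
    files:
      - "*.json"
    base-directory: reports
    file-format: CUCUMBERJSON
EOF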

CodeBuild displays the compliance run reports as shown in the following screenshot.

code build cucumber report

We can also view a trending graph for multiple runs.

code build cucumber report

The other report generated by the workflow is the pretty HTML report generated by the cucumber-sandwich library.

code build cucumber report

The reports are available for download from the S3 bucket <OutPipelineBucketName>/pipeline-security-an/report_App/<zip file>.

The cucumber-sandwich-generated report marks scenarios with skipped tests as failed scenarios. This is the only noticeable difference between the CodeBuild-generated and cucumber-sandwich-generated HTML reports.

Clean up

To remove all the resources from the workload account, complete the following steps in order:

  1. Go to the folder where you cloned the workload code and edit buildspec-workload-deploy.yml:
    • Comment out line 44 (- ./workload-deploy.sh).
    • Uncomment line 45 (- ./workload-deploy.sh --destroy).
    • Commit and push the code change to the remote repo. The workload pipeline is triggered, which cleans up the workload.
  2. Delete the CloudFormation stack cf-CrossAccountRoles. This step removes the cross-account role from the workload account, which gives permission to the security and compliance account to pull the Terraform workload.
  3. Go to the CloudFormation stack cf-TerraformWorkloadPipeline and note the OutPipelineBucketName and OutStateFileBucketName on the Outputs tab. Empty the two buckets and then delete the stack. This removes pipeline resources from workload account.
  4. Go to the CDKToolkit stack and note the BucketName on the Outputs tab. Empty that bucket and then delete the stack.

To remove all the resources from the security and compliance account, complete the following steps in order:

  1. Delete the CloudFormation stack cf-CrossAccountRoles. This step removes the cross-account role from the security and compliance account, which gives permission to the workload account to pull the compliance code.
  2. Go to CloudFormation stack cf-SecurityAndCompliancePipeline and note the OutPipelineBucketName on the Outputs tab. Empty that bucket and then delete the stack. This removes pipeline resources from the security and compliance account.
  3. Go to the CDKToolkit stack and note the BucketName on the Outputs tab. Empty that bucket and then delete the stack.

Security considerations

Cross-account IAM roles are very powerful and need to be handled carefully. For this post, we strictly limited the cross-account IAM role to specific CodeCommit permissions. This makes sure that the cross-account role can perform only those actions.
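As an illustration of that scoping, the policy attached to the cross-account role only needs the read-only CodeCommit actions required to pull the code, along the lines of the following sketch. The actions and repository name shown are illustrative; the authoritative role and policy are created by the CDK cross-account role stacks in the repo.

cat > cross-account-codecommit-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["codecommit:GitPull", "codecommit:GetRepository"],
      "Resource": "arn:aws:codecommit:<region>:<workload-account-id>:<workload-repo-name>"
    }
  ]
}
EOF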

Conclusion

In this post in our two-part series, we implemented a continuous compliance workflow using CodePipeline and the open-source Terraform-Compliance framework. The Terraform-Compliance framework allows you to build guardrails for securing Terraform applications deployed on AWS.

We also showed how you can use AWS developer tools to seamlessly integrate security and compliance guardrails into an application release cycle and catch noncompliant AWS resources before they’re deployed into AWS.

Try implementing the solution in your enterprise as shown in this post, and leave your thoughts and questions in the comments.

About the authors

sumit mishra

 

Sumit Mishra is a Senior DevOps Architect at AWS Professional Services. His areas of expertise include IaC, security in pipelines, CI/CD, and automation.

 

 

 

Damodar Shenvi Wagle

 

Damodar Shenvi Wagle is a Cloud Application Architect at AWS Professional Services. His areas of expertise include architecting serverless solutions, CI/CD and automation.

Building an end-to-end Kubernetes-based DevSecOps software factory on AWS

Post Syndicated from Srinivas Manepalli original https://aws.amazon.com/blogs/devops/building-an-end-to-end-kubernetes-based-devsecops-software-factory-on-aws/

DevSecOps software factory implementation can significantly vary depending on the application, infrastructure, architecture, and the services and tools used. In a previous post, I provided an end-to-end DevSecOps pipeline for a three-tier web application deployed with AWS Elastic Beanstalk. The pipeline used cloud-native services along with a few open-source security tools. This solution is similar, but instead uses a containers-based approach with additional security analysis stages. It defines a software factory using Kubernetes along with necessary AWS Cloud-native services and open-source third-party tools. Code is provided in the GitHub repo to build this DevSecOps software factory, including the integration code for third-party scanning tools.

DevOps is a combination of cultural philosophies, practices, and tools that combine software development with information technology operations. These combined practices enable companies to deliver new application features and improved services to customers at a higher velocity. DevSecOps takes this a step further by integrating and automating the enforcement of preventive, detective, and responsive security controls into the pipeline.

In a DevSecOps factory, security needs to be addressed from two aspects: security of the software factory, and security in the software factory. In this architecture, we use AWS services to address the security of the software factory, and use third-party tools along with AWS services to address the security in the software factory. This AWS DevSecOps reference architecture covers DevSecOps practices and security vulnerability scanning stages including secret analysis, SCA (Software Composition Analysis), SAST (Static Application Security Testing), DAST (Dynamic Application Security Testing), RASP (Runtime Application Self-Protection), and aggregation of vulnerability findings into a single pane of glass.

The focus of this post is on application vulnerability scanning. Vulnerability scanning of underlying infrastructure such as the Amazon Elastic Kubernetes Service (Amazon EKS) cluster and network is outside the scope of this post. For information about infrastructure-level security planning, refer to Amazon GuardDuty, Amazon Inspector, and AWS Shield.

You can deploy this pipeline in either the AWS GovCloud (US) Region or standard AWS Regions. All listed AWS services are authorized for FedRAMP High and DoD SRG IL4/IL5.

Security and compliance

Thoroughly implementing security and compliance in the public sector and other highly regulated workloads is very important for achieving an ATO (Authority to Operate) and continuously maintaining an ATO (c-ATO). DevSecOps shifts security left in the process, integrating it at each stage of the software factory, which can make ATO a continuous and faster process. With DevSecOps, an organization can deliver secure and compliant application changes rapidly while running operations consistently with automation.

Security and compliance are shared responsibilities between AWS and the customer. Depending on the compliance requirements (such as FedRAMP or DoD SRG), a DevSecOps software factory needs to implement certain security controls. AWS provides tools and services to implement most of these controls. For example, to address NIST 800-53 security control families such as access control, you can use AWS Identity and Access Management (IAM) roles and Amazon Simple Storage Service (Amazon S3) bucket policies. To address auditing and accountability, you can use AWS CloudTrail and Amazon CloudWatch. To address configuration management, you can use AWS Config rules and AWS Systems Manager. Similarly, to address risk assessment, you can use vulnerability scanning tools from AWS.

The following table is the high-level mapping of the NIST 800-53 security control families and AWS services that are used in this DevSecOps reference architecture. This list only includes the services that are defined in the AWS CloudFormation template, which provides pipeline as code in this solution. You can use additional AWS services and tools or other environmental specific services and tools to address these and the remaining security control families on a more granular level.

NIST 800-53 security control family (Rev 5) and the AWS services used in this DevSecOps pipeline:

  1. AC – Access Control – AWS IAM, Amazon S3, and Amazon CloudWatch are used.
     (AWS::IAM::ManagedPolicy, AWS::IAM::Role, AWS::S3::BucketPolicy, AWS::CloudWatch::Alarm)
  2. AU – Audit and Accountability – AWS CloudTrail, Amazon S3, Amazon SNS, and Amazon CloudWatch are used.
     (AWS::CloudTrail::Trail, AWS::Events::Rule, AWS::CloudWatch::LogGroup, AWS::CloudWatch::Alarm, AWS::SNS::Topic)
  3. CM – Configuration Management – AWS Systems Manager, Amazon S3, and AWS Config are used.
     (AWS::SSM::Parameter, AWS::S3::Bucket, AWS::Config::ConfigRule)
  4. CP – Contingency Planning – AWS CodeCommit and Amazon S3 are used.
     (AWS::CodeCommit::Repository, AWS::S3::Bucket)
  5. IA – Identification and Authentication – AWS IAM is used.
     (AWS::IAM::User, AWS::IAM::Role)
  6. RA – Risk Assessment – AWS Config, AWS CloudTrail, AWS Security Hub, and third-party scanning tools are used.
     (AWS::Config::ConfigRule, AWS::CloudTrail::Trail, AWS::SecurityHub::Hub, vulnerability scanning tools from AWS, AWS Partners, or third parties)
  7. CA – Assessment, Authorization, and Monitoring – AWS CloudTrail, Amazon CloudWatch, and AWS Config are used.
     (AWS::CloudTrail::Trail, AWS::CloudWatch::LogGroup, AWS::CloudWatch::Alarm, AWS::Config::ConfigRule)
  8. SC – System and Communications Protection – AWS KMS and AWS Systems Manager are used.
     (AWS::KMS::Key, AWS::SSM::Parameter, SSL/TLS communication)
  9. SI – System and Information Integrity – AWS Security Hub and third-party scanning tools are used.
     (AWS::SecurityHub::Hub, vulnerability scanning tools from AWS, AWS Partners, or third parties)
  10. AT – Awareness and Training – N/A
  11. SA – System and Services Acquisition – N/A
  12. IR – Incident Response – Not implemented, but services like AWS Lambda and Amazon CloudWatch Events can be used.
  13. MA – Maintenance – N/A
  14. MP – Media Protection – N/A
  15. PS – Personnel Security – N/A
  16. PE – Physical and Environmental Protection – N/A
  17. PL – Planning – N/A
  18. PM – Program Management – N/A
  19. PT – PII Processing and Transparency – N/A
  20. SR – Supply Chain Risk Management – N/A

Services and tools

In this section, we discuss the various AWS services and third-party tools used in this solution.

CI/CD services

For continuous integration and continuous delivery (CI/CD) in this reference architecture, we use the following AWS services:

  • AWS CodeBuild – A fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy.
  • AWS CodeCommit – A fully managed source control service that hosts secure Git-based repositories.
  • AWS CodeDeploy – A fully managed deployment service that automates software deployments to a variety of compute services such as Amazon Elastic Compute Cloud (Amazon EC2), AWS Fargate, AWS Lambda, and your on-premises servers.
  • AWS CodePipeline – A fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
  • AWS Lambda – A service that lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
  • Amazon Simple Notification Service – Amazon SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.
  • Amazon S3 – Amazon S3 is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.
  • AWS Systems Manager Parameter Store – Parameter Store provides secure, hierarchical storage for configuration data management and secrets management.

Continuous testing tools

The following are open-source scanning tools that are integrated in the pipeline for the purpose of this post, but you could integrate other tools that meet your specific requirements. You can use the static code review tool Amazon CodeGuru for static analysis, but at the time of this writing, it’s not yet available in AWS GovCloud and currently supports Java and Python.

  • Anchore (SCA and SAST) – Anchore Engine is an open-source software system that provides a centralized service for analyzing container images, scanning for security vulnerabilities, and enforcing deployment policies.
  • Amazon Elastic Container Registry image scanning – Amazon ECR image scanning helps in identifying software vulnerabilities in your container images. Amazon ECR uses the Common Vulnerabilities and Exposures (CVEs) database from the open-source Clair project and provides a list of scan findings.
  • Git-Secrets (Secrets Scanning) – Prevents you from committing sensitive information to Git repositories. It is an open-source tool from AWS Labs.
  • OWASP ZAP (DAST) – Helps you automatically find security vulnerabilities in your web applications while you’re developing and testing your applications.
  • Snyk (SCA and SAST) – Snyk is an open-source security platform designed to help software-driven businesses enhance developer security.
  • Sysdig Falco (RASP) – Falco is an open source cloud-native runtime security project that detects unexpected application behavior and alerts on threats at runtime. It is the first runtime security project to join CNCF as an incubation-level project.

You can integrate additional security stages like IAST (Interactive Application Security Testing) into the pipeline to get code insights while the application is running. You can use AWS Partner tools like Contrast Security, Synopsys, and WhiteSource to integrate IAST scanning into the pipeline. Malware scanning tools and image signing tools can also be integrated into the pipeline for additional security.

Continuous logging and monitoring services

The following are AWS services for continuous logging and monitoring used in this reference architecture:

  • Amazon CloudWatch – Monitors your AWS resources and applications in near real time. In this pipeline, CloudWatch Events triggers the pipeline on code commits and sends build state change notifications, and CloudWatch Logs captures build logs and runtime security (Falco) events.

Auditing and governance services

The following are AWS auditing and governance services used in this reference architecture:

  • AWS CloudTrail – Enables governance, compliance, operational auditing, and risk auditing of your AWS account.
  • AWS Config – Allows you to assess, audit, and evaluate the configurations of your AWS resources.
  • AWS Identity and Access Management – Enables you to manage access to AWS services and resources securely. With IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

Operations services

The following are the AWS operations services used in this reference architecture:

  • AWS CloudFormation – Gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code.
  • Amazon ECR – A fully managed container registry that makes it easy to store, manage, share, and deploy your container images and artifacts anywhere.
  • Amazon EKS – A managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all of the existing plugins and tooling from the Kubernetes community.
  • AWS Security Hub – Gives you a comprehensive view of your security alerts and security posture across your AWS accounts. This post uses Security Hub to aggregate all the vulnerability findings as a single pane of glass.
  • AWS Systems Manager Parameter Store – Provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values.

Pipeline architecture

The following diagram shows the architecture of the solution. We use AWS CloudFormation to describe the pipeline as code.

Containers devsecops pipeline architecture

Kubernetes DevSecOps Pipeline Architecture

The main steps are as follows:

    1. When a user commits the code to the CodeCommit repository, a CloudWatch event is generated, which triggers CodePipeline to orchestrate the events.
    2. CodeBuild packages the build and uploads the artifacts to an S3 bucket.
    3. CodeBuild scans the code with git-secrets. If there is any sensitive information in the code such as AWS access keys or secrets keys, CodeBuild fails the build.
    4. CodeBuild creates the container image and performs SCA and SAST by scanning the image with Snyk or Anchore. In the provided CloudFormation template, you can pick one of these tools during the deployment. Please note, CodeBuild is fully enabled for a “bring your own tool” approach.
      • (4a) If there are any vulnerabilities, CodeBuild invokes the Lambda function. The function parses the results into AWS Security Finding Format (ASFF) and posts them to Security Hub. Security Hub helps aggregate and view all the vulnerability findings in one place as a single pane of glass. The Lambda function also uploads the scanning results to an S3 bucket.
      • (4b) If there are no vulnerabilities, CodeBuild pushes the container image to Amazon ECR and triggers another scan using built-in Amazon ECR scanning.
    5. CodeBuild retrieves the scanning results.
      • (5a) If there are any vulnerabilities, CodeBuild invokes the Lambda function again and posts the findings to Security Hub. The Lambda function also uploads the scan results to an S3 bucket.
      • (5b) If there are no vulnerabilities, CodeBuild deploys the container image to an Amazon EKS staging environment.
    6. After the deployment succeeds, CodeBuild triggers the DAST scanning with the OWASP ZAP tool (again, this is fully enabled for a “bring your own tool” approach).
      • (6a) If there are any vulnerabilities, CodeBuild invokes the Lambda function, which parses the results into ASFF and posts it to Security Hub. The function also uploads the scan results to an S3 bucket (similar to step 4a).
    7. If there are no vulnerabilities, the approval stage is triggered, and an email is sent to the approver for action via Amazon SNS.
    8. After approval, CodeBuild deploys the code to the production Amazon EKS environment.
    9. During the pipeline run, CloudWatch Events captures the build state changes and sends email notifications to subscribed users through Amazon SNS.
    10. CloudTrail tracks the API calls and sends notifications on critical events on the pipeline and CodeBuild projects, such as UpdatePipeline, DeletePipeline, CreateProject, and DeleteProject, for auditing purposes.
    11. AWS Config tracks all the configuration changes of AWS services. The following AWS Config rules are added in this pipeline as security best practices:
      1. CODEBUILD_PROJECT_ENVVAR_AWSCRED_CHECK – Checks whether the project contains environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The rule is NON_COMPLIANT when the project environment variables contain plaintext credentials. This rule ensures that sensitive information isn’t stored in the CodeBuild project environment variables.
      2. CLOUD_TRAIL_LOG_FILE_VALIDATION_ENABLED – Checks whether CloudTrail creates a signed digest file with logs. AWS recommends that the file validation be enabled on all trails. The rule is noncompliant if the validation is not enabled. This rule ensures that pipeline resources such as the CodeBuild project aren’t altered to bypass critical vulnerability checks.

Security of the pipeline is implemented using IAM roles and S3 bucket policies to restrict access to pipeline resources. Pipeline data at rest and in transit is protected using encryption and SSL secure transport. We use Parameter Store to store sensitive information such as API tokens and passwords. To be fully compliant with frameworks such as FedRAMP, other things may be required, such as MFA.
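For example, a secret such as the Snyk API token can be stored as a SecureString parameter so that it is encrypted at rest with AWS KMS; the parameter name below is illustrative.

aws ssm put-parameter \
  --name "/devsecops/snyk/api-token" \
  --type SecureString \
  --value "<your-snyk-api-token>"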

Security in the pipeline is implemented by performing the Secret Analysis, SCA, SAST, DAST, and RASP security checks. Applicable AWS services provide encryption at rest and in transit by default. You can enable additional controls on top of these wherever required.

In the next section, I explain how to deploy and run the pipeline CloudFormation template used for this example. As a best practice, we recommend using linting tools like cfn-nag and cfn-guard to scan CloudFormation templates for security vulnerabilities. Refer to the provided service links to learn more about each of the services in the pipeline.

Prerequisites

Before getting started, make sure you have the following prerequisites:

  • An EKS cluster environment with your application deployed. In this post, we use PHP WordPress as a sample application, but you can use any other application.
  • Sysdig Falco installed on an EKS cluster. Sysdig Falco captures events on the EKS cluster and sends those events to CloudWatch using AWS FireLens. For implementation instructions, see Implementing Runtime security in Amazon EKS using CNCF Falco. This step is required only if you need to implement RASP in the software factory.
  • A CodeCommit repo with your application code and a Dockerfile. For more information, see Create an AWS CodeCommit repository.
  • An Amazon ECR repo to store container images and scan for vulnerabilities. Enable vulnerability scanning on image push in Amazon ECR. You can enable or disable automatic scanning on image push via the Amazon ECR console (see the CLI sketch after this list).
  • The provided buildspec-*.yml files for git-secrets, Anchore, Snyk, Amazon ECR, OWASP ZAP, and your Kubernetes deployment .yml files uploaded to the root of the application code repository. Please update the Kubernetes (kubectl) commands in the buildspec files as needed.
  • A Snyk API key if you use Snyk as a SAST tool.
  • The Lambda function uploaded to an S3 bucket. We use this function to parse the scan reports and post the results to Security Hub.
  • An OWASP ZAP URL and generated API key for dynamic web scanning.
  • An application web URL to run the DAST testing.
  • An email address to receive approval notifications for deployment, pipeline change notifications, and CloudTrail events.
  • AWS Config and Security Hub services enabled. For instructions, see Managing the Configuration Recorder and Enabling Security Hub manually, respectively.
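As referenced in the prerequisites above, scan on push can also be enabled from the CLI; the repository name is a placeholder.

aws ecr put-image-scanning-configuration \
  --repository-name <your-ecr-repo-name> \
  --image-scanning-configuration scanOnPush=true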

Deploying the pipeline

To deploy the pipeline, complete the following steps:

  1. Download the CloudFormation template and pipeline code from the GitHub repo.
  2. Sign in to your AWS account if you have not done so already.
  3. On the CloudFormation console, choose Create Stack.
  4. Choose the CloudFormation pipeline template.
  5. Choose Next.
  6. Under Code, provide the following information:
    1. Code details, such as repository name and the branch to trigger the pipeline.
    2. The Amazon ECR container image repository name.
  7. Under SAST, provide the following information:
    1. Choose the SAST tool (Anchore or Snyk) for code analysis.
    2. If you select Snyk, provide an API key for Snyk.
  8. Under DAST, choose the DAST tool (OWASP ZAP) for dynamic testing and enter the API token, DAST tool URL, and the application URL to run the scan.
  9. Under Lambda functions, enter the Lambda function S3 bucket name, filename, and the handler name.
  10. For STG EKS cluster, enter the staging EKS cluster name.
  11. For PRD EKS cluster, enter the production EKS cluster name to which this pipeline deploys the container image.
  12. Under General, enter the email addresses to receive notifications for approvals and pipeline status changes.
  13. Choose Next.
  14. Complete the stack.
  15. After the pipeline is deployed, confirm the subscription by choosing the provided link in the email to receive notifications.

Pipeline CloudFormation Parameters

The provided CloudFormation template in this post is formatted for AWS GovCloud. If you’re setting this up in a standard Region, you have to adjust the partition name in the CloudFormation template. For example, change ARN values from arn:aws-us-gov to arn:aws.

Running the pipeline

To trigger the pipeline, commit changes to your application repository files. That generates a CloudWatch event and triggers the pipeline. CodeBuild scans the code and if there are any vulnerabilities, it invokes the Lambda function to parse and post the results to Security Hub.

When posting the vulnerability finding information to Security Hub, we need to provide a vulnerability severity level. Based on the provided severity value, Security Hub assigns the label as follows. Adjust the severity levels in your code based on your organization’s requirements.

  • 0 – INFORMATIONAL
  • 1–39 – LOW
  • 40–69 – MEDIUM
  • 70–89 – HIGH
  • 90–100 – CRITICAL
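For reference, the severity value is carried in the ASFF finding that the Lambda function imports into Security Hub. A stripped-down finding with a normalized severity of 75 (which Security Hub labels HIGH per the ranges above) might look like the following; all identifiers, ARNs, and timestamps are placeholders, and the actual finding structure is defined by the Lambda code in the repo.

# All IDs, ARNs, and timestamps below are placeholders.
cat > finding.json <<'EOF'
[
  {
    "SchemaVersion": "2018-10-08",
    "Id": "example-sast-finding-001",
    "ProductArn": "arn:aws:securityhub:<region>:<account-id>:product/<account-id>/default",
    "GeneratorId": "sast-scan",
    "AwsAccountId": "<account-id>",
    "Types": ["Software and Configuration Checks/Vulnerabilities"],
    "CreatedAt": "2021-06-01T00:00:00Z",
    "UpdatedAt": "2021-06-01T00:00:00Z",
    "Severity": { "Normalized": 75 },
    "Title": "Example vulnerability",
    "Description": "Illustrative finding posted from the pipeline.",
    "Resources": [ { "Type": "Container", "Id": "<image-uri>" } ]
  }
]
EOF
aws securityhub batch-import-findings --findings file://finding.json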

The following screenshot shows the progression of your pipeline.


DevSecOps Kubernetes CI/CD Pipeline

 

Secrets analysis scanning

In this architecture, after the pipeline is initiated, CodeBuild triggers the Secret Analysis stage using git-secrets and the buildspec-gitsecrets.yml file. Git-Secrets looks for any sensitive information such as AWS access keys and secret access keys. Git-Secrets allows you to add custom strings to look for in your analysis. CodeBuild uses the provided buildspec-gitsecrets.yml file during the build stage.
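Outside the pipeline, developers can run the same checks locally before pushing code; a minimal sketch, assuming git-secrets is installed, follows.

# Register the built-in AWS credential patterns for this repo, then scan recursively.
git secrets --register-aws
git secrets --scan -r .
# Custom strings can be added with: git secrets --add '<regex>'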

SCA and SAST scanning

In this architecture, CodeBuild triggers the SCA and SAST scanning using Anchore, Snyk, and Amazon ECR. In this solution, we use the open-source versions of Anchore and Snyk. Amazon ECR uses the open-source Clair project under the hood, which comes with Amazon ECR at no additional cost. As mentioned earlier, you can choose Anchore or Snyk to do the initial image scanning.

Scanning with Anchore

If you choose Anchore as a SAST tool during the deployment, the build stage uses the buildspec-anchore.yml file to scan the container image. If there are any vulnerabilities, it fails the build and triggers the Lambda function to post those findings to Security Hub. If there are no vulnerabilities, it proceeds to the next stage.


Anchore Lambda Code Snippet

Scanning with Snyk

If you choose Snyk as a SAST tool during the deployment, the build stage uses the buildspec-snyk.yml file to scan the container image. If there are any vulnerabilities, it fails the build and triggers the Lambda function to post those findings to Security Hub. If there are no vulnerabilities, it proceeds to the next stage.


Snyk Lambda Code Snippet

Scanning with Amazon ECR

If there are no vulnerabilities from the Anchore or Snyk scanning, the image is pushed to Amazon ECR, and the Amazon ECR scan is triggered automatically. Amazon ECR lists the vulnerability findings on the Amazon ECR console. To provide a single pane of glass view of all the vulnerability findings and for easy administration, we retrieve those findings and post them to Security Hub. If there are no vulnerabilities, the image is deployed to the EKS staging cluster and the next stage (DAST scanning) is triggered.


ECR Lambda Code Snippet
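The findings that the Lambda function retrieves are the same ones you can pull with the AWS CLI; the repository name and image tag below are placeholders.

aws ecr describe-image-scan-findings \
  --repository-name <your-ecr-repo-name> \
  --image-id imageTag=<image-tag>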

 

DAST scanning with OWASP ZAP

In this architecture, CodeBuild triggers DAST scanning using the DAST tool OWASP ZAP.

After deployment is successful, CodeBuild initiates the DAST scanning. When scanning is complete, if there are any vulnerabilities, it invokes the Lambda function, similar to SAST analysis. The function parses and posts the results to Security Hub. The following is the code snippet of the Lambda function.


Zap Lambda Code Snippet
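How the buildspec drives ZAP is defined in the repo, but for orientation, a scan against a running ZAP instance can be triggered and read back through the ZAP API along these lines; the URL, API key, and target are the values you supplied as CloudFormation parameters.

# Start an active scan of the staging application.
curl "http://<zap-url>/JSON/ascan/action/scan/?apikey=<zap-api-key>&url=<app-url>&recurse=true"

# Once the scan completes, pull the alerts for parsing into ASFF.
curl "http://<zap-url>/JSON/core/view/alerts/?apikey=<zap-api-key>&baseurl=<app-url>"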

The following screenshot shows the results in Security Hub. The highlighted section shows the vulnerability findings from various scanning stages.


Vulnerability Findings in Security Hub

We can drill down to individual resource IDs to get the list of vulnerability findings. For example, if we drill down to the resource ID of SASTBuildProject*, we can review all the findings from that resource ID.


SAST Vulnerabilities in Security Hub

 

If there are no vulnerabilities in the DAST scan, the pipeline proceeds to the manual approval stage and an email is sent to the approver. The approver can review and approve or reject the deployment. If approved, the pipeline moves to the next stage and deploys the application to the production EKS cluster.

Aggregation of vulnerability findings in Security Hub provides opportunities to automate the remediation. For example, based on the vulnerability finding, you can trigger a Lambda function to take the needed remediation action. This also reduces the burden on operations and security teams because they can now address the vulnerabilities from a single pane of glass instead of logging into multiple tool dashboards.

Along with Security Hub, you can send vulnerability findings to your issue tracking systems, such as JIRA or AWS Systems Manager OpsCenter, or automatically create an incident management ticket. This is outside the scope of this post, but is one of the possibilities you can consider when implementing DevSecOps software factories.

RASP scanning

Sysdig Falco is an open-source runtime security tool. Based on the configured rules, Falco can detect suspicious activity and alert on unexpected behavior observed through Linux system calls. You can use Falco rules to address security controls like NIST SP 800-53. Falco agents on each EKS node continuously scan the containers running in pods and send the events as STDOUT. These events can then be sent to CloudWatch or any third-party log aggregator to send alerts and respond. For more information, see Implementing Runtime security in Amazon EKS using CNCF Falco. You can also use Lambda to trigger and automatically remediate certain security events.
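As a simplified illustration (the default rule set already ships a similar shell-in-container rule), a Falco rule pairs a condition over system call events with an output template; with the Falco Helm chart you would typically supply custom rules through the chart values rather than appending to a local rules file as sketched here.

cat >> falco_rules.local.yaml <<'EOF'
- rule: Shell spawned in a container
  desc: Detect an interactive shell running inside a container
  condition: container.id != host and proc.name in (bash, sh)
  output: "Shell in container (user=%user.name container=%container.name image=%container.image.repository)"
  priority: WARNING
EOF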

The following screenshot shows Falco events on the CloudWatch console. The highlighted text describes the Falco event that was triggered based on the default Falco rules on the EKS cluster. You can add additional custom rules to meet your security control requirements. You can also trigger responsive actions from these CloudWatch events using services like Lambda.


Falco alerts in CloudWatch

Cleanup

This section provides instructions to clean up the DevSecOps pipeline setup:

  1. Delete the EKS cluster.
  2. Delete the S3 bucket.
  3. Delete the CodeCommit repo.
  4. Delete the Amazon ECR repo.
  5. Disable Security Hub.
  6. Disable AWS Config.
  7. Delete the pipeline CloudFormation stack.

Conclusion

In this post, I presented an end-to-end Kubernetes-based DevSecOps software factory on AWS with continuous testing, continuous logging and monitoring, auditing and governance, and operations. I demonstrated how to integrate various open-source scanning tools, such as Git-Secrets, Anchore, Snyk, OWASP ZAP, and Sysdig Falco for Secret Analysis, SCA, SAST, DAST, and RASP analysis, respectively. To reduce operations overhead, I explained how to aggregate and manage vulnerability findings in Security Hub as a single pane of glass. This post also talked about how to implement security of the pipeline and in the pipeline using AWS Cloud-native services. Finally, I provided the DevSecOps software factory as code using AWS CloudFormation.

To get started with DevSecOps on AWS, see AWS DevOps and the DevOps blog.

Srinivas Manepalli is a DevSecOps Solutions Architect in the U.S. Fed SI SA team at Amazon Web Services (AWS). He is passionate about helping customers, building and architecting DevSecOps and highly available software systems. Outside of work, he enjoys spending time with family, nature and good food.

Approaches to meeting Australian Government gateway requirements on AWS

Post Syndicated from John Hildebrandt original https://aws.amazon.com/blogs/security/approaches-to-meeting-australian-government-gateway-requirements-on-aws/

Australian Commonwealth Government agencies are subject to specific requirements set by the Protective Security Policy Framework (PSPF) for securing connectivity between systems that are running sensitive workloads, and for accessing less trusted environments, such as the internet. These agencies have often met the requirements by using some form of approved gateway solution that provides network-based security controls.

This post examines the types of controls you need to provide a gateway that can meet Australian Government requirements defined in the Protective Security Policy Framework (PSPF) and the challenges of using traditional deployment models to support cloud-based solutions. PSPF requirements are mandatory for non-corporate Commonwealth entities, and represent better practice for corporate Commonwealth entities, wholly-owned Commonwealth companies, and state and territory agencies. We discuss the ability to deploy gateway-style solutions in the cloud, and show how you can meet the majority of gateway requirements by using standard cloud architectures plus services. We provide guidance on deploying gateway solutions in the AWS Cloud, and highlight services that can support such deployments. Finally, we provide an illustrative AWS web architecture pattern to show how to meet the majority of gateway requirements through Well-Architected use of services.

Australian Government gateway requirements

The Australian Government Protective Security Policy Framework (PSPF) highlights the requirement to use secure internet gateways (SIGs) and references the Australian Information Security Manual (ISM) control framework to guide agencies. The ISM has a chapter on gateways, which includes the following recommendations for gateway architecture and operations:

  • Provide a central control point for traffic in and out of the system.
  • Inspect and filter traffic.
  • Log and monitor traffic and gateway operation to a secure location. Use appropriate security event alerting.
  • Use secure administration practices, including multi-factor authentication (MFA) access control, minimum privilege, separation of roles, and network segregation.
  • Perform appropriate authentication and authorization of users, traffic, and equipment. Use MFA when possible.
  • Use demilitarized zone (DMZ) patterns to limit access to internal networks.
  • Test security controls regularly.
  • Set up firewalls between security domains and public network infrastructure.

Since the PSPF references the ISM, the agency should apply the overall ISM framework to meet ISM requirements such as governance and security patching for the environment. The ISM is a risk-based framework, and the risk posture of the workload and organization should inform how to assess the controls. For example, requirements for authentication of users might be relaxed for a public-facing website.

In traditional on-premises environments, some Australian Government agencies have mandated centrally assessed and managed gateway capabilities in order to drive economies of scale across multiple government agencies. However, the PSPF does provide the option for gateways used only by a single government agency to undertake their own risk-based assessment for the single agency gateway solution.

Other government agencies also have specific requirements to connect with cloud providers. For example, the U.S. Government Office of Management and Budget (OMB) mandates that U.S. government users access the cloud through a specific agency connection.

Connecting to the cloud through on-premises gateways

Given the existence of centrally managed off-cloud gateways, one approach by customers has been to continue to use these off-cloud gateways and then connect to AWS through the on-premises gateway environment by using AWS Direct Connect, as shown in Figure 1.
 

Figure 1: Connecting to the AWS Cloud through an agency gateway and then through AWS Direct Connect

Although this approach does work, and makes use of existing gateway capability, it has a number of downsides:

  • A potential single point of failure: If the on-premises gateway capability is unavailable, the agency can lose connectivity to the cloud-based solution.
  • Bandwidth limitations: The agency is limited by the capacity of the gateway, which might not have been developed with dynamically scalable and bandwidth-intensive cloud-based workloads in mind.
  • Latency issues: The requirement to traverse multiple network hops, in addition to the gateway, will introduce additional latency. This can be particularly problematic with architectures that involve API communications being sent back and forth across the gateway environment.
  • Castle-and-moat thinking: Relying only on the gateway as the security boundary can discourage agencies from using and recognizing the cloud-based security controls that are available.

Some of these challenges are discussed in the context of US Trusted Internet Connection (TIC) programs in this whitepaper.

Moving gateways to the cloud

In response to the limitations discussed in the last section, both customers and AWS Partners have built gateway solutions on AWS to meet gateway requirements while remaining fully within the cloud environment. See this type of solution in Figure 2.
 

Figure 2: Moving the gateway to the AWS Cloud

With this approach, you can fully leverage the scalable bandwidth that is available from the AWS environment, and you can also reduce latency issues, particularly when multiple hops to and from the gateway are required. This blog post describes a pilot program in the US that combines AWS services and AWS Marketplace technologies to provide a cloud-based gateway.

You can use AWS Transit Gateway (released after the referenced pilot program) to provide the option to centralize such a gateway capability within an organization. This makes it possible to utilize the gateway across multiple cloud solutions that are running in their own virtual private clouds (VPCs) and accounts. This approach also facilitates the principle of the gateway being the central control point for traffic flowing in and out. For more information on using AWS Transit Gateway with security appliances, see the Appliance VPC topic in the Amazon VPC documentation.

More recently, AWS has released additional services and features that can assist with delivering government gateway requirements.

Gateway Load Balancer, part of Elastic Load Balancing, provides the capability to deploy third-party network appliances in a scalable fashion. With this capability, you can leverage existing investment in licensing, use familiar tooling, reuse intellectual property (IP) such as rule sets, and reuse skills, because staff are already trained in configuring and managing the chosen device. You have one gateway for distributing traffic across multiple virtual appliances, while scaling the appliances up and down based on demand. This reduces the potential points of failure in your network and increases availability. Gateway Load Balancer is a straightforward way to use third-party network appliances from industry leaders in the cloud. You benefit from the features of these devices, while Gateway Load Balancer makes them automatically scalable and easier to deploy. You can find an AWS Partner with Gateway Load Balancer expertise on the AWS Marketplace. For more information on combining Transit Gateway and Gateway Load Balancer for a centralized inspection architecture, see this blog post. The post shows a centralized architecture for East-West (VPC-to-VPC) and North-South (internet-bound or on-premises-bound) traffic inspection and processing.

To further simplify this area for customers, AWS has introduced the AWS Network Firewall service. Network Firewall is a managed service that you can use to deploy essential network protections for your VPCs. The service is simple to set up and scales automatically with your network traffic so you don’t have to worry about deploying and managing any infrastructure. You can combine Network Firewall with Transit Gateway to set up centralized inspection architecture models, such as those described in this blog post.

Reviewing a typical web architecture in the cloud

In the last section, you saw that SIG patterns can be created in the cloud. Now we can put that in context with the layered security controls that are implemented in a typical web application deployment. Consider a web application hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances, as shown in Figure 3, within the context of other services that will support the architecture.
 

Figure 3: Security controls in a web application hosted on EC2

Although this example doesn’t include a traditional SIG-type infrastructure that inspects and controls traffic before it’s sent to the AWS Cloud, the architecture has many of the technical controls that are called for in SIG solutions as a result of using the AWS Well-Architected Framework. We’ll now step through some of these services to highlight the relevant security functionality that each provides.

Network control services

Amazon Virtual Private Cloud (Amazon VPC) is a service you can use to launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. Amazon VPC lets you use multiple layers of security, including security groups and network access control lists (network ACLs), to help control access to Amazon EC2 instances in each subnet. Security groups act as a firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level. A network ACL is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups to add an additional layer of security to your VPC. Read about the specific differences between security groups and network ACLs.

Having this level of control throughout the application architecture has advantages over relying only on a central, border-style gateway pattern, because security groups for each tier of the application architecture can be locked down to only those ports and sources required for that layer. For example, in the architecture shown in Figure 3, only the application load balancer security group would allow web traffic (ports 80, 443) from the internet. The web-tier-layer security group would only accept traffic from the load-balancer layer, and the database-layer security group would only accept traffic from the web tier.
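
As a minimal boto3 sketch of this layering (the VPC ID, ports, and group names are placeholder assumptions, not values from the example architecture), the web tier's security group can reference the load balancer's security group as its only allowed source:

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder

# Load balancer tier: accepts HTTPS from the internet
alb_sg = ec2.create_security_group(
    GroupName="alb-tier", Description="Load balancer tier", VpcId=VPC_ID
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=alb_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Web tier: accepts traffic only from the load balancer security group
web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web tier", VpcId=VPC_ID
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": alb_sg}],
    }],
)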

If you need to provide a central point of control with this model, you can use AWS Firewall Manager, which simplifies the administration and maintenance of your VPC security groups across multiple accounts and resources. With Firewall Manager, you can configure and audit your security groups for your organization using a single, central administrator account. Firewall Manager automatically applies rules and protections across your accounts and resources, even as you add new resources. Firewall Manager is particularly useful when you want to protect your entire organization, or if you frequently add new resources that you want to protect via a central administrator account.

To support separation of management plane activities from data plane activities in workloads, agencies can use multiple elastic network interfaces on EC2 instances to provide a separate management network path.

Edge protection services

In the example in Figure 3, several services are used to provide edge-based protections in front of the web application. AWS Shield is a managed distributed denial of service (DDoS) protection service that safeguards applications that are running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there’s no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield: Standard and Advanced. When you use Shield Advanced, you can apply protections at the Amazon CloudFront, Amazon EC2, and application load balancer layers. Shield Advanced also gives you 24/7 access to the AWS DDoS Response Team (DRT).

AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns that you define. Again, you can apply this protection at both the Amazon CloudFront and application load balancer layers in our illustrated solution. Agencies can also use managed rules for WAF to benefit from rules developed and maintained by AWS Marketplace sellers.

Amazon CloudFront is a fast content delivery network (CDN) service. CloudFront seamlessly integrates with AWS Shield, AWS WAF, and Amazon Route 53 to help protect against multiple types of unauthorized access, including network and application layer DDoS attacks.

Logging and monitoring services

The example application in Figure 3 shows several services that provide logging and monitoring of network traffic, application activity, infrastructure, and AWS API usage.

At the VPC level, the VPC Flow Logs feature provides you with the ability to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon Simple Storage Service (Amazon S3). Traffic Mirroring is a feature that you can use in a VPC to capture traffic if needed for inspection. This allows agencies to implement full packet capture on a continuous basis, or in response to a specific event within the application.
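
The following is a minimal boto3 sketch of turning on flow logs for a VPC (the VPC ID, log group name, and IAM role ARN are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2")

# Publish all traffic records for the VPC to a CloudWatch Logs group
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs/example",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder role
)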

Amazon CloudWatch provides a monitoring service with alarms and analytics. In the example application, AWS WAF can also be configured to log activity as described in the AWS WAF Developer Guide.

AWS Config provides a timeline view of the configuration of the environment. You can also define rules to provide alerts and remediation when the environment moves away from the desired configuration.

AWS CloudTrail is a service that you can use to handle governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity that is related to actions across your AWS infrastructure.

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs. This blog post highlights a third-party assessment of GuardDuty that compares its performance to other intrusion detection systems (IDS).

Route 53 Resolver Query Logging lets you log the DNS queries that originate in your VPCs. With query logging turned on, you can see which domain names have been queried, the AWS resources from which the queries originated—including source IP and instance ID—and the responses that were received.
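
As a minimal sketch of enabling query logging with boto3 (the log group ARN and VPC ID are placeholders, and the permissions needed for log delivery are omitted):

import boto3

resolver = boto3.client("route53resolver")

# Create a query log configuration that delivers to a CloudWatch Logs group
config = resolver.create_resolver_query_log_config(
    Name="example-query-logging",
    DestinationArn="arn:aws:logs:ap-southeast-2:123456789012:log-group:/dns/query-logs",  # placeholder
    CreatorRequestId="example-request-001",
)["ResolverQueryLogConfig"]

# Associate the configuration with a VPC so its queries are logged
resolver.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config["Id"],
    ResourceId="vpc-0123456789abcdef0",  # placeholder VPC ID
)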

With Route 53 Resolver DNS Firewall, you can filter and regulate outbound DNS traffic for your VPCs. To do this, you create reusable collections of filtering rules in DNS Firewall rule groups, associate the rule groups to your VPC, and then monitor activity in DNS Firewall logs and metrics. Based on the activity, you can adjust the behavior of DNS Firewall accordingly.

Mapping services to control areas

Based on the services described above, we can summarize which services contribute to the control and recommendation areas in the gateway chapter of the Australian ISM framework.

  • Inspect and filter traffic – AWS WAF, VPC Traffic Mirroring
  • Central control point – Infrastructure as code, AWS Firewall Manager
  • Authentication and authorization (MFA) – AWS Identity and Access Management (IAM), solution and application IAM, VPC security groups
  • Logging and monitoring – Amazon CloudWatch, AWS CloudTrail, AWS Config, Amazon VPC (flow logs and mirroring), load balancer logs, Amazon CloudFront logs, Amazon GuardDuty, Route 53 Resolver Query Logging
  • Secure administration (MFA) – IAM, directory federation (if used)
  • DMZ patterns – VPC subnet layout, security groups, network ACLs
  • Firewalls – VPC security groups, network ACLs, AWS WAF, Route 53 Resolver DNS Firewall
  • Web proxy; site and content filtering and scanning – AWS WAF, Firewall Manager

Note that the listed AWS services might not provide all relevant controls in each area; it is part of the customer’s risk assessment and design to determine what additional controls might need to be implemented.

As you can see, many of the recommended practices and controls from the Australian Government gateway requirements are already encompassed in a typical Well-Architected solution. The implementing agency has two options: it can continue to place such a solution behind a gateway that runs either within or outside of AWS, treating the gateway controls inherent in the application architecture as additional layers of defense, or it can conduct a risk assessment to understand which gateway controls can be supplied by the application architecture itself, reducing the gateway control requirements at any gateway layer in front of the application.

Summary

In this blog post, we’ve discussed the requirements for Australian Government gateways which provide network controls to secure workloads. We’ve outlined the downsides of using traditional on-premises solutions and illustrated how services such as AWS Transit Gateway, Elastic Load Balancing, Gateway Load Balancer, and AWS Network Firewall facilitate moving gateway solutions into the cloud. These are services you can evaluate against your network control requirements. Finally, we reviewed a typical web architecture running in the AWS Cloud with associated services to illustrate how many of the typical gateway controls can be met by using a standard Well-Architected approach.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on one of the AWS Security or Networking forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

John Hildebrandt

John is a Principal Solutions Architect in the Australian National Security team at AWS in Canberra, Australia. He is passionate about facilitating cloud adoption for customers to enable innovation. John has been working with government customers at AWS for over 8 years, as the first employee for the ANZ Public Sector team.

Continuous Compliance Workflow for Infrastructure as Code: Part 1

Post Syndicated from Sumit Mishra original https://aws.amazon.com/blogs/devops/continuous-compliance-workflow-for-infrastructure-as-code-part-1/

Security and compliance standards are of paramount importance for organizations in many industries. There is a growing need to seamlessly integrate these standards in an application release cycle. From a DevOps standpoint, an application can be subject to these standards during two phases:

  • Pre-deployment – Standards are enforced in an application deployment pipeline prior to the deployment of the workload. This follows a shift-left testing approach of catching defects early in the release cycle and preventing security vulnerabilities and compliance issues from being deployed into your AWS account. Examples of services and tools providing this capability are Amazon CodeGuru Reviewer and AWS CloudFormation Guard for static security analysis.
  • Post-deployment – Standards are deployed in application-specific AWS accounts. They only operate and report on resources deployed in those accounts. An example of a service providing this capability is AWS Config for runtime compliance checks.

For this post, we focus on pre-deployment security and compliance standards.

As a security and compliance engineer, you’re responsible for introducing guardrails based on your organization’s security policies, ensuring continuous compliance of the workloads, and preventing noncompliant workloads from being promoted to production. The process of releasing security and compliance guardrails to the individual application development teams, who have to incorporate them into their release cycle, can become challenging from a scalability standpoint.

You need a process with the following features:

  • A place to develop and test the guardrails before promotion or activation
  • Visibility into potential noncompliant resources before activating the guardrails (observation mode)
  • The ability to notify delivery teams if a noncompliant resource is found in their workload, allowing them time to remediate before guardrail activation
  • A defined deadline for the delivery teams to mitigate the issues
  • The ability to add exclusions to guardrails
  • The ability to enable the guardrail in production in active mode, causing the delivery pipeline to break if a noncompliant resource is found

In this post, we propose a continuous compliance workflow that uses the pattern of continuous integration and continuous deployment (CI/CD) to implement these capabilities. We discuss this solution from the perspective of a security and compliance engineer, and assume that you’re aware of application development terminologies and practices such as CI/CD, infrastructure as code (IaC), behavior-driven development (BDD), and negative testing.

Our continuous compliance workflow is technology agnostic. You can implement it using any combination of CI/CD tools and IaC frameworks, for example AWS CloudFormation or the AWS CDK for IaC with AWS CloudFormation Guard as the policy-as-code tool.

This is part one of a two-part series; in this post, we focus on the continuous compliance workflow and not on its implementation. In Part 2, we focus on the technical implementation of the workflow using AWS Developer Tools, Terraform, and Terraform-Compliance, an open-source compliance framework for Terraform.

Continuous compliance workflow

The security and compliance team is responsible for releasing guardrails that implement compliance policies. Application delivery pipelines are required to carry out compliance checks by subjecting their workloads to these guardrails. However, as the guardrails are released and enforced in application delivery pipelines, there should be no element of surprise for the application teams, with new guardrails suddenly breaking their pipelines without warning. A critical ingredient of the continuous compliance workflow is the CI/CD pipeline, which allows for a controlled release of the guardrails to the application delivery pipelines.

To help facilitate this process, we introduce the workflow shown in the following diagram.

continuous compliance workflow

The security and compliance team implements compliance as code using a framework of their choice. The following is an example of compliance as code:

Scenario: Ensure all resources have tags
  Given I have resource that supports tags defined
  Then it must contain tags
  And its value must not be null

This compliance check ensures that all AWS resources created have the tags property defined. It’s written using an open-source compliance framework for Terraform called Terraform-Compliance. The framework uses BDD syntax to define the guardrails.

The guardrail is then checked into the feature branch of the repository where all the compliance guardrails reside. This triggers the security and compliance continuous integration (CI) process. The CI flow runs all the guardrails (including newly introduced ones) against the application workload code. Because this occurs in the security and compliance CI pipeline and not the application delivery pipeline, it’s not visible to the application delivery team and doesn’t impact them. This is called observation mode. The security and compliance team can observe the results of their new guardrails against application code without impacting the application delivery team. This allows for notification to the application delivery team to fix any noncompliant resources if found.

Actions taken for compliant workloads

If the workload is compliant with the newly introduced guardrail, the pipeline automatically merges the guardrail to the mainline branch and moves it to active mode. When a guardrail is in active mode, it impacts the application delivery pipelines by breaking them if any noncompliant resources are introduced in the application workload.

Actions taken for noncompliant workloads

If the workload is found to be noncompliant, the pipeline stops the automatic merge. At this point, an alternate path of the workflow takes over, in which the application delivery team is notified and asked to fix the compliance issues before an established deadline. After the deadline, the compliance code is manually merged into the mainline branch, thereby activating it.

The application delivery team may have a valid reason for being noncompliant with one or more guardrails, in which case they have to take their request to the security and compliance team so that the noncompliant resource is added to the exclusion list for that guardrail. If approved, the security and compliance team modifies the guardrail and updates the exclusion list, and the pipeline merges the changes to the mainline branch. The exclusion list is owned and managed by the security and compliance team—only they can approve an exclusion.

Application delivery pipelines run the compliance checks by first pulling the guardrails from the mainline branch of the security and compliance repository and then subjecting their respective Terraform workloads to these guardrails. Because only the mainline branch is pulled, only guardrails in active mode are applied. This workflow integrates the application delivery pipelines with the security and compliance repository, allowing them to pull the guardrails on every run of the application pipeline. The integration ensures that each AWS resource created in the Terraform code is subjected to the guardrails. If any resource isn’t in line with the guardrails, it’s found to be noncompliant and the pipeline stops the deployment.
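
The following is a minimal sketch of what such a pipeline step could look like, written as a Python helper for illustration. The repository URL, branch name, directory layout, and the terraform-compliance command-line options shown here are assumptions, not part of the workflow definition itself.

import subprocess

GUARDRAILS_REPO = "https://git.example.com/security/compliance-guardrails.git"  # assumed URL
ACTIVE_BRANCH = "main"  # only guardrails merged to the mainline branch are active

def run(cmd, **kwargs):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

# Pull only the active guardrails (mainline branch)
run(["git", "clone", "--depth", "1", "--branch", ACTIVE_BRANCH, GUARDRAILS_REPO, "guardrails"])

# Produce a machine-readable plan for the workload under test
run(["terraform", "init", "-input=false"])
run(["terraform", "plan", "-out=plan.out", "-input=false"])
with open("plan.out.json", "w") as plan_json:
    run(["terraform", "show", "-json", "plan.out"], stdout=plan_json)

# Subject the workload to the guardrails; a nonzero exit code breaks the pipeline
run(["terraform-compliance", "-f", "guardrails/features", "-p", "plan.out.json"])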

Customer testimonials

Truist Financial Corporation is an American bank holding company headquartered in Charlotte, North Carolina. The company was formed in December 2019 as the result of the merger of BB&T and SunTrust Banks. With AWS Professional Services, Truist implemented the Continuous Compliance Workflow using their own tool stack. Below is what the leadership had to say about the implementation:

“The continuous compliance workflow helped us scale our security and operational compliance checks across all our development teams in a short period of time with a limited staff. We implemented this at Truist using our own tool stack, as the workflow itself is tech stack agnostic. It helped us with shifting left of the development and implementation of compliance checks, and the observation mode in the workflow provided us with an early insight into our workload compliance report before activating the checks to start impacting pipelines of development teams. The workflow allows the development team to take ownership of their workload compliance, while at the same time having a centralized view of the compliance/noncompliance reports allows us to crowdsource learning and share remediations across the teams.”

—Gary Smith, Group Vice President (GVP), Digital Enablement and Quality Engineering, Truist Financial Corporation

“The continuous compliance workflow provided us with a framework over which we are able to roll out any industry standard compliance sets—CIS, PCI, NIST, etc. It provided centralized visibility around policy adherence to these standards, which helped us with our audits. The centralized view also provided us with patterns across development teams of most common noncompliance issues, allowing us to create a knowledge base to help new teams as we on-boarded them. And being self-service, it reduced the friction of on-boarding development teams, therefore improving adoption.”

—David Jankowski, SVP Digital Application Support Services, Truist Financial Corporation

Conclusion

In this two-part series, we introduce the continuous compliance workflow that outlines how you can seamlessly integrate security and compliance guardrails into an application release cycle. This workflow can benefit enterprises with stringent requirements around security and compliance of AWS resources being deployed into the cloud.

Be on the lookout for Part 2, in which we implement the continuous compliance workflow using AWS CodePipeline and the Terraform-Compliance framework.

About the authors

Damodar Shenvi Wagle

Damodar Shenvi Wagle is a Cloud Application Architect at AWS Professional Services. His areas of expertise include architecting serverless solutions, CI/CD, and automation.

Sumit Mishra

Sumit Mishra is a Senior DevOps Architect at AWS Professional Services. His areas of expertise include IaC, security in pipelines, CI/CD, and automation.

David Jankowski

David Jankowski is the group head and leads the Channels and Innovations build and support of DevSecOps services, Quality Engineering practices, Production Operations, and Cloud Migration and Enablement at Truist.

Gary Smith

Gary Smith is the Quality Engineering practice lead for the Channels and Innovations Support Services organization and was directly responsible for working with our AWS partners on building and implementing the continuous compliance process at Truist.

Fall 2020 PCI DSS report now available with eight additional services in scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/fall-2020-pci-dss-report-now-available-with-eight-additional-services-in-scope/

We continue to expand the scope of our assurance programs and are pleased to announce that eight additional services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. This gives our customers more options to process and store their payment card data and architect their cardholder data environment (CDE) securely in Amazon Web Services (AWS).

You can see the full list on Services in Scope by Compliance Program. The eight additional services are:

  1. Amazon Augmented AI (Amazon A2I) (excluding public workforce and vendor workforce)
  2. Amazon Kendra
  3. Amazon Keyspaces (for Apache Cassandra)
  4. Amazon Timestream
  5. AWS App Mesh
  6. AWS Cloud Map
  7. AWS Glue DataBrew
  8. AWS Ground Station

Private AWS Local Zones and AWS Wavelength sites were newly assessed as additional infrastructure deployments as part of the fall 2020 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) evidencing AWS PCI compliance status is available through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions. You can contact the compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Oyeniya

Michael is a Compliance Program Manager at AWS. He has over 15 years of experience managing information technology risk and control for Fortune 500 companies covering security compliance, auditing, and control framework implementation. He has a bachelor’s degree in Finance, master’s degree in Business Administration, and industry certifications including CISA and ISSPCS. Outside of work, he loves singing and reading.

Building end-to-end AWS DevSecOps CI/CD pipeline with open source SCA, SAST and DAST tools

Post Syndicated from Srinivas Manepalli original https://aws.amazon.com/blogs/devops/building-end-to-end-aws-devsecops-ci-cd-pipeline-with-open-source-sca-sast-and-dast-tools/

DevOps is a combination of cultural philosophies, practices, and tools that combine software development with information technology operations. These combined practices enable companies to deliver new application features and improved services to customers at a higher velocity. DevSecOps takes this a step further, integrating security into DevOps. With DevSecOps, you can deliver secure and compliant application changes rapidly while running operations consistently with automation.

Having a complete DevSecOps pipeline is critical to building a successful software factory, which includes continuous integration (CI), continuous delivery and deployment (CD), continuous testing, continuous logging and monitoring, auditing and governance, and operations. Identifying the vulnerabilities during the initial stages of the software development process can significantly help reduce the overall cost of developing application changes, but doing it in an automated fashion can accelerate the delivery of these changes as well.

To identify security vulnerabilities at various stages, organizations can integrate various tools and services (cloud and third-party) into their DevSecOps pipelines. Integrating various tools and aggregating the vulnerability findings can be a challenge to do from scratch. AWS has the services and tools necessary to accelerate this objective and provides the flexibility to build DevSecOps pipelines with easy integrations of AWS cloud native and third-party tools. AWS also provides services to aggregate security findings.

In this post, we provide a DevSecOps pipeline reference architecture on AWS that covers the aforementioned practices, including SCA (Software Composition Analysis), SAST (Static Application Security Testing), DAST (Dynamic Application Security Testing), and aggregation of vulnerability findings into a single pane of glass. Additionally, this post addresses the concepts of security of the pipeline and security in the pipeline.

You can deploy this pipeline in either the AWS GovCloud (US) Region or standard AWS Regions. As of this writing, all listed AWS services are available in AWS GovCloud (US) and authorized for FedRAMP High workloads within the Region, with the exception of AWS CodePipeline and AWS Security Hub, which are available in the Region and currently under JAB review to be authorized for FedRAMP High as well.

Services and tools

In this section, we discuss the various AWS services and third-party tools used in this solution.

CI/CD services

For CI/CD, we use the following AWS services:

  • AWS CodeBuild – A fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy.
  • AWS CodeCommit – A fully managed source control service that hosts secure Git-based repositories.
  • AWS CodeDeploy – A fully managed deployment service that automates software deployments to a variety of compute services such as Amazon Elastic Compute Cloud (Amazon EC2), AWS Fargate, AWS Lambda, and your on-premises servers.
  • AWS CodePipeline – A fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
  • AWS Lambda – A service that lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
  • Amazon Simple Notification Service – Amazon SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.
  • Amazon Simple Storage Service – Amazon S3 is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.
  • AWS Systems Manager Parameter Store – Parameter Store provides secure, hierarchical storage for configuration data management and secrets management.

Continuous testing tools

The following are open-source scanning tools that are integrated in the pipeline for the purposes of this post, but you could integrate other tools that meet your specific requirements. You can use the static code review tool Amazon CodeGuru for static analysis, but at the time of this writing, it’s not yet available in GovCloud and currently supports Java and Python (available in preview).

  • OWASP Dependency-Check – A Software Composition Analysis (SCA) tool that attempts to detect publicly disclosed vulnerabilities contained within a project’s dependencies.
  • SonarQube (SAST) – Catches bugs and vulnerabilities in your app, with thousands of automated Static Code Analysis rules.
  • PHPStan (SAST) – Focuses on finding errors in your code without actually running it. It catches whole classes of bugs even before you write tests for the code.
  • OWASP Zap (DAST) – Helps you automatically find security vulnerabilities in your web applications while you’re developing and testing your applications.

Continuous logging and monitoring services

The following are AWS services for continuous logging and monitoring:

Auditing and governance services

The following are AWS auditing and governance services:

  • AWS CloudTrail – Enables governance, compliance, operational auditing, and risk auditing of your AWS account.
  • AWS Identity and Access Management – Enables you to manage access to AWS services and resources securely. With IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
  • AWS Config – Allows you to assess, audit, and evaluate the configurations of your AWS resources.

Operations services

The following are AWS operations services:

  • AWS Security Hub – Gives you a comprehensive view of your security alerts and security posture across your AWS accounts. This post uses Security Hub to aggregate all the vulnerability findings as a single pane of glass.
  • AWS CloudFormation – Gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code.
  • AWS Systems Manager Parameter Store – Provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values.
  • AWS Elastic Beanstalk – An easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. This post uses Elastic Beanstalk to deploy a LAMP stack with WordPress and Amazon Aurora MySQL. Although we use Elastic Beanstalk for this post, you could configure the pipeline to deploy to various other environments on AWS or elsewhere as needed.

Pipeline architecture

The following diagram shows the architecture of the solution.

AWS DevSecOps CICD pipeline architecture

 

The main steps are as follows:

  1. When a user commits the code to a CodeCommit repository, a CloudWatch event is generated, which triggers CodePipeline.
  2. CodeBuild packages the build and uploads the artifacts to an S3 bucket. CodeBuild retrieves the authentication information (for example, scanning tool tokens) from Parameter Store to initiate the scanning. As a best practice, it is recommended to use an artifact repository such as AWS CodeArtifact to store the artifacts instead of S3; for simplicity, this post continues to use S3.
  3. CodeBuild scans the code with an SCA tool (OWASP Dependency-Check) and SAST tool (SonarQube or PHPStan; in the provided CloudFormation template, you can pick one of these tools during the deployment, but CodeBuild is fully enabled for a bring your own tool approach).
  4. If there are any vulnerabilities either from SCA analysis or SAST analysis, CodeBuild invokes the Lambda function. The function parses the results into AWS Security Finding Format (ASFF) and posts it to Security Hub. Security Hub helps aggregate and view all the vulnerability findings in one place as a single pane of glass. The Lambda function also uploads the scanning results to an S3 bucket.
  5. If there are no vulnerabilities, CodeDeploy deploys the code to the staging Elastic Beanstalk environment.
  6. After the deployment succeeds, CodeBuild triggers the DAST scanning with the OWASP ZAP tool (again, this is fully enabled for a bring your own tool approach).
  7. If there are any vulnerabilities, CodeBuild invokes the Lambda function, which parses the results into ASFF and posts it to Security Hub. The function also uploads the scanning results to an S3 bucket (similar to step 4).
  8. If there are no vulnerabilities, the approval stage is triggered, and an email is sent to the approver for action.
  9. After approval, CodeDeploy deploys the code to the production Elastic Beanstalk environment.
  10. During the pipeline run, CloudWatch Events captures the build state changes and sends email notifications to subscribed users through SNS notifications.
  11. CloudTrail tracks the API calls and sends notifications on critical events on the pipeline and CodeBuild projects, such as UpdatePipeline, DeletePipeline, CreateProject, and DeleteProject, for auditing purposes.
  12. AWS Config tracks all the configuration changes of AWS services. The following AWS Config rules are added in this pipeline as security best practices:
    • CODEBUILD_PROJECT_ENVVAR_AWSCRED_CHECK – Checks whether the project contains the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The rule is NON_COMPLIANT when the project environment variables contain plaintext credentials.
    • CLOUD_TRAIL_LOG_FILE_VALIDATION_ENABLED – Checks whether CloudTrail creates a signed digest file with logs. AWS recommends that file validation be enabled on all trails. The rule is NON_COMPLIANT if validation is not enabled.

Security of the pipeline is implemented by using IAM roles and S3 bucket policies to restrict access to pipeline resources. Pipeline data at rest and in transit is protected using encryption and SSL secure transport. We use Parameter Store to store sensitive information such as API tokens and passwords. To be fully compliant with frameworks such as FedRAMP, additional measures, such as multi-factor authentication (MFA), may be required.
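
As a small illustration of the Parameter Store pattern (the parameter name is a placeholder, and this is not the solution’s exact code), a build or Lambda step can retrieve a scanning tool’s API token as an encrypted SecureString:

import boto3

ssm = boto3.client("ssm")

# Retrieve and decrypt a SecureString parameter (name is a placeholder)
token = ssm.get_parameter(
    Name="/devsecops/sonarqube/api-token",
    WithDecryption=True,
)["Parameter"]["Value"]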

Security in the pipeline is implemented by performing the SCA, SAST and DAST security checks. Alternatively, the pipeline can utilize IAST (Interactive Application Security Testing) techniques that would combine SAST and DAST stages.

As a best practice, encryption should be enabled for the code and artifacts, whether at rest or in transit.

In the next section, we explain how to deploy and run the pipeline CloudFormation template used for this example. Refer to the provided service links to learn more about each of the services in the pipeline. If utilizing CloudFormation templates to deploy infrastructure using pipelines, we recommend using linting tools like cfn-nag to scan CloudFormation templates for security vulnerabilities.

Prerequisites

Before getting started, make sure you have the following prerequisites:

Deploying the pipeline

To deploy the pipeline, download the CloudFormation template and pipeline code from the GitHub repo, and then complete the following steps:

  1. Log in to your AWS account if you have not done so already.
  2. On the CloudFormation console, choose Create Stack.
  3. Choose the CloudFormation pipeline template.
  4. Choose Next.
  5. Provide the stack parameters:
    • Under Code, provide code details, such as repository name and the branch to trigger the pipeline.
    • Under SAST, choose the SAST tool (SonarQube or PHPStan) for code analysis, enter the API token and the SAST tool URL. You can skip SonarQube details if using PHPStan as the SAST tool.
    • Under DAST, choose the DAST tool (OWASP Zap) for dynamic testing and enter the API token, DAST tool URL, and the application URL to run the scan.
    • Under Lambda functions, enter the Lambda function S3 bucket name, filename, and the handler name.
    • Under STG Elastic Beanstalk Environment and PRD Elastic Beanstalk Environment, enter the Elastic Beanstalk environment and application details for staging and production to which this pipeline deploys the application code.
    • Under General, enter the email addresses to receive notifications for approvals and pipeline status changes.

CloudFormation deployment - Passing parameter values

CloudFormation template deployment

After the pipeline is deployed, confirm the subscription by choosing the provided link in the email to receive the notifications.

The provided CloudFormation template in this post is formatted for AWS GovCloud. If you’re setting this up in a standard Region, you have to adjust the partition name in the CloudFormation template. For example, change ARN values from arn:aws-us-gov to arn:aws.

Running the pipeline

To trigger the pipeline, commit changes to your application repository files. That generates a CloudWatch event and triggers the pipeline. CodeBuild scans the code and if there are any vulnerabilities, it invokes the Lambda function to parse and post the results to Security Hub.

When posting the vulnerability finding information to Security Hub, we need to provide a vulnerability severity level. Based on the provided severity value, Security Hub assigns the label as follows. Adjust the severity levels in your code based on your organization’s requirements; a minimal sketch of this mapping follows the list.

  • 0 – INFORMATIONAL
  • 1–39 – LOW
  • 40–69 – MEDIUM
  • 70–89 – HIGH
  • 90–100 – CRITICAL
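
The following is a minimal sketch of this mapping (an illustration only, not the exact code used in the solution’s Lambda functions):

def severity_label(score: float) -> str:
    """Map a 0-100 normalized severity score to a Security Hub severity label."""
    if score == 0:
        return "INFORMATIONAL"
    if score <= 39:
        return "LOW"
    if score <= 69:
        return "MEDIUM"
    if score <= 89:
        return "HIGH"
    return "CRITICAL"

assert severity_label(55) == "MEDIUM"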

The following screenshot shows the progression of your pipeline.

CodePipeline stages

SCA and SAST scanning

In our architecture, CodeBuild triggers the SCA and SAST scans in parallel. In this section, we discuss scanning with OWASP Dependency-Check, SonarQube, and PHPStan.

Scanning with OWASP Dependency-Check (SCA)

The following is the code snippet from the Lambda function, where the SCA analysis results are parsed and posted to Security Hub. Based on the results, the equivalent Security Hub severity level (normalized_severity) is assigned.

Lambda code snippet for OWASP Dependency-check
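
The screenshot shows the solution’s actual Lambda code. As a rough illustration of the general approach (not the solution’s exact implementation; the account ID, product ARN, and finding fields below are placeholders), a parsed finding can be posted to Security Hub in AWS Security Finding Format (ASFF) as follows:

import boto3
import datetime

securityhub = boto3.client("securityhub")

now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
finding = {
    "SchemaVersion": "2018-10-08",
    "Id": "owasp-dependency-check/example-finding-001",  # placeholder
    "ProductArn": "arn:aws:securityhub:us-east-1:123456789012:product/123456789012/default",  # placeholder
    "GeneratorId": "owasp-dependency-check",
    "AwsAccountId": "123456789012",  # placeholder
    "Types": ["Software and Configuration Checks/Vulnerabilities/CVE"],
    "CreatedAt": now,
    "UpdatedAt": now,
    "Severity": {"Normalized": 70},  # maps to the HIGH label
    "Title": "Vulnerable dependency detected",
    "Description": "Example finding parsed from the scanner report.",
    "Resources": [{"Type": "Other", "Id": "example-application"}],
}

response = securityhub.batch_import_findings(Findings=[finding])
print(response["SuccessCount"], "finding(s) imported")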

You can see the results in Security Hub, as in the following screenshot.

SecurityHub report from OWASP Dependency-check scanning

Scanning with SonarQube (SAST)

The following is the code snippet from the Lambda function, where the SonarQube code analysis results are parsed and posted to Security Hub. Based on SonarQube results, the equivalent Security Hub severity level (normalized_severity) is assigned.

Lambda code snippet for SonarQube

The following screenshot shows the results in Security Hub.

SecurityHub report from SonarQube scanning

Scanning with PHPStan (SAST)

The following is the code snippet from the Lambda function, where the PHPStan code analysis results are parsed and posted to Security Hub.

Lambda code snippet for PHPStan

The following screenshot shows the results in Security Hub.

SecurityHub report from PHPStan scanning

DAST scanning

In our architecture, CodeBuild triggers the DAST scanning with the DAST tool (OWASP Zap).

If there are no vulnerabilities in the SAST scan, the pipeline proceeds to the manual approval stage and an email is sent to the approver. The approver can review and approve or reject the deployment. If approved, the pipeline moves to the next stage and deploys the application to the provided Elastic Beanstalk environment.

Scanning with OWASP Zap

After deployment is successful, CodeBuild initiates the DAST scanning. When scanning is complete, if there are any vulnerabilities, it invokes the Lambda function similar to SAST analysis. The function parses and posts the results to Security Hub. The following is the code snippet of the Lambda function.

Lambda code snippet for OWASP-Zap

The following screenshot shows the results in Security Hub.

SecurityHub report from OWASP-Zap scanning

Aggregation of vulnerability findings in Security Hub provides opportunities to automate the remediation. For example, based on the vulnerability finding, you can trigger a Lambda function to take the needed remediation action. This also reduces the burden on operations and security teams because they can now address the vulnerabilities from a single pane of glass instead of logging into multiple tool dashboards.

Conclusion

In this post, I presented a DevSecOps pipeline that includes CI/CD, continuous testing, continuous logging and monitoring, auditing and governance, and operations. I demonstrated how to integrate various open-source scanning tools, such as SonarQube, PHPStan, and OWASP Zap for SAST and DAST analysis. I explained how to aggregate vulnerability findings in Security Hub as a single pane of glass. This post also talked about how to implement security of the pipeline and in the pipeline using AWS cloud native services. Finally, I provided the DevSecOps pipeline as code using AWS CloudFormation. For additional information on AWS DevOps services and to get started, see AWS DevOps and DevOps Blog.

 

Srinivas Manepalli is a DevSecOps Solutions Architect in the U.S. Fed SI SA team at Amazon Web Services (AWS). He is passionate about helping customers, building and architecting DevSecOps and highly available software systems. Outside of work, he enjoys spending time with family, nature and good food.

re:Invent – New security sessions launching soon

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/reinvent-new-security-sessions-launching-soon/

Where did the last month go? Were you able to catch all of the sessions in the Security, Identity, and Compliance track you hoped to see at AWS re:Invent? If you missed any, don’t worry—you can stream all the sessions released in 2020 via the AWS re:Invent website. Additionally, we’re starting 2021 with all new sessions that you can stream live January 12–15. Here are the new Security, Identity, and Compliance sessions—each session is offered at multiple times, so you can find the time that works best for your location and schedule.

Protecting sensitive data with Amazon Macie and Amazon GuardDuty – SEC210
Himanshu Verma, AWS Speaker

Tuesday, January 12 – 11:00 AM to 11:30 AM PST
Tuesday, January 12 – 7:00 PM to 7:30 PM PST
Wednesday, January 13 – 3:00 AM to 3:30 AM PST

As organizations manage growing volumes of data, identifying and protecting your sensitive data can become increasingly complex, expensive, and time-consuming. In this session, learn how Amazon Macie and Amazon GuardDuty together provide protection for your data stored in Amazon S3. Amazon Macie automates the discovery of sensitive data at scale and lowers the cost of protecting your data. Amazon GuardDuty continuously monitors and profiles S3 data access events and configurations to detect suspicious activities. Come learn about these security services and how to best use them for protecting data in your environment.

BBC: Driving security best practices in a decentralized organization – SEC211
Apurv Awasthi, AWS Speaker
Andrew Carlson, Sr. Software Engineer – BBC

Tuesday, January 12 – 1:15 PM to 1:45 PM PST
Tuesday, January 12 – 9:15 PM to 9:45 PM PST
Wednesday, January 13 – 5:15 AM to 5:45 AM PST

In this session, Andrew Carlson, engineer at BBC, talks about BBC’s journey while adopting AWS Secrets Manager for lifecycle management of its arbitrary credentials such as database passwords, API keys, and third-party keys. He provides insight on BBC’s secrets management best practices and how the company drives these at enterprise scale in a decentralized environment that has a highly visible scope of impact.

Get ahead of the curve with DDoS Response Team escalations – SEC321
Fola Bolodeoku, AWS Speaker

Tuesday, January 12 – 3:30 PM to 4:00 PM PST
Tuesday, January 12 – 11:30 PM to 12:00 AM PST
Wednesday, January 13 – 7:30 AM to 8:00 AM PST

This session identifies tools and tricks that you can use to prepare for application security escalations, with lessons learned provided by the AWS DDoS Response Team. You learn how AWS customers have used different AWS offerings to protect their applications, including network access control lists, security groups, and AWS WAF. You also learn how to avoid common misconfigurations and mishaps observed by the DDoS Response Team, and you discover simple yet effective actions that you can take to better protect your applications’ availability and security controls.

Network security for serverless workloads – SEC322
Alex Tomic, AWS Speaker

Thursday, January 14 – 1:30 PM to 2:00 PM PST
Thursday, January 14 – 9:30 PM to 10:00 PM PST
Friday, January 15 – 5:30 AM to 6:00 AM PST

Are you building a serverless application using services like Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon Aurora, and Amazon SQS? Would you like to apply enterprise network security to these AWS services? This session covers how network security concepts like encryption, firewalls, and traffic monitoring can be applied to a well-architected AWS serverless architecture.

Building your cloud incident response program – SEC323
Freddy Kasprzykowski, AWS Speaker

Wednesday, January 13 – 9:00 AM to 9:30 AM PST
Wednesday, January 13 – 5:00 PM to 5:30 PM PST
Thursday, January 14 – 1:00 AM to 1:30 AM PST

You’ve configured your detection services and now you’ve received your first alert. This session provides patterns that help you understand what capabilities you need to build and run an effective incident response program in the cloud. It includes a review of some logs to see what they tell you and a discussion of tools to analyze those logs. You learn how to make sure that your team has the right access, how automation can help, and which incident response frameworks can guide you.

Beyond authentication: Guide to secure Amazon Cognito applications – SEC324
Mahmoud Matouk, AWS Speaker

Wednesday, January 13 – 2:15 PM to 2:45 PM PST
Wednesday, January 13 – 10:15 PM to 10:45 PM PST
Thursday, January 14 – 6:15 AM to 6:45 AM PST

Amazon Cognito is a flexible user directory that can meet the needs of a number of customer identity management use cases. Web and mobile applications can integrate with Amazon Cognito in minutes to offer user authentication and get standard tokens to be used in token-based authorization scenarios. This session covers best practices that you can implement in your application to secure and protect tokens. You also learn about new Amazon Cognito features that give you more options to improve the security and availability of your application.

Event-driven data security using Amazon Macie – SEC325
Neha Joshi, AWS Speaker

Thursday, January 14 – 8:00 AM to 8:30 AM PST
Thursday, January 14 – 4:00 PM to 4:30 PM PST
Friday, January 15 – 12:00 AM to 12:30 AM PST

Amazon Macie sensitive data discovery jobs for Amazon S3 buckets help you discover sensitive data such as personally identifiable information (PII), financial information, account credentials, and workload-specific sensitive information. In this session, you learn about an automated approach to discover sensitive information whenever changes are made to the objects in your S3 buckets.

Instance containment techniques for effective incident response – SEC327
Jonathon Poling, AWS Speaker

Thursday, January 14 – 10:15 AM to 10:45 AM PST
Thursday, January 14 – 6:15 PM to 6:45 PM PST
Friday, January 15 – 2:15 AM to 2:45 AM PST

In this session, learn about several instance containment and isolation techniques, ranging from simple and effective to more complex and powerful, that leverage native AWS networking services and account configuration techniques. If an incident happens, you may have questions like “How do we isolate the system while preserving all the valuable artifacts?” and “What options do we even have?”. These are valid questions, but there are more important ones to discuss amidst a (possible) incident. Join this session to learn highly effective instance containment techniques in a crawl-walk-run approach that also facilitates preservation and collection of valuable artifacts and intelligence.

Trusted connects for government workloads – SEC402
Brad Dispensa, AWS Speaker

Wednesday, January 13 – 11:15 AM to 11:45 AM PST
Wednesday, January 13 – 7:15 PM to 7:45 PM PST
Thursday, January 14 – 3:15 AM to 3:45 AM PST

Cloud adoption across the public sector is making it easier to provide government workforces with seamless access to applications and data. With this move to the cloud, we also need updated security guidance to ensure public-sector data remain secure. For example, the TIC (Trusted Internet Connections) initiative has been a requirement for US federal agencies for some time. The recent TIC-3 moves from prescriptive guidance to an outcomes-based model. This session walks you through how to leverage AWS features to better protect public-sector data using TIC-3 and the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF). Also, learn how this might map into other geographies.

I look forward to seeing you in these sessions. Please see the re:Invent agenda for more details and to build your schedule.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Deploy an automated ChatOps solution for remediating Amazon Macie findings

Post Syndicated from Nick Cuneo original https://aws.amazon.com/blogs/security/deploy-an-automated-chatops-solution-for-remediating-amazon-macie-findings/

The amount of data being collected, stored, and processed by Amazon Web Services (AWS) customers is growing at an exponential rate. In order to keep pace with this growth, customers are turning to scalable cloud storage services like Amazon Simple Storage Service (Amazon S3) to build data lakes at the petabyte scale. Customers are looking for new, automated, and scalable ways to address their data security and compliance requirements, including the need to identify and protect their sensitive data. Amazon Macie helps customers address this need by offering a managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data that is stored in Amazon S3.

In this blog post, I show you how to deploy a solution that establishes an automated event-driven workflow for notification and remediation of sensitive data findings from Macie. Administrators can review and approve remediation of findings through a ChatOps-style integration with Slack. Slack is a business communication tool that provides messaging functionality, including persistent chat rooms known as channels. With this solution, you can streamline the notification, investigation, and remediation of sensitive data findings in your AWS environment.

Prerequisites

Before you deploy the solution, make sure that your environment is set up with the following prerequisites:

Important: This solution uses various AWS services, and there are costs associated with these resources after the Free Tier usage. See the AWS pricing page for details.

Solution overview

The solution architecture and workflow are detailed in Figure 1.

Figure 1: Solution overview

This solution allows for the configuration of auto-remediation behavior based on finding type and finding severity. For each finding type, you can define whether you want the offending S3 object to be automatically quarantined, or whether you want the finding details to be reviewed and approved by a human in Slack prior to being quarantined. In a similar manner, you can define the minimum severity level (Low, Medium, High) that a finding must have before the solution will take action. By adjusting these parameters, you can manage false positives and tune the volume and type of findings about which you want to be notified and take action. This configurability is important because customers have different security, risk, and regulatory requirements.
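
As a simple illustration of this kind of configuration (the keys, mode names, and finding types below are assumptions for the sketch, not the solution’s actual settings), the behavior can be thought of as a mapping from finding type to remediation mode, plus a minimum severity:

# Illustrative configuration only; the structure and names are assumptions.
AUTO_REMEDIATION_CONFIG = {
    "minimum_severity": "MEDIUM",  # ignore findings below this level
    "finding_types": {
        "SensitiveData:S3Object/Credentials": "AUTO_QUARANTINE",
        "SensitiveData:S3Object/Financial": "AUTO_QUARANTINE",
        "SensitiveData:S3Object/Personal": "REVIEW_IN_SLACK",
    },
}

def remediation_mode(finding_type: str) -> str:
    """Return the configured mode, defaulting to manual review in Slack."""
    return AUTO_REMEDIATION_CONFIG["finding_types"].get(finding_type, "REVIEW_IN_SLACK")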

Figure 1 details the services used in the solution and the integration points between them. Let’s walk through the full sequence from the detection of sensitive data to the remediation (quarantine) of the offending object.

  1. Macie is configured with sensitive data discovery jobs (scheduled or one-time), which you create and run to detect sensitive data within S3 buckets. When Macie runs a job, it uses a combination of criteria and techniques to analyze objects in S3 buckets that you specify. For a full list of the categories of sensitive data Macie can detect, see the Amazon Macie User Guide.
  2. For each sensitive data finding, an event is sent to Amazon EventBridge that contains the finding details. An EventBridge rule triggers a Lambda function for processing.
  3. The Finding Handler Lambda function parses the event and examines the type of the finding. Based on the auto-remediation configuration, the function either invokes the Finding Remediator function for immediate remediation, or sends the finding details for manual review and remediation approval through Slack.
  4. Delegated security and compliance administrators monitor the configured Slack channel for notifications. Notifications provide high-level finding information, remediation status, and a link to the Macie console for the finding in question. For findings configured for manual review, administrators can choose to approve the remediation in Slack by using an action button on the notification.
  5. After an administrator chooses the Remediate button, Slack issues an API call to an Amazon API Gateway endpoint, supplying both the unique identifier of the finding to be remediated and that of the Slack user. API Gateway proxies the request to a Remediation Handler Lambda function.
  6. The Remediation Handler Lambda function validates the request and request signature, extracts the offending object’s location from the finding, and makes an asynchronous call to the Finding Remediator Lambda function.
  7. The Finding Remediator Lambda function moves the offending object from the source bucket to a designated S3 quarantine bucket with restricted access.
  8. Finally, the Finding Remediator Lambda function uses a callback URL to update the original finding notification in Slack, indicating that the offending object has now been quarantined.
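
The wiring behind steps 2 and 3 is an EventBridge rule that matches Macie finding events and targets the Finding Handler function. The solution's CDK code creates this for you; as a rough illustration only (the function name, handler path, and runtime below are placeholders, not the solution's actual code), the same wiring could be sketched in CDK like this:

import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class MacieFindingsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Placeholder finding handler; the real solution ships its own function code.
    const findingHandler = new lambda.Function(this, 'FindingHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/finding-handler'),
      timeout: Duration.seconds(30),
    });

    // Match Macie sensitive data findings on the default event bus and invoke the handler.
    new events.Rule(this, 'MacieFindingRule', {
      eventPattern: {
        source: ['aws.macie'],
        detailType: ['Macie Finding'],
      },
      targets: [new targets.LambdaFunction(findingHandler)],
    });
  }
}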

Deploy the solution

Now we’ll walk through the steps for configuring Slack and deploying the solution into your AWS environment by using the AWS CDK. The AWS CDK is a software development framework that you can use to define cloud infrastructure in code and provision it through AWS CloudFormation.

The deployment steps can be summarized as follows:

  1. Configure a Slack channel and app
  2. Check the project out from GitHub
  3. Set the configuration parameters
  4. Build and deploy the solution
  5. Configure Slack with an API Gateway endpoint

To configure a Slack channel and app

  1. In your browser, make sure you’re logged into the Slack workspace where you want to integrate the solution.
  2. Create a new channel where you will send the notifications, as follows:
    1. Choose the + icon next to the Channels menu, and select Create a channel.
    2. Give your channel a name, for example macie-findings, and make sure you turn on the Make private setting.

      Important: By providing Slack users with access to this configured channel, you’re providing implicit access to review Macie finding details and approve remediations. To avoid unwanted user access, it’s strongly recommended that you make this channel private and by invite only.

  3. On your Apps page, create a new app by selecting Create New App, and then enter the following information:
    1. For App Name, enter a name of your choosing, for example MacieRemediator.
    2. Select your chosen development Slack workspace that you logged into in step 1.
    3. Choose Create App.
    Figure 2: Create a Slack app

  4. You will then see the Basic Information page for your app. Scroll down to the App Credentials section, and note down the Signing Secret. This secret will be used by the Lambda function that handles all remediation requests from Slack. The function uses the secret with Hash-based Message Authentication Code (HMAC) authentication to validate that requests to the solution are legitimate and originated from your trusted Slack channel. A minimal sketch of this verification scheme appears after these setup steps.

    Figure 3: Signing secret

  5. Scroll back to the top of the Basic Information page, and under Add features and functionality, select the Incoming Webhooks tile. Turn on the Activate Incoming Webhooks setting.
  6. At the bottom of the page, choose Add New Webhook to Workspace.
    1. Select the macie-findings channel you created in step 2, and choose Allow.
    2. You should now see webhook URL details under Webhook URLs for Your Workspace. Use the Copy button to note down the URL, which you will need later.

      Figure 4: Webhook URL
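
Before moving on, it helps to see how the signing secret from step 4 is used. Slack's documented request-signing scheme concatenates a version string, the request timestamp, and the raw request body, computes an HMAC-SHA256 with the signing secret, and compares the result to the X-Slack-Signature header. The following is a minimal sketch of that check in TypeScript, not the solution's actual handler code:

import { createHmac, timingSafeEqual } from 'node:crypto';

// Returns true when the request really came from Slack, following Slack's
// documented signing scheme (version "v0"). All inputs are raw strings.
export function isValidSlackRequest(
  signingSecret: string,
  timestampHeader: string,   // X-Slack-Request-Timestamp
  signatureHeader: string,   // X-Slack-Signature
  rawBody: string,
): boolean {
  // Reject requests older than five minutes to limit replay attacks.
  const age = Math.abs(Date.now() / 1000 - Number(timestampHeader));
  if (Number.isNaN(age) || age > 60 * 5) {
    return false;
  }

  const baseString = `v0:${timestampHeader}:${rawBody}`;
  const expected = 'v0=' + createHmac('sha256', signingSecret).update(baseString).digest('hex');

  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}

The timestamp check limits replay of captured requests, and the constant-time comparison avoids leaking information about the expected signature.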

To check the project out from GitHub

The solution source is available on GitHub in AWS Samples. Clone the project to your local machine or download and extract the available zip file.

To set the configuration parameters

In the root directory of the project you’ve just cloned, there’s a file named cdk.json. This file contains configuration parameters to allow integration with the macie-findings channel you created earlier, and also to allow you to control the auto-remediation behavior of the solution. Open this file and make sure that you review and update the following parameters:

  • autoRemediateConfig – This nested attribute allows you to specify for each sensitive data finding type whether you want to automatically remediate and quarantine the offending object, or first send the finding to Slack for human review and authorization. Note that you will still be notified through Slack that auto-remediation has taken place if this attribute is set to AUTO. Valid values are either AUTO or REVIEW. You can use the default values.
  • minSeverityLevel – Macie assigns all findings a Severity level. With this parameter, you can define a minimum severity level that must be met before the solution will trigger action. For example, if the parameter is set to MEDIUM, the solution won’t take any action or send any notifications when a finding has a LOW severity, but will take action when a finding is classified as MEDIUM or HIGH. Valid values are: LOW, MEDIUM, and HIGH. The default value is set to LOW.
  • slackChannel – The name of the Slack channel you created earlier (macie-findings).
  • slackWebHookUrl – For this parameter, enter the webhook URL that you noted down during Slack app setup in the “Configure a Slack channel and app” step.
  • slackSigningSecret – For this parameter, enter the signing secret that you noted down during Slack app setup.

Save your changes to the configuration file.
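
How the CDK app consumes these values is up to the solution's code, but as a rough sketch, assuming the parameters are exposed as CDK context, they might be read and validated like this (the validation logic is illustrative only):

import { App } from 'aws-cdk-lib';

const app = new App();

// Configuration keys as described above; values come from cdk.json.
const minSeverityLevel: string = app.node.tryGetContext('minSeverityLevel') ?? 'LOW';
const slackChannel: string = app.node.tryGetContext('slackChannel');
const slackWebHookUrl: string = app.node.tryGetContext('slackWebHookUrl');
const slackSigningSecret: string = app.node.tryGetContext('slackSigningSecret');
const autoRemediateConfig: Record<string, 'AUTO' | 'REVIEW'> =
  app.node.tryGetContext('autoRemediateConfig');

if (!['LOW', 'MEDIUM', 'HIGH'].includes(minSeverityLevel)) {
  throw new Error(`Unsupported minSeverityLevel: ${minSeverityLevel}`);
}
if (!slackChannel || !slackWebHookUrl || !slackSigningSecret || !autoRemediateConfig) {
  throw new Error('Missing required Slack or remediation configuration in cdk.json');
}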

To build and deploy the solution

  1. From the command line, make sure that your current working directory is the root directory of the project that you cloned earlier. Run the following commands:
    • npm install – Installs all Node.js dependencies.
    • npm run build – Compiles the CDK TypeScript source.
    • cdk bootstrap – Initializes the CDK environment in your AWS account and Region, as shown in Figure 5.

      Figure 5: CDK bootstrap output

    • cdk deploy – Generates a CloudFormation template and deploys the solution resources.

    The resources created can be reviewed in the CloudFormation console and can be summarized as follows:

    • Lambda functions – Finding Handler, Remediation Handler, and Remediator
    • IAM execution roles and associated policy – The roles and policy associated with each Lambda function and the API Gateway
    • S3 bucket – The quarantine S3 bucket
    • EventBridge rule – The rule that triggers the Lambda function for Macie sensitive data findings
    • API Gateway – A single remediation API with proxy integration to the Lambda handler
  2. After you run the deploy command, you’ll be prompted to review the IAM resources deployed as part of the solution. Press y to continue.
  3. Once the deployment is complete, you’ll be presented with an output parameter, shown in Figure 6, which is the endpoint for the API Gateway that was deployed as part of the solution. Copy this URL.

    Figure 6: CDK deploy output

To configure Slack with the API Gateway endpoint

  1. Open Slack and return to the Basic Information page for the Slack app you created earlier.
  2. Under Add features and functionality, select the Interactive Components tile.
  3. Turn on the Interactivity setting.
  4. In the Request URL box, enter the API Gateway endpoint URL you copied earlier.
  5. Choose Save Changes.

    Figure 7: Slack app interactivity

Now that you have the solution components deployed and Slack configured, it’s time to test things out.

Test the solution

The testing steps can be summarized as follows:

  1. Upload dummy files to S3
  2. Run the Macie sensitive data discovery job
  3. Review and act upon Slack notifications
  4. Confirm that S3 objects are quarantined

To upload dummy files to S3

Two sample text files containing dummy financial and personal data are available in the project you cloned from GitHub. If you haven’t changed the default auto-remediation configurations, these two files will exercise both the auto-remediation and manual remediation review flows.

Find the files under sensitive-data-samples/dummy-financial-data.txt and sensitive-data-samples/dummy-personal-data.txt. Take these two files and upload them to S3 by using either the console, as shown in Figure 8, or the AWS CLI. You can choose to use any new or existing bucket, but make sure that the bucket is in the same AWS account and Region that was used to deploy the solution.

Figure 8: Dummy files uploaded to S3

To run a Macie sensitive data discovery job

  1. Navigate to the Amazon Macie console, and make sure that your selected Region is the same as the one that was used to deploy the solution.
    1. If this is your first time using Macie, choose the Get Started button, and then choose Enable Macie.
  2. On the Macie Summary dashboard, you will see a Create Job button at the top right. Choose this button to launch the Job creation wizard. Configure each step as follows:
    1. Select S3 buckets: Select the bucket where you uploaded the dummy sensitive data file. Choose Next.
    2. Review S3 buckets: No changes are required, choose Next.
    3. Scope: For Job type, choose One-time job. Make sure Sampling depth is set to 100%. Choose Next.
    4. Custom data identifiers: No changes are required, choose Next.
    5. Name and description: For Job name, enter any name you like, such as Dummy job, and then choose Next.
    6. Review and create: Review your settings; they should look like the following sample. Choose Submit.
Figure 9: Configure the Macie sensitive data discovery job

Macie will launch the sensitive data discovery job. You can track its status from the Jobs page within the Macie console.
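
If you prefer to script the job instead of using the wizard, the same one-time job can be created through the Macie API. A minimal sketch with the AWS SDK for JavaScript v3 follows; the account ID and bucket name are placeholders.

import { Macie2Client, CreateClassificationJobCommand } from '@aws-sdk/client-macie2';
import { randomUUID } from 'node:crypto';

const macie = new Macie2Client({});

// Creates a one-time sensitive data discovery job over a single bucket at 100% sampling.
export async function createDummyJob(): Promise<void> {
  await macie.send(new CreateClassificationJobCommand({
    clientToken: randomUUID(),               // idempotency token
    name: 'Dummy job',
    jobType: 'ONE_TIME',
    samplingPercentage: 100,
    s3JobDefinition: {
      bucketDefinitions: [
        {
          accountId: '111122223333',         // placeholder account ID
          buckets: ['my-dummy-data-bucket'], // placeholder bucket name
        },
      ],
    },
  }));
}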

To review and take action on Slack notifications

Within five minutes of submitting the data discovery job, you should expect to see two notifications appear in your configured Slack channel. One notification, similar to the one in Figure 10, is informational only and is related to an auto-remediation action that has taken place.

Figure 10: Slack notification of auto-remediation for the file containing dummy financial data

The other notification, similar to the one in Figure 11, requires end user action and is for a finding that requires administrator review. All notifications will display key information such as the offending S3 object, a description of the finding, the finding severity, and other relevant metadata.

Figure 11: Slack notification for human review of the file containing dummy personal data

(Optional) You can review the finding details by choosing the View Macie Finding in Console link in the notification.

In the Slack notification, choose the Remediate button to quarantine the object. The notification will be updated with confirmation of the quarantine action, as shown in Figure 12.

Figure 12: Slack notification of authorized remediation

To confirm that S3 objects are quarantined

Finally, navigate to the S3 console and validate that the objects have been removed from their original bucket and placed into the quarantine bucket listed in the notification details, as shown in Figure 13. Note that you may need to refresh your S3 object listing in the browser.

Figure 13: Objects moved to the quarantine S3 bucket
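
This move is performed by the Finding Remediator function. Because Amazon S3 has no single move operation, a common pattern is to copy the object into the quarantine bucket and then delete the original. The following is a minimal sketch of that pattern with the AWS SDK for JavaScript v3, using hypothetical names; it is not the solution's actual function code.

import { S3Client, CopyObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

// Copies the offending object into the quarantine bucket, then removes the original.
export async function quarantineObject(
  sourceBucket: string,
  key: string,
  quarantineBucket: string,
): Promise<void> {
  await s3.send(new CopyObjectCommand({
    Bucket: quarantineBucket,
    Key: key,
    CopySource: `${sourceBucket}/${key}`, // URL-encode the key if it contains special characters
  }));
  await s3.send(new DeleteObjectCommand({
    Bucket: sourceBucket,
    Key: key,
  }));
}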

Congratulations! You now have a fully operational solution to detect and respond to Macie sensitive data findings through a Slack ChatOps workflow.

Solution cleanup

To remove the solution and avoid incurring additional charges from the AWS resources that you deployed, complete the following steps.

To remove the solution and associated resources

  1. Navigate to the Macie console. Under Settings, choose Suspend Macie.
  2. Navigate to the S3 console and delete all objects in the quarantine bucket.
  3. Run the command cdk destroy from the command line within the root directory of the project. You will be prompted to confirm that you want to remove the solution. Press y.

Summary

In this blog post, I showed you how to integrate Amazon Macie sensitive data findings with an auto-remediation and Slack ChatOps workflow. We reviewed the AWS services used, how they are integrated, and the steps to configure, deploy, and test the solution. With Macie and the solution in this blog post, you can substantially reduce the heavy lifting associated with detecting and responding to sensitive data in your AWS environment.

I encourage you to take this solution and customize it to your needs. Further enhancements could include supporting policy findings, adding additional remediation actions, or integrating with additional findings from AWS Security Hub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Macie forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nick Cuneo

Nick is an Enterprise Solutions Architect at AWS who works closely with Australia’s largest financial services organisations. His previous roles span operations, software engineering, and design. Nick is passionate about application and network security, automation, microservices, and event driven architectures. Outside of work, he enjoys motorsport and is found most weekends in his garage wrenching on cars.

Announcing Workplace Records for Cloudflare for Teams

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/work-jurisdiction-records-for-teams/


We wanted to close out Privacy & Compliance Week by talking about something universal and certain: taxes. Businesses worldwide pay employment taxes based on where their employees do work. For most businesses and in normal times, where employees do work has been relatively easy to determine: it’s where they come into the office. But 2020 has made everything more complicated, even taxes.

As businesses worldwide have shifted to remote work, employees have been working from “home” — wherever that may be. Some employees have taken this opportunity to venture further from where they usually are, sometimes crossing state and national borders.


In a lot of ways, it’s gone better than expected. We’re proud of helping provide technology solutions like Cloudflare for Teams that allow employees to work from anywhere and ensure they still have a fast, secure connection to their corporate resources. But increasingly we’ve been hearing from the heads of the finance, legal, and HR departments of our customers with a concern: “If I don’t know where my employees are, I have no idea where I need to pay taxes.”

Today we’re announcing the beta of a new feature for Cloudflare for Teams to help solve this problem: Workplace Records. Cloudflare for Teams uses Access and Gateway logs to provide the state and country from which employees are working. Workplace Records can be used to help finance, legal, and HR departments determine where payroll taxes are due and provide a record to defend those decisions.

Every location became a potential workplace

Before 2020, employees who frequently traveled could manage tax jurisdiction reporting by gathering plane tickets or keeping manual logs of where they spent time. It was tedious, for employees and our payroll team, but manageable.

The COVID pandemic transformed that chore into a significant challenge for our finance, legal, and HR teams. Our entire organization was suddenly forced to work remotely. If we couldn’t get comfortable that we knew where people were working, we worried we might be forced to impose somewhat draconian rules requiring employees to check in. That didn’t seem very Cloudflare-y.

The challenge impacts individual team members as well. Reporting mistakes can lead to tax penalties for employees or amendments during filing season. Our legal team started to field questions from employees stuck in new regions because of travel restrictions. Our payroll team prepared for a backlog of amendments.


Logging jurisdiction without manual reporting

When team members open their corporate laptops and start a workday, they log in to Cloudflare Access — our Zero Trust tool that protects applications and data. Cloudflare Access checks their identity and other signals like multi-factor methods to determine if they can proceed. Importantly, the process also logs their region so we can enforce country-specific rules.

Our finance, legal, and HR teams worked with our engineering teams to use that model to create Workplace Records. We now have the confidence to know we can meet our payroll tax obligations without imposing onerous limitations on team members. We’re able to prepare and adjust, in real time, while confidently supporting our employees as they work remotely from wherever is most comfortable and productive for them.


Respecting team member privacy

Workplace Records only provides resolution within a taxable jurisdiction, not a specific address. The goal is to give only the information that finance, legal, and HR departments need to ensure they can meet their compliance obligations.

The system also generates these reports by capturing team member logins to work applications on corporate devices. We use the location of that login to determine “this was a workday from Texas”. If a corporate laptop is closed or stored away for the weekend, we aren’t capturing location logs. We’d rather team members enjoy time off without connecting.

Two clicks to enforce regional compliance

Workplace Records can also help ensure company policy compliance for a company’s teams. For instance, companies may have policies about engineering teams only creating intellectual property in countries in which transfer agreements are in place. Workplace Records can help ensure that engineering work isn’t being done in countries that may put the intellectual property at risk.


Administrators can build rules in Cloudflare Access to require that team members connect to internal or SaaS applications only from countries where they operate. Cloudflare’s network will check every request both for identity and the region from which they’re connecting.

We also heard from our own accounting teams that some regions enforce strict tax penalties when employees work without an incorporated office or entity. In the same way that you can require users to work only from certain countries, you can also block users from connecting to your applications from specific regions.

No deciphering required

When we started planning Workplace Records, our payroll team asked us to please not send raw data that added more work on them to triage and sort.

Available today, you can view the country of each login to internal systems on a per-user basis. You can export this data to an external SIEM and you can build rules that control access to systems by country.

Launching today in beta is a new UI that summarizes the working days spent in specific regions for each user. Workplace Records will add a company-wide report early in Q1. The service is available as a report for free to all Cloudflare for Teams customers.


Going forward, we plan to work with Human Capital Management (HCM), Human Resource Information Systems (HRIS), Human Resource Management Systems (HRMS), and Payroll providers to automatically integrate Workplace Records.

What’s next?

At Cloudflare, we know even after the pandemic we are going to be more tolerant of remote work than before. The more that we can allow our team to work remotely and ensure we are meeting our regulatory, compliance, and tax obligations, the more flexibility we will be able to provide.

Cloudflare for Teams with Workplace Records is helping solve a challenge for our finance, legal, and HR teams. Now with the launch of the beta, we hope we can help enable a more flexible and compliant work environment for all our Cloudflare for Teams customers.

This feature will be available to all Cloudflare for Teams subscribers early next week. You can start using Cloudflare for Teams today at no cost for up to 50 users, including the Workplace Records feature.


Three common cloud encryption questions and their answers on AWS

Post Syndicated from Peter M. O'Donnell original https://aws.amazon.com/blogs/security/three-common-cloud-encryption-questions-and-their-answers-on-aws/

At Amazon Web Services (AWS), we encourage our customers to take advantage of encryption to help secure their data. Encryption is a core component of a good data protection strategy, but people sometimes have questions about how to manage encryption in the cloud to meet the growth pace and complexity of today’s enterprises. Encryption can seem like a difficult task—people often think they need to master complicated systems to encrypt data—but the cloud can simplify it.

In response to frequently asked questions from executives and IT managers, this post provides an overview of how AWS makes encryption less difficult for everyone. In it, I describe the advantages to encryption in the cloud, common encryption questions, and some AWS services that can help.

Cloud encryption advantages

The most important thing to remember about encryption on AWS is that you always own and control your data. This is an extension of the AWS shared responsibility model, which makes the secure delivery and operation of your applications the responsibility of both you and AWS. You control security in the cloud, including encryption of content, applications, systems, and networks. AWS manages security of the cloud, meaning that we are responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud.

Encryption in the cloud offers a number of advantages in addition to the options available in on-premises environments. These include on-demand access to managed services that make it easier to create and control the keys used for cryptographic operations, integrated identity and access management, and automated encryption in transit and at rest. With the cloud, you don’t manage physical security or the lifecycle of hardware. Instead of needing to procure, configure, deploy, and decommission hardware, AWS offers you a managed service backed by hardware that meets the security requirements of FIPS 140-2. If you need to use a key tens of thousands of times per second, the elastic capacity of AWS services can scale to meet your demands. Finally, you can use integrated encryption capabilities with the AWS services that you use to store and process your data. You pay only for what you use and can instead focus on configuring and monitoring logical security, and on innovating on behalf of your business.

Addressing three common encryption questions

For many of the technology leaders I work with, agility and risk mitigation are top IT business goals. An enterprise-wide cloud encryption and data protection strategy helps define how to achieve fine-grained access controls while maintaining nearly continuous visibility into your risk posture. In combination with the wide range of AWS services that integrate directly with AWS Key Management Service (AWS KMS), AWS encryption services help you to achieve greater agility and additional control of your data as you move through the stages of cloud adoption.

The configuration of AWS encryption services is part of your portion of the shared responsibility model. You’re responsible for your data, AWS Identity and Access Management (IAM) configuration, operating systems and networks, and encryption on the client-side, server-side, and network. AWS is responsible for protecting the infrastructure that runs all of the services offered in AWS.

That still leaves you with responsibilities around encryption—which can seem complex, but AWS services can help. Three of the most common questions we get from customers about encryption in the cloud are:

  • How can I use encryption to prevent unauthorized access to my data in the cloud?
  • How can I use encryption to meet compliance requirements in the cloud?
  • How do I demonstrate compliance with company policies or other standards to my stakeholders in the cloud?

Let’s look closely at these three questions and some ways you can address them in AWS.

How can I use encryption to prevent unauthorized access to my data in the cloud?

Start with IAM

The primary way to protect access to your data is access control. On AWS, this often means using IAM to describe which users or roles can access resources like Amazon Simple Storage Service (Amazon S3) buckets. IAM allows you to tightly define the access for each user—whether human or system—and set the conditions in which that access is allowed. This could mean requiring the use of multi-factor authentication, or making the data accessible only from your Amazon Virtual Private Cloud (Amazon VPC).

Encryption allows you to introduce an additional authorization condition before granting access to data. When you use AWS KMS with other services, you can get further control over access to sensitive data. For example, with S3 objects that are encrypted by KMS, each IAM user must not only have access to the storage itself but also have authorization to use the KMS key that protects the data. This works similarly for Amazon Elastic Block Store (Amazon EBS). For example, you can allow an entire operations team to manage Amazon EBS volumes and snapshots, but, for certain Amazon EBS volumes that contain sensitive data, you can use a different KMS master key with different permissions that are granted only to the individuals you specify. This ability to define more granular access control through independent permission on encryption keys is supported by all AWS services that integrate with KMS.

When you configure IAM for your users to access your data and resources, it’s critical that you consider the principle of least privilege. This means you grant only the access necessary for each user to do their work and no more. For example, instead of granting users access to an entire S3 bucket, you can use IAM policy language to specify the particular Amazon S3 prefixes that are required and no others. This is important when thinking about the difference between using a service—data plane events—and managing a service—management plane events. An application might store and retrieve objects in an S3 bucket, but it’s rarely the case that the same application needs to list all of the buckets in an account or configure the bucket’s settings and permissions.
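
To make this concrete, the following hedged CDK sketch grants an application role read and write access to a single S3 prefix, plus permission to use one specific KMS key and nothing more. The ARNs, prefix, and action list are placeholders for illustration.

import * as iam from 'aws-cdk-lib/aws-iam';

// Grants an application role access to a single S3 prefix and use of one KMS key.
// All ARNs and the prefix are placeholders for illustration only.
export function grantScopedDataAccess(appRole: iam.IRole): void {
  // Data-plane access, limited to one prefix rather than the whole bucket.
  appRole.addToPrincipalPolicy(new iam.PolicyStatement({
    actions: ['s3:GetObject', 's3:PutObject'],
    resources: ['arn:aws:s3:::example-sensitive-bucket/invoices/*'],
  }));

  // A separate statement authorizes use of the specific KMS key that protects that data.
  appRole.addToPrincipalPolicy(new iam.PolicyStatement({
    actions: ['kms:Decrypt', 'kms:GenerateDataKey'],
    resources: ['arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab'],
  }));
}

Because the statements reference one prefix and one key, the same role gets no access to other buckets, prefixes, or keys in the account.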

Making clear distinctions between who can use resources and who can manage resources is often referred to as the principle of separation of duties. Consider the circumstance of having a single application with two identities that are associated with it—an application identity that uses a key to encrypt and decrypt data and a manager identity that can make configuration changes to the key. By using AWS KMS together with services like Amazon EBS, Amazon S3, and many others, you can clearly define which actions can be used by each persona. This prevents the application identity from making configuration or permission changes while allowing the manager to make those changes but not use the services to actually access the data or use the encryption keys.

Use AWS KMS and key policies with IAM policies

AWS KMS provides you with visibility and granular permissions control of a specific key in the hierarchy of keys used to protect your data. Controlling access to the keys in KMS is done using IAM policy language. The customer master key (CMK) has its own policy document, known as a key policy. AWS KMS key policies can work together with IAM identity policies or you can manage the permissions for a KMS CMK exclusively with key policies. This gives you greater flexibility to separately assign permissions to use the key or manage the key, depending on your business use case.
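
For example, assuming the key and the two roles are defined in CDK, that split between using the key and managing the key might be expressed roughly as follows (construct and role names are hypothetical):

import * as iam from 'aws-cdk-lib/aws-iam';
import * as kms from 'aws-cdk-lib/aws-kms';
import { Construct } from 'constructs';

// Grants key usage to the application and key administration to the key managers,
// without letting either identity perform the other's job.
export function buildDataKey(scope: Construct, appRole: iam.IRole, keyAdminRole: iam.IRole): kms.Key {
  const key = new kms.Key(scope, 'SensitiveDataKey', { enableKeyRotation: true });

  // The application can use the key for cryptographic operations only.
  key.grantEncryptDecrypt(appRole);

  // The key administrators can manage the key but not use it to access data.
  key.grantAdmin(keyAdminRole);

  return key;
}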

Encryption everywhere

AWS recommends that you encrypt as much as possible. This means encrypting data while it’s in transit and while it’s at rest.

For customers seeking to encrypt data in transit for their public facing applications, our recommended best practice is to use AWS Certificate Manager (ACM). This service automates the creation, deployment, and renewal of public TLS certificates. If you’ve been using SSL/TLS for your websites and applications, then you’re familiar with some of the challenges related to dealing with certificates. ACM is designed to make certificate management easier and less expensive.

One way ACM does this is by generating a certificate for you. Because AWS operates a certificate authority that’s already trusted by industry-standard web browsers and operating systems, public certificates created by ACM can be used with public websites and mobile applications. ACM can create a publicly trusted certificate that you can then deploy into API Gateway, Elastic Load Balancing, or Amazon CloudFront (a globally distributed content delivery network). You don’t have to handle the private key material or figure out complicated tooling to deploy the certificates to your resources. ACM helps you to deploy your certificates either through the AWS Management Console or with automation that uses AWS Command Line Interface (AWS CLI) or AWS SDKs.
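
As a small illustration, a hedged CDK sketch that requests a DNS-validated public certificate and attaches it to an Application Load Balancer HTTPS listener might look like the following; the domain name and hosted zone details are placeholders.

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as acm from 'aws-cdk-lib/aws-certificatemanager';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as route53 from 'aws-cdk-lib/aws-route53';

export class TlsFrontendStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Placeholder hosted zone and domain; ACM writes the DNS validation records here.
    const zone = route53.HostedZone.fromHostedZoneAttributes(this, 'Zone', {
      hostedZoneId: 'Z0123456789EXAMPLE',
      zoneName: 'example.com',
    });
    const certificate = new acm.Certificate(this, 'SiteCert', {
      domainName: 'app.example.com',
      validation: acm.CertificateValidation.fromDns(zone),
    });

    // Deploy the certificate to an HTTPS listener; no private key handling required.
    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    const alb = new elbv2.ApplicationLoadBalancer(this, 'Alb', { vpc, internetFacing: true });
    alb.addListener('Https', {
      port: 443,
      certificates: [elbv2.ListenerCertificate.fromCertificateManager(certificate)],
      defaultAction: elbv2.ListenerAction.fixedResponse(200, { messageBody: 'ok' }),
    });
  }
}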

One of the challenges related to certificates is regularly rotating and renewing them so they don’t unexpectedly expire and prevent your users from using your website or application. Fortunately, ACM has a feature that updates the certificate before it expires and automatically deploys the new certificate to the resources associated with it. No more needing to make a calendar entry to remind your team to renew certificates and, most importantly, no more outages because of expired certificates.

Many customers want to secure data in transit for services by using privately trusted TLS certificates instead of publicly trusted TLS certificates. For this use case, you can use AWS Certificate Manager Private Certificate Authority (ACM PCA) to issue certificates for both clients and servers. ACM PCA provides an inexpensive solution for issuing internally trusted certificates and it can be integrated with ACM with all of the same integrative benefits that ACM provides for public certificates, including automated renewal.

For encrypting data at rest, I strongly encourage using AWS KMS. There is a broad range of AWS storage and database services that support KMS integration so you can implement robust encryption to protect your data at rest within AWS services. This lets you have the benefit of the KMS capabilities for encryption and access control to build complex solutions with a variety of AWS services without compromising on using encryption as part of your data protection strategy.
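
As one small example of that integration, assuming you define resources with CDK, an S3 bucket encrypted at rest with a customer managed key takes only a few lines:

import * as kms from 'aws-cdk-lib/aws-kms';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

// Creates a bucket whose objects are encrypted at rest with a customer managed KMS key.
export function encryptedBucket(scope: Construct): s3.Bucket {
  const key = new kms.Key(scope, 'BucketKey', { enableKeyRotation: true });
  return new s3.Bucket(scope, 'EncryptedBucket', {
    encryption: s3.BucketEncryption.KMS,
    encryptionKey: key,
    enforceSSL: true,       // also require encryption in transit
    bucketKeyEnabled: true, // reduce KMS request volume and cost
  });
}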

How can I use encryption to meet compliance requirements in the cloud?

The first step is to identify your compliance requirements. This can often be done by working with your company’s risk and compliance team to understand the frameworks and controls that your company must abide by. While the requirements vary by industry and region, the most common encryption compliance requirements are to encrypt your data and make sure that the access control for the encryption keys (for example by using AWS KMS CMK key policies) is separate from the access control to the encrypted data itself (for example through Amazon S3 bucket policies).

Another common requirement is to have separate encryption keys for different classes of data, or for different tenants or customers. This is directly supported by AWS KMS as you can have as many different keys as you need within a single account. If you need to use even more than the 10,000 keys AWS KMS allows by default, contact AWS Support about raising your quota.

For compliance-related concerns, there are a few capabilities that are worth exploring as options to increase your coverage of security controls.

  • Amazon S3 can automatically encrypt all new objects placed into a bucket, even when the user or software doesn’t specify encryption.
  • You can use batch operations in Amazon S3 to encrypt existing objects that weren’t originally stored with encryption.
  • You can use the Amazon S3 inventory report to generate a list of all S3 objects in a bucket, including their encryption status.

AWS services that track encryption configurations to comply with your requirements

Anyone who has pasted a screenshot of a configuration into a word processor at the end of the year to memorialize compliance knows how brittle traditional on-premises forms of compliance attestation can be. Everything looked right the day it was installed and still looked right at the end of the year—but how can you be certain that everything was correctly configured at all times?

AWS provides several different services to help you configure your environment correctly and monitor its configuration over time. AWS services can also be configured to perform automated remediation to correct any deviations from your desired configuration state. AWS helps automate the collection of compliance evidence and provides nearly continuous, rather than point in time, compliance snapshots.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and helps you to automate the evaluation of recorded configurations against desired configurations. One of the most powerful features of AWS Config is AWS Config Rules. While AWS Config continuously tracks the configuration changes that occur among your resources, it checks whether these changes violate any of the conditions in your rules. If a resource violates a rule, AWS Config flags the resource and the rule as noncompliant. AWS Config comes with a wide range of prewritten managed rules to help you maintain compliance for many different AWS services. The managed rules include checks for encryption status on a variety of resources, ACM certificate expiration, IAM policy configurations, and many more.
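
Assuming you manage these checks as code with CDK, enabling a couple of encryption-related managed rules is a short declaration. The rule identifiers below are examples of prewritten AWS Config managed rules:

import * as config from 'aws-cdk-lib/aws-config';
import { Construct } from 'constructs';

// Flags S3 buckets without default encryption and EBS volumes that aren't encrypted.
export function addEncryptionRules(scope: Construct): void {
  new config.ManagedRule(scope, 'S3DefaultEncryption', {
    identifier: 'S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED',
  });
  new config.ManagedRule(scope, 'EbsEncryptedVolumes', {
    identifier: 'ENCRYPTED_VOLUMES',
  });
}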

For additional monitoring capabilities, consider Amazon Macie and AWS Security Hub. Amazon Macie is a service that helps you understand the contents of your S3 buckets by analyzing and classifying the data contained within your S3 objects. It can also be used to report on the encryption status of your S3 buckets, giving you a central view into the configurations of all buckets in your account, including default encryption settings. Amazon Macie also integrates with AWS Security Hub, which can perform automated checks of your configurations, including several checks that focus on encryption settings.

Another critical service for compliance outcomes is AWS CloudTrail. CloudTrail enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. AWS KMS records all of its activity in CloudTrail, allowing you to identify who used the encryption keys, in what context, and with which resources. This information is useful for operational purposes and to help you meet your compliance needs.
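
As an illustration, one simple way to pull recent KMS activity out of CloudTrail programmatically, for example when answering the question of who used a key, is the LookupEvents API. A minimal sketch with the AWS SDK for JavaScript v3 follows:

import { CloudTrailClient, LookupEventsCommand } from '@aws-sdk/client-cloudtrail';

const cloudtrail = new CloudTrailClient({});

// Lists recent CloudTrail events recorded for AWS KMS in the current Region.
export async function recentKmsEvents(): Promise<void> {
  const response = await cloudtrail.send(new LookupEventsCommand({
    LookupAttributes: [
      { AttributeKey: 'EventSource', AttributeValue: 'kms.amazonaws.com' },
    ],
    MaxResults: 20,
  }));
  for (const event of response.Events ?? []) {
    console.log(event.EventTime, event.EventName, event.Username);
  }
}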

How do I demonstrate compliance with company policy to my stakeholders in the cloud?

You probably have internal and external stakeholders that care about compliance and require that you document your system’s compliance posture. These stakeholders include a range of possible entities and roles, including internal and external auditors, risk management departments, industry and government regulators, diligence teams related to funding or acquisition, and more.

Unfortunately, the relationship between technical staff and audit and compliance staff is sometimes contentious. AWS believes strongly that these two groups should work together—they want the same things. The same services and facilities that engineering teams use to support operational excellence can also provide output that answers stakeholders’ questions about security compliance.

You can provide access to the console for AWS Config and CloudTrail to your counterparts in audit and risk management roles. Use AWS Config to continuously monitor your configurations and produce periodic reports that can be delivered to the right stakeholders. The evolution towards continuous compliance makes compliance with your company policies on AWS not just possible, but often better than is possible in traditional on-premises environments. AWS Config includes several managed rules that check for encryption settings in your environment. CloudTrail contains an ongoing record of every time AWS KMS keys are used to either encrypt or decrypt your resources. The contents of the CloudTrail entry include the KMS key ID, letting your stakeholders review and connect the activity recorded in CloudTrail with the configurations and permissions set in your environment. You can also use the reports produced by Security Hub automated compliance checks to verify and validate your encryption settings and other controls.

Your stakeholders might have further requirements for compliance that are beyond your scope of control because AWS is operating those controls for you. AWS provides System and Organization Controls (SOC) Reports that are independent, third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations and compliance. You can consult the AWS SOC2 report, available through AWS Artifact, for more information about how AWS operates in the cloud and provides assurance around AWS security procedures. The SOC2 report includes several AWS KMS-specific controls that might be of interest to your audit-minded colleagues.

Summary

Encryption in the cloud is easier than encryption on-premises, powerful, and can help you meet the highest standards for controls and compliance. The cloud provides more comprehensive data protection capabilities for customers looking to rapidly scale and innovate than are available for on-premises systems. This post provides guidance for how to think about encryption in AWS. You can use IAM, AWS KMS, and ACM to provide granular access control to your most sensitive data, and support protection of your data in transit and at rest. Once you’ve identified your compliance requirements, you can use AWS Config and CloudTrail to review your compliance with company policy over time, rather than point-in-time snapshots obtained through traditional audit methods. AWS can provide on-demand compliance evidence, with tools such as reporting from CloudTrail and AWS Config, and attestations such as SOC reports.

I encourage you to review your current encryption approach against the steps I’ve outlined in this post. While every industry and company is different, I believe the core concepts presented here apply to all scenarios. I want to hear from you. If you have any comments or feedback on the approach discussed here, or how you’ve used it for your use case, leave a comment on this post.

And for more information on encryption in the cloud and on AWS, check out the following resources, in addition to our collection of encryption blog posts.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Peter M. O’Donnell

Peter is an AWS Principal Solutions Architect, specializing in security, risk, and compliance with the Strategic Accounts team. Formerly dedicated to a major US commercial bank customer, Peter now supports some of AWS’s largest and most complex strategic customers in security and security-related topics, including data protection, cryptography, identity, threat modeling, incident response, and CISO engagement.

Author

Supriya Anand

Supriya is a Senior Digital Strategist at AWS, focused on marketing, encryption, and emerging areas of cybersecurity. She has worked to drive large scale marketing and content initiatives forward in a variety of regulated industries. She is passionate about helping customers learn best practices to secure their AWS cloud environment so they can innovate faster on behalf of their business.

Announcing Cloud Audit Academy AWS-specific for audit and compliance teams

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/announcing-cloud-audit-academy-aws-specific-for-audit-and-compliance-teams/

Today, I’m pleased to announce the launch of Cloud Audit Academy AWS-specific (CAA AWS-specific). This is a new, accelerated training program for auditing AWS Cloud implementations, and is designed for auditors, regulators, or anyone working within a control framework.

Over the past few years, auditing security in the cloud has become one of the fastest growing questions among Amazon Web Services (AWS) customers, across multiple industries and all around the world. Here are the two pain points that I hear about most often:

  • Engineering teams want to move workloads that are subject to regulatory frameworks to AWS to take advantage of its innovation capabilities, but security and risk teams are uncertain how AWS can help them meet their compliance requirements through audits.
  • Compliance teams want to effectively audit cloud environments and take advantage of the security control options that are built into the cloud, but legacy audit processes and control frameworks were built for on-premises environments. Closing that gap requires reconciliation and improvement work on compliance programs, audit processes, and auditor training.

To help address these issues not only for AWS customers but for any auditor or compliance team facing cloud migration, we announced Cloud Audit Academy Cloud Agnostic (CAA Cloud Agnostic) at re:Inforce 2019. This foundational, first-of-its-kind course provides baseline knowledge on auditing in the cloud and on understanding the differences in control operation, design, and auditing. It is cloud agnostic and can benefit security and compliance professionals in any industry—including independent third-party auditors. Since its launch in June 2019, 1,400 students have followed this cloud audit learning path, with 91 percent of participants saying that they would recommend the workshop to others.

So today we’re releasing the next phase of that education program, Cloud Audit Academy AWS-specific. Offered virtually or in-person, CAA AWS-specific is an instructor-led workshop on addressing risks and auditing security in the AWS Cloud, with a focus on the security and audit tools provided by AWS. All instructors have professional audit industry experience, current audit credentials, and maintain AWS Solutions Architect credentials.

Here are four things to know about CAA AWS-specific and what it has to offer audit and compliance teams:

  1. Content was created with PricewaterhouseCoopers (PwC)
    PricewaterhouseCoopers worked with us to develop the curriculum content, bringing their expertise in independent risk and control auditing.
     
    “With so many of our customers already in the cloud—or ready to be—we’ve seen a huge increase in the need to meet regulatory and compliance requirements. We’re excited to have combined our risk and controls experience with the power of AWS to create a curriculum in which customers can not only [leverage AWS to help them] meet their compliance needs, but unlock the total value of their cloud investment.” – Paige Hayes, Global Account Leader at PwC

  2. Attendees earn continuing professional education credits
    Based on feedback from CAA Cloud Agnostic, we now offer continuing professional education (CPE) credits to attendees. Completion of CAA AWS-specific will allow attendees to earn 28 CPE credits towards any of the International Information System Security Certification Consortium, or (ISC)², certifications, and 18 CPE credits towards any Global Information Assurance Certification (GIAC).

  3. Training helps boost confidence when auditing the AWS cloud
    Our customers have proven repeatedly that running sensitive workloads in AWS can be more secure than in on-premises environments. However, a lack of knowledge and updated processes for implementing, monitoring, and proving compliance in the cloud has caused some difficulty. Through CAA AWS-specific, you will get critical training to become more comfortable and confident knowing how to audit the AWS environment with precision.

    “Our FSI customer conversations are often focused on security and compliance controls. Leveraging the Cloud Audit Academy enables our team to educate the internal and external auditors of our customers. CAA provides them the necessary tools and knowledge to evaluate and gain comfort with their AWS control environment firsthand. The varying depth and levels focus on everything from basic cloud auditing to diving deeper into the domains which align with our governance and control domains. We reference key AWS services that customers can utilize to create an effective control environment that [helps to meet their] regulatory and audit expectations.” – Jeff (Axe) Axelrad, Compliance Manager, AWS Financial Services

  4. Training enables the governance, risk, and compliance professional
    In four days of CAA AWS-specific, you’ll become more comfortable with topics like control domains, network management, vulnerability management, logging and monitoring, incident response, and general knowledge about compliance controls in the cloud.

    “In addition to [using AWS to help support and maintain their compliance], our customers need to be able to clearly communicate with their external auditors and regulators HOW compliance is achieved. CAA doesn’t teach auditors how to audit, but rather accelerates the learning necessary to understand specifically how the control landscape changes.” – Jesse Skibbe, Sr. Practice Manager, AWS Professional Services

CAA Cloud Agnostic provides some foundational concepts and is a prerequisite to CAA AWS-specific. It is available for free online at our AWS Training and Certification learning library, or you can contact your account manager to have a one-day instructor-led training session in person.

If it sounds like Cloud Audit Academy training would benefit you and your team, contact our AWS Security Assurance Services team or contact your AWS account manager. For more information, check out the newly updated Security Audit Learning Path.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS Cloud, compliance with complex privacy regulations such as GDPR and operating a trade and product compliance team in conjunction with global region expansion. Prior to joining AWS, Chad spent 12 years with Ernst & Young as a Senior Manager working directly with Fortune 100 companies consulting on IT process, security, risk, and vendor management advisory work, as well as designing and deploying global security and assurance software solutions. Chad holds a Masters of Information Systems Management and a Bachelors of Accounting from Brigham Young University, Utah. Follow Chad on Twitter

Set up centralized monitoring for DDoS events and auto-remediate noncompliant resources

Post Syndicated from Fola Bolodeoku original https://aws.amazon.com/blogs/security/set-up-centralized-monitoring-for-ddos-events-and-auto-remediate-noncompliant-resources/

When you build applications on Amazon Web Services (AWS), it’s a common security practice to isolate production resources from non-production resources by logically grouping them into functional units or organizational units. There are many benefits to this approach, such as making it easier to implement the principle of least privilege, or reducing the scope of adversely impactful activities that may occur in non-production environments. After building these applications, setting up monitoring for resource compliance and security risks, such as distributed denial of service (DDoS) attacks across your AWS accounts, is just as important. The recommended best practice to perform this type of monitoring involves using AWS Shield Advanced with AWS Firewall Manager, and integrating these with AWS Security Hub.

In this blog post, I show you how to set up centralized monitoring for Shield Advanced–protected resources across multiple AWS accounts by using Firewall Manager and Security Hub. This enables you to easily manage resources that are out of compliance from your security policy and to view DDoS events that are detected across multiple accounts in a single view.

Shield Advanced is a managed application security service that provides DDoS protection for your workloads against infrastructure layer (Layer 3–4) attacks, as well as application layer (Layer 7) attacks, by using AWS WAF. Firewall Manager is a security management service that enables you to centrally configure and manage firewall rules across your accounts and applications in an organization in AWS. Security Hub consumes, analyzes, and aggregates security findings produced by AWS services and by your applications running on AWS. Security Hub integrates with Firewall Manager without requiring any action from you.

I’m going to cover two different scenarios that show you how to use Firewall Manager for:

  1. Centralized visibility into Shield Advanced DDoS events
  2. Automatic remediation of noncompliant resources

Scenario 1: Centralized visibility of DDoS detected events

This scenario represents a fully native and automated integration, where Shield Advanced DDoSDetected events (which indicate whether a DDoS event is underway for a particular Amazon Resource Name (ARN)) are made visible as security findings in Security Hub, through Firewall Manager.

Solution overview

Figure 1 shows the solution architecture for scenario 1.
 

Figure 1: Scenario 1 – Shield Advanced DDoS detected events visible in Security Hub

The diagram illustrates a customer using AWS Organizations to isolate their production resources into the Production Organizational Unit (OU), with further separation into multiple accounts for each of the mission-critical applications. The resources in Account 1 are protected by Shield Advanced. The Security OU was created to centralize security functions across all AWS accounts and OUs, obscuring the visibility of the production environment resources from the Security Operations Center (SOC) engineers and other security staff. The Security OU is home to the designated administrator account for Firewall Manager and the Security Hub dashboard.

Scenario 1 implementation

You will set up Security Hub in an account that has the prerequisite services configured, as explained below. Before you proceed, see the architecture requirements in the next section. Once Security Hub is enabled for your organization, you can simulate a DDoS event in strict accordance with the AWS DDoS Simulation Testing Policy or use one of the AWS DDoS Test Partners.

Architecture requirements

In order to implement these steps, you must have the following:

Once you have all these requirements completed, you can move on to enable Security Hub.

Enable Security Hub

Note: If you plan to protect resources with Shield Advanced across multiple accounts and in multiple Regions, we recommend that you use the AWS Security Hub Multiaccount Scripts from AWS Labs. Security Hub needs to be enabled in all the Regions and all the accounts where you have Shield protected resources. For global resources, like Amazon CloudFront, you should enable Security Hub in the us-east-1 Region.

To enable Security Hub

  1. In the AWS Security Hub console, switch to the account you want to use as the designated Security Hub administrator account.
  2. Select the security standard or standards that are applicable to your application’s use-case, and choose Enable Security Hub.
     
    Figure 2: Enabling Security Hub

  3. From the designated Security Hub administrator account, go to the Settings – Account tab, and add accounts by sending invites to all the accounts you want added as member accounts. The invited accounts become associated as member accounts once the owner of the invited account has accepted the invite and Security Hub has been enabled. It’s possible to upload a comma-separated list of the accounts you want to send invites to.
     
    Figure 3: Designating a Security Hub administrator account by adding member accounts

View detected events in Shield and Security Hub

When Shield Advanced detects signs of DDoS traffic that is destined for a protected resource, the Events tab in the Shield console displays information about the event detected and provides a status on the mitigation that has been performed. Following is an example of how this looks in the Shield console.
 

Figure 4: Scenario 1 – The Events tab on the Shield console showing a Shield event in progress

If you’re managing multiple accounts, switching between these accounts to view the Shield console to keep track of DDoS incidents can be cumbersome. Using the Amazon CloudWatch metrics that Shield Advanced reports for Shield events, visibility across multiple accounts and Regions is easier through a custom CloudWatch dashboard or by consuming these metrics in a third-party tool. For example, the DDoSDetected CloudWatch metric has a binary value, where a value of 1 indicates that an event that might be a DDoS has been detected. This metric is automatically updated by Shield when the DDoS event starts and ends. You only need permissions to access the Security Hub dashboard in order to monitor all events on production resources. Following is an example of what you see in the Security Hub console.
 

Figure 5: Scenario 1 – Shield Advanced DDoS alarm showing in Security Hub
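
As an example of using the DDoSDetected metric directly, the following hedged CDK sketch creates a CloudWatch alarm for a single protected resource; the resource ARN is a placeholder, and you could add the same metric to a custom dashboard instead.

import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class ShieldAlarmStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Shield Advanced publishes DDoSDetected (0 or 1) per protected resource ARN.
    const ddosDetected = new cloudwatch.Metric({
      namespace: 'AWS/DDoSProtection',
      metricName: 'DDoSDetected',
      dimensionsMap: {
        ResourceArn: 'arn:aws:cloudfront::111122223333:distribution/EXAMPLE', // placeholder
      },
      statistic: 'Maximum',
      period: Duration.minutes(1),
    });

    new cloudwatch.Alarm(this, 'DdosDetectedAlarm', {
      metric: ddosDetected,
      threshold: 1,
      evaluationPeriods: 1,
      comparisonOperator: cloudwatch.ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
      treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
    });
  }
}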

Configure Shield event notification in Firewall Manager

In order to increase your visibility into possible Shield events across your accounts, you must configure Firewall Manager to monitor your protected resources by using Amazon Simple Notification Service (Amazon SNS). With this configuration, Firewall Manager sends you notifications of possible attacks by creating an Amazon SNS topic in Regions where you might have protected resources.

To configure SNS topics in Firewall Manager

  1. In the Firewall Manager console, go to the Settings page.
  2. Under Amazon SNS Topic Configuration, select a Region.
  3. Choose Configure SNS Topic.
     
    Figure 6: The Firewall Manager Settings page for configuring SNS topics

  4. Select an existing topic or create a new topic, and then choose Configure SNS Topic.
     
    Figure 7: Configure an SNS topic in a Region

Scenario 2: Automatic remediation of noncompliant resources

The second scenario is an example in which a new production resource is created, and Security Hub has full visibility of the compliance state of the resource.

Solution overview

Figure 8 shows the solution architecture for scenario 2.
 

Figure 8: Scenario 2 – Visibility of Shield Advanced noncompliant resources in Security Hub

Firewall Manager identifies that the resource is out of compliance with the defined policy for Shield Advanced and posts a finding to Security Hub, notifying your operations team that a manual action is required to bring the resource into compliance. If configured, Firewall Manager can automatically bring the resource into compliance by creating it as a Shield Advanced–protected resource, and then update Security Hub when the resource is in a compliant state.

Scenario 2 implementation

The following steps describe how to use Firewall Manager to enforce Shield Advanced protection compliance of an application that is deployed to a member account within AWS Organizations. This implementation assumes that you set up Security Hub as described for scenario 1.

Create a Firewall Manager security policy for Shield Advanced protected resources

In this step, you create a Shield Advanced security policy that will be enforced by Firewall Manager. For the purposes of this walkthrough, you’ll choose to automatically remediate noncompliant resources and apply the policy to Application Load Balancer (ALB) resources.

To create the Shield Advanced policy

  1. Open the Firewall Manager console in the designated Firewall Manager administrator account.
  2. In the left navigation pane, choose Security policies, and then choose Create a security policy.
  3. Select AWS Shield Advanced as the policy type, and select the Region where your protected resources are. Choose Next.

    Note: You will need to create a security policy for each Region where you have regional resources, such as Elastic Load Balancers and Elastic IP addresses, and a security policy for global resources such as CloudFront distributions.

    Figure 9: Select the policy type and Region

  4. On the Describe policy page, for Policy name, enter a name for your policy.
  5. For Policy action, you have the option to configure automatic remediation of noncompliant resources or to only send alerts when resources are noncompliant. You can change this setting after the policy has been created. For the purposes of this blog post, I’m selecting Auto remediate any noncompliant resources. Select your option, and then choose Next.

    Important: It’s a best practice to first identify and review noncompliant resources before you enable automatic remediation.

  6. On the Define policy scope page, define the scope of the policy by choosing which AWS accounts, resource type, or resource tags the policy should be applied to. For the purposes of this blog post, I’m selecting to manage Application Load Balancer (ALB) resources across all accounts in my organization, with no preference for resource tags. When you’re finished defining the policy scope, choose Next.
     
    Figure 10: Define the policy scope

  7. Review and create the policy. Once you’ve reviewed and created the policy in the Firewall Manager designated administrator account, the policy will be pushed to all the Firewall Manager member accounts for enforcement. The new policy could take up to 5 minutes to appear in the console. Figure 11 shows a successful security policy propagation across accounts.
     
    Figure 11: View security policies in an account

Test the Firewall Manager and Security Hub integration

You’ve now defined a policy to cover only ALB resources, so the best way to test this configuration is to create an ALB in one of the Firewall Manager member accounts. This policy causes resources within the policy scope to be added as protected resources.
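For example, you could create the test ALB with the AWS CLI from the member account; the subnet and security group IDs below are placeholders, and an ALB needs at least two subnets in different Availability Zones.

# Create a test Application Load Balancer in a Firewall Manager member account (placeholder IDs)
aws elbv2 create-load-balancer \
  --name fms-test-alb \
  --type application \
  --subnets subnet-0a1b2c3d4e5f60001 subnet-0a1b2c3d4e5f60002 \
  --security-groups sg-0123456789abcdef0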

To test the policy

  1. Switch to the Security Hub administrator account and open the Security Hub console in the same Region where you created the ALB. On the Findings page, set the Title filter to Resource lacks Shield Advanced protection and set the Product name filter to Firewall Manager.
     
    Figure 12: Security Hub findings filter

    You should see a new security finding flagging the ALB as a noncompliant resource, according to the Shield Advanced policy defined in Firewall Manager. This confirms that Security Hub and Firewall Manager have been enabled correctly.
     

    Figure 13: Security Hub with a noncompliant resource finding

  2. With the automatic remediation feature enabled, you should see the “Updated at” time reflect exactly when the automatic remediation actions were completed. The completion of the automatic remediation actions can take up to 5 minutes to be reflected in Security Hub.
     
    Figure 14: Security Hub with an auto-remediated compliance finding

  3. Go back to the account where you created the ALB, open the Shield console, and navigate to the Protected resources page, where you should see the ALB listed as a protected resource.
     
    Figure 15: Shield console in the member account shows that the new ALB is a protected resource

    Confirming that the ALB has been added automatically as a Shield Advanced–protected resource means that you have successfully configured the Firewall Manager and Security Hub integration.

(Optional): Send a custom action to a third-party provider

You can send all regional Security Hub findings to a ticketing system, Slack, AWS Chatbot, a Security Information and Event Management (SIEM) tool, a Security Orchestration, Automation, and Response (SOAR) platform, incident management tools, or custom remediation playbooks by using Security Hub custom actions.
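As a rough sketch of how that wiring starts, the following AWS CLI commands create a custom action and an Amazon EventBridge rule that matches findings sent to it; the action name and ID are placeholders, and you still need to attach a rule target (for example, an SNS topic or a Lambda function that calls your ticketing system).

# Create a Security Hub custom action (run in the Security Hub administrator account)
aws securityhub create-action-target \
  --name "Send to ticketing" \
  --description "Send selected findings to the ticketing system" \
  --id SendToTicketing

# Match findings that an analyst sends to that custom action
aws events put-rule \
  --name securityhub-send-to-ticketing \
  --event-pattern '{"source":["aws.securityhub"],"detail-type":["Security Hub Findings - Custom Action"],"resources":["arn:aws:securityhub:ap-southeast-2:111122223333:action/custom/SendToTicketing"]}'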

Conclusion

In this blog post, I showed you how to set up a Firewall Manager security policy for Shield Advanced so that you can monitor your applications for DDoS events and track their compliance with DDoS protection policies across your multi-account environment from the Security Hub findings console. In line with best practices for account governance, organizations should have a centralized security account that performs monitoring for multiple accounts. Security Hub and Firewall Manager provide a centralized solution to help you achieve your compliance and monitoring goals for DDoS protection.

If you’re interested in exploring how Shield Advanced and AWS WAF help to improve the security posture of your application, have a look at the following resources:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Fola Bolodeoku

Fola is a Security Engineer on the AWS Threat Research Team, where he focuses on helping customers improve their application security posture against DDoS and other application threats. When he is not working, he enjoys spending time exploring the natural beauty of the Western Cape.

Use AWS Firewall Manager to deploy protection at scale in AWS Organizations

Post Syndicated from Chamandeep Singh original https://aws.amazon.com/blogs/security/use-aws-firewall-manager-to-deploy-protection-at-scale-in-aws-organizations/

Security teams that are responsible for securing workloads in hundreds of Amazon Web Services (AWS) accounts in different organizational units aim for a consistent approach across AWS Organizations. Key goals include enforcing preventative measures to mitigate known security issues, having a central approach for notifying the SecOps team about potential distributed denial of service (DDoS) attacks, and continuing to maintain compliance obligations. AWS Firewall Manager works at the organizational level to help you achieve your intended security posture while it provides reporting for non-compliant resources in all your AWS accounts. This post provides step-by-step instructions to deploy and manage security policies across your AWS Organizations implementation by using Firewall Manager.

You can use Firewall Manager to centrally manage AWS WAF, AWS Shield Advanced, and Amazon Virtual Private Cloud (Amazon VPC) security groups across all your AWS accounts. Firewall Manager helps to protect resources across different accounts, and it can protect resources with specific tags or resources in a group of AWS accounts that are in specific organizational units (OUs). With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes.

Architecture diagram

Figure 1 shows an example organizational structure in AWS Organizations, with several OUs that we’ll use in the example policy sets in this blog post.

Figure 1: AWS Organizations and OU structure

Firewall Manager can be associated with either the AWS Organizations management (payer) account or one of the member AWS accounts that has appropriate permissions as a delegated administrator. Following the best practices for organizational units, in this post we use a dedicated Security Tooling AWS account (named Security in the diagram) to operate the Firewall Manager administrator deployment under the Security OU. The Security OU is used for hosting security-related access and services. The Security OU, its child OUs, and the associated AWS accounts should be owned and managed by your security organization.

Firewall Manager prerequisites

Firewall Manager has the following prerequisites that you must complete before you create and apply a Firewall Manager policy:

  1. AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an organization and Enabling all features in your organization.
  2. A Firewall Manager administrator account: You must designate one of the AWS accounts in your organization as the Firewall Manager administrator. This gives the account permission to deploy security policies across the organization (a CLI sketch for this designation follows this list).
  3. AWS Config: You must enable AWS Config for all of the accounts in your organization so that Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, use the Enable AWS Config template from the StackSets sample templates.
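For the administrator account designation in prerequisite 2, a one-line AWS CLI sketch follows; it assumes you run it from the AWS Organizations management account, and the account ID is a placeholder.

# Designate the Firewall Manager administrator account (run from the Organizations management account)
aws fms associate-admin-account --admin-account 111122223333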

Deployment of security policies

In the following sections, we explain how to create AWS WAF rules, Shield Advanced protections, and Amazon VPC security groups by using Firewall Manager. We further explain how you can deploy these different policy types to protect resources across your accounts in AWS Organizations. Each Firewall Manager policy is specific to an individual resource type. If you want to enforce multiple policy types across accounts, you should create multiple policies. You can create more than one policy for each type. If you add a new account to an organization that you created with AWS Organizations, Firewall Manager automatically applies the policy to the resources in that account that are within scope of the policy. This is a scalable approach to assist you in deploying the necessary configuration when developers create resources. For instance, you can create an AWS WAF policy that will result in a known set of AWS WAF rules being deployed whenever someone creates an Amazon CloudFront distribution.

Policy 1: Create and manage security groups

You can use Firewall Manager to centrally configure and manage Amazon VPC security groups across all your AWS accounts in AWS Organizations. A previous AWS Security blog post walks you through how to apply common security group rules, audit your security groups, and detect unused and redundant rules in your security groups across your AWS environment.

Firewall Manager automatically audits new resources and rules as customers add resources or security group rules to their accounts. You can audit overly permissive security group rules, such as rules with a wide range of ports or Classless Inter-Domain Routing (CIDR) ranges, or rules that have enabled all protocols to access resources. To audit security group policies, you can use application and protocol lists to specify what’s allowed and what’s denied by the policy.

In this blog post, we use a security policy to audit security groups for overly permissive rules and for high-risk applications that should only be open to local CIDR ranges (for example, 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). We created a custom application list named Bastion Host for port 22 and a custom protocol list named Allowed Protocols that allows the child accounts to create rules only for the TCP protocol. See the Firewall Manager documentation for how to create custom managed application and protocol lists.
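The following AWS CLI sketch shows one way to create those two custom lists; it assumes SSH on TCP port 22 for the Bastion Host application list and TCP as the only allowed protocol, and is run from the Firewall Manager delegated administrator account.

# Custom protocol list that permits rules only for TCP
aws fms put-protocols-list \
  --protocols-list '{"ListName":"Allowed Protocols","ProtocolsList":["TCP"]}'

# Custom application list for bastion host access on TCP port 22
aws fms put-apps-list \
  --apps-list '{"ListName":"Bastion Host","AppsList":[{"AppName":"SSH","Protocol":"TCP","Port":22}]}'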

To create audit security group policies

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, select Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources; the Region selection is in the drop-down menu on the service page. In this example, we selected the Sydney (ap-southeast-2) Region because we have all of our resources in the Sydney Region.
  3. Create the policy, and in Policy details, choose Security group. For Region, select a Region (we selected Sydney (ap-southeast-2)), and then choose Next.
  4. For Security group policy type, choose Auditing and enforcement of security group rules, and then choose Next.
  5. Enter a policy name. We named our policy AWS_FMS_Audit_SecurityGroup.
  6. For Policy rule options, for this example, we chose Configure managed audit policy rules.
  7. Under Policy rules, choose the following:
    1. For Security group rules to audit, choose Inbound Rules.
    2. For Rules, select the following:
      1. Select Audit over permissive security group rules.
        • For Allowed security group rules, choose Add Protocol list and select the custom protocol list Allowed Protocols that we created earlier.
        • For Denied security group rules, select Deny rules with the allow ‘ALL’ protocol.
      2. Select Audit high risk applications.
        • Choose Applications that can only access local CIDR ranges. Then choose Add application list and select the custom application list Bastion host that we created earlier.
  8. For Policy action, for the example in this post, we chose Auto remediate any noncompliant resources. Choose Next.

    Figure 2: Policy rules for the security group audit policy

  9. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included Organizational units, select OU (example – Non-Prod Accounts).
    2. For Resource type, select EC2 Instance, Security Group, and Elastic Network Interface.
    3. For Resources, choose Include all resources that match the selected resource type.
  10. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Audit_Security_group.

Important: Migrating an AWS account from one organizational unit to another won’t remove or detach the existing security group policy applied by Firewall Manager. For example, in the reference architecture in Figure 1 we have the AWS account Tenant-5 under the Staging OU. We’ve created a different Firewall Manager security group policy for the Pre-Prod OU and Prod OU. If you move the Tenant-5 account from the Staging OU to the Prod OU, the resources associated with Tenant-5 will continue to have the security group policies that are defined for both the Prod and Staging OUs unless you select otherwise before relocating the AWS account. Firewall Manager offers the detach option only when a policy is deleted, because moving accounts across OUs can have unintended impacts, such as loss of connectivity or protection, and therefore Firewall Manager won’t remove the security group automatically.

Policy 2: Managing AWS WAF v2 policy

A Firewall Manager AWS WAF policy contains the rule groups that you want to apply to your resources. When you apply the policy, Firewall Manager creates a Firewall Manager web access control list (web ACL) in each account that’s within the policy scope.

Note: Creating an Amazon Kinesis Data Firehose delivery stream in us-east-1 (for example, aws-waf-logs-lab-waf-logs) is a prerequisite for configuring web ACL logging in step 8.
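A hedged CLI sketch for that prerequisite follows; the IAM role and S3 bucket ARNs are placeholders, and the delivery stream name must begin with aws-waf-logs- for AWS WAF to accept it.

# Create the Kinesis Data Firehose delivery stream for AWS WAF logs (us-east-1, because the policy protects CloudFront)
aws firehose create-delivery-stream \
  --region us-east-1 \
  --delivery-stream-name aws-waf-logs-lab-waf-logs \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration RoleARN=arn:aws:iam::111122223333:role/firehose-waf-logs-role,BucketARN=arn:aws:s3:::example-waf-logs-bucket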

To create a Firewall Manager – AWS WAF v2 policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select a Region from the drop-down menu on the service page. For this example, we selected Global, because the policy is meant to protect CloudFront resources.
  3. Create the policy. Under Policy details, choose AWS WAF and for Region, choose Global. Then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_WAF_Rule.
  5. On the Policy rule page, under Web ACL configuration, add rule groups. AWS WAF supports custom rule groups (the customer creates the rules), AWS Managed Rules rule groups (AWS manages the rules), and AWS Marketplace managed rule groups. For this example, we chose AWS Managed Rules rule groups.
  6. For this example, for First rule groups, we chose the AWS Managed Rules rule group, AWS Core rule set. For Last rule groups, we chose the AWS Managed Rules rule group, Amazon IP reputation list.
  7. For Default web ACL action for requests that don’t match any rules in the web ACL, choose a default action. We chose Allow.
  8. Firewall Manager enables logging for a specific web ACL. This logging is applied to all the in-scope accounts and delivers the logs to a centralized single account. To enable centralized logging of AWS WAF logs:
    1. For Logging configuration status, choose Enabled.
    2. For IAM role, Firewall Manager creates an AWS WAF service-role for logging. Your security account should have the necessary IAM permissions. Learn more about access requirements for logging.
    3. Select the Kinesis Data Firehose delivery stream that you created earlier, called aws-waf-logs-lab-waf-logs, in us-east-1, because we’re using CloudFront as the resource type in this policy.
    4. For Redacted fields, for this example select HTTP method, Query String, URI, and Header. You can also add a new header. For more information, see Configure logging for an AWS Firewall Manager AWS WAF policy.
  9. For Policy action, for this example, we chose Auto remediate any noncompliant resources. To replace the existing web ACL that is currently associated with the resource, select Replace web ACLs that are currently associated with in-scope resources with the web ACLs created by this policy. Choose Next.

    Note: If a resource is associated with another web ACL that is managed by a different active Firewall Manager policy, this policy doesn’t affect that resource.

    Figure 3: Policy rules for the AWS WAF security policy

  10. For Policy scope, choose the following options for this example:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational unit. For Included organizational units, select OU (example – Pre-Prod Accounts).
    2. For Resource type, choose CloudFront distribution.
    3. For Resources, choose Include all resources that match the selected resource type.
  11. You can create tags for the security policy. For the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to WAF_Policy.
  12. Review the security policy, and then choose Create Policy.

    Note: For the AWS WAF v2 policy, the web ACL that Firewall Manager pushes can’t be modified in the individual account; the account owner can only add new rule groups.

  13. In the policy’s first and last rule group sets, you can add additional rule groups at the linked AWS account level to provide additional security based on application requirements. You can use managed rule groups, which AWS Managed Rules and AWS Marketplace sellers create and maintain for you. For example, you can use the WordPress application rule group, which contains rules that block request patterns associated with the exploitation of vulnerabilities specific to a WordPress site. You can also manage and use your own rule groups. For more information about all of these options, see Rule groups. Another example is a rate-based rule that tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. Learn more about rate-based rules.

Policy 3: Managing AWS Shield Advanced policy

AWS Shield Advanced is a paid service that provides additional protections for internet-facing applications. If you have Business or Enterprise Support, you can engage the 24/7 AWS DDoS Response Team (DRT), who can write rules on your behalf to mitigate Layer 7 DDoS attacks. Refer to the Shield Advanced pricing page for more information before you proceed with the Shield Advanced Firewall Manager policy.

After you complete the prerequisites that were outlined earlier, you’ll create a Shield Advanced policy that contains the accounts and resources you want to protect with Shield Advanced. The purpose of this policy is to activate Shield Advanced in the accounts within the OU’s scope and to add the selected resources to the Shield Advanced protection list.

To create a Firewall Manager – Shield Advanced policy

  1. Sign in to the Firewall Manager delegated administrator account. Navigate to the Firewall Manager console. In the left navigation pane, under AWS Firewall Manager, choose Security policies.
  2. For Region, select the AWS Region where you would like to protect the resources; the Region selection is in the drop-down menu on the service page. In this post, we’ve selected the Sydney (ap-southeast-2) Region because all of our resources are in the Sydney Region.

    Note: To protect CloudFront resources, select the Global option.

  3. Create the policy, and in Policy details, choose AWS Shield Advanced. For Region, select a Region (example – ap-southeast-2), and then choose Next.
  4. Enter a policy name. We named our policy AWS_FMS_ShieldAdvanced Rule.
  5. For Policy action, for the example in this post, we chose Auto remediate any non-compliant resources. Alternatively, if you choose Create but do not apply this policy to existing or new resources, Firewall Manager doesn’t apply Shield Advanced protection to any resources. You must apply the policy to resources later. Choose Next.
  6. For Policy scope, this example uses the OU structure as the container of multiple accounts with similar requirements:
    1. For AWS accounts this policy applies to, choose Include only the specified accounts and organizational units. For Included organizational units, select OU (example – Staging Accounts OU).
    2. For Resource type, select Application Load Balancer and Elastic IP.
    3. For Resources, choose Include all resources that match the selected resource type.
      Figure 4: Policy scope page for creating the Shield Advanced security policy

      Note: If you want to protect only the resources with specific tags, or alternatively exclude resources with specific tags, choose Use tags to include/exclude resources, enter the tags, and then choose either Include or Exclude. Tags enable you to categorize AWS resources in different ways, for example by indicating an environment, owner, or team to include or exclude in Firewall Manager policy. Firewall Manager combines the tags with “AND” so that, if you add more than one tag to a policy scope, a resource must have all the specified tags to be included or excluded.

      Important: Shield Advanced supports protection for Amazon Route 53 and AWS Global Accelerator. However, protection for these resources can’t be deployed by using a Firewall Manager security policy at this time. If you need to protect these resources with Shield Advanced, use individual AWS account access through the API or console to activate Shield Advanced protection for the intended resources (see the CLI sketch after this procedure).

  7. You can create tags for the security policy. In the example in this post, Tag Key is set to Firewall_Manager and Tag Value is set to Shield_Advanced_Policy. You can use the tags in the Resource element of IAM permission policy statements to either allow or deny users to make changes to security policy.
  8. Review the security policy, and then choose Create Policy.

Now you’ve successfully created a Firewall Manager security policy. Because we used organizational units in AWS Organizations to scope the policy, when you add an account to the OU or to any of its child OUs, Firewall Manager automatically applies the policy to the new account.

Important: You don’t need to manually subscribe Shield Advanced on the member accounts. Firewall Manager subscribes Shield Advanced on the member accounts as part of creating the policy.
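For the resource types called out in the note for step 6, such as Route 53 hosted zones and Global Accelerator accelerators, you can still add Shield Advanced protection directly in the account that owns the resource; in the following sketch, the hosted zone ARN is a placeholder.

# Protect a Route 53 hosted zone directly with Shield Advanced (run in the resource owner account)
aws shield create-protection \
  --name example-hosted-zone-protection \
  --resource-arn arn:aws:route53:::hostedzone/Z0123456789ABCDEFGHIJ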

Operational visibility and compliance report

Firewall Manager offers centralized notification of DDoS incidents that are reported by Shield Advanced. You can create an Amazon SNS topic to monitor the protected resources for potential DDoS activity and send notifications accordingly. Learn how to create an SNS topic. If you have resources in different Regions, the SNS topic needs to be created in each intended Region. You must perform this step from the Firewall Manager delegated AWS account (for example, Security Tooling) to receive alerts across your AWS accounts in that organization.

As a best practice, you should set up notifications for all the Regions where you have a production workload under Shield Advanced protection.
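If you want to create the topic and subscription ahead of time with the AWS CLI, a minimal sketch follows; the account ID and email address are placeholders, and the topic name matches the one used in the console steps below.

# Create the SNS topic in the Region where the protected workloads run
aws sns create-topic --name SNS_Topic_Syd --region ap-southeast-2

# Subscribe an email endpoint so the SecOps team receives notifications
aws sns subscribe \
  --region ap-southeast-2 \
  --topic-arn arn:aws:sns:ap-southeast-2:111122223333:SNS_Topic_Syd \
  --protocol email \
  --notification-endpoint secops@example.com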

To create an SNS topic in the Firewall Manager administrative console

  1. In the AWS Management Console, sign in to the Security Tooling account or the AWS Firewall Manager delegated administrator account. In the left navigation pane, under AWS Firewall Manager, choose Settings.
  2. Select the SNS topic that you created earlier to be used for the Firewall Manager central notification mechanism. For this example, we created a new SNS topic in the Sydney Region (ap-southeast-2) named SNS_Topic_Syd.
  3. For Recipient email address, enter the email address that the SNS topic will be sent to. Choose Configure SNS configuration.

After you create the SNS configuration, you can see the SNS topic in the appropriate Region, as in the following example.

Figure 5: An SNS topic for centralized incident notification

AWS Shield Advanced records metrics in Amazon CloudWatch to monitor the protected resources and can also create Amazon CloudWatch alarms. For simplicity, we used email notification for this example. In a security operations environment, you should integrate the SNS notifications with your existing ticketing system or paging tool for real-time response.

Important: You can also use the CloudWatch dashboard to monitor potential DDoS activity. It collects and processes raw data from Shield Advanced into readable, near real-time metrics.

You can automatically enforce policies on AWS resources that currently exist or are created in the future, in order to promote compliance with firewall rules across the organization. For all policies, you can view the compliance status for in-scope accounts and resources by using the API or AWS Command Line Interface (AWS CLI) method. For content audit security group policies, you can also view detailed violation information for in-scope resources. This information can help you to better understand and manage your security risk.

View all the policies in the Firewall Manager administrative account

For our example, we created three security policies in the Firewall Manager delegated administrator account. We can check policy compliance status for all three policies by using the AWS Management Console, AWS CLI, or API methods. The AWS CLI example that follows can be further extended to build an automation for notifying the non-compliant resource owners.

To list all the policies in FMS

 aws fms list-policies --region ap-southeast-2
{
    "PolicyList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "RemediationEnabled": false, 
            "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:222222222222:policy/78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "SecurityServiceType": "WAFV2", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58"
        },
        {
            "PolicyName": "AWS_FMS_Audit_SecurityGroup", 
            "RemediationEnabled": true, 
            "ResourceType": "ResourceTypeList", 
            "PolicyArn": "arn:aws:fms:ap-southeast-2:<Account-Id>:policy/d44f3f38-ed6f-4af3-b5b3-78e9583051cf", 
            "SecurityServiceType": "SECURITY_GROUPS_CONTENT_AUDIT", 
            "PolicyId": "d44f3f38-ed6f-4af3-b5b3-78e9583051cf"
        }
    ]
}

Now that we have the policy ID, we can check the compliance status:

aws fms list-compliance-status --policy-id 78edcc79-c0b1-46ed-b7b9-d166b9fd3b58
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "WAFV2-Test2", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601360994.0, 
            "MemberAccount": "444444444444", 
            "PolicyId": "78edcc79-c0b1-46ed-b7b9-d166b9fd3b58", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 0, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, member account 444444444444, which is associated with the policy, is compliant. The following example shows the status for the second policy.

aws fms list-compliance-status --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d
{
    "PolicyComplianceStatusList": [
        {
            "PolicyName": "AWS_FMS_WAF_Rule", 
            "PolicyOwner": "222222222222", 
            "LastUpdated": 1601361231.0, 
            "MemberAccount": "555555555555", 
            "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
            "IssueInfoMap": {}, 
            "EvaluationResults": [
                {
                    "ViolatorCount": 3, 
                    "EvaluationLimitExceeded": false, 
                    "ComplianceStatus": "NON_COMPLIANT"
                }
            ]
        }
    ]
}

For the preceding policy, member account 555555555555, which is associated with the policy, is non-compliant.

To provide detailed compliance information about the specified member account, the output includes resources that are in and out of compliance with the specified policy, as shown in the following example.

aws fms get-compliance-detail --policy-id 44c0b677-e7d4-4d8a-801f-60be2630a48d --member-account 555555555555
{
    "PolicyComplianceDetail": {
        "Violators": [
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2: 555555555555:loadbalancer/app/FMSTest2/c2da4e99d4d13cf4", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }, 
            {
                "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer", 
                "ResourceId": "arn:aws:elasticloadbalancing:ap-southeast-2:555555555555:loadbalancer/app/fmstest/1e70668ce77eb61b", 
                "ViolationReason": "RESOURCE_MISSING_WEB_ACL"
            }
        ], 
        "EvaluationLimitExceeded": false, 
        "PolicyOwner": "222222222222", 
        "ExpiredAt": 1601362402.0, 
        "MemberAccount": "555555555555", 
        "PolicyId": "44c0b677-e7d4-4d8a-801f-60be2630a48d", 
        "IssueInfoMap": {}
    }
}

In the preceding example, two Application Load Balancers (ALBs) are not associated with a web ACL. You can further introduce automation by using AWS Lambda functions to isolate the non-compliant resources or trigger an alert for the account owner to launch manual remediation.
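Before reaching for Lambda, you can prototype that automation as a small AWS CLI loop; the policy ID is the one from the example above, and the JMESPath queries are just one possible way to filter for noncompliant accounts.

#!/bin/bash
# List member accounts that are NON_COMPLIANT for a policy, then print their violating resources
POLICY_ID="44c0b677-e7d4-4d8a-801f-60be2630a48d"

for account in $(aws fms list-compliance-status --policy-id "$POLICY_ID" \
  --query "PolicyComplianceStatusList[?EvaluationResults[?ComplianceStatus=='NON_COMPLIANT']].MemberAccount" \
  --output text); do
  echo "Noncompliant account: $account"
  aws fms get-compliance-detail --policy-id "$POLICY_ID" --member-account "$account" \
    --query "PolicyComplianceDetail.Violators[].{Resource:ResourceId,Reason:ViolationReason}" \
    --output table
done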

Resource cleanup

You can delete a Firewall Manager policy by performing the following steps.

To delete a policy (console)

  1. In the navigation pane, choose Security policies.
  2. Choose the option next to the policy that you want to delete. We created three policies, which need to be deleted one by one.
  3. Choose Delete.

Important: When you delete a Firewall Manager Shield Advanced policy, the policy is deleted, but your accounts remain subscribed to Shield Advanced.

Conclusion

In this post, you learned how you can use Firewall Manager to enforce required preventative policies from a central delegated AWS account managed by your security team. You can extend this strategy to all AWS OUs to meet your future needs as new AWS accounts or resources get added to AWS Organizations. A central notification delivery to your Security Operations team is crucial from a visibility perspective, and with the help of Firewall Manager you can build a scalable approach to stay protected, informed, and compliant. Firewall Manager simplifies your AWS WAF, AWS Shield Advanced, and Amazon VPC security group administration and maintenance tasks across multiple accounts and resources.

For further reading and updates, see the Firewall Manager Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chamandeep Singh

Chamandeep is a Senior Technical Account Manager and member of the Global Security field team at AWS. He works with financial sector enterprise customers to support operations and security, and also designs scalable cloud solutions. He currently lives in Australia and enjoys traveling around the world.

Author

Prabhakaran Thirumeni

Prabhakaran is a Cloud Architect with AWS, specializing in network security and cloud infrastructure. His focus is helping customers design and build solutions for their enterprises. Outside of work he stays active with badminton, running, and exploring the world.

AWS Firewall Manager helps automate security group management: 3 scenarios

Post Syndicated from Sonakshi Pandey original https://aws.amazon.com/blogs/security/aws-firewall-manager-helps-automate-security-group-management-3-scenarios/

In this post, we walk you through scenarios that use AWS Firewall Manager to centrally manage security groups across your AWS Organizations implementation. Firewall Manager is a security management tool that helps you centralize, configure, and maintain AWS WAF rules, AWS Shield Advanced protections, and Amazon Virtual Private Cloud (Amazon VPC) security groups across AWS Organizations.

A multi-account strategy provides the highest level of resource isolation, and helps you to efficiently track costs and avoid running into any API limits. Creating a separate account for each project, business unit, and development stage also enforces logical separation of your resources.

As organizations innovate, developers are constantly updating applications and, in the process, setting up new resources. Managing security groups for new resources across multiple accounts becomes complex as the organization grows. To enable developers to have control over the configuration of their own applications, you can use Firewall Manager to automate the auditing and management of VPC security groups across multiple Amazon Web Services (AWS) accounts.

Firewall Manager enables you to create security group policies and automatically implement them. You can do this across your entire organization, or limit it to specified accounts and organizational units (OU). Also, Firewall Manager lets you use AWS Config to identify and review resources that don’t comply with the security group policy. You can choose to view the accounts and resources that are out of compliance without taking corrective action, or to automatically remediate noncompliant resources.

Scenarios where AWS Firewall Manager can help manage security groups

Scenario 1: Central security group management for required security groups

Let’s consider an example where you’re running an ecommerce website. You’ve decided to use Organizations to centrally manage billing and several aspects of access, compliance, security, and sharing resources across AWS accounts. As shown in the following figure, AWS accounts that belong to the same team are grouped into OUs. In this example, the organization has a foundational OU, and multiple business OUs—ecommerce, digital marketing, and product.

Figure 1: Overview of ecommerce website

The business OUs contain the development, test, and production accounts. Each of these accounts is managed by the developers in charge of development, test, and production stages used for the launch of the ecommerce website.

The product teams are responsible for configuring and maintaining the AWS environment according to the guidance from the security team. An intrusion detection system (IDS) has been set up to monitor infrastructure for security activity. The IDS architecture requires that an agent be installed on instances across multiple accounts. The IDS agent running on the Amazon Elastic Compute Cloud (Amazon EC2) instances protects their infrastructure from common security issues. The agent collects telemetry data used for analysis, and communicates with the central IDS instance that sits in the AWS security account. The central IDS instance analyzes the telemetry data and notifies the administrators with its findings.

For the host-based agent to communicate with the central system correctly, each Amazon EC2 instance must have specific inbound and outbound ports and specific destinations defined as allowed. To enable our product teams to focus on their applications, we want to use automation to ensure that the right network configuration is implemented so that instances can communicate with the central IDS.

You can address the preceding problem with Firewall Manager by implementing a common security group policy for required accounts. With Firewall Manager, you create a common IDS security group in the central security account and replicate it across other accounts in the ecommerce OU, as shown in the following figure.

Figure 2: Security groups central management with Firewall Manager
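As an illustration, the common security group could be created in the security account with the AWS CLI before Firewall Manager replicates it; the VPC ID, port, and CIDR range are placeholders for whatever your IDS agent actually requires.

# Create the common IDS security group in the central security account (placeholder values)
aws ec2 create-security-group \
  --group-name ids-agent-common \
  --description "Common security group for IDS agent communication" \
  --vpc-id vpc-0123456789abcdef0

# Allow inbound traffic from the central IDS range on an assumed agent port (use the group ID returned above)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8443 \
  --cidr 10.0.0.0/16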

Changes made to these security groups can be seamlessly propagated to all the accounts. The changes can be tracked from the Firewall Manager console as shown in figure 3. Firewall Manager propagates changes to the security groups based on the tags attached to the Amazon EC2 instance.

As shown in figure 3, with Firewall Manager you can quickly view the compliance status for each policy by looking at how many accounts are included in the scope of the policy and how many out of those are compliant or non-compliant. Firewall Manager is also integrated with AWS Security Hub, which can trigger security automation based on findings.

Figure 3: Firewall Manager findings

Scenario 2: Clean-up of unused and redundant security groups

Firewall Manager can also help manage the clean-up of unused and redundant security groups. In a development environment, instances are often terminated post testing, but the security groups associated with those instances might remain. We want to only remove the security groups that are no longer in use to avoid causing issues with running applications.

Figure 4: Ecommerce OU, accounts, and security groups

In our example, developers are testing features in a test account. In this scenario, once the testing is completed, the instances are terminated and the security groups remain in the account. The preceding figure shows unused security groups like Test1, Test2, and Test3 in the test account.

A Firewall Manager usage audit security group policy monitors your organization for unused and redundant security groups. You can configure Firewall Manager to automatically notify you of unused, redundant, or non-compliant security groups, and to automatically remove them. These actions are applied to existing and new accounts that are added to your organization.

Scenario 3: Audit and remediate overly permissive security groups across all AWS accounts

The security team is responsible for maintaining the security of the AWS environment and must monitor and remediate overly permissive security groups across all AWS accounts. Auditing security groups for overly permissive access is a critical security function and can become inefficient and time consuming when done manually.

You can use a Firewall Manager content audit security group policy to audit and enforce your organization’s security policy for risky security group rules, most commonly expressed as allowed or blocked security group rules. This enables you to set guardrails and monitor for overly permissive rules centrally. For example, we set an allow list policy to permit Secure Shell (SSH) access only from authorized IP addresses on the corporate network.

Firewall Manager enables you to create security group policies to protect all accounts across your organization. These policies are applied to accounts or to OUs that contain specific tags, as shown in figure 5. Using the Firewall Manager console, you can get a quick view of the non-compliant security groups across accounts in your organization. Additionally, Firewall Manager can be configured to send notifications to the security administrators or automatically remove non-compliant security groups.

In the policy scope, you can choose the AWS accounts this policy applies to, the resource type, and which resource to include based on the resource tags, as shown in figure 5.

Figure 5: Edit tags for policy scope

Conclusion

This post shares a few core use cases that enable security practitioners to build the capability to centrally manage security groups across AWS Organizations. Developers can focus on building applications, while the audit and configuration of network controls is automated by Firewall Manager. The key use cases we discussed are:

  1. Common security group policies
  2. Content audit security groups policies
  3. Usage audit security group policies

Firewall Manager is useful in a dynamic and growing multi-account AWS environment. Follow the Getting Started with Firewall Manager guide to learn more about implementing this service in your AWS environment.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonakshi Pandey

Sonakshi is a Solutions Architect at Amazon Web Services. She helps customers migrate and optimize workloads on AWS. Sonakshi is based in Seattle and enjoys cooking, traveling, blogging, reading thriller novels, and spending time with her family.

Author

Laura Reith

Laura is a Solutions Architect at Amazon Web Services. Before AWS, she worked as a Solutions Architect in Taiwan focusing on physical security and retail analytics.

Author

Kevin Moraes

Kevin is a Partner Solutions Architect with AWS. Kevin enjoys working with customers and helping to build them in areas of Network Infrastructure, Security, and Migration conforming to best practices. When not at work, Kevin likes to travel, watch sports, and listen to music.

Introducing the AWS Best Practices for Security, Identity, & Compliance Webpage and Customer Polling Feature

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/introducing-aws-best-practices-security-identity-compliance-webpage-and-customer-polling-feature/

The AWS Security team has made it easier for you to find information and guidance on best practices for your cloud architecture. We’re pleased to share the Best Practices for Security, Identity, & Compliance webpage of the new AWS Architecture Center. Here you’ll find top recommendations for security design principles, workshops, and educational materials, and you can browse our full catalog of self-service content including blogs, whitepapers, videos, trainings, reference implementations, and more.

We’re also running polls on the new AWS Architecture Center to gather your feedback. Want to learn more about how to protect account access? Or are you looking for recommendations on how to improve your incident response capabilities? Let us know by completing the poll. We will use your answers to help guide security topics for upcoming content.

Poll topics will change periodically, so bookmark the Security, Identity, & Compliance webpage for easy access to future questions, or to submit your topic ideas at any time. Our first poll, which asks what areas of the Well-Architected Security Pillar are most important for your use, is available now. We look forward to hearing from you.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

New third-party test compares Amazon GuardDuty to network intrusion detection systems

Post Syndicated from Tim Winston original https://aws.amazon.com/blogs/security/new-third-party-test-compares-amazon-guardduty-to-network-intrusion-detection-systems/

A new whitepaper is available that summarizes the results of tests by Foregenix comparing Amazon GuardDuty with network intrusion detection systems (IDS) on threat detection of network layer attacks. GuardDuty is a cloud-centric IDS service that uses Amazon Web Services (AWS) data sources to detect a broad range of threat behaviors. Security engineers need to understand how Amazon GuardDuty compares to traditional solutions for network threat detection. Assessors have also asked for clarity on the effectiveness of GuardDuty for meeting compliance requirements, like Payment Card Industry (PCI) Data Security Standard (DSS) requirement 11.4, which requires intrusion detection techniques to be implemented at critical points within a network.

A traditional IDS typically relies on monitoring network traffic at specific network traffic control points, like firewalls and host network interfaces. This allows the IDS to use a set of preconfigured rules to examine incoming data packet information and identify patterns that closely align with network attack types. Traditional IDS have several challenges in the cloud:

  • Networks are virtualized. Data traffic control points are decentralized and traffic flow management is a shared responsibility with the cloud provider. This makes it difficult or impossible to monitor all network traffic for analysis.
  • Cloud applications are dynamic. Features like auto-scaling and load balancing continuously change how a network environment is configured as demand fluctuates.

Most traditional IDS require experienced technicians to maintain their effective operation and avoid the common issue of receiving an overwhelming number of false positive findings. As a compliance assessor, I have often seen IDS intentionally de-tuned to address the false positive finding reporting issue when expert, continuous support isn’t available.

GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon Virtual Private Cloud (Amazon VPC) flow logs, and Amazon Route 53 DNS logs. This gives GuardDuty the ability to analyze event data, such as AWS API calls, including AWS Identity and Access Management (IAM) login events, which is beyond the capabilities of traditional IDS solutions. Monitoring AWS API calls from CloudTrail also enables threat detection for AWS serverless services, which sets it apart from traditional IDS solutions. However, without inspection of packet contents, the question remained, “Is GuardDuty truly effective in detecting network level attacks that more traditional IDS solutions were specifically designed to detect?”

AWS asked Foregenix to conduct a test that would compare GuardDuty to market-leading IDS to help answer this question for us. AWS didn’t specify any particular attacks or architecture to be implemented within the test. It was left up to the independent tester to determine both the threat space covered by market-leading IDS and how to construct a test for determining the effectiveness of the threat detection capabilities of GuardDuty and traditional IDS solutions, which included open-source and commercial IDS.

Foregenix configured a lab environment to support tests that used extensive and complex attack playbooks. The lab environment simulated a real-world deployment composed of a web server, a bastion host, and an internal server used for centralized event logging. The environment was left running under normal operating conditions for more than 45 days. This allowed all tested solutions to build up a baseline of normal data traffic patterns prior to the anomaly detection testing exercises that followed this activity.

Foregenix determined that GuardDuty is at least as effective at detecting network level attacks as other market-leading IDS. They found GuardDuty to be simple to deploy and required no specialized skills to configure the service to function effectively. Also, with its inherent capability of analyzing DNS requests, VPC flow logs, and CloudTrail events, they concluded that GuardDuty was able to effectively identify threats that other IDS could not natively detect and required extensive manual customization to detect in the test environment. Foregenix recommended that adding a host-based IDS agent on Amazon Elastic Compute Cloud (Amazon EC2) instances would provide an enhanced level of threat defense when coupled with Amazon GuardDuty.

As a PCI Qualified Security Assessor (QSA) company, Foregenix states that they consider GuardDuty as a qualifying network intrusion technique for meeting PCI DSS requirement 11.4. This is important for AWS customers whose applications must maintain PCI DSS compliance. Customers should be aware that individual PCI QSAs might have different interpretations of the requirement, and should discuss this with their assessor before a PCI assessment.

Customer PCI QSAs can also speak with AWS Security Assurance Services, an AWS Professional Services team of PCI QSAs, to obtain more information on how customers can leverage AWS services to help them maintain PCI DSS Compliance. Customers can request Security Assurance Services support through their AWS Account Manager, Solutions Architect, or other AWS support.

We invite you to download the Foregenix Amazon GuardDuty Security Review whitepaper to see the details of the testing and the conclusions provided by Foregenix.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tim Winston

Tim is a long-time security and compliance consultant and currently a PCI QSA with AWS Security Assurance Services.

How to think about cloud security governance

Post Syndicated from Paul Hawkins original https://aws.amazon.com/blogs/security/how-to-think-about-cloud-security-governance/

When customers first move to the cloud, their instinct might be to build a cloud security governance model based on one or more regulatory frameworks that are relevant to their industry. Although this can be a helpful first step, it’s also critically important that organizations understand what the control objectives for their workloads should be.

In this post, we discuss what you need to do both organizationally and technically with Amazon Web Services (AWS) to build an efficient and effective governance model. People who are taking their first steps in cloud can use this post to guide their thinking. It can also act as useful context for folks who have been running in the cloud for a while to evaluate their current governance approach.

But before you can build that model, it’s important to understand what governance is and to consider why you need it. Governance is how an organization ensures the consistent application of policies across all teams. The best way to implement consistent governance is by codifying as much of the process as possible. Security governance in particular is used to support business objectives by defining policies and controls to manage risk.

Moving to the cloud provides you with an opportunity to deliver features faster, react to the changing world in a more agile way, and return some decision making to the hands of the people closest to the business. In this fast-paced environment, it’s important to have a way to maintain consistency, scaleability, and security. This is where a strong governance model helps.

Creating the right governance model for your organization may seem like a complex task, but it doesn’t have to be.

Frameworks

Many customers use a standard framework that’s relevant to their industry to inform their decision-making process. Some frameworks that are commonly used to develop a security governance model include the NIST Cybersecurity Framework (CSF), the Information Security Registered Assessors Program (IRAP), the Payment Card Industry Data Security Standard (PCI DSS), and ISO/IEC 27001:2013.

Some of these standards provide requirements that are specific to a particular regulator or region, and others are more widely applicable. You should choose one that fits the needs of your organization.

While frameworks are useful to set the context for a security program and give guidance on governance models, you shouldn’t build either one only to check boxes on a particular standard. It’s critical that you build for security first and then use the compliance standards as a way to demonstrate that you’re doing the right things.

Control objectives

After you’ve selected a framework to use, the next considerations are controls. A control is a technical- or process-based implementation that’s designed to ensure that the likelihood or consequences of an identified risk are reduced to a level that’s acceptable to the organization’s risk appetite. Examples of controls include firewalls, logging mechanisms, access management tools, and many more.

Controls will evolve over time; sometimes they do so very quickly in the early stages of cloud adoption. During this rapid evolution, it’s easy to focus purely on the implementation of a control rather than the objective of it. However, if you want to build a robust and useful governance model, you must not lose sight of control objectives.

Consider the example of the firewall. When you use a firewall, you implement a control. The objective is to make sure that only traffic that should reach your environment is able to reach it. Although a firewall is one way to meet this objective, you can achieve the same outcome with a layered approach using Amazon Virtual Private Cloud (Amazon VPC) Security Groups, AWS WAF and Amazon VPC network access control lists (ACLs). Splitting the control implementation into multiple places can enable workload owners to have greater flexibility in how they configure resources while the baseline posture is delivered automatically.
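For instance, part of that layered objective could be expressed as a security group rule plus a matching network ACL entry; the IDs, port, and CIDR range below are illustrative placeholders, not a recommended configuration.

# Security group layer: allow HTTPS only from an approved range (placeholder IDs and CIDR)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24

# Network ACL layer: allow the same traffic at the subnet boundary
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=443,To=443 \
  --cidr-block 203.0.113.0/24 \
  --rule-action allow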

Not all areas of a business necessarily have the same cloud maturity level, or use the same methods to deploy or run workloads. As a security architect, your job is to help those different parts of the business deliver outcomes in the way that is appropriate for their maturity or particular workload.

The best way to help drive this goal is for the security part of your organization to clearly communicate the necessary control objectives. As a security architect, you’ll find it much easier to discuss what needs tweaking in an application when the objectives are well communicated, and much harder when the workload owner doesn’t know they’re expected to meet certain security standards.

What is the job of security?

At AWS, we talk to customers across a range of industries. One thing that consistently comes up in conversation is how to help customers understand the role of their security team in a distributed cloud-aware environment. The answer is always the same: we as security people are here to help the business deploy and run applications securely. Our job is to guide and educate the rest of the organization on the best way to meet the business objectives while meeting the security, risk, and compliance requirements.

So how do you do this?

Technology and culture are both important to an organization’s security posture, and they enable each other. AWS is a good example of an organization that has a strong culture of security ownership. One thing that all customers can take away from AWS: security is everyone’s job. When you understand that, it becomes easier to build the mechanisms that make the configuration and operation of appropriate security control objectives a reality.

The cloud environment that you build goes a long way to achieving this goal in two key ways. First, it provides guardrails and automated guidance for people building on the platform. Second, it allows solutions to scale.

One of the challenges organizations encounter is that there are more developers than there are security people. The traditional approach of point-in-time risk and control assessments performed by a human looking at an architecture diagram doesn’t scale. You need a way to scale that knowledge and capability without increasing the number of people. The best way to achieve this is to codify as much as possible, early in the build and release process.

One way to do this is to run the AWS platform as a product in its own right. Team members should be able to submit feature requests, and there should be metrics on the features that are enabled through the platform. The more security capability that teams building workloads can inherit from the platform, the less they have to implement at the workload level and the more time they can spend on product features. There will always be some security control objectives that can only be delivered by specific configuration at the workload level; this should build on top of what’s inherited from the cloud platform. Your security team and the other teams need to work together to make sure that the capabilities provided by the cloud platform are available to help people build and release securely.

One part of the governance model that we like to highlight is the concept of platform onboarding. The idea of this part of the governance model is to quickly and consistently get to a baseline set of controls that enable you to use a service safely in a particular environment. A good example here is to give developers access to evaluate a service in an experimentation account. To support this process, you don’t want to spend a long time building controls for every possible outcome. The best approach is to take advantage of the foundational controls that are delivered by the cloud platform as the starting point. Things like federation, logging, and service control policies can be used to provide guardrails that enable you to use services quickly. When the services are being evaluated, your security team can work together with your business to define more specific controls that make sense for the actual use cases.

AWS Well-Architected Framework

The cloud platform you use is the foundation of many of the security controls. These guardrails of federation, logging, service control policies, and automated response apply to workloads of all types. The security pillar in the AWS Well-Architected Framework builds on other risk management and compliance frameworks, provides you with best practices, and helps you to evaluate your architectures. These best practices are a great place to look for what you should do when building in the cloud. The categories—identity and access management, detection, infrastructure protection, data protection, and incident response—align with the most important areas to focus on when you build in AWS.

For example, identity is a foundational control in a cloud environment. One of the AWS Well-Architected security best practices is “Rely on a centralized identity provider.” You can use AWS Single Sign-On (AWS SSO) for this purpose or an equivalent centralized mechanism. If you centralize your identity provider, you can perform identity lifecycle management on users, provide them with access to only the resources that are required, and support users who move between teams. This can apply across the multiple AWS accounts in your AWS environment. With AWS Organizations, you can use service control policies (SCPs) to limit particular environments to a subset of AWS services; this is an identity-centric way of providing guardrails.
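As an illustration of that identity-centric guardrail, the following boto3 sketch creates an allow-list style SCP and attaches it to an organizational unit. The allowed services, policy name, and OU ID are hypothetical, and the sketch assumes the default FullAWSAccess policy would be replaced for the allow list to take effect.

```python
# Sketch: create an allow-list SCP and attach it to an OU. The allowed
# services, policy name, and OU ID are placeholder assumptions.
import json
import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowApprovedServicesOnly",
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*", "cloudwatch:*", "logs:*", "kms:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="experiment-environment-guardrails",
    Description="Allow only the services approved for experimentation accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the OU that holds the experimentation accounts.
# For an allow list to take effect, the default FullAWSAccess policy
# would normally be detached from that OU.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",            # placeholder OU ID
)
```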

In addition to federating users, it’s important to enable logging and monitoring services across your environment. This allows you to generate an event when something unexpected happens, such as a user trying to call AWS Key Management Service (AWS KMS) to decrypt data that they shouldn’t have access to. Securely storing logs means that you can perform investigations to determine the causes of any issues you might encounter. AWS customers who use Amazon GuardDuty and AWS CloudTrail, and who have a set of AWS Config rules enabled, have security monitoring and logging capabilities available as they build their applications.
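The following boto3 sketch shows what turning on that detection and logging baseline might look like in a single account. The trail name, bucket name, and Config rule are assumptions for illustration; in practice you would typically enable these services organization-wide rather than account by account.

```python
# Sketch: enable a baseline of detection and logging in one account.
# Names are placeholders; an organization-wide rollout is usually preferred.
import boto3

# Amazon GuardDuty: continuous threat detection.
guardduty = boto3.client("guardduty")
guardduty.create_detector(Enable=True)

# AWS CloudTrail: record API activity across all Regions.
cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="org-baseline-trail",
    S3BucketName="example-central-log-bucket",  # placeholder; needs a bucket
    IsMultiRegionTrail=True,                    # policy that allows CloudTrail
)
cloudtrail.start_logging(Name="org-baseline-trail")

# AWS Config: a managed rule that checks CloudTrail stays enabled.
# (Assumes a configuration recorder and delivery channel already exist.)
config = boto3.client("config")
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "cloudtrail-enabled",
        "Source": {"Owner": "AWS", "SourceIdentifier": "CLOUD_TRAIL_ENABLED"},
    }
)
```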

The layer cake model

When thinking about cloud security, we find a layer cake to be a useful mental model. The base of the cake is understanding the below-the-line capability that AWS provides. This includes self-serving the compliance documentation from AWS Artifact and understanding the AWS shared responsibility model.

The middle of the cake is the foundational controls, including those described previously in this post. This is the most important layer, because it’s where the most controls are and therefore where the most value is for the security team. You could describe it as the “solve it once, consume it many times” layer.

The top of the cake is the application-specific layer. This layer includes things that are more context dependent, such as the correct control objectives for a certain type of application or data classification. The work in the middle layer helps support this layer, because the middle layer provides the mechanisms that make it easier to automatically deliver the top layer capability.

The middle and top layers are not just technology layers. They also include the people and process parts of the equation. The technology is just there to support the processes.

One thing to be aware of is that you shouldn’t try to define every possible control for a service before you allow your business to use it. Make use of the various environments in your organization (experimentation, development, testing, and production) to get services into the hands of developers as quickly as possible, with the minimum guardrails needed to avoid accidental misconfiguration. Then, use the time when the services are being assessed to collaborate with the developers on control implementation. Those control implementations can then be rolled into the middle layer of the cake, and the services can be adopted by other parts of the business.

This is also the ideal time to apply practical threat modeling techniques so you understand which threats and risks you must address. Working with your business to define recommended implementation patterns also helps provide context for how services are typically used, which means you can focus on the controls that are most relevant.

The architecture, platform, or cloud center of excellence (CoE) teams can help at this stage. They can likely make a quick determination of whether an AWS service fits your organization’s architectural direction. This quick triage helps the security team focus their efforts on getting services safely into the hands of the business without being seen as blocking adoption. A good mechanism for streamlining the use of new services is to make sure the backlog is well communicated, typically on a platform team wiki. This helps the security and non-security parts of your organization prioritize their time on the services that deliver the most business value. A consistent development approach also means that the services in use are likely being used in more places across the organization, which helps your organization benefit from scale as consistent control implementations are replicated between teams.

Simplicity, metrics, and culture

The world moves fast. You can’t just define a security posture and control objectives, and then walk away. New services are launched that make it easier to do more complex things, business priorities change, and the threat landscape evolves. How do you keep up with all of it?

The answer is a combination of simplicity, metrics, and culture.

Simplicity is hard, but useful. For example, if you have 100 application teams all building in a different way, you have a large number of different configurations that you must ensure are sensibly defined. Ideally, you do this programmatically, which means that the work to define and maintain that set of security controls is significant. If you have 100 application teams using only 10 main patterns, it’s easier to build controls. This has the added benefit of reducing the complexity at the operations end, which applies to both the day-to-day operations and to incident responses. Simplification of your control environment means that your monitoring is less complex, troubleshooting is easier, and people have time to focus on the development of new controls or processes.

Metrics are important because you can make informed decisions based on data. A good example of the usefulness of metrics is patching. Patching is one of the easiest ways to improve your security posture. Having metrics on patch age, presented where that information matters most in your environment, enables you to focus on the most valuable areas. For example, infrastructure at your edge is more important to keep patched than infrastructure that sits behind multiple layers of controls. You should patch everything, but you need to make it easy for application teams to do so as part of their build and release cycles. Exposing metrics to teams and leadership helps your organization learn from high-performing areas of the business, such as teams that regularly meet patching expectations or that have few penetration testing findings to remediate. Metrics and data about your control effectiveness enable you to provide assurance, internally and externally, that you’re meeting your control objectives.
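As a rough illustration of the kind of patch-age metric described above, the following boto3 sketch pulls patch compliance state for managed instances from AWS Systems Manager. It assumes the instances already report to Systems Manager Patch Manager, and the summary format is just one possible presentation.

```python
# Sketch: summarize missing-patch counts and days since the last patch
# operation for Systems Manager managed instances. Assumes the instances
# already report to Systems Manager Patch Manager.
from datetime import datetime, timezone
import boto3

ssm = boto3.client("ssm")

# Collect the IDs of all managed instances.
instance_ids = []
paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    instance_ids += [i["InstanceId"] for i in page["InstanceInformationList"]]

# Fetch patch state in batches and print a simple patch-age metric per instance.
now = datetime.now(timezone.utc)
for start in range(0, len(instance_ids), 50):  # batch to stay within API limits
    states = ssm.describe_instance_patch_states(
        InstanceIds=instance_ids[start:start + 50]
    )["InstancePatchStates"]
    for state in states:
        age_days = (now - state["OperationEndTime"]).days
        print(f"{state['InstanceId']}: "
              f"{state['MissingCount']} patches missing, "
              f"last patch operation {age_days} days ago")
```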

This brings us to culture. Security as an enabler is something that we think is the most important concept to take away from this post. You must build capabilities that enable people in your organization to have the secure configuration or design choice be the easiest option. This is the role of security. You should also make sure that, when there are problems, your security team works with the business to help everyone learn the cause and improve for next time.

AWS has a culture that uses trouble ticketing for everything. If our employees think they have a security problem, we tell them to open a ticket; if they’re not sure whether they have a security problem, we tell them to open a ticket anyway to get guidance. This kind of culture encourages people to communicate and ask for help, which means we can identify and fix issues early. Issues that turn out to be less severe than first thought can be downgraded quickly. This culture of ticketing also gives us data to inform what we build, which helps people be more secure. You can get started with a system like this in your own environment, or extend the capability if you’ve already started.

Take our recommendation to turn on GuardDuty across all your accounts, and send the resulting high- and medium-severity findings to a ticketing system. Look at how you resolve those issues and use that to prioritize the next two weeks of work. Then build automation to fix the issues and, more importantly, to prevent them from happening in the first place. Ask yourself, “What information did I need to diagnose the problem?” and build automation to enrich the findings with that context so your tickets already contain it. Iterate on the automation as you learn which context matters; for example, you might want to include information that shows whether the environment is production or non-production.
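One possible starting point for that routing is an Amazon EventBridge rule that matches medium- and high-severity GuardDuty findings (severity 4 and above) and sends them to a notification topic that feeds your ticketing system. The rule name and SNS topic ARN below are placeholder assumptions, not values from this post.

```python
# Sketch: route medium- and high-severity GuardDuty findings (severity >= 4)
# to an SNS topic that feeds a ticketing system. Names and ARNs are placeholders.
import json
import boto3

events = boto3.client("events")

finding_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 4]}]},
}

events.put_rule(
    Name="guardduty-findings-to-ticketing",
    EventPattern=json.dumps(finding_pattern),
    State="ENABLED",
    Description="Send medium and high GuardDuty findings to the ticketing queue",
)

events.put_targets(
    Rule="guardduty-findings-to-ticketing",
    Targets=[{
        "Id": "ticketing-topic",
        # Placeholder topic; a subscription on this topic would create tickets.
        # The topic's resource policy must allow events.amazonaws.com to publish.
        "Arn": "arn:aws:sns:us-east-1:111122223333:security-ticketing",
    }],
)
```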

Note that having production-like controls in non-production environments reduces the chance of deployment failures and gets teams used to working within the security guardrails. This increases rigor earlier in the process, which helps your change management team, too.

Summary

It doesn’t matter what security frameworks or standards you use to inform your business, and you might not even align with a particular industry standard. What does matter is building a governance model that empowers the people in your organization to consistently make good security decisions and provides the capability for your security team to enable this to happen. To get started or continue to evolve your governance model, follow the AWS Well-Architected security best practices. Then, make sure that the platform you implement helps you deliver the foundational security control objectives so that your business can spend more of its time on the business logic and security configuration that is specific to its workloads.

The technology and governance choices you make are the first step in building a positive security culture. Security is everyone’s job, and it’s key to make sure that your platform, automation, and metrics support making that job easy.

The areas of focus we’ve talked about in this post are what allow security to be an enabler for the business, and ultimately help you better serve your customers and earn their trust in everything you do.


Author

Paul Hawkins

Paul helps customers of all sizes understand how to think about cloud security so they can build the technology and culture where security is a business enabler. He takes an optimistic approach to security and believes that getting the foundations right is the key to improving your security posture.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.