Tag Archives: Security, Identity & Compliance

New IRAP report provides Australian public sector the ability to leverage additional services at PROTECTED level

Post Syndicated from John Hildebrandt original https://aws.amazon.com/blogs/security/new-irap-report-australian-public-sector-leverage-services-protected-level/

Following the award of PROTECTED certification to AWS in January 2019, we have now released updated Information Security Registered Assessors Program (IRAP) PROTECTED documentation via AWS Artifact. This information provides the ability to plan, architect, and self-assess systems built in AWS under the Digital Transformation Agency’s Secure Cloud Guidelines. The new documentation expands the scope to 64 PROTECTED services, including new category areas such as artificial intelligence (AI), machine learning (ML), and IoT services. For example, Amazon SageMaker is a service that provides every developer and data scientist with tools to build, train, and deploy machine learning models quickly.

This documentation gives public sector customers everything needed to evaluate AWS at the PROTECTED level. AWS is making this resource available to download on-demand through AWS Artifact. The guide provides a mapping of AWS controls for securing PROTECTED data.

The AWS IRAP PROTECTED documentation helps individual agencies simplify the process of adopting AWS services. The information enables individual agencies to complete their own assessments and adopt AWS for a broader range of services. These assessed AWS services are available within the existing AWS Asia-Pacific (Sydney) Region and cover service categories such as compute, storage, network, database, security, analytics, AI/ML, IoT, application integration, management, and governance. This means you can take advantage of all the security benefits without paying a price premium, or needing to modify your existing applications or environments.

The services newly added to the scope of the IRAP documentation are listed below.

For the full list of services in scope of the IRAP report, see the services in scope page (select the IRAP tab).

If you have questions about our PROTECTED certification or would like to inquire about how to use AWS for your highly sensitive workloads, contact your account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


John Hildebrandt

John is Head of Security Assurance for Australia and New Zealand at AWS, based in Canberra, Australia. He is passionate about removing regulatory barriers to cloud adoption for customers. John has been working with government customers at AWS for over 7 years and was the first employee on the ANZ Public Sector team.

Internet Security Notification – Department of Homeland Security Alert AA20-006A

Post Syndicated from Nathan Case original https://aws.amazon.com/blogs/security/internet-security-notification-department-of-homeland-security-alert-aa20-006a/

On January 6, 2020, the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) released an alert (AA20-006A) that highlighted measures for critical infrastructure to prepare for information security risks, but which are also relevant to all organizations. The CISA alert focuses on vulnerability mitigation and incident preparation.

At AWS, security is our core function and highest priority and, as always, we are engaged with the U.S. Government and other responsible national authorities regarding the current threat landscape. We are taking all appropriate steps to ensure that our customers and infrastructure remain protected, and we encourage our customers to do the same with their systems and workloads, whether in the cloud or on-premises.

The CISA recommendations reflect general guidance, as well as specific mitigations and monitoring that can help address information security risks. In this post, we provide customers with resources they can use to apply the CISA recommendations to their environment and implement other best practices to protect their resources. Specifically, we point to the security principles and mechanisms in the Well-Architected Framework, and to posts on AWS best practices, that can help you address the issues described in the alert.

The specific techniques described in the CISA alert are almost all related to issues that exist in an on-premises Windows or Linux operating system and network environment, and are not directly related to cloud computing. However, the precautions described may be applicable to the extent customers are using those operating systems in an Amazon Elastic Compute Cloud (Amazon EC2) virtual machine environment. There are also cloud-specific technologies and issues that should be considered and addressed. Customers can use the information provided in the table below to help address the issues.

Technique: Credential Dumping & Spearphishing
Mitigations:
  • Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer
  • Getting Started: Follow Security Best Practices as You Configure Your AWS Resources
  • How can I configure a CloudWatch events rule for GuardDuty to send custom SNS notifications if specific AWS service event types trigger?

Technique: Data Compressed & Obfuscated Files or Information
Mitigations:
  • How can I configure a CloudWatch events rule for GuardDuty to send custom SNS notifications if specific AWS service event types trigger?
  • Monitor, review, and protect Amazon S3 buckets using Access Analyzer for S3
  • Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer

Technique: User Execution
Mitigations:
  • Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer
  • Monitor, review, and protect Amazon S3 buckets using Access Analyzer for S3

Technique: Scripting
Mitigations:
  • Nine AWS Security Hub best practices
  • How to import AWS Config rules evaluations as findings in Security Hub

Technique: Remote File Copy
Mitigations:
  • Continuous Compliance with AWS Security Hub
  • Monitor, review, and protect Amazon S3 buckets using Access Analyzer for S3

We’re also including links to GitHub repositories that can be helpful to automate some of the above practices, and the AWS Security Incident Response white paper, to assist with planning and response to security events. We strongly recommend that you review your run-books, disaster recovery plans, and backup procedures.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, please contact your AWS Account Manager or contact AWS Support. If you need urgent help or have relevant information about an existing security issue, contact your AWS account representative.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nathan Case

Nathan is a Senior Security Strategist who joined AWS in 2016. He is always interested to see where our customers plan to go and how we can help them get there. He is also interested in threat intelligence, combined data lake sharing opportunities, and open source collaboration. Above all, Nathan loves technology and the fact that we can use it to change the world for the better.


Min Hyun

Min is the Global Lead for Growth Strategies at AWS. Her team’s mission is to set the industry bar in thought leadership for security and data privacy assurance across emerging technologies, trends, and strategies, advancing customers’ journeys to AWS. View her other Security Blog publications here.


Tim Anderson

Tim Anderson is a Senior Security Advisor with AWS Security, where he focuses on addressing the security, compliance, and privacy needs of customers and industry globally. Additionally, Tim designs solutions, capabilities, and practices to teach and democratize security concepts to meet challenges across the global landscape. Before AWS, Tim spent 16 years designing, delivering, and managing security and compliance programs for U.S. Federal customers across DoD and federal civilian agencies.

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 16 services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Amendaze Thomas original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-high-moderate-provisional-authorization-16-services-us-east-west-govcloud-us-regions/

AWS is continually expanding the scope of our compliance programs to help your organization run sensitive and regulated workloads. Today, we’re pleased to announce an additional array of AWS services that are available in the AWS US East/West and AWS GovCloud (US) Regions, marking a 17.7% increase in our number of FedRAMP authorizations since the beginning of December 2019. We’ve achieved authorizations for 16 additional services, 6 of which have been authorized for both the AWS US East/West and AWS GovCloud (US) Regions.

We’ve achieved FedRAMP authorizations for the following 7 services in our AWS US East/West Regions:

We also received 15 service authorizations in our AWS GovCloud (US) Regions:

In total, we now offer 78 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate and 70 services authorized in the AWS GovCloud (US) Regions under FedRAMP High. You can see our full, updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our Services in Scope by Compliance Program page.

Our FedRAMP assessment was completed with an accredited third-party assessment organization (3PAO) to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP NIST requirements and baselines.

We care deeply about our customers’ needs, and compliance is my team’s priority. We want to continue to onboard services into the compliance programs our customers are using, such as FedRAMP.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.


Amendaze Thomas

Amendaze is the manager of the AWS Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the Federal government, and over 13 years’ experience supporting CISO teams with risk management framework (RMF) activities.

How to import AWS Config rules evaluations as findings in Security Hub

Post Syndicated from Omar Kahil original https://aws.amazon.com/blogs/security/how-to-import-aws-config-rules-evaluations-findings-security-hub/

In June at re:Inforce 2019, AWS announced the general availability of AWS Security Hub, a security service that enables customers to centrally view and manage compliance checks and security findings across their AWS accounts. AWS Security Hub imports security findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and over 30 AWS partner security solutions.

By default, when Security Hub is enabled, it deploys the CIS AWS Foundations standard in your account. The CIS AWS Foundations standard is a set of security configuration best practices for hardening AWS accounts. In order for Security Hub to successfully run compliance checks against the rules included in the CIS AWS Foundations standard, AWS Config must be enabled in the same account where you enabled AWS Security Hub.

If you had previously created your own AWS Config rules before enabling AWS Security Hub, you will need to import them into AWS Security Hub. This enables your security teams to see all compliance findings across accounts in Security Hub, alongside the security findings from services like Amazon GuardDuty, Amazon Inspector, and Amazon Macie. In this post, we’ll show you how to set this up.

Solution overview

You’ll use an AWS CloudFormation template to deploy the following components, so that you can import the findings of AWS Config rules into AWS Security Hub. Below are the main resources created by this template:

  • An AWS Lambda function with the necessary IAM role and IAM policies:

    Config-SecHub-Lambda

    This Lambda function will execute immediately after the CloudFormation stack creation. It will parse and import AWS Config Rule findings into Security Hub.

  • A CloudWatch Event rule with the necessary permission to invoke the AWS Lambda function:

    Config-Sechub-CW-Rule

    This CloudWatch Event rule will match the compliance change event from AWS Config and will route it to the Lambda function for processing.

Here’s the high-level setup of the AWS services that will be created from the CloudFormation stack deployment:

Figure 1: Architecture diagram

  1. A new AWS Config rule is deployed in the account after you enable AWS Security Hub.
  2. The AWS Config rule reacts to resource configuration and compliance changes and sends these change items to Amazon CloudWatch.
  3. When Amazon CloudWatch receives the compliance change event, a CloudWatch Events rule triggers the AWS Lambda function.
  4. The Lambda function parses the event received from CloudWatch and imports it into Security Hub as a finding.
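
To make the event plumbing concrete, here’s a minimal boto3 sketch of the kind of CloudWatch Events rule the template creates. The rule name matches the template’s resource; the Lambda function ARN is a hypothetical placeholder, and the sketch omits the invoke permission that the template also sets up.

    import json

    import boto3

    events = boto3.client('events')

    # Match AWS Config compliance change events
    events.put_rule(
        Name='Config-Sechub-CW-Rule',
        EventPattern=json.dumps({
            'source': ['aws.config'],
            'detail-type': ['Config Rules Compliance Change'],
        }),
        State='ENABLED',
    )

    # Route matching events to the Lambda function (hypothetical ARN)
    events.put_targets(
        Rule='Config-Sechub-CW-Rule',
        Targets=[{
            'Id': 'ConfigSecHubLambda',
            'Arn': 'arn:aws:lambda:us-east-1:111122223333:function:Config-SecHub-Lambda',
        }],
    )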

Deploy the solution

  1. Log in to the AWS Management Console and select the us-east-1 Region (N. Virginia) for this sample deployment.
  2. Make sure that AWS Config and AWS Security Hub are enabled in the region where you want to deploy the template.
  3. Set up the sample deployment by selecting the Launch Stack button below.

The solution must be launched in the Region where AWS Config and AWS Security Hub are enabled, and it can be deployed in any Region that supports the resources that will be created.

Select this image to open a link that starts building the CloudFormation stack

Follow these steps to execute the stack:

  1. Leave the default location for the template and select Next.
  2. On the Specify stack details page, add your preferred stack name.
  3. On the Configure Stack options page, select the Next button.
  4. On the Review page, select the check box, then select the Create Stack button.
  5. Stack creation will take between 5 and 10 minutes. After the stack is created successfully, select the Resources tab to see the deployed resources.

    Figure 2: Select the Resources tab to see deployed resources

At this point, you’ve successfully created the following resources:

  • A Lambda function that will integrate AWS Config and Security Hub.
  • A Lambda execution role used by the Lambda function to call other services.
  • A CloudWatch Events rule that will trigger the Lambda function based on AWS Config compliance change events.
  • A service permission to allow Amazon CloudWatch to invoke the Lambda function.

Verify the solution

To be certain that everything is set up properly, look at the Lambda code that integrates AWS Config with AWS Security Hub by following the steps below from the console:

  1. Navigate to the AWS Lambda console.
  2. From the list of Lambda functions, open the function with the name Config-SecHub-Lambda to check that the function has been deployed successfully.

    Figure 3: Config-SecHub-Lambda

You can also verify that the CloudWatch Event rule has deployed successfully:

  1. Go to the Amazon CloudWatch service console.
  2. Under Events, select Rules. You should see a rule with the name Config-Sechub-CW-Rule.
     
    Figure 4: Config-Sechub-CW-Rule

Test the solution

  1. Go to the AWS Config console.
  2. On the left, select Rules, then Add rule.
  3. In the search filter, enter ec2 as an input, then select the desired-instance-type rule to test.

    Figure 5: AWS Config “add rule” page

  4. In the rule parameters, enter t2.micro as the value. Any other instance type launched in this account will violate this rule and be marked noncompliant. Then, select Save. (A programmatic equivalent of creating this rule appears after this walkthrough.)
  5. Navigate to the Amazon EC2 console, and under Create Instance, select Launch.
  6. Select any Amazon Linux AMI.

    Figure 6: Select an Amazon Linux AMI

  7. To test the rule, select any instance type except t2.micro. In our example, we’ve selected t2.nano.

    Figure 7: Choose your instance type

  8. Because you’re just testing the rule, select Review and Launch. Then choose Proceed without a key pair, check the “I acknowledge…” box, and select Launch Instances.
     
    Figure 8: Select "Launch Instances"

    Figure 8: Select “Launch Instances”

    You’ll need to wait a few minutes until the instance is launched. You can check the EC2 console to view the instance status.

  9. Once the instance has launched, go back to the AWS Config console, and from the left-hand menu, select Rules again.
  10. On the Rules page, select the Noncompliant filter from the drop-down menu to view the rule you’ve created to check instance types.

    Figure 9: Select the “Noncompliant” filter on the AWS Config Rules page

  11. Under Rule name, look for your desired-instance-type rule, followed by its number of noncompliant resources. In this case, there should be only one noncompliant resource, matching the EC2 instance you launched above. You can select the rule name to view more details about the noncompliant resource.

    Figure 10: Look for the desired-instance-type rule

  12. Go to the AWS Security Hub console. On the left, select Findings.
  13. Select the search filter, and from the drop-down menu select Title as the filter, then desired-instance-type as the value. Then select Apply.

    Figure 11: Filter by title

  14. You should now see your AWS Config rule under Findings in AWS Security Hub.

    Figure 12: Look for your Config rule under “Findings”

  15. After you’ve verified that the Lambda function is working, terminate the EC2 instance and delete the Config rule.
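
If you’d rather create the test rule from steps 2 through 4 programmatically, a boto3 equivalent might look like the following sketch. The rule name and parameter value match the walkthrough above; everything else is standard AWS Config API usage.

    import boto3

    config = boto3.client('config')

    # Create the desired-instance-type managed rule with t2.micro
    # as the only compliant instance type
    config.put_config_rule(
        ConfigRule={
            'ConfigRuleName': 'desired-instance-type',
            'Source': {
                'Owner': 'AWS',
                'SourceIdentifier': 'DESIRED_INSTANCE_TYPE',  # AWS managed rule
            },
            'InputParameters': '{"instanceType": "t2.micro"}',
            'Scope': {'ComplianceResourceTypes': ['AWS::EC2::Instance']},
        }
    )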

What’s happening in the background?

When you deploy the CloudFormation stack, a Lambda function is created along with other resources. This function integrates AWS Config with AWS Security Hub. Every time a new Config rule is added, and as soon as its compliance status changes, the CloudWatch Events rule triggers the Lambda function based on the compliance event. The function parses the CloudWatch event to extract the data needed for ingestion, then maps and converts that data into the AWS Security Finding Format (ASFF). Finally, it uses the BatchImportFindings API operation to import the findings into AWS Security Hub. You can also find the CloudFormation template and the code files in the linked GitHub repository.
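
The exact function code lives in the linked GitHub repository. Purely as an illustration, a stripped-down handler along these lines could perform the same mapping; the severity label and the finding fields chosen below are assumptions, not the template’s actual choices.

    import boto3

    securityhub = boto3.client('securityhub')

    def lambda_handler(event, context):
        # Pull the fields we need from the Config compliance change event
        detail = event['detail']
        account = event['account']
        region = event['region']
        rule_name = detail['configRuleName']
        resource_id = detail['resourceId']
        compliance = detail['newEvaluationResult']['complianceType']

        finding = {
            'SchemaVersion': '2018-10-08',
            'Id': '{}/{}'.format(rule_name, resource_id),
            # Default product ARN for findings imported by your own account
            'ProductArn': 'arn:aws:securityhub:{}:{}:product/{}/default'.format(
                region, account, account),
            'GeneratorId': rule_name,
            'AwsAccountId': account,
            'Types': ['Software and Configuration Checks/AWS Config Analysis'],
            'CreatedAt': event['time'],
            'UpdatedAt': event['time'],
            'Severity': {'Label': 'MEDIUM'},  # assumed; map severity as you see fit
            'Title': 'AWS Config rule {} is {}'.format(rule_name, compliance),
            'Description': 'Resource {} is {} with AWS Config rule {}.'.format(
                resource_id, compliance, rule_name),
            'Resources': [{'Type': 'Other', 'Id': resource_id}],
        }
        securityhub.batch_import_findings(Findings=[finding])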

Conclusion

In this blog post, we’ve shown you how to import an AWS Config rule to AWS Security Hub, so that your AWS Config rules show up alongside your other security findings. This allows you to more easily use AWS Config rules in your journey toward continuous compliance across your estate of AWS accounts. If you need help in planning, building or optimizing your infrastructure, please reach out to AWS Professional Services. If you have questions about this post, let us know in the Comments section below, or start a new thread on the AWS Security Hub forum.


Omar Kahil

Omar is a Professional Services consultant who helps customers adopt DevOps culture and best practices. He also works to simplify the adoption of AWS services by automating and implementing complex solutions.


Muhammad Shahzad

Muhammad is a DevOps consultant with AWS Professional Services who enables customers to implement DevOps by explaining its principles and integrating best practices into their organizations.

55 additional AWS services achieve HITRUST CSF Certification

Post Syndicated from Hadis Ali original https://aws.amazon.com/blogs/security/55-additional-aws-services-achieve-hitrust-csf-certification/

We’re excited to announce the addition of 55 new services in scope under our latest Health Information Trust Alliance (HITRUST) Common Security Framework (CSF) certification, for a total of 119 AWS services in scope.

You can deploy environments onto AWS and inherit our HITRUST certification provided that you use only in-scope services and apply the controls detailed in the CSF. The full list of HITRUST CSF certified AWS services is available on our Services in Scope by Compliance program page. The HITRUST CSF certification of AWS is valid for two years.

AWS has released a Quick Start reference architecture for HITRUST on AWS. The Quick Start includes AWS CloudFormation templates to automate building a baseline architecture that fits within your organization’s larger HITRUST program. It also includes a security controls reference (XLS download), which maps HITRUST controls to architecture decisions, features, and configuration of the baseline.

You can download the new HITRUST CSF certificate now through AWS Artifact in the AWS Management Console. As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Hadis Ali

Hadis is a Security and Privacy Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS Security Assurance. Hadis holds Bachelor’s degrees in Accounting and Information Systems from the University of Washington.

Fall 2019 PCI DSS report now available with 7 services added in scope

Post Syndicated from Nivetha Chandran original https://aws.amazon.com/blogs/security/fall-2019-pci-dss-report-available-7-services-added/

We’re pleased to announce that seven services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification, providing our customers more options to process and store their payment card data and architect their Cardholder Data Environment (CDE) securely in AWS.

In the past year we have increased the scope of our PCI DSS certification by 19%. With these additions, you can now select from a total of 118 PCI-certified services. You can see the full list on our Services in Scope by Compliance program page. The seven new services are:

We were evaluated by third-party auditors from Coalfire, and their report is available through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nivetha Chandran

Nivetha is a Security Assurance Manager at Amazon Web Services on the Global Audits team, managing the PCI compliance program. Nivetha holds a Master’s degree in Information Management from the University of Washington.

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 26 services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Amendaze Thomas original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-high-and-moderate-provisional-authorization-across-26-services-in-the-aws-us-east-west-and-aws-govcloud-us-regions/

AWS continues to expand the number of services that customers can use to run sensitive and highly regulated workloads in the federal government space. Today, I’m pleased to announce another expansion of our FedRAMP program, marking a 36.2% increase in our number of FedRAMP authorizations. We’ve achieved authorizations for 26 additional services, 7 of which have been authorized for both the AWS US East/West and AWS GovCloud (US) Regions.

We’ve achieved FedRAMP authorizations for the following 22 services in our AWS US East/West Regions:

We also received 11 service authorizations in our AWS GovCloud (US) Regions:

In total, we now offer 70 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate and 54 services authorized in the AWS GovCloud (US) Regions under FedRAMP High. You can see our full, updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our Services in Scope page.

Our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.

We care deeply about our customers’ needs, and compliance is my team’s priority. We want to continue to onboard services into the compliance programs our customers are using, such as FedRAMP.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Amendaze Thomas

Amendaze is the manager of the AWS Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the Federal government, and over 13 years’ experience supporting CISO teams with risk management framework (RMF) activities.

Top 11 posts during 2019

Post Syndicated from Tom Olsen original https://aws.amazon.com/blogs/security/top-11-posts-during-2019/

The Security Blog set new records for page views in 2019, but we’re always looking for ways to improve. Please tell us what you want to read about in the Comments section below. We read all of your feedback and do our best to act on it.

The top 11 posts during 2019 based on page views

  1. How to automate SAML federation to multiple AWS accounts from Microsoft Azure Active Directory
  2. How to securely provide database credentials to Lambda functions by using AWS Secrets Manager
  3. How to set up an outbound VPC proxy with domain whitelisting and content filtering
  4. How to centralize and automate IAM policy creation in sandbox, development, and test environments
  5. Add defense in depth against open firewalls, reverse proxies, and SSRF vulnerabilities with enhancements to the EC2 Instance Metadata Service
  6. Simplify DNS management in a multi-account environment with Route 53 Resolver
  7. How to use service control policies to set permission guardrails across accounts in your AWS Organization
  8. How to share encrypted AMIs across accounts to launch encrypted EC2 instances
  9. AWS and the CLOUD Act
  10. Guidelines for protecting your AWS account while using programmatic access
  11. How to use AWS Secrets Manager to securely store and rotate SSH key pairs

We’d also like to highlight a couple of recent posts that customers have shown a lot of interest in. These posts would likely have made it into the top 11 given another month or so:

If you’re new to AWS and are just discovering the Security Blog, we’ve also compiled a list of older posts that customers continue to find useful.

The top 10 posts of all time based on page views

  1. Where’s My Secret Access Key?
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. Securely Connect to Linux Instances Running in a Private Amazon VPC
  5. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  6. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  7. How to Connect Your On-Premises Active Directory to AWS Using AD Connector
  8. Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article
  9. A New and Standardized Way to Manage Credentials in the AWS SDKs
  10. How to Control Access to Your Amazon Elasticsearch Service Domain

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.


Tom Olsen

Tom shares responsibility for the AWS Security Blog with Becca Crockett. If you’ve got feedback about the blog, he wants to hear it in the Comments here or in any post. In his free time, you’ll either find him hanging out with his wife and their frog, in his woodshop, or skateboarding.


Becca Crockett

Becca co-manages the Security Blog with Tom Olsen. She enjoys guiding first-time blog contributors through the writing process, and she likes to interview people. In her free time, she drinks a lot of coffee and reads things. At work, she also drinks a lot of coffee and reads things.

Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/identify-unintended-resource-access-with-aws-identity-and-access-management-iam-access-analyzer/

Today I get to share my favorite kind of announcement. It’s the sort of thing that will improve security for just about everyone who builds on AWS, it can be turned on with almost no configuration, and it costs nothing to use. We’re launching a new, first-of-its-kind capability called AWS Identity and Access Management (IAM) Access Analyzer. IAM Access Analyzer mathematically analyzes access control policies attached to resources and determines which resources can be accessed publicly or from other accounts. It continuously monitors all policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues. With IAM Access Analyzer, you have visibility into the aggregate impact of your access controls, so you can be confident your resources are protected from unintended access from outside of your account.

Let’s look at a couple of examples. An IAM Access Analyzer finding might indicate that an S3 bucket named my-bucket-1 is accessible to an AWS account with the ID 123456789012 when originating from the source IP 11.0.0.0/15. Or IAM Access Analyzer may detect a KMS key policy that allows users from another account to delete the key, identifying a data loss risk you can fix by adjusting the policy. If the findings show intentional access paths, they can be archived.

So how does it work? Using the kind of math that shows up on unexpected final exams in my nightmares, IAM Access Analyzer evaluates your policies to determine how a given resource can be accessed. Critically, this analysis is not based on historical events or pattern matching or brute force tests. Instead, IAM Access Analyzer understands your policies semantically. All possible access paths are verified by mathematical proofs, and thousands of policies can be analyzed in a few seconds. This is done using a branch of computer science called automated reasoning. IAM Access Analyzer is the first service powered by automated reasoning available to builders everywhere, offering functionality unique to AWS. To start learning about automated reasoning, I highly recommend this short video explainer. If you are interested in diving a bit deeper, check out this re:Invent talk on automated reasoning from Byron Cook, Director of the AWS Automated Reasoning Group. And if you’re really interested in understanding the methodology, make yourself a nice cup of chamomile tea, grab a blanket, and get cozy with a copy of Semantic-based Automated Reasoning for AWS Access Policies using SMT.

Turning on IAM Access Analyzer is way less stressful than an unexpected nightmare final exam. There’s just one step. From the IAM Console, select Access analyzer from the menu on the left, then click Create analyzer.

Creating an Access Analyzer

Analyzers generate findings in the account from which they are created. Analyzers also work within the region defined when they are created, so create one in each region for which you’d like to see findings.

Once our analyzer is created, findings that show accessible resources appear in the Console. My account has a few findings that are worth looking into, such as KMS keys and IAM roles that are accessible by other accounts and federated users.

Viewing Access Analyzer Findings

I’m going to click on the first finding and take a look at the access policy for this KMS key.

An Access Analyzer Finding

From here we can see the open access paths and details about the resources and principals involved. I went over to the KMS console and confirmed that this is intended access, so I archived this particular finding.

All IAM Access Analyzer findings are visible in the IAM Console, and can also be accessed using the IAM Access Analyzer API. Findings related to S3 buckets can be viewed directly in the S3 Console. Bucket policies can then be updated right in the S3 Console, closing the open access pathway.

An Access Analyzer finding in S3
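
If you’d like to review findings programmatically, the API makes that straightforward. Here’s a minimal boto3 sketch that pages through active findings; the analyzer name is a hypothetical placeholder.

    import boto3

    client = boto3.client('accessanalyzer')

    # Look up the analyzer created earlier (name is hypothetical)
    analyzer_arn = client.get_analyzer(analyzerName='my-analyzer')['analyzer']['arn']

    # Page through active findings and print the affected resources
    paginator = client.get_paginator('list_findings')
    for page in paginator.paginate(analyzerArn=analyzer_arn,
                                   filter={'status': {'eq': ['ACTIVE']}}):
        for finding in page['findings']:
            print(finding['resource'], finding['resourceType'], finding['status'])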

You can also see high-priority findings generated by IAM Access Analyzer in AWS Security Hub, ensuring a comprehensive, single source of truth for your compliance and security-focused team members. IAM Access Analyzer also integrates with CloudWatch Events, making it easy to automatically respond to or send alerts regarding findings through the use of custom rules.

Now that you’ve seen how IAM Access Analyzer provides a comprehensive overview of cloud resource access, you should probably head over to IAM and turn it on. One of the great advantages of building in the cloud is that the infrastructure and tools continue to get stronger over time and IAM Access Analyzer is a great example. Did I mention that it’s free? Fire it up, then send me a tweet sharing some of the interesting things you find. As always, happy building!

— Brandon

Use AWS Fargate and Prowler to send security configuration findings about AWS services to Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/use-aws-fargate-prowler-send-security-configuration-findings-about-aws-services-security-hub/

In this blog post, I’ll show you how to integrate Prowler, an open-source security tool, with AWS Security Hub. Prowler provides dozens of security configuration checks related to services such as Amazon Redshift, Amazon ElastiCache, Amazon API Gateway, and Amazon CloudFront. Integrating Prowler with Security Hub will provide posture information about resources not currently covered by existing Security Hub integrations or compliance standards. You can use Prowler checks to supplement the existing CIS AWS Foundations compliance standard Security Hub already provides, as well as other compliance-related findings you may be ingesting from partner solutions.

In this post, I’ll show you how to containerize Prowler using Docker and host it on the serverless container service AWS Fargate. By running Prowler on Fargate, you no longer have to provision, configure, or scale infrastructure, and it will only run when needed. Containers provide a standard way to package your application’s code, configurations, and dependencies into a single object that can run anywhere. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers.

Solution overview

The following diagram shows the flow of events in the solution I describe in this blog post.

Figure 1: Prowler on Fargate Architecture

The integration works as follows:

  1. A time-based CloudWatch Event starts the Fargate task on a schedule you define.
  2. Fargate pulls a Prowler Docker image from Amazon Elastic Container Registry (ECR).
  3. Prowler scans your AWS infrastructure and writes the scan results to a CSV file.
  4. Python scripts in the Prowler container convert the CSV to JSON and load an Amazon DynamoDB table with formatted Prowler findings.
  5. A DynamoDB stream invokes an AWS Lambda function.
  6. The Lambda function maps Prowler findings into the AWS Security Finding Format (ASFF) before importing them to Security Hub.

Except for an ECR repository, you’ll deploy all of the above via AWS CloudFormation. You’ll also need the following prerequisites to supply as parameters for the CloudFormation template.
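
To give a sense of what the schedule in step 1 looks like under the hood, here’s a boto3 sketch of a rule that starts the Fargate task periodically. Every name, ARN, subnet, and the schedule itself are illustrative assumptions; the CloudFormation template creates the real versions for you.

    import boto3

    events = boto3.client('events')

    events.put_rule(
        Name='ProwlerScheduledScan',        # hypothetical name
        ScheduleExpression='rate(7 days)',  # assumed schedule
        State='ENABLED',
    )

    events.put_targets(
        Rule='ProwlerScheduledScan',
        Targets=[{
            'Id': 'ProwlerFargateTask',
            'Arn': 'arn:aws:ecs:us-east-1:111122223333:cluster/prowler-cluster',
            'RoleArn': 'arn:aws:iam::111122223333:role/ProwlerEventsInvokeEcsRole',
            'EcsParameters': {
                'TaskDefinitionArn': 'arn:aws:ecs:us-east-1:111122223333:'
                                     'task-definition/prowler-task:1',
                'TaskCount': 1,
                'LaunchType': 'FARGATE',
                'NetworkConfiguration': {
                    'awsvpcConfiguration': {
                        'Subnets': ['subnet-0abc1234de567890f'],
                        'SecurityGroups': ['sg-0abc1234de567890f'],
                        'AssignPublicIp': 'ENABLED',
                    },
                },
            },
        }],
    )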

Prerequisites

  • A VPC with at least two subnets that have access to the internet, plus a security group that allows access on port 443 (HTTPS).
  • An ECS task role with the permissions that Prowler needs to complete its scans. You can find more information about these permissions on the official Prowler GitHub page.
  • An ECS task execution IAM role to allow Fargate to publish logs to CloudWatch and to download your Prowler image from Amazon ECR.

Step 1: Create an Amazon ECR repository

In this step, you’ll create an ECR repository. This is where you’ll upload your Docker image for Step 2.

  1. Navigate to the Amazon ECR Console and select Create repository.
  2. Enter a name for your repository (I’ve named my example securityhub-prowler, as shown in figure 2), then choose Mutable as your image tag mutability setting, and select Create repository.
     
    Figure 2: ECR Repository Creation

Keep the browser tab in which you created the repository open so that you can easily reference the Docker commands you’ll need in the next step.

Step 2: Build and push the Docker image

In this step, you’ll create a Docker image that contains scripts that will map Prowler findings into DynamoDB. Before you begin step 2, ensure your workstation has the necessary permissions to push images to ECR.

  1. Using your favorite text editor, create a file named Dockerfile.
    
    FROM python:latest
    
    # Declare env vars (placeholder values; the ECS task definition
    # supplies the real values at runtime)
    ENV MY_DYANMODB_TABLE=MY_DYANMODB_TABLE
    ENV AWS_REGION=AWS_REGION
    
    # Install dependencies
    RUN \
        apt update && \
        apt upgrade -y && \
        pip install awscli && \
        apt install -y python3-pip
    
    # Place scripts
    ADD converter.py /root
    ADD loader.py /root
    ADD script.sh /root
    
    # Install Prowler, move the scripts into the Prowler directory
    RUN \
        git clone https://github.com/toniblyx/prowler && \
        mv /root/converter.py /prowler && \
        mv /root/loader.py /prowler && \
        mv /root/script.sh /prowler
    
    # Run Prowler, ETL the output with converter.py, and load DynamoDB with loader.py
    WORKDIR /prowler
    RUN pip3 install boto3
    CMD bash script.sh
    
    

  2. Create a new file called script.sh and paste in the code below. This script calls the remaining scripts, which you’re about to create, in a specific order.

    Note: Change the AWS Region in the Prowler command on line 3 to the region in which you’ve enabled Security Hub.

    
    #!/bin/bash
    echo "Running Prowler Scans"
    ./prowler -b -n -f us-east-1 -g extras -M csv > prowler_report.csv
    echo "Converting Prowler Report from CSV to JSON"
    python converter.py
    echo "Loading JSON data into DynamoDB"
    python loader.py
    

  3. Create a new file called converter.py and paste in the below code. This Python script will convert the Prowler CSV report into JSON, and both versions will be written to the local storage of the Prowler container.
    
    import csv
    import json
    
    # Set variables for within container
    CSV_PATH = 'prowler_report.csv'
    JSON_PATH = 'prowler_report.json'
    
    # Reads prowler CSV output
    csv_file = csv.DictReader(open(CSV_PATH, 'r'))
    
    # Create empty JSON list, read out rows from CSV into it
    json_list = []
    for row in csv_file:
        json_list.append(row)
    
    # Write the JSON list out to a file in the container's local storage
    open(JSON_PATH, 'w').write(json.dumps(json_list))
    
    # open newly converted prowler output
    with open('prowler_report.json') as f:
        data = json.load(f)
    
    # remove data not needed for Security Hub BatchImportFindings    
    for element in data: 
        del element['PROFILE']
        del element['SCORED']
        del element['LEVEL']
        del element['ACCOUNT_NUM']
        del element['REGION']
    
    # writes out to a new file, prettified
    with open('format_prowler_report.json', 'w') as f:
        json.dump(data, f, indent=2)
    

  4. Create your last file, called loader.py and paste in the below code. This Python script will read values from the JSON file and send them to DynamoDB.
    
    from __future__ import print_function # Python 2/3 compatibility
    import boto3
    import json
    import decimal
    import os
    
    awsRegion = os.environ['AWS_REGION']
    prowlerDynamoDBTable = os.environ['MY_DYANMODB_TABLE']
    
    dynamodb = boto3.resource('dynamodb', region_name=awsRegion)
    
    table = dynamodb.Table(prowlerDynamoDBTable)
    
    # CHANGE FILE AS NEEDED
    with open('format_prowler_report.json') as json_file:
        findings = json.load(json_file, parse_float = decimal.Decimal)
        for finding in findings:
            TITLE_ID = finding['TITLE_ID']
            TITLE_TEXT = finding['TITLE_TEXT']
            RESULT = finding['RESULT']
            NOTES = finding['NOTES']
    
            print("Adding finding:", TITLE_ID, TITLE_TEXT)
    
            table.put_item(
               Item={
                   'TITLE_ID': TITLE_ID,
                   'TITLE_TEXT': TITLE_TEXT,
                   'RESULT': RESULT,
                   'NOTES': NOTES,
                }
            )
    

  5. From the ECR console, within your repository, select View push commands to get operating system-specific instructions and additional resources to build, tag, and push your image to ECR. See Figure 3 for an example.
     
    Figure 3: ECR Push Commands

    Note: If you’ve built Docker images previously within your workstation, pass the --no-cache flag with your docker build command.

  6. After you’ve built and pushed your Image, note the URI within the ECR console (such as 12345678910.dkr.ecr.us-east-1.amazonaws.com/my-repo), as you’ll need this for a CloudFormation parameter in step 3.

Step 3: Deploy CloudFormation template

Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the CloudFormation User Guide.

You’ll need the values you noted in Step 2 and during the “Solution overview” prerequisites. The description of each parameter is provided on the Parameters page of the CloudFormation deployment (see Figure 4).

Figure 4: CloudFormation Parameters

After the CloudFormation stack finishes deploying, click the Resources tab to find your Task Definition (called ProwlerECSTaskDefinition). You’ll need this during Step 4.

Figure 5: CloudFormation Resources

Step 4: Manually run ECS task

In this step, you’ll run your ECS task manually to verify the integration works. (After you’ve tested it, the task will run automatically on the schedule defined by the CloudWatch Event rule.)

  1. Navigate to the Amazon ECS console and from the navigation pane select Task Definitions.
  2. As shown in Figure 6, select the check box for the task definition you deployed via CloudFormation, then select the Actions dropdown menu and choose Run Task.
     
    Figure 6: ECS Run Task

  3. Configure the following settings (shown in Figure 7), then select Run Task:
    1. Launch Type: FARGATE
    2. Platform Version: Latest
    3. Cluster: Select the cluster deployed by CloudFormation
    4. Number of tasks: 1
    5. Cluster VPC: Enter the VPC of the subnets you provided as CloudFormation parameters
    6. Subnets: Select 1 or more subnets in the VPC
    7. Security groups: Enter the same security group you provided as a CloudFormation parameter
    8. Auto-assign public IP: ENABLED
       
      Figure 7: ECS Task Settings

  4. Depending on the size of your account and the resources within it, your task can take up to an hour to complete. Follow the progress by selecting your task and viewing the Logs tab in the Task view (Figure 8). The stdout from Prowler will appear in the logs.

    Note: Once the task has completed it will automatically delete itself. You do not need to take additional actions for this to happen during this or subsequent runs.

     

    Figure 8: ECS Task Logs

  5. Under the Details tab, monitor the status. When the status reads Stopped, navigate to the DynamoDB console.
  6. Select your table, then select the Items tab. Your findings will be indexed under the primary key NOTES, as shown in Figure 9. From here, the Lambda function will trigger each time new items are written into the table from Fargate and will load them into Security Hub.
     
    Figure 9: DynamoDB Items

  7. Finally, navigate to the Security Hub console, select the Findings menu, and wait for findings from Prowler to arrive in the dashboard as shown in figure 10.
     
    Figure 10: Prowler Findings in Security Hub

If you run into errors when running your Fargate task, refer to the Amazon ECS Troubleshooting guide. Log errors commonly come from missing permissions or disabled Regions; refer back to the Prowler GitHub page for troubleshooting information.
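
For reference, the Lambda function deployed by the template reads new items off the DynamoDB stream and maps them into the ASFF before importing them. A stripped-down handler in that spirit might look like the sketch below; the severity mapping and resource type are assumptions, and the table’s stream must include new images for NewImage to be present.

    import datetime

    import boto3

    securityhub = boto3.client('securityhub')

    def lambda_handler(event, context):
        region = context.invoked_function_arn.split(':')[3]
        account = context.invoked_function_arn.split(':')[4]
        now = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')

        findings = []
        for record in event['Records']:
            if record['eventName'] not in ('INSERT', 'MODIFY'):
                continue
            image = record['dynamodb']['NewImage']  # typed attribute values
            title_id = image['TITLE_ID']['S']
            result = image['RESULT']['S']  # PASS / FAIL / INFO from Prowler
            findings.append({
                'SchemaVersion': '2018-10-08',
                'Id': title_id,
                'ProductArn': 'arn:aws:securityhub:{}:{}:product/{}/default'.format(
                    region, account, account),
                'GeneratorId': 'prowler-{}'.format(title_id),
                'AwsAccountId': account,
                'Types': ['Software and Configuration Checks'],
                'CreatedAt': now,
                'UpdatedAt': now,
                'Severity': {'Label': 'MEDIUM' if result == 'FAIL' else 'INFORMATIONAL'},
                'Title': image['TITLE_TEXT']['S'],
                'Description': image.get('NOTES', {}).get('S') or image['TITLE_TEXT']['S'],
                'Resources': [{'Type': 'AwsAccount',
                               'Id': 'AWS::::Account:{}'.format(account)}],
            })
        # BatchImportFindings accepts up to 100 findings per call;
        # chunk the list for larger batches
        if findings:
            securityhub.batch_import_findings(Findings=findings)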

Conclusion

In this post, I showed you how to containerize Prowler, run it manually, create a schedule with CloudWatch Events, and use custom Python scripts along with DynamoDB streams and Lambda functions to load Prowler findings into Security Hub. By using Security Hub, you can centralize and aggregate security configuration information from Prowler alongside findings from AWS and partner services.

From Security Hub, you can use custom actions to send one or a group of findings from Prowler to downstream services such as ticketing systems or to take custom remediation actions. You can also use Security Hub custom insights to create saved searches from your Prowler findings. Lastly, you can use Security Hub in a master-member format to aggregate findings across multiple accounts for centralized reporting.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Security Hub forum.

Want more AWS Security news? Follow us on Twitter.

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Security Specialty certification and is extremely passionate about cybersecurity, data privacy, and new emerging technologies, such as blockchain. He devotes personal time to research and advocacy about those same topics.

How to get started with security response automation on AWS

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-get-started-security-response-automation-aws/

At AWS, we encourage you to use automation to help quickly detect and respond to security events within your AWS environments. In addition to increasing the speed of detection and response, automation also helps you scale your security operations as you expand your workloads running on AWS. For these reasons, security automation is a key principle outlined in both the Well-Architected and Cloud Adoption frameworks as well as in the AWS Security Incident Response Guide.

In this blog post, you’ll learn to implement automated security response mechanisms within your AWS environments. This post will include common patterns, implementation considerations, and an example solution. Security response automation is a broad topic that spans many areas. The goal of this blog post is to introduce you to core concepts and help you get started.

A word from our lawyers: Please note that you are responsible for making your own independent assessment of the information in this post. This post: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors.

What is security response automation?

Security response automation is a planned and programmed action taken to achieve a desired state for an application or resource based on a condition or event. When you implement security response automation, you should adopt an approach that draws from existing security frameworks. Frameworks are published materials consisting of standards, guidelines, and best practices that help organizations manage cybersecurity-related risk. Using frameworks helps you achieve consistency and scalability and enables you to focus more on the strategic aspects of your security program. You should work with compliance professionals within your organization to understand any specific security frameworks that may also be relevant for your AWS environment.

Our example solution is based on the NIST Cybersecurity Framework (CSF), which is designed to help organizations assess and improve their ability to prevent, detect, and respond to security events. According to the CSF, “cybersecurity incident response” supports your ability to contain the impact of potential cybersecurity incidents. Although automation is not a CSF requirement, automating responses to events enables you to create repeatable, predictable approaches to monitoring and responding to threats.

The five main steps in the CSF are identify, protect, detect, respond, and recover. We’ve expanded the detect and respond steps to include automation and investigation activities.

Figure 1: The five steps in the CSF

The following definitions for each step in the diagram above are based on the CSF but have been adapted for our example in this blog post. Although we will focus on the detect, automate and respond steps, it’s important to understand the entire process flow.

  • Identify: Identify and understand the resources, applications, and data within your AWS environment.
  • Protect: Develop and implement appropriate controls and safeguards to ensure delivery of services.
  • Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event. This step includes the implementation of monitoring capabilities which will be discussed further in the next section.
  • Automate: Develop and implement planned, programmed actions that will achieve a desired state for an application or resource based on a condition or event.
  • Investigate: Perform a systematic examination of the security event to establish the root cause.
  • Respond: Develop and implement appropriate activities to take automated or manual actions regarding a detected security event.
  • Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a security event.

Security response automation on AWS

AWS CloudTrail, AWS Config, and Amazon EventBridge continuously record details about the resources and configuration changes in your AWS account. You can use this information to automatically detect resource changes and to react to deviations from your desired state.

Figure 2: Automated remediation flow

As shown in the diagram above, an automated remediation flow on AWS has three stages:

  • Monitor: Your automated monitoring tools collect information about resources and applications running in your AWS environment. For example, they might collect AWS CloudTrail information about activities performed in your AWS account, usage metrics from your Amazon EC2 instances, or flow log information about the traffic going to and from network interfaces in your Amazon Virtual Private Cloud (VPC).
  • Detect: When a monitoring tool detects a predefined condition—such as a breached threshold, anomalous activity, or configuration deviation—it raises a flag within the system. A triggering condition might be an anomalous activity detected by Amazon GuardDuty, a resource becoming out of compliance with an AWS Config Rule, or a high rate of blocked requests on an Amazon VPC security group or AWS WAF web access control list.
  • Respond: When a condition is flagged, an automated response is triggered that performs an action you’ve predefined—something intended to remediate or mitigate the flagged condition. Examples of automated response actions might include modifying a VPC security group, patching an Amazon EC2 instance, or rotating credentials.

You can use the event-driven flow described above to achieve many automated response patterns with varying degrees of complexity. Your response pattern could be as simple as invoking a single AWS Lambda function, or it could be a complex series of AWS Step Functions tasks with advanced logic. In this blog post, we’ll use two simple Lambda functions in our example solution.

How to define your response automation

Now that we’ve introduced the concept of security response automation, start thinking about security requirements within your environment that you’d like to enforce through automation. These design requirements might come from general best practices you’d like to follow, or they might be specific controls from compliance frameworks relevant for your business. Regardless, your objectives should be quantitative, not qualitative. Here are some examples of quantitative objectives:

  • Remote administrative network access to servers should be limited.
  • Server storage volumes should be encrypted.
  • AWS console logins should be protected by multi-factor authentication.

As an optional step, you can expand these objectives into user stories that define the conditions and remediation actions when there is an event. User stories are informal descriptions that briefly document a feature within a software system. User stories may be global and span across multiple applications or they may be specific to a single application. For example:

“Remote administrative network access to servers should be limited. Remote access ports include SSH TCP port 22 and RDP TCP port 3389. If open remote access ports are detected within the environment, they should be automatically closed and the owner will be notified.”

Once you’ve completed your user story, you can determine how to use automated remediation to help achieve these objectives in your AWS environment. User stories should be stored in a location that provides versioning support and can reference the associated automation code.

You should carefully consider the effect of your remediation mechanisms in order to prevent unintended impact on your resources and applications. Remediation actions such as instance termination, credential revocation, and security group modification can adversely affect application availability. Depending on the level of risk that’s acceptable to your organization, your automated mechanism might only provide a notification which can then be manually investigated prior to remediation. Once you’ve identified an automated remediation mechanism, you can build out the required components and test them in a non-production environment.

Sample response automation walkthrough

In the following section, we’ll walk you through an automated remediation for a simulated event that indicates potential unauthorized activity—the unintended disabling of CloudTrail logging. Outside parties might want to disable logging to prevent detection and recording of their unauthorized activity. Our response is to re-enable the CloudTrail logging and immediately notify the security contact. Here’s the user story for this scenario:

“CloudTrail logging should be enabled for all AWS accounts and regions. If CloudTrail logging is disabled, it will automatically be enabled and the security operations team will be notified.”

Note: The sample response automation below references Amazon EventBridge which extends and builds upon CloudWatch Events. Amazon EventBridge uses the same Amazon CloudWatch Events API, so the event structure and rules configuration are the same. This blog post uses base functionality that is identical in both EventBridge and CloudWatch Events.

Prerequisites

In order to use our sample remediation, you will need to enable Amazon GuardDuty and AWS Security Hub in the AWS Region you have selected. Both of these services include a 30-day free trial. See the AWS Security Hub pricing page and the Amazon GuardDuty pricing page for additional details.

Important: You’ll use AWS CloudTrail to test the sample remediation. Running more than one CloudTrail trail in your AWS account will result in charges based on the number of events processed while the trail is running. Charges for additional copies of management events recorded in a Region are applied based on the published pricing plan. To minimize the charges, follow the clean-up steps that we provide later in this post to remove the sample automation and delete the trail.

Deploy the sample response automation

In this section, we’ll show you how to deploy and test the CloudTrail logging remediation sample. Amazon GuardDuty generates the finding Stealth:IAMUser/CloudTrailLoggingDisabled when CloudTrail logging is disabled, and AWS Security Hub collects findings from GuardDuty using the standardized finding format mentioned earlier. We recommend that you deploy this sample into a non-production AWS account.

Select the Launch Stack button below to deploy a CloudFormation template with an automation sample in the us-east-1 Region. You can also download the template and implement it in another Region. The template consists of an Amazon EventBridge rule, an AWS Lambda function, and the IAM permissions necessary for both components to execute. It takes several minutes for the CloudFormation stack build to complete.

[Launch Stack button: opens the CloudFormation stack creation page]

  1. In the CloudFormation console, on the Select Template page, select Next.
  2. On the Specify Details page, provide the email address for a security contact. (For the purpose of this walkthrough, it should be an email address you have access to.) Then select Next.
  3. On the Options page, accept the defaults, then select Next.
  4. On the Review page, confirm the details, then select Create.
  5. While the stack is being created, check the inbox of the email address you provided in step 2. Look for an email message with the subject AWS Notification – Subscription Confirmation. Select the link in the body of the email to confirm your subscription to the Amazon Simple Notification Service (Amazon SNS) topic. You should see a success message similar to the screenshot below:
     
    Figure 3: SNS subscription confirmation

  6. Return to the CloudFormation console. Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE (as shown in figure 4), the solution is implemented and is ready for testing.
     
    Figure 4: CREATE_COMPLETE status

Test the sample automation

You’re now ready to test the automated response by creating a test trail in CloudTrail, then trying to stop it.

  1. From the AWS Management Console, choose Services > CloudTrail.
  2. Select Trails, then select Create Trail.
  3. On the Create Trail form:
    1. Enter a value for Trail name. We use test-trail in our example below.
    2. Under Management events, select Write-only (to minimize event volume).
       
      Figure 5: Create a CloudTrail trail

    3. Under Storage location, choose an existing S3 bucket or create a new one. Note that since S3 bucket names are globally unique, you must add characters (such as a random string) to the name. For example: my-test-trail-bucket-<random-string>.
  4. On the Trails page of the CloudTrail console, verify that the new trail has started. You should see a green checkmark in the Status column, as shown in figure 6.
     
    Figure 6: Verify new trail has started

  5. You’re now ready to act like an unauthorized user trying to cover their tracks! Stop the logging for the trail you just created:
    1. Select the new trail name to display its configuration page.
    2. Toggle the Logging switch in the top-right corner to OFF.
    3. When prompted with a warning dialog box, select Continue.
    4. Verify that the Logging switch is now off, as shown below.
       
      Figure 7: Verify logging switch is off

      You have now simulated a security event by disabling logging for one of the trails in the CloudTrail service. Within the next few seconds, the near real-time automated response will detect the stopped trail, restart it, and send an email notification. You can refresh the Trails page of the CloudTrail console to verify that the trail’s status is ON again.

      Within the next several minutes, the investigatory automated response will also begin. GuardDuty will detect the action that stopped the trail and enrich the data about the source of unexpected behavior. Security Hub will then ingest that information and optionally correlate with other security events.

      Follow the steps below to monitor Security Hub for the finding type TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled:

  6. In the AWS Management Console, choose Services > Security Hub.
    1. Select Findings in the left pane.
    2. Select the Add filters field, then select Type.
    3. Select EQUALS, paste TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled into the field, then select Apply.
    4. Refresh your browser periodically until the finding is generated.

      Figure 8: Monitor Security Hub for your finding

While you wait on that detection, let’s dig into the components of automation.

How the sample automation works

This example incorporates two automated responses: a near real-time workflow and an investigatory workflow. The near real-time workflow provides a rapid response to an individual event, in this case the stopping of a trail. The goal is to restore the trail to a functioning state and alert security responders as quickly as possible. The investigatory workflow still includes a response to provide defense in depth and also uses services that support a more in-depth investigation of the incident.

Figure 9: Sample automation workflow

In the near real-time workflow, Amazon EventBridge monitors for the undesired activity. When a trail is stopped, AWS CloudTrail publishes an event on the EventBridge bus. An EventBridge rule detects the trail-stopping event and invokes a Lambda function to respond to the event by restarting the trail and notifying the security contact via an Amazon Simple Notification Service (SNS) topic.
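
For reference, the trail-stopping event can be matched with an EventBridge rule pattern like the one in this sketch. The rule name is illustrative, and the CloudFormation template in this post creates the actual rule (and its Lambda target) for you:

import json

import boto3

events = boto3.client('events')

# Match CloudTrail StopLogging API calls delivered to the default event bus
pattern = {
    "source": ["aws.cloudtrail"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["cloudtrail.amazonaws.com"],
        "eventName": ["StopLogging"],
    },
}

events.put_rule(
    Name='detect-cloudtrail-stop-logging',  # illustrative rule name
    EventPattern=json.dumps(pattern),
)
# A target (the remediation Lambda function) would then be attached with put_targets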

In the investigatory workflow, CloudTrail logs are monitored for undesired activities. For example, if a trail is stopped, there will be a corresponding log record. GuardDuty detects this activity and retrieves additional data points about the source IP that executed the API call. Two common examples of those additional data points in GuardDuty findings include whether the API call came from an IP address on a threat list, or whether it came from a network not commonly used in your AWS account. An AWS Lambda function responds by restarting the trail and notifying the security contact. Finally, the finding is imported into AWS Security Hub for additional investigation.

AWS Security Hub imports findings from AWS security services such as GuardDuty, Amazon Macie, and Amazon Inspector, as well as from any third-party product integrations you’ve enabled. All findings are provided to Security Hub in the AWS Security Finding Format, which eliminates the need for data conversion. Security Hub correlates these findings to help you identify related security events and determine a root cause. Security Hub also publishes its findings to Amazon EventBridge to enable further processing by other AWS services such as AWS Lambda.

Respond step deep dive

Amazon EventBridge and AWS Lambda work together to respond to a security finding. Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and Software-as-a-Service (SaaS) applications without writing code. In this example, EventBridge identifies a Security Hub finding that requires action and invokes a Lambda function that performs remediation. As shown in figure 10, the Lambda function both notifies the security operator via SNS and restarts the stopped CloudTrail.

Figure 10: Sample “respond” workflow

To set this response up, we looked for an event to indicate that a trail had stopped or was disabled. We knew that the GuardDuty finding Stealth:IAMUser/CloudTrailLoggingDisabled is raised when CloudTrail logging is disabled. Therefore, we configured the default event bus to look for this event. You can learn more about all of the available GuardDuty findings in the user guide.

How the code works

When Security Hub publishes a finding to EventBridge, it includes full details of the incident as discovered by GuardDuty. The finding is published in JSON format. If you review the details of the sample finding, note that it has several fields that help you identify the specific events you’re looking for. Here are some of the relevant details:


{
   ...
   "source": "aws.securityhub",
   ...
   "detail": {
      "findings": [{
         ...
         "Types": [
            "TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled"
         ],
         ...
      }]
   }
}

You can build an event pattern using these fields, which an EventBridge filtering rule can then use to identify events and to invoke the remediation Lambda function. Below is a snippet from the CloudFormation template we provided earlier that defines that event pattern for the EventBridge filtering rule:


# pattern matches the nested JSON format of a specific Security Hub finding
EventPattern:
  source:
    - aws.securityhub
  detail-type:
    - "Security Hub Findings - Imported"
  detail:
    findings:
      Types:
        - "TTPs/Defense Evasion/Stealth:IAMUser-CloudTrailLoggingDisabled"

Once the rule is in place, EventBridge continuously scans the event bus for this pattern. When EventBridge finds a match, it invokes the remediating Lambda function and passes the full details of the event to the function. The Lambda function then parses the JSON fields in the event so that it can act as shown in this Python code snippet:


# extract trail ARN by parsing the incoming Security Hub finding (in JSON format)
trailARN = event['detail']['findings'][0]['ProductFields']['action/awsApiCallAction/affectedResources/AWS::CloudTrail::Trail']   

# description contains useful details to be sent to security operations
description = event['detail']['findings'][0]['Description']

The code also issues a notification to security operators so they can review the findings and insights in Security Hub and other services to better understand the incident and to decide whether further manual actions are warranted. Here’s the code snippet that uses SNS to send out a note to security operators:


#Send a notification that AWS CloudTrail logging has been disabled.
#snsclient is a boto3 SNS client and snsARN is the notification topic ARN,
#both set up earlier in the function.
snspublish = snsclient.publish(
	TargetArn=snsARN,
	Message='Automatically restarting CloudTrail logging. Event description: "%s"' % description
	)

While notifications to human operators are important, the Lambda function will not wait to take action. It immediately remediates the condition by restarting the stopped trail in CloudTrail. Here’s a code snippet that restarts the trail to reenable logging:


#Enable AWS CloudTrail logging.
#Requires boto3 and botocore.exceptions.ClientError to be imported.
try:
	client = boto3.client('cloudtrail')
	enablelogging = client.start_logging(Name=trailARN)
	logger.debug("Response on enable CloudTrail logging - %s" %enablelogging)
except ClientError as e:
	logger.error("An error occurred: %s" %e)

After the trail has been restarted, API activity is once again logged and can be audited. This can help provide relevant data for the remaining steps in the incident response process. The data is especially important for the post-incident phase, when your team analyzes lessons learned to prevent future incidents. You can also use this phase to identify additional steps to automate in your incident response.
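
To see how the snippets above fit together, here is a minimal sketch of the complete remediation function. Reading the SNS topic ARN from an environment variable named SNS_TOPIC_ARN is an assumption for this sketch; the deployed CloudFormation template configures the function for you:

import logging
import os

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Parse the Security Hub finding passed in by the EventBridge rule
    finding = event['detail']['findings'][0]
    trailARN = finding['ProductFields'][
        'action/awsApiCallAction/affectedResources/AWS::CloudTrail::Trail']
    description = finding['Description']

    # Notify security operations about the disabled trail
    boto3.client('sns').publish(
        TargetArn=os.environ['SNS_TOPIC_ARN'],  # assumed environment variable
        Message='Automatically restarting CloudTrail logging. Event description: "%s"' % description,
    )

    # Immediately restart the stopped trail
    try:
        boto3.client('cloudtrail').start_logging(Name=trailARN)
    except ClientError as error:
        logger.error('An error occurred: %s' % error)
        raise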

Clean up

After you’ve completed the sample security response automation, we recommend that you remove the resources created in this walkthrough example from your account in order to minimize any charges associated with the trail in CloudTrail and data stored in S3.

Important: Deleting resources in your account can negatively impact the applications running in your AWS account. Verify that applications and AWS account security do not depend on the resources you’re about to delete.

Here are the clean-up steps:

  1. Delete the CloudFormation stack.
  2. Delete the trail you created in CloudTrail.
  3. If you created an S3 bucket for CloudTrail logs, you can also delete that S3 bucket.
  4. New accounts can try GuardDuty at no cost for 30 days. You can suspend or disable GuardDuty before the free trial period ends to avoid charges.
  5. Security Hub comes with a 30-day free trial. You can avoid charges by disabling the service before the trial period is over.

Summary

You’ve learned the basic concepts and considerations behind security response automation on AWS and how to use Amazon EventBridge, Amazon GuardDuty, and AWS Security Hub to automatically re-enable AWS CloudTrail when it becomes disabled unexpectedly. As a next step, you may want to start building your own response automations and dive deeper into the AWS Security Incident Response Guide, the NIST Cybersecurity Framework (CSF), or the AWS Cloud Adoption Framework (CAF) Security Perspective. You can explore additional automatic remediation solutions on the AWS Security Blog. You can find the code used in this example on GitHub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about using this solution, start a thread in the EventBridge, GuardDuty, or Security Hub forums, or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Cameron Worrell

Cameron is a Solutions Architect with a passion for security and enterprise transformation. He joined AWS in 2015.

Alex Tomic

Alex is an AWS Enterprise Solutions Architect focused on security and compliance. He joined AWS in 2014.

Nathan Case

Nathan is a Senior Security Strategist, and joined AWS in 2016. He is always interested to see where our customers plan to go and how we can help them get there. He is also interested in intel, combined data lake sharing opportunities, and open source collaboration. In the end Nathan loves technology and that we can change the world to make it a better place.

Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/rely-employee-attributes-from-corporate-directory-create-fine-grained-permissions-aws/

In my earlier post Simplify granting access to your AWS resources by using tags on AWS IAM users and roles, I explained how to implement attribute-based access control (ABAC) in AWS to simplify permissions management at scale. In that scenario, I talked about relying on attributes on your IAM users and roles for access control in AWS. But more often, customers manage workforce user identities with an identity provider (IdP) and want to use identity attributes from their IdP for fine-grained permissions in AWS. In this post I introduce a new capability that enables you to do just that.

In AWS, you can configure your IdP to allow workforce users federated access to AWS resources using credentials from your corporate directory. Along with user credentials, your directory also stores user attributes such as cost center, department, and email address. Now you can configure your IdP to pass in user attributes as tags in federated AWS sessions. These are called session tags. You can then control access to AWS resources based on these session tags. Moreover, when user attributes change or new users are added to your directory, permissions automatically apply based on these attributes. For example, developers can federate into AWS using an IAM role, but can only access resources specific to their project. This is because you define permissions that require the project attribute from their IdP to match the project tag on AWS resources. Additionally, AWS logs these attributes in AWS CloudTrail, enabling security administrators to track the user identity for a given role session.

In this post, I introduce session tags and walk you through an example of how to use session tags for ABAC and tracking user activity.

What are session tags?

Session tags are attributes passed in the AWS session. You can use session tags for access control in IAM policies and for monitoring. These tags are not stored in AWS and are valid only for the duration of the session. You define session tags just like tags in AWS: a customer-defined key and an optional value.

How do I pass session tags in the AWS session?

One of the most widely used mechanisms for requesting a session in AWS is by assuming an IAM role. For user identities stored in an external directory, you can configure your SAML IdP in IAM to allow your users federated access to AWS using IAM roles. To understand how to set up SAML federation using an IdP, read AWS Federated Authentication with Active Directory Federation Services (ADFS). If you’re using IAM users, you can also request a session in AWS using the AssumeRole and GetFederationToken APIs, or using the AssumeRoleWithWebIdentity API for applications that require access to AWS resources.

For session tags, you can use all of the above-mentioned APIs to pass tags into your AWS session based on your use case. For details on how to use these APIs to pass session tags, please visit Tags in AWS Sessions.
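
As an illustration, here’s a sketch of passing session tags when assuming a role with boto3. The role ARN and tag values are placeholders, and the caller needs the sts:TagSession permission described in the next section:

import boto3

sts = boto3.client('sts')

response = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/MyProjectResources',  # placeholder
    RoleSessionName='example-session',
    Tags=[
        {'Key': 'project', 'Value': 'Automation'},
        {'Key': 'jobfunction', 'Value': 'SystemsEngineer'},
    ],
)

# Temporary credentials for a session tagged with project and jobfunction
credentials = response['Credentials']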

What permissions do I need to use session tags?

To perform any action in AWS, developers need permissions. For example, to assume a role, your developers need sts:AssumeRole permission. Similarly with session tags, we’re introducing a new action, sts:TagSession, that is required to pass session tags in the session. Additionally, you can require and control session tags using existing AWS conditions:

Action: sts:TagSession
Use case: Required to pass attributes as session tags when using the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken APIs.
Where to add: The role’s trust policy or the IAM user’s permissions policy, based on the API you are using to pass session tags.

Condition key: aws:RequestTag
Use case: Use this condition to require specific tags in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:TagKeys
Use case: Use this condition key to control the tag keys that are allowed in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:PrincipalTag
Use case: Use this condition in IAM policies to compare tags on AWS resources.
Actions that support the condition key: AWS global condition keys (all actions across all services support this condition key)

Note: The tables above explain only the additional use cases that the keys now support. Support for existing use cases, such as IAM users and roles, remains unchanged. For details, please visit AWS Global Condition Keys.

Now, I’ll show you how to create fine-grained permissions based on user attributes from your directory and how permissions automatically apply based on attributes when employees switch projects within your organization.

Example: Grant employees access to their project resources in AWS based on their job function

Consider a scenario where your organization has deployed Amazon EC2 and Amazon RDS instances in your AWS account for your company’s production web applications. Your systems engineers manage the EC2 instances and your database engineers manage the RDS instances. Both groups access AWS by federating into your AWS account from a SAML IdP. Your organization’s security policy requires employees to have access to manage only the resources related to their job function and the project they work on.

To meet these requirements, your cloud administrator, Michelle, implements attribute-based access control (ABAC) using the jobfunction and project attributes as session tags by following three steps:

  1. Michelle tags all existing EC2 and RDS instances with the corresponding project attribute.
  2. She creates a MyProjectResources IAM role and an IAM permissions policy for this role so that employees can access only the resources whose tags match their jobfunction and project attributes.
  3. She then configures your SAML IdP to pass the jobfunction and project attributes in the federated session when employees federate into AWS using the MyProjectResources role.

Let’s have a look at these steps in detail.

Step 1: Tag all the project resources

Michelle tags all the project resources with the appropriate project tag. This is important since she wants to create permission rules based on this tag to implement ABAC. To learn how to tag resources in EC2 and RDS, read tagging your Amazon EC2 resources and tagging Amazon RDS resources.

Step 2: Create an IAM role with permissions based on attributes

Next, Michelle creates an IAM role called MyProjectResources using the AWS Management Console or CLI. This is the role that your systems engineers and database engineers will assume when they federate into AWS to access and manage the EC2 and RDS instances respectively. To grant this role permissions, Michelle creates the following IAM policy and attaches it to the MyProjectResources role.

IAM Permissions Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": "rds:DescribeDBInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"rds:RebootDBInstance",
				"rds:StartDBInstance",
				"rds:StopDBInstance"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "DatabaseEngineer",
					"rds:db-tag/project": "${aws:PrincipalTag/project}"
				}
			}
		},
		{
			"Effect": "Allow",
			"Action": "ec2:DescribeInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"ec2:StartInstances",
				"ec2:StopInstances",
				"ec2:RebootInstances",
				"ec2:TerminateInstances"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "SystemsEngineer",
					"ec2:ResourceTag/project": "${aws:PrincipalTag/project}"
				}
			}
		}
	]
}

In the policy above, Michelle allows specific actions related to EC2 and RDS that the systems engineers and database engineers need to manage their project instances. In the condition element of the policy statements, Michelle adds a condition based on the jobfunction and project attributes to ensure engineers can access only the instances which belong to their jobfunction and have a matching project tag.

To ensure your systems engineers and database engineers can assume this role when they federate into AWS from your IdP, Michelle modifies the role’s trust policy to trust your SAML IdP as shown in the policy statement below. Since we also want to include session tags when engineers federate in, Michelle adds the new action sts:TagSession in the policy statement as shown below. She also adds a condition that requires the jobfunction and project attributes to be included as session tags when engineers assume this role.

Role Trust Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Federated": "arn:aws:iam::999999999999:saml-provider/ExampleCorpProvider"
			},
			"Action": [
				"sts:AssumeRoleWithSAML",
				"sts:TagSession"
			],
			"Condition": {
				"StringEquals": {
					"SAML:aud": "https://signin.aws.amazon.com/saml"
				},
				"StringLike": {
					"aws:RequestTag/project": "*",
					"aws:RequestTag/jobfunction": [
						"SystemsEngineer",
						"DatabaseEngineer"
					]
				}
			}
		}
	]
}

Step 3: Configuring your SAML IdP to pass the jobfunction and project attributes as session tags

Once Michelle creates the role and permissions policy in AWS, she configures her SAML IdP to include the jobfunction and project attributes as session tags in the SAML assertion when engineers federate into AWS using this role.

To pass attributes as session tags in the federated session, the SAML assertion must contain the attributes with the following prefix:

https://aws.amazon.com/SAML/Attributes/PrincipalTag

The example given below shows a part of the SAML assertion generated from my IdP with two attributes (project:Automation and jobfunction:SystemsEngineer) that we want to pass as session tags.


<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:project">
    <AttributeValue>Automation</AttributeValue>
</Attribute>
<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:jobfunction">
    <AttributeValue>SystemsEngineer</AttributeValue>
</Attribute>

Note: This sample only contains the new properties in the SAML assertion. There are additional required fields in the SAML assertion that must be present to successfully federate into AWS. To learn more about creating SAML assertions with session tags, visit configuring SAML assertions for the authentication response.

AWS identity partners such as Ping Identity, OneLogin, Auth0, ForgeRock, IBM, Okta, and RSA have validated the end-to-end experience for this new capability with their identity solutions, and we look forward to additional partners validating this capability. To learn more about how to use these identity providers for configuring session tags, please visit integrating third-party SAML solution providers with AWS. If you are using Active Directory Federation Services (ADFS) for SAML federation with AWS, then please visit Configuring ADFS to start using session tags for attribute-based access control.

Now, when your systems engineers and database engineers federate into AWS using the MyProjectResources role, they only get access to their project resources based on the project and jobfunction attributes passed in their federated session. Session tags enabled Michelle to define unique permissions based on user attributes without having to create and manage multiple roles and policies. This helps simplify permissions management in her company.

Permissions automatically apply when employees change projects

Consider the same example with a scenario where your systems engineer, Bob, switches from the automation project to the integration project. Due to this switch, Michelle sets Bob’s project attribute in the IdP to integration. The next time Bob federates into AWS, he automatically has access to resources in the integration project. Using session tags, permissions automatically apply when you update attributes or create new AWS resources with the appropriate attributes, without requiring any permissions updates in AWS.

Track user identity using session tags

When developers federate into AWS with session tags, AWS CloudTrail logs these tags to make it easier for security administrators to track the user identity of the session. To view session tags in CloudTrail, your administrator Michelle looks for the AssumeRoleWithSAML event in the eventName filter of CloudTrail. In the example below, Michelle has configured the SAML IdP to pass three session tags: project, jobfunction, and userID. When developers federate into your account, Michelle views the AssumeRoleWithSAML event in CloudTrail to track the user identity of the session using the session tags project, jobfunction, and userID as shown below:
 

Figure 1: Search for the logged events

Note: You can use session tags in conjunction with the instructions to track account activity to its origin using AWS CloudTrail to trace the identity of the session.

Summary

You can use session tags to rely on employee attributes from your corporate directory to create fine-grained permissions at scale in AWS and simplify your permissions management workflows. To learn more about session tags, please visit Tags in AWS Sessions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon IAM forum.

Want more AWS Security news? Follow us on Twitter.

Sulay Shah

Sulay is a Senior Product Manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Additional on-premises option for data localization with AWS

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/additional-on-premises-option-for-data-localization-with-aws/

Today, AWS released an updated resource — AWS Policy Perspectives-Data Residency — to provide an additional option for you if you need to store and process your data on premises. This white paper update discusses AWS Outposts, which offers a hybrid solution for customers that might find that certain workloads are better suited for on-premises management — whether for lower latency or other local processing needs.

Before this capability, you would select the nearest AWS Region to keep data in closer proximity. By extending AWS infrastructure and services to your on-premises environments, you can support workloads (including sensitive workloads) that need to remain on premises while leveraging the security and operational capabilities of AWS cloud services.

Read the updated whitepaper to learn why data residency doesn’t mean better data security, and to learn about an additional option available for you to address various data protection needs, including data localization.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Min Hyun

Min is the Global Lead for Growth Strategies at AWS. Her team’s mission is to set the industry bar in thought leadership for security and data privacy assurance in emerging technology, trends and strategy to advance customers’ journeys to AWS. View her other Security Blog publications here.

Digital signing with the new asymmetric keys feature of AWS KMS

Post Syndicated from Raj Copparapu original https://aws.amazon.com/blogs/security/digital-signing-asymmetric-keys-aws-kms/

AWS Key Management Service (AWS KMS) now supports asymmetric keys. You can create, manage, and use public/private key pairs to protect your application data using the new APIs via the AWS SDK. Similar to the symmetric key features we’ve been offering, asymmetric keys can be generated as customer master keys (CMKs) where the private portion never leaves the service, or as a data key where the private portion is returned to your calling application encrypted under a CMK. The private portion of an asymmetric CMK is used in AWS KMS hardware security modules (HSMs) designed so that no one, including AWS employees, can access the plaintext key material. AWS KMS supports the following asymmetric key types: RSA 2048, RSA 3072, RSA 4096, ECC NIST P-256, ECC NIST P-384, ECC NIST P-521, and ECC SECG P-256k1.

We’ve talked with customers and know that one popular use case for asymmetric keys is digital signing. In this post, I will walk you through an example of signing and verifying files using some of the new APIs in AWS KMS.

Background

A common way to ensure the integrity of a digital message as it passes between systems is to use a digital signature. A sender uses a secret along with cryptographic algorithms to create a data structure that is appended to the original message. A recipient with access to that secret can cryptographically verify that the message hasn’t been modified since the sender signed it. In cases where the recipient doesn’t have access to the same secret used by the sender for verification, a digital signing scheme that uses asymmetric keys is useful. The sender can make the public portion of the key available to any recipient to verify the signature, but the sender retains control over creating signatures using the private portion of the key.

Asymmetric keys are used for digital signature applications such as trusted source code, authentication/authorization tokens, document e-signing, e-commerce transactions, and secure messaging. AWS KMS supports what are known as raw digital signatures, where there is no identity information about the signer embedded in the signature object.

A common way to attach identity information to a digital signature is to use digital certificates. If your application relies on digital certificates for signing and signature verification, we recommend you look at AWS Certificate Manager and Private Certificate Authority. These services allow you to programmatically create and deploy certificates with keys to your applications for digital signing operations. A common application of digital certificates is TLS termination on a web server to secure data in transit.

Signing and verifying files with AWS KMS

Assume that you have an application A that sends a file to application B in your AWS account. You want the file to be digitally signed so that the receiving application B can verify it hasn’t been tampered with in transit. You also want to make sure only application A can digitally sign files using the key because you don’t want application B to receive a file thinking it’s from application A when it was really from a different sender that had access to the signing key. Because AWS KMS is designed so that the private portion of the asymmetric key pair used for signing cannot be used outside the service or by unauthenticated users, you’re able to define and enforce policies so that only application A can sign with the key.

To start, application A will submit either the file itself or a digest of the file to the AWS KMS Sign API under an asymmetric CMK. If the file is less than 4KB, AWS KMS will compute a digest for you as a part of the signing operation. If the file is greater than 4KB, you must send only the digest you created locally and you must tell AWS KMS that you’re passing a digest in the MessageType parameter of the request. You can use any of several hashing functions in your local environment to create a digest of the file, but be aware that the receiving application in account B will need to be able to compute the digest using the same hash function in order to verify the integrity of the file. In my example, I’m using SHA256 as the hash function. Once the digest is created, AWS KMS uses the private portion of the asymmetric CMK to encrypt the digest using the signing algorithm specified in the API request. The result is a binary data object, which we’ll refer to as “the signature” throughout this post.
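
For example, for files larger than 4KB, application A could compute the digest locally with Python’s hashlib; the file names below are placeholders that line up with the CLI examples later in this post:

import hashlib

# Compute the SHA-256 digest of the file to be signed
with open('example-file.txt', 'rb') as f:
    digest = hashlib.sha256(f.read()).digest()

# Save the binary digest for use with the Sign API (fileb://ExampleDigest)
with open('ExampleDigest', 'wb') as f:
    f.write(digest)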

Once application B receives the file with the signature, it must create a digest of the file. It then passes this newly generated digest, the signature object, the signing algorithm used, and the CMK keyId to the Verify API. AWS KMS uses the corresponding public key of the CMK with the signing algorithm specified in the request to verify the signature. Instead of submitting the signature to the Verify API, application B could verify the signature locally by acquiring the public key. This might be an attractive option if application B didn’t have a way to acquire valid AWS credentials to make a request of AWS KMS. However, this method requires application B to have access to the necessary cryptographic algorithms and to have previously received the public portion of the asymmetric CMK. In my example, application B is running in the same account as application A, so it can acquire AWS credentials to make the Verify API request. I’ll describe how to verify signatures using both methods in a bit more detail later in the post.

Creating signing keys and setting up key policy permissions

To start, you need to create an asymmetric CMK. When calling the CreateKey API, you’ll pass one of the asymmetric values for the CustomerMasterKeySpec parameter. In my example, I’m choosing a key spec of ECC_NIST_P256, because keys used with elliptic curve algorithms tend to be more efficient than those used with RSA-based algorithms. AWS KMS requires the signing algorithm to match the key spec, so a P-256 key pairs with the ECDSA_SHA_256 signing algorithm used later in this walkthrough.

As a part of creating your asymmetric CMK, you need to attach a resource policy to the key to control which cryptographic operations the AWS principals representing applications A and B can use. A best practice is to use a different IAM principal for each application in order to scope down permissions. In this case, you want application A to only be able to sign files, and application B to only be able to verify them. I will assume each of these applications are running in Amazon EC2, and so I’ll create a couple of IAM roles.

  • The IAM role for application A (SignRole) will be given kms:Sign permission in the CMK key policy
  • The IAM role for application B (VerifyRole) will be given kms:Verify permission in the CMK key policy

The stanza in the CMK key policy document to allow signing should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for digital signing",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/SignRole"},
	"Action": "kms:Sign",
	"Resource": "*"
}

The stanza in the CMK key policy document to allow verification should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:Verify",
	"Resource": "*"
}

Signing Workflow

Once you have created the asymmetric CMK and IAM roles, you’re ready to sign your file. Application A will create a message digest of the file and make a sign request to AWS KMS with the digest, the asymmetric CMK keyId, and the signing algorithm. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms sign \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest

I chose the ECDSA_SHA_256 signing algorithm for this example. See the Sign API specification for a complete list of supported signing algorithms.

After validating that the API call is authorized by the credentials available to SignRole, AWS KMS generates a signature over the digest and returns the CMK keyId, the signature, and the signing algorithm.
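
For reference, the same request in Python with boto3 might look like this sketch; the key ID is a placeholder, and the digest file is the one produced earlier:

import boto3

kms = boto3.client('kms')

with open('ExampleDigest', 'rb') as f:
    digest = f.read()

response = kms.sign(
    KeyId='1234abcd-12ab-34cd-56ef-1234567890ab',  # placeholder key ID
    Message=digest,
    MessageType='DIGEST',
    SigningAlgorithm='ECDSA_SHA_256',
)

# Save the binary signature to send alongside the file
with open('Signature', 'wb') as f:
    f.write(response['Signature'])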

Verify Workflow 1 — Calling the verify API

Once application B receives the file and the signature, it computes the SHA-256 digest over the copy of the file it received. It then makes a verify request to AWS KMS, passing this new digest, the signature it received from application A, the signing algorithm, and the CMK keyId. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms verify \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest \
	--signature fileb://Signature

After validating that the verify request is authorized, AWS KMS uses the public portion of the CMK and the signing algorithm specified in the request to check the signature against the digest received in the verify request. If the signature is valid, the API returns a SignatureValid boolean of True, indicating that the original digest created by the sender matches the digest created by the recipient. Because the original digest is unique to the original file, the recipient can know that the file was not tampered with during transit.

One advantage of using the AWS KMS Verify API is that the caller doesn’t have to keep track of the specific public key matching the private key used to create the signature; the caller only has to know the CMK keyId and signing algorithm used. Also, because all requests to AWS KMS are logged to AWS CloudTrail, you can audit that the signature and verification operations were both executed as expected. See the Verify API specification for more detail on available parameters.

Verify Workflow 2 — Verifying locally using the public key

Apart from using the Verify API directly, you can choose to retrieve the public key in the CMK using the AWS KMS GetPublicKey API and verify the signature locally. You might want to do this if application B needs to verify multiple signatures at a high rate and you don’t want to make a network call to the Verify API each time. In this method, application B makes a GetPublicKey request to AWS KMS to retrieve the public key. The CLI command to do this is below. Replace the key-id parameter with your CMK’s specific keyId.

aws kms get-public-key \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab>

Note that application B will need permission to make a GetPublicKey request to AWS KMS. The stanza in the CMK key policy document to allow the VerifyRole identity to download the public key should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow retrieval of the public key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:GetPublicKey ",
	"Resource": "*"
}

Once application B has the public key, it can use your preferred cryptographic provider to perform the signature verification locally. Application B needs to keep track of the public key and signing algorithm used for each signature object it will verify locally. Using the wrong public key (or the wrong signing algorithm) will cause the signature verification to fail.
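
Here’s a sketch of that local verification using the third-party cryptography package (an assumption for this example) with the DER-encoded public key retrieved from AWS KMS. It assumes the ECC_NIST_P256 CMK and ECDSA_SHA_256 algorithm used earlier; the key ID and file names are placeholders:

import boto3
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import load_der_public_key

# Retrieve the DER-encoded public key from AWS KMS (placeholder key ID)
kms = boto3.client('kms')
der_key = kms.get_public_key(
    KeyId='1234abcd-12ab-34cd-56ef-1234567890ab')['PublicKey']
public_key = load_der_public_key(der_key)

with open('example-file.txt', 'rb') as f:
    message = f.read()
with open('Signature', 'rb') as f:
    signature = f.read()

try:
    # verify() hashes the message with SHA-256 and checks the ECDSA signature
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print('Signature is valid')
except InvalidSignature:
    print('Signature is NOT valid')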

Availability and pricing

Asymmetric keys and operations in AWS KMS are available now in the Northern Virginia, Oregon, Sydney, Ireland, and Tokyo AWS Regions with support for other regions planned. Pricing information for the new feature can be found at the AWS KMS pricing page.

Summary

I showed you a simple example of how you can use the new AWS KMS APIs to digitally sign and verify an arbitrary file. By having AWS KMS generate and store the private portion of the asymmetric key, you can limit use of the key for signing only to IAM principals you define. OIDC ID tokens, OAuth 2.0 access tokens, documents, configuration files, system update messages, and audit logs are but a few of the types of objects you might want to sign and verify using this feature.

You can also perform encrypt and decrypt operations under asymmetric CMKs in AWS KMS as an alternative to using the symmetric CMKs available since the service launched. Similar to how you can ask AWS KMS to generate symmetric keys for local use in your application, you can ask AWS KMS to generate and return asymmetric key pairs for local use to encrypt and decrypt data. Look for a future AWS Security Blog post describing these use cases. For more information about asymmetric key support, see the AWS KMS documentation page.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about the asymmetric key feature, please start a new thread on the AWS KMS Discussion Forum.

Want more AWS Security news? Follow us on Twitter.

Raj Copparapu

Raj Copparapu is a Senior Product Manager Technical. He’s a member of the AWS KMS team and focuses on defining the product roadmap to satisfy customer requirements. He spent over 5 years innovating on behalf of customers to deliver products to help customers secure their data in the cloud. Raj received his MBA from Duke University’s Fuqua School of Business and spent his early career working as an engineer and a business intelligence consultant. In his spare time, Raj enjoys yoga and spending time with his kids.

Announcing AWS Managed Rules for AWS WAF

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/announcing-aws-managed-rules-for-aws-waf/

Building and deploying secure applications is critical work, and the threat landscape is always shifting. We’re constantly working to reduce the pain of maintaining a strong cloud security posture. Today we’re launching a new capability called AWS Managed Rules for AWS WAF that helps you protect your applications without needing to create or manage the rules directly. We’ve also made multiple improvements to AWS WAF with the launch of a new, improved console and API that makes it easier than ever to keep your applications safe.

AWS WAF is a web application firewall. It lets you define rules that give you control over which traffic to allow or deny to your application. You can use AWS WAF to help block common threats like SQL injections or cross-site scripting attacks. You can use AWS WAF with Amazon API Gateway, Amazon CloudFront, and Application Load Balancer. Today it’s getting a number of exciting improvements. Creating rules is more straightforward with the introduction of the OR operator, allowing evaluations that would previously require multiple rules. The API experience has been greatly improved, and complex rules can now be created and updated with a single API call. We’ve removed the limit of ten rules per web access control list (ACL) with the introduction of the WAF Capacity Unit (WCU). The switch to WCUs allows the creation of hundreds of rules: each rule added to a web ACL consumes capacity based on the type of rule being deployed, and each web ACL has a defined WCU limit.

Using the New AWS WAF

Let’s take a look at some of the changes and turn on AWS Managed Rules for AWS WAF. First, I’ll go to AWS WAF and switch over to the new version.

Next I’ll create a new web ACL and add it to an existing API Gateway resource on my account.

Now I can start adding some rules to our web ACL. With the new AWS WAF, the rules engine has been improved. Statements can be combined with AND, OR, and NOT operators, allowing for more complex rule logic.

Screenshot of the WAF v2 Boolean operators

I’m going to create a simple rule that blocks any request that uses the HTTP method POST. Another cool feature is support for multiple text transformations, so, for example, you could have all your requests transformed to decode HTML entities and then converted to lowercase.

JSON objects now define web ACL rules (and web ACLs themselves), making them versionable assets you can match with your application code. You can also use these JSON documents to create or update rules with a single API call.
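
As a sketch of what one of those JSON objects looks like, here is the block-POST demo rule expressed as the structure you would pass to the wafv2 APIs through boto3; the rule name, priority, and metric name are illustrative:

# Illustrative rule blocking HTTP POST requests, in the structure accepted
# by wafv2 APIs such as create_web_acl and update_web_acl
block_post_rule = {
    'Name': 'BlockPostRequests',
    'Priority': 0,
    'Statement': {
        'ByteMatchStatement': {
            'SearchString': b'POST',
            'FieldToMatch': {'Method': {}},
            'TextTransformations': [{'Priority': 0, 'Type': 'NONE'}],
            'PositionalConstraint': 'EXACTLY',
        }
    },
    'Action': {'Block': {}},
    'VisibilityConfig': {
        'SampledRequestsEnabled': True,
        'CloudWatchMetricsEnabled': True,
        'MetricName': 'BlockPostRequests',
    },
}
# Rules like this are passed in the Rules parameter of wafv2.create_web_acl(...)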

Using AWS Managed Rules for AWS WAF

Now let’s play around with something totally new: AWS Managed Rules. AWS Managed Rules give you instant protection. The AWS Threat Research Team maintains the rules, with new ones being added as additional threats are identified. Additional rule sets are available on the AWS Marketplace. Choose a managed rule group, add it to your web ACL, and AWS WAF immediately helps protect against common threats.

I’ve selected a rule group that protects against SQL attacks, and I’ve also enabled the core rule set, which covers some of the common threats and security risks described in the OWASP Top 10 publication. As soon as I create the web ACL and the changes are propagated, my app will be protected from a whole range of attacks such as SQL injections. Now let’s look at both rules that I’ve added to our ACL and see how things are shaping up.

Since my demo rule was quite simple, it doesn’t require much capacity. The managed rules use a bit more, but we’ve got plenty of room to add many more rules to this web ACL.

Things to Know

That’s a quick tour of the benefits of the new and improved AWS WAF. Before you head to the console to turn it on, there are a few things to keep in mind.

  • The new AWS WAF supports AWS CloudFormation, allowing you to create and update your web ACL and rules using CloudFormation templates.
  • There is no additional charge for using AWS Managed Rules. If you subscribe to managed rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller.
  • Pricing for AWS WAF has not changed.

As always, happy (and secure) building, and I’ll see you at re:Invent or on the re:Invent livestreams soon!

— Brandon

Ramp-Up Learning Guide available for AWS Cloud Security, Governance, and Compliance

Post Syndicated from Sireesh Pachava original https://aws.amazon.com/blogs/security/ramp-up-learning-guide-cloud-security-governance-compliance/

Cloud security is the top priority for AWS and for our customers around the world. It’s important that professionals have a way to keep up with this dynamically evolving area of cloud computing. Often, customers seek AWS guidance on cloud-specific security, governance, and compliance best practices, including skills upgrade plans. To address this need, AWS recently released the AWS Ramp-Up Learning Guide for AWS Cloud Security, Governance, and Compliance (PDF). AWS experts curated this learning plan to teach in-demand cloud skills and real-world knowledge that you can rely on to keep up with cloud security, governance, and compliance developments and grow your career.

The guide starts with AWS Cloud fundamentals and progresses all the way through the AWS Certified Security Specialty certification. You can use the guide to find answers to questions such as: What resources are available? How do I earn AWS credentials? In what order should I consume learning resources and training? Where do I find information on AWS events, blogs, and user groups to enhance my learning?

The list of resources includes free digital training offerings, classroom courses, videos, whitepapers, certifications, and other materials. The AWS Ramp-Up Guide is an extension to the AWS Training and Certification security learning path. To enroll in training and certification exams and track your progress, visit aws.training and set up a free account.

For a training plan customized for your requirements, contact your AWS Account Manager or contact us at aws.amazon.com/contact-us/aws-training/.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sireesh Pachava

Sai Sireesh Pachava is a senior leader in the AWS Professional Services, Global Security, Risk & Compliance (SRC) practice. He specializes in solving complex issues across strategy, business risk, compliance, security, digital, and cloud. Prior to AWS, he held leadership roles at Russell Investments, Microsoft, Reuters, and KPMG. He holds an M.S. and MBA. His certifications include CRISC, CCP, US FedRAMP Level 300, and Internet Law. He’s a pro-bono Seattle director for the risk professional non-profit association PRMIA.

Kate Behbehani

Kate is a Senior Product Marketing leader at AWS. Previously, Kate served as Director, Product Marketing for Veritas’s Hybrid Cloud Data Management & Symantec’s SMB Data Protection solutions, where she developed go-to-market strategy & marketing programs. She’s held numerous senior positions in enterprise marketing & channel marketing throughout the UK, EMEA, & APJ. She has a bachelor’s degree from Southampton University and a Post Graduate Diploma in Marketing from the Chartered Institute of Marketing.

How to set up Sign in with Apple for Amazon Cognito

Post Syndicated from Jason Cai original https://aws.amazon.com/blogs/security/how-to-set-up-sign-in-with-apple-for-amazon-cognito/

Amazon Cognito user pools enables you to add user sign-in and sign-up to your mobile and web applications using a secure and scalable user directory. With Amazon Cognito user pools, your end users can sign in using a user name or password, or with a third-party identity service, such as Facebook or Google. The process of using a third-party identity service is called federation. With federation, you can build applications that retrieve information about your end users that they have provided to another service and have consented to give to your applications.

Amazon Cognito user pools now supports Sign in with Apple as an identity provider (IdP). You can now federate users using the Sign in with Apple service, map these users to a user directory, and retrieve standard authentication tokens from a user pool after the user authenticates with Apple using their Apple ID credentials.

Much like login with Facebook or Google, Sign in with Apple acts as an authorization server and verifies an end user with their Apple ID credentials. Sign in with Apple is built on the OpenID Connect (OIDC) protocol. As of this writing, there are a few notable differences between Sign in with Apple and other OpenID Providers.

  • Using Sign in with Apple, an end user can choose whether to share the email linked to their Apple ID or use a generated one provided by Apple. The generated email will be of the form “<randomstring>@privaterelay.appleid.com”.
  • Unlike other identity providers, Sign in with Apple only honors the scopes requested for an end user on their first authentication through the service for the app configured on Apple’s developer portal. In other words, if you start requesting name after an end user has authenticated, for example, that information will not be returned.
  • Sign in with Apple returns the requested scopes in the initial return from their authorization endpoint for the first user authentication; however, only the email associated with the Apple ID is returned in a trusted form via the ID token.

How to set up Sign in with Apple and associate it with an Amazon Cognito user pool

The prerequisites for setting up the IdP end-to-end are:

  • An Amazon Cognito user pool with an application client
  • A domain that is associated with the user pool
  • An Apple ID with two-factor authentication enabled

Step 1: Set up Sign in with Apple service in Apple’s Developer portal

  1. Enroll in the Apple Developer Program with an Apple ID and then sign in using it.
  2. On the main developer portal page, select Certificates, Identifiers & Profiles.
  3. On the left navigation bar, select Identifiers.
  4. On the Identifiers page, select the + icon.
  5. On the Register a New Identifier page, select App IDs.
  6. On the Register an App ID page, under App ID Prefix, take note of the Team ID value.
  7. Select the operating system the app will be run on (choose macOS for web-based apps).
  8. Provide a description in the Description text box.
  9. Provide a string for identifying the app under Bundle ID.
  10. Under Capabilities, select Sign in with Apple, and then select either Enable as a primary App ID (default) for use in a single Apple app or Group with an existing primary App ID for use in multiple Apple apps.
  11. Select Continue, review the configuration, and then select Register.
  12. On the Identifiers page, on the right, select App IDs, and then select Services IDs.
  13. Select the + icon and, on the Register a New Identifier page, select Services IDs.
  14. On the Register a Services ID page, select the Sign in with Apple checkbox to enable the service, and then select Configure.
  15. Select the App ID that you created in step 1.1.
  16. Under Web Domain, put the domain associated with your user pool.

    NOTE: You do not have to verify the domain because the verification is required for a transaction method that Amazon Cognito does not use.

  17. Under Return URLs, type https://<your domain>/oauth2/idpresponse, select Add, and then select Save.
  18. Provide a description in the Description text box.
  19. Provide an identifier in the Identifier text box.

    Important: Make a note of this identifier because you will need it later.

    Figure 1: Provide an identifier

  20. Select Continue, review the information, and then select Register.
  21. On the left navigation bar, select Keys, and on the new page, select the + icon.
  22. On the Register a New Key page, select the check box next to Sign in with Apple.
  23. Select the App ID you created in 1.1 and then select Save.
  24. Provide a key name (can be anything).
  25. Select Continue, review the information, and then select Register.
  26. On the page you are redirected to, take note of the Key ID and download the .p8 file containing the private key.

Step 2: Set up the Sign in with Apple IdP in Amazon Cognito user pools console

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and then select the user pool that you will be using with Sign in with Apple.
  2. Under Federation, under the Identity providers tab, select Sign in with Apple.
  3. Provide the Apple Services ID, Team ID, Key ID, and private key for the Sign in with Apple application along with the desired scopes.

    Note: The private key is provided in the .p8 file; the contents are plain text. You can provide either the file or the contents within the file for the private key. If you prefer to script this configuration instead of using the console, see the sketch after this list.

  4. Select the Attribute mapping tab, and then select the Apple tab.
  5. Under Capture, select the checkboxes next to the Apple attributes you want, and under User pool attribute, select the user pool attribute that will receive each Apple attribute’s value and that you want returned in the tokens from Amazon Cognito.

    Figure 2: Select checkboxes and user pool attribute

  6. To enable your app client to allow federation through the Sign in with Apple IdP, under App integration, on the App client settings tab, find the app client that you want to allow Sign in with Apple and select the Sign in with Apple check box.
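If you’d rather automate Step 2 than click through the console, the same configuration can be applied with the AWS SDK. The following is a minimal sketch using boto3 in Python; the user pool ID, key file path, and scopes are placeholder assumptions, and the ProviderDetails keys shown reflect the Sign in with Apple provider type as we understand it, so confirm them against the current Amazon Cognito API reference before relying on this.

import boto3

cognito_idp = boto3.client("cognito-idp")

# Placeholder values throughout -- substitute your own.
with open("AuthKey_XYZ987WXYZ.p8") as f:  # private key downloaded in step 1.26
    private_key = f.read()

cognito_idp.create_identity_provider(
    UserPoolId="us-east-1_EXAMPLE",             # hypothetical user pool ID
    ProviderName="SignInWithApple",
    ProviderType="SignInWithApple",
    ProviderDetails={
        "client_id": "com.example.serviceid",   # the Services ID from step 1.19
        "team_id": "ABC123DEFG",                # the Team ID from step 1.6
        "key_id": "XYZ987WXYZ",                 # the Key ID from step 1.26
        "private_key": private_key,
        "authorize_scopes": "email name",
    },
    AttributeMapping={"email": "email"},        # step 2.5, expressed as an API call
)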

Step 3: Get started with your application

  1. To test that you have everything configured correctly, under the configured app client, select the Launch Hosted UI link to bring you to a sample login page. Your configured Sign in with Apple provider will be displayed on this page through a button labelled Continue with Apple.

    Figure 3: “Continue with Apple” button

  2. (Optional) Perform a test authentication to ensure you have everything configured correctly on both the Apple and Amazon Cognito sides.

When a user federates using Sign in with Apple, the interactions between the end user, the Amazon Cognito app client, and Sign in with Apple look like this:

Figure 4: Federation flow

  1. New user goes to app and selects Sign in with Apple
  2. App redirects to Apple authentication web page
  3. Apple requests Apple ID credentials
  4. User provides credentials
  5. Apple requests consent for information
  6. User chooses share/don’t share email (if requested)
  7. Apple redirects back to the Amazon Cognito app client with an authorization code
  8. Amazon Cognito requests an ID token from Apple using the authorization code, client ID, and a generated client secret (see the sketch after this list)
  9. Apple returns an ID token response containing the requested scopes
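Amazon Cognito performs steps 8 and 9 on your behalf, so your application never calls Apple’s token endpoint directly. To make the mechanics concrete, the following is an illustrative sketch of that exchange in Python, assuming the PyJWT (with the cryptography package) and requests libraries; all identifiers are placeholders. The client secret in step 8 is not a static string but a short-lived ES256 JWT signed with the .p8 private key, which is why Amazon Cognito asks for the key rather than a secret.

import time

import jwt        # PyJWT, with cryptography installed for ES256
import requests

TEAM_ID = "ABC123DEFG"                 # placeholder Team ID
SERVICES_ID = "com.example.serviceid"  # placeholder Services ID (the OAuth client_id)
KEY_ID = "XYZ987WXYZ"                  # placeholder Key ID

with open("AuthKey_XYZ987WXYZ.p8") as f:
    private_key = f.read()

# Step 8's "generated client secret" is a short-lived JWT signed with the private key.
now = int(time.time())
client_secret = jwt.encode(
    {
        "iss": TEAM_ID,
        "iat": now,
        "exp": now + 300,              # a few minutes is enough for the exchange
        "aud": "https://appleid.apple.com",
        "sub": SERVICES_ID,
    },
    private_key,
    algorithm="ES256",
    headers={"kid": KEY_ID},
)

# Exchange the authorization code from step 7 for the ID token of step 9.
resp = requests.post(
    "https://appleid.apple.com/auth/token",
    data={
        "grant_type": "authorization_code",
        "code": "<authorization code from the redirect>",  # placeholder
        "client_id": SERVICES_ID,
        "client_secret": client_secret,
        "redirect_uri": "https://<your domain>/oauth2/idpresponse",
    },
)
id_token = resp.json().get("id_token")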

Tips for using Sign in with Apple in your application

  • If you want to revoke the private key associated with the Sign in with Apple service, create a new private key in the Apple developer portal and provide it to Amazon Cognito prior to revoking the old key. Doing so will ensure that you do not invalidate any ongoing end-user authentication on Apple’s side.
  • If you decide to increase the requested scopes and want the additional information from existing users, those users will have to go to appleid.apple.com and, under Apps & Websites Using Apple ID, select the application, select Stop using Apple ID, and then federate again using Sign in with Apple.
  • The name provided by Sign in with Apple is not verified in any manner and should only be used for non-essential features; for example, a welcome message on the landing UI of your app after an end user logs in.
  • If you get an “invalid redirect_url” error message on Apple’s authentication page and the redirect URL in the request is correct, check that you’ve provided the Service Identifier and not the Application Identifier for the Sign in with Apple IdP settings in Amazon Cognito user pools.

For more information, see Adding Social Identity Providers to a User Pool in the Amazon Cognito Developer Guide. You can reach us by posting to the Amazon Cognito forums. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Jason Cai

Jason is a Software Development Engineer for Amazon Cognito. He holds a Master of Science degree in Electrical and Computer Engineering with a focus in Software Engineering and Systems from The University of Texas at Austin.

Use attribute-based access control with AD FS to simplify IAM permissions management

Post Syndicated from Louay Shaat original https://aws.amazon.com/blogs/security/attribute-based-access-control-ad-fs-simplify-iam-permissions-management/

AWS Identity and Access Management (IAM) allows customers to provide granular access control to resources in AWS. One approach to granting access to resources is to use attribute-based access control (ABAC) to centrally govern and manage access to your AWS resources across accounts. ABAC simplifies your authorization strategy by letting you grant access to groups of resources, as specified by tags, instead of managing long lists of individual resources. The new ability to include tags in sessions, combined with the ability to tag IAM users and roles, means that you can now incorporate user attributes from your AD FS environment as part of your tagging and authorization strategy.

In other words, you can use ABAC to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, as an administrator you can use a single IAM policy that grants developers in your organization access to AWS resources that match the developers’ project tag. As the team of developers adds resources to projects, permissions are automatically applied based on attributes (tags, in this case). As a result, each new resource that gets added requires no update to the IAM permissions policy.

In this blog post, I walk you through how to enable AD FS to pass tags as part of the SAML 2.0 token, so that you can enable ABAC for your AWS resources.

AD FS federated authentication process

The following diagram describes the process that a user follows to authenticate to AWS by using Active Directory and AD FS as the identity provider:
 

Figure 1: AD FS federation to AWS

  1. A corporate user accesses the corporate Active Directory Federation Services (AD FS) portal sign-in page and provides their Active Directory authentication credentials.
  2. AD FS authenticates the user against Active Directory.
  3. Active Directory returns the user’s information, including Active Directory group membership information.
  4. AD FS dynamically builds a list of Amazon Resource Names (ARNs) for IAM Roles in one or more AWS accounts; these mappings are defined in advance by the administrator and rely on user attributes and Active Directory group memberships.
  5. AD FS sends a signed SAML 2.0 token to the user’s browser with a redirect to post the token to AWS Security Token Service (STS), including the attributes that you define in the claim rules.
  6. AWS STS returns temporary credentials via AssumeRoleWithSAML (see the sketch after this list).
  7. The user is authenticated and provided access to the AWS Management Console.
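In the browser flow above, the signed SAML token is posted to AWS for you; for programmatic access, the step 6 exchange can also be made directly against AWS STS. The following is a minimal sketch using boto3 in Python, assuming you already hold a base64-encoded SAML assertion; the ARNs are placeholders.

import boto3

sts = boto3.client("sts")

# Placeholder ARNs and assertion -- substitute your own.
response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/ADFS-Developers",
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/ADFS",
    SAMLAssertion="<base64-encoded SAML assertion>",
)
credentials = response["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken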

Prerequisites

Attribute-based access control in AWS relies on the use of tags for access-control decisions, so it’s important to have a tagging strategy in place for your resources. For guidance, see AWS Tagging Strategies.

Implementing ABAC enables organizations to enhance the use of tags from an operational and billing construct to a security construct. Ensuring that tagging is enforced and secure is essential to an enterprise-wide strategy.

For more information about enforcing a tagging policy, see the blog post Enforce Centralized Tag Compliance Using AWS Service Catalog, DynamoDB, Lambda, and CloudWatch Events.

AD FS session tagging setup

After you’ve set up AD FS federation to AWS, you can enable additional attributes to be sent as part of the SAML token. For information about how to enable AD FS, see the blog post AWS Federated Authentication with Active Directory Federation Services (AD FS).

Follow these steps to send standard Active Directory attributes to AWS in the SAML token:

  1. Open Server Manager, choose Tools, then choose AD FS Management.
  2. Under Relying Party Trusts, choose AWS.
  3. Choose Edit Claim Issuance Policy, choose Add Rule, choose Send LDAP Attributes as Claims, then choose Next.
  4. On the Edit Rule page, add the requested details.
    For example, to create a Department Attribute claim rule, add the following details:

    • Claim rule name: Department Attribute
    • Attribute Store: Active Directory
    • LDAP Attribute: Department
    • Outgoing Claim Type:
      https://aws.amazon.com/SAML/Attributes/PrincipalTag:<department> (where <department> is the tag that will be passed in the session)

     

    Figure 2: Claim Rule

  5. Repeat the previous step for each attribute you want to send, modifying the details as necessary. For more information about how to configure claim rules, see Configuring Claim Rules in the Windows Server 2012 AD FS Deployment Guide.

Send custom attributes to AWS as part of a federated session

Session tags in AWS can be derived from Active Directory custom attributes as well as from standard attributes, as demonstrated in the example above. For more information about custom attributes, please see How to Create a Custom Attribute in Active Directory.

To confirm that AD FS is configured correctly, follow these steps:

  1. Perform an identity provider (IdP) initiated authentication. Go to the following URL: https://<your.domain.name>/adfs/ls/idpinitiatedsignon.aspx, where <your.domain.name> is the DNS name of your AD FS server.
  2. Select Sign in to one of the following sites, then choose AWS from the dropdown:
     
    Figure 3: Choose AWS

  3. Choose Sign in, then enter your credentials.
  4. After you’ve successfully logged into AWS, navigate to AWS CloudTrail events within 15 minutes of your login event.
  5. Filter on AssumeRoleWithSAML, then locate your login event and look under principalTags to ensure you see the tags that you configured. (This check can also be scripted; see the sketch below.)
     
    Figure 4: CloudTrail – AssumeRoleWithSAML Event
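The CloudTrail check in step 5 can be scripted as well. The following is a minimal sketch using boto3 in Python; the region and the 15-minute window are assumptions you should adjust to your environment.

import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region

# Look for federated sign-ins within the last 15 minutes.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "AssumeRoleWithSAML"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(minutes=15),
)
for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    # principalTags appears in the event's requestParameters when session tags were sent.
    print(detail.get("requestParameters", {}).get("principalTags"))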

    After you see the tags were sent, you’re ready to use the tags to build your IAM policies.

The following is an example policy that uses multiple tags.

Example: grant IAM users access to your AWS resources by using tags

In this example, I assume that two tags are passed from AD FS to enable you to build your ABAC tagging strategy: Project and Department. The example assumes that you have multiple teams of developers who need permissions to start and stop specific Amazon Elastic Compute Cloud (Amazon EC2) instances, based on their project allocation. Because these EC2 instances are part of Amazon EC2 Auto Scaling groups, they come and go depending on scaling conditions, so it would be impractical to write policies that refer to lists of individual EC2 instances; writing policies against the tags on the EC2 instances is a more manageable approach. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element, and all resources in the Resource element of the policy. In the Condition element of the policy, I use two conditions:

  • A matching statement where the resource tag ec2:ResourceTag/project matches the principal tag key aws:PrincipalTag/project.
  • A matching statement where the resource tag ec2:ResourceTag/department matches the principal tag key aws:PrincipalTag/department.

This ensures that the principal can start and stop an instance only if the values of the project and department tags on the resource match the values of the corresponding tags on the principal. Attaching this policy to your developer roles or groups simplifies permissions management, because you only need to manage a single policy for all your development teams that require permissions to start and stop instances, and you can rely on tag values to specify the resources.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/project": "${aws:PrincipalTag/project}",
                    "ec2:ResourceTag/department": "${aws:PrincipalTag/department}"
                }
            }
        }
    ]
}

This policy will ensure that the user can only start and stop EC2 instances for the resources that are assigned to their department and project.
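For this policy to take effect, the EC2 instances must carry matching project and department tags. With Auto Scaling you would typically propagate tags through the group itself; for illustration, the following is a minimal sketch of tagging an existing instance directly with boto3 in Python, with placeholder values.

import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID and tag values -- substitute your own.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "project", "Value": "falcon"},
        {"Key": "department", "Value": "engineering"},
    ],
)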

Conclusion

In this post, I’ve shown how you can enable AD FS to pass tags as part of the SAML token, so that you can enable ABAC for your AWS resources to simplify permissions management at scale.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security news? Follow us on Twitter.

Louay Shaat

Louay is a Solutions Architect with AWS based out of Melbourne. He spends his days working with customers, from startups to the largest of enterprises helping them build cool new capabilities and accelerating their cloud journey. He has a strong focus on Security and Automation helping customers improve their security, risk, and compliance in the cloud.

New for Identity Federation – Use Employee Attributes for Access Control in AWS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-for-identity-federation-use-employee-attributes-for-access-control-in-aws/

When you manage access to resources on AWS or many other systems, you most probably use Role-Based Access Control (RBAC). When you use RBAC, you define access permissions to resources, group these permissions in policies, assign policies to roles, and assign roles to entities such as a person, a group of persons, a server, or an application. Many AWS customers told us they do this to simplify granting access permissions to related entities, such as persons sharing similar business functions in the organisation.

For example, you might create a role for a finance database administrator and give that role access to the tables and compute resources necessary for finance. When Alice, a database admin, moves into that department, you assign her the finance database administrator role.

On AWS, you use AWS Identity and Access Management (IAM) permissions policies and IAM roles to implement your RBAC strategy.

The multiplication of resources makes this approach difficult to scale. When a new resource is added to the system, system administrators must add permissions for that new resource to all relevant policies. How do you scale this to thousands of resources and thousands of policies? How do you verify that a change in one policy does not grant unnecessary privileges to a user or application?

Attribute-Based Access Control
To simplify the management of permissions, in the context of an evergrowing number of resources, a new paradigm emerged: Attribute-Based Access Control (ABAC). When you use ABAC, you define permissions based on matching attributes. You can use any type of attributes in the policies: user attributes, resource attributes, environment attributes. Policies are IF … THEN rules, for example: IF user attribute role == manager THEN she can access file resources having attribute sensitivity == confidential.

Using ABAC allows you to scale your permission system, because you no longer need to update policies when adding resources. Instead, you ensure that resources have the proper attributes attached to them. ABAC also lets you manage fewer policies, because you do not need to create policies per job role.

On AWS, attributes are called tags. You can attach tags to resources such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, AWS Identity and Access Management (IAM) users, and many others. The ability to tag resources, combined with the ability to define permission conditions on tags, effectively allows you to adopt the ABAC paradigm to control access to your AWS resources (see the sketch below).
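As a concrete illustration, a tag can be attached to an IAM role just as it can to an infrastructure resource. The following is a minimal sketch with boto3 in Python; the role name and tag value are placeholders.

import boto3

iam = boto3.client("iam")

# Attach a CostCenter attribute (tag) to an IAM role -- placeholder values.
iam.tag_role(
    RoleName="developers",
    Tags=[{"Key": "CostCenter", "Value": "blue"}],
)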

You can learn more about how to use ABAC permissions on AWS by reading the new ABAC section of the documentation, taking the tutorial, or watching Brigid’s session at re:Inforce.

This was a big step, but it only worked if your user attributes were stored in AWS. Many AWS customers manage identities (and their attributes) in another source and use federation to manage AWS access for their users.

Pass in Attributes for Federated Users
We’re excited to announce that you can now pass user attributes in the AWS session when your users federate into AWS, using standards-based SAML. You can now use attributes defined in external identity systems as part of attribute-based access control decisions within AWS. Administrators of the external identity system manage user attributes and define the attributes to pass in during federation. The attributes you pass in are called “session tags”. Session tags are temporary tags that are only valid for the duration of the federated session.

Granting access to cloud resources using ABAC has several advantages. One of them is that you have fewer roles to manage. For example, imagine a situation where Bob and Alice share the same job function but belong to different cost centers, and you want to grant access only to resources belonging to each individual’s cost center. With ABAC, only one role is required, instead of two roles with RBAC. Alice and Bob assume the same role, and the policy grants access to resources whose cost center tag value matches the principal’s cost center tag value. Now imagine you have over 1,000 people across 20 cost centers: ABAC can reduce the cost center roles from 20 to 1.

Let us consider another example. Let’s say your systems engineer configures your external identity system to include CostCenter as a session tag when developers federate into AWS using an IAM role. All federated developers assume the same role, but are granted access only to AWS resources belonging to their cost center, because permissions apply based on the CostCenter tag included in their federated session and on the resources.

Let’s illustrate this example with the diagram below:


In the figure above, blue, yellow, and green represent the three cost centers my workforce users are attached to. To set up ABAC, I first tag all project resources with their respective CostCenter tags and configure my external identity system to include the CostCenter tag in the developer session. The IAM role in this scenario grants access to project resources based on the CostCenter tag. The IAM permissions might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:DescribeInstances"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances","ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}

The access will be granted (Allow) only when the condition matches: when the value of the resources’ CostCenter tag matches the value of the principal’s CostCenter tag. Now, whenever my workforce users federate into AWS using this role, they only get access to the resources belonging to their cost center based on the CostCenter tag included in the federated session.

If a user switches from cost center green to blue, your system administrator will update the external identity system with CostCenter = blue, and permissions in AWS automatically apply to grant access to the blue cost center AWS resources, without requiring permissions update in AWS. Similarly, when your system administrator adds a new workforce user in the external identity system, this user immediately gets access to the AWS resources belonging to her cost center.
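In the SAML scenario described here, the session tags travel inside the assertion that your identity system builds, so nothing extra is passed at assume time. To make session tags concrete, the following is a minimal sketch of the equivalent direct API call, where the caller passes the tag explicitly on sts:AssumeRole; the ARN and values are placeholders.

import boto3

sts = boto3.client("sts")

# Pass a session tag directly on AssumeRole -- placeholder values.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/cost-center-access",
    RoleSessionName="alice",
    Tags=[{"Key": "CostCenter", "Value": "blue"}],
)
# The returned temporary credentials carry CostCenter=blue for this session,
# so the policy's aws:PrincipalTag/CostCenter condition resolves to "blue".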

We have worked with Auth0, ForgeRock, IBM, Okta, OneLogin, Ping Identity, and RSA to ensure that the attributes defined in their systems are correctly propagated to AWS sessions. You can refer to their published guidelines on configuring session tags for AWS for more details. If you are using another identity provider, you may still be able to configure session tags if it supports the industry standards SAML 2.0 or OpenID Connect (OIDC). We look forward to working with additional identity providers to certify session tags with their identity solutions.

Session tags are available in all AWS Regions today at no additional cost. You can read our new session tags documentation page to follow step-by-step instructions to configure an ABAC-based permission system.

— seb

15 additional AWS services receive DoD Impact Level 4 and 5 authorization

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/15-additional-aws-services-receive-dod-impact-level-4-and-5-authorization/

I’m pleased to announce that the Defense Information Systems Agency (DISA) has extended the Provisional Authorization to Operate (P-ATO) of AWS GovCloud (US) Regions for Department of Defense (DoD) workloads at DoD Impact Levels (IL) 4 and 5 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). Our authorizations at DoD IL 4 and IL 5 allow DoD Mission Owners to process unclassified, mission critical workloads for National Security Systems in the AWS GovCloud (US) Regions.

AWS successfully completed an independent, third-party evaluation that confirmed we effectively implement over 400 security controls using applicable criteria from NIST SP 800-53 Rev 4, the US General Services Administration’s FedRAMP High baseline, the DoD CC SRG, and the Committee on National Security Systems Instruction No. 1253 at the High Confidentiality, High Integrity, and High Availability impact levels.

In addition to a P-ATO extension through November 2020 for the existing 24 services, DISA authorized an additional 15 AWS services at DoD Impact Levels 4 and 5 as listed below.

Newly authorized AWS services at DoD Impact Levels 4 and 5

With these additional 15 services, we can now offer AWS customers a total of 39 services authorized to store, process, and analyze DoD mission critical data at Impact Level 4 and Impact Level 5.

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Tyler Harding

Tyler is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.