Tag Archives: Intermediate (200)

Measure Effectiveness of Virtual Training in Real Time with AWS AI Services

Post Syndicated from Rajeswari Malladi original https://aws.amazon.com/blogs/architecture/measure-effectiveness-of-virtual-training-in-real-time-with-aws-ai-services/

According to International Data Corporation (IDC), worldwide spending on digital transformation will reach $2.3 trillion in 2023. As organizations adopt digital transformation, training becomes an important part of that journey. Whether it is internal training to upskill an existing workforce or packaged content for commercial use, training needs to be efficient and cost effective. With the advent of education technology, it is now common practice to deliver training via digital platforms. This makes training accessible to a larger population and keeps costs down, but the training also needs to be interactive and effective. According to a recent article published by Forbes, immersive education and data-driven insights are among the top five Education Technology (EdTech) innovations, and they are key characteristics of an effective training experience.

An earlier blog series explored how to build a virtual trainer on AWS using Amazon Sumerian. That series illustrated how to build an immersive and highly engaging virtual training experience without additional devices or complex virtual reality platform management. These trainings are easy to maintain and cost effective.

In this blog post, we further extend the architecture to gather real-time feedback about the virtual trainings and create data-driven insights to measure their effectiveness with the help of Amazon artificial intelligence (AI) services.

Architecture and its benefits

Virtual training on AWS and AI Services – Architecture

Consider a scenario where you are a vendor in the health care sector. You’ve developed a cutting-edge device, such as patient vital monitoring hardware, that goes through frequent software upgrades and is about to be rolled out across U.S. hospitals. The nursing staff needs to be well trained before it can begin using the device. Let’s look at an architecture that solves this problem. We will first explain the architecture for building the training and then show how to measure its effectiveness.

At the core of the architecture is Amazon Sumerian. Sumerian is a managed service that lets you create and run 3D, Augmented Reality (AR), and Virtual Reality (VR) applications. Within Sumerian, real-life scenes from a hospital environment can be created by importing assets from the asset library. Scenes include one or more hosts: AI-driven animated characters with built-in animation, speech, and behavior. The hosts act as virtual trainers that interact with the nursing staff. The speech component assigns text to the virtual trainer for playback with Amazon Polly. Polly converts training content from Sumerian to lifelike speech in real time and ensures the nursing staff receives the latest content for the equipment on which it is being trained.

The nursing staff accesses the training via web browsers on iOS or Android mobile devices or laptops, and authenticates using Amazon Cognito. Cognito is a service that lets you easily add user sign-up and authentication to your mobile and web apps. Sumerian then uses the Cognito identity pool to create temporary credentials to access AWS services.

The flow of the interactions within Sumerian is controlled using a visual state machine in the Sumerian editor. Within the editor, the dialogue component assigns an Amazon Lex chatbot to an entity, in this case the virtual trainer or host. Lex is a service for building conversational interfaces with voice and text. It gives you the ability to have interactive conversations with the nursing staff, understand their areas of interest, and deliver appropriate training material. This is an important aspect of the architecture because you can customize the training to each user’s needs.

Lex has native interoperability with AWS Lambda, a serverless compute offering where you simply write and run your code as Lambda functions. Lambda can be used to validate user inputs or apply business logic, such as fetching the user-selected training material from Amazon DynamoDB (or another database) in real time. This material is then delivered to Lex as a response to user queries.
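To make this concrete, here is a minimal sketch of such a fulfillment Lambda function, assuming a Lex (V1) bot with a hypothetical slot named TrainingTopic and a hypothetical DynamoDB table named VirtualTrainingContent keyed on topic. The names and table schema are illustrative, not part of the original architecture.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("VirtualTrainingContent")  # hypothetical table name

def lambda_handler(event, context):
    # Lex V1 passes the elicited slot values in event["currentIntent"]["slots"]
    slots = (event.get("currentIntent", {}) or {}).get("slots", {}) or {}
    topic = slots.get("TrainingTopic")

    item = table.get_item(Key={"topic": topic}).get("Item") if topic else None
    message = item["content"] if item else "Sorry, I could not find training material for that topic."

    # Return the material to Lex as a fulfilled response so the host can speak it
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled" if item else "Failed",
            "message": {"contentType": "PlainText", "content": message},
        }
    }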

You can extend the state machine within the Sumerian editor to introduce new interactive flows that collect user feedback. Amazon Lex collects the feedback, which is saved in Amazon Simple Storage Service (Amazon S3) and analyzed by Amazon Comprehend. Amazon Comprehend is a natural language processing service that uses AI to find meaning, insights, and sentiment in text. Insights from user feedback are stored in S3, a highly scalable, durable, and highly available object storage service.
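As a rough sketch, the analysis step could look like the following, assuming the raw feedback is saved as plain-text objects under a hypothetical feedback/ prefix and the insights are written back under insights/. The bucket layout and key names are illustrative.

import json
import boto3

s3 = boto3.client("s3")
comprehend = boto3.client("comprehend")

def analyze_feedback(bucket, key):
    # Read the raw feedback text that Lex captured and stored in S3
    text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    # Detect overall sentiment (POSITIVE, NEGATIVE, NEUTRAL, or MIXED) with confidence scores
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")

    result = {
        "source_key": key,
        "sentiment": sentiment["Sentiment"],
        "scores": sentiment["SentimentScore"],
    }

    # Store the insight back in S3 so Athena and QuickSight can query it later
    s3.put_object(
        Bucket=bucket,
        Key=key.replace("feedback/", "insights/") + ".json",
        Body=json.dumps(result),
    )
    return result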

You can analyze the insights from user feedback using Amazon Athena, an interactive query service that analyzes data in S3 using standard SQL. You can then easily build visualizations using Amazon QuickSight.
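For example, a query over the stored insights might be run as in the sketch below. The database name training_feedback, the insights table, its columns, and the results location are all hypothetical placeholders you would replace with your own setup.

import boto3

athena = boto3.client("athena")

# Count feedback responses per module and sentiment (table and column names are hypothetical)
query = """
    SELECT module, sentiment, COUNT(*) AS responses
    FROM insights
    GROUP BY module, sentiment
    ORDER BY module
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "training_feedback"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print("Query execution ID:", response["QueryExecutionId"])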

By using this architecture, you not only deliver the virtual training to your nursing staff in an immersive environment created by Amazon Sumerian, but you can also gather the feedback interactively. You can gain insights from this feedback and iterate over it to make the training experience more effective.

Conclusion and next steps

In this blog post we reviewed an architecture to build interactive trainings and measure their effectiveness. The serverless nature of this architecture makes it cost effective, agile, and easy to manage, and you can apply it to a number of use cases. For example, an educational institution can develop training content designed for multiple learning levels, and the training level can be adjusted in real time based on live interactions with the students. In a manufacturing scenario, you can build a digital twin of your process and train your workforce to handle different scenarios with full interactions. You can integrate AWS services like Lego blocks, and you can further expand this architecture to integrate with Amazon Kendra to build an interactive FAQ or with Amazon Comprehend Medical to build training for the healthcare industry. Happy building!

Introducing the Well-Architected Framework for Machine Learning

Post Syndicated from Shelbee Eigenbrode original https://aws.amazon.com/blogs/architecture/introducing-the-well-architected-framework-for-machine-learning/

We have published a new whitepaper, Machine Learning Lens, to help you design your machine learning (ML) workloads following cloud best practices. This whitepaper gives you an overview of the iterative phases of ML and introduces you to the ML and artificial intelligence (AI) services available on AWS using scenarios and reference architectures.

How often are you asking yourself “Am I doing this right?” when building and running applications in the cloud? You are not alone. To help you answer this question, we released the AWS Well-Architected Framework in 2015. The Framework is a formal approach for comparing your workload against AWS best practices and getting guidance on how to improve it.

We added “lenses” in 2017 that extended the Framework beyond its general perspective to provide guidance for specific technology domains, such as the Serverless Applications Lens, High Performance Computing (HPC) Lens, and IoT (Internet of Things) Lens. The new Machine Learning Lens further extends that guidance to include best practices specific to machine learning workloads.

A typical question we often hear is: “Should I use both the Lens whitepaper and the AWS Well-Architected Framework whitepaper when reviewing a workload?” Yes! We purposely exclude topics covered by the Framework in the Lens whitepapers. To fully evaluate your workload, use the Framework along with any applicable Lens whitepapers.

Applying the Machine Learning Lens

In the Machine Learning Lens, we focus on how to design, deploy, and architect your machine learning workloads in the AWS Cloud. The whitepaper starts by describing the general design principles for ML workloads. We then discuss the design principles for each of the five pillars of the Framework—operational excellence, security, reliability, performance efficiency, and cost optimization—as they relate to ML workloads.

Although the Lens is written to provide guidance across all pillars, each pillar is designed to be consumable by itself. This design results in some intended redundancy across the pillars. The following figure shows the structure of the whitepaper.

Figure: Structure of the Machine Learning Lens whitepaper

The primary components of the AWS Well-Architected Framework are Pillars, Design Principles, Questions, and Best Practices. These components are illustrated in the following figure and are outlined in the AWS Well-Architected Framework whitepaper.

Figure: Components of the AWS Well-Architected Framework

The Machine Learning Lens follows this pattern, with Design Principles, Questions, and Best Practices tailored for machine learning workloads. Each pillar has a set of questions, mapped to the design principles, which drive best practices for ML workloads.

Figure: Machine Learning Lens design principles, questions, and best practices mapped to the pillars

To review your ML workloads, start by answering the questions in each pillar. Identify opportunities for improvement as well as critical items for remediation. Then make a prioritized plan to address the improvement opportunities and remediation items.

We recommend that you regularly evaluate your workloads. Use the Lens throughout the lifecycle of your ML workload—not just at the end when you are about to go into production. When used during the design and implementation phase, the Machine Learning Lens can help you identify potential issues early, so that you have more time to address them.

Available now

The Machine Learning Lens is available now. Start using it today to review your existing workloads or to guide you in building new ones. Use the Lens to ensure that your ML workloads are architected with operational excellence, security, reliability, performance efficiency, and cost optimization in mind.

AWS Architecture Monthly Magazine: Education

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/aws-architecture-monthly-magazine-education/

One of the missions of the education industry is to educate the next generation of the industry-ready workforce. Whether in K-12, higher education, or continuing education, enabling teachers and professors to deliver curriculum effectively and improve student performance is a goal of Education Technology (EdTech) and learning companies. Two trends for AWS use cases in education are: 1) accessible remote learning; and 2) remote collaboration. For brevity, our “Ask an Expert” interview does not cover other important innovation areas in education. Use cases around learning accessibility, student performance, and campus experience have taken advantage of Amazon Alexa, Amazon Lex, and a variety of AWS technology areas including artificial intelligence (AI) and machine learning, data lakes, analytics, and mobile development. To dive deep into a wider range of education use cases, we invite everyone to look at the AWS Education blog.

In this month’s issue

For May’s Education issue, we asked our expert, Yuriko Horvath, about general architecture patterns in the education space as well as what education customers need to think about and ask themselves before considering AWS.

  • Ask an Expert: Yuriko Horvath, AWS Manager of Education for Solutions Architecture
  • Blog: How to Build a Chatbot for Your School in Less Than an Hour (with step-by-step video instructions)
  • Case Study: Virginia Tech: Building Modern Analytics on Amazon Web Services
  • Solution: Video on Demand on AWS
  • Whitepaper: Teaching Big Data Skills with Amazon EMR

How to access the magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and a comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

Enabling AWS Security Hub integration with AWS Chatbot

Post Syndicated from Ross Warren original https://aws.amazon.com/blogs/security/enabling-aws-security-hub-integration-with-aws-chatbot/

In this post, we show you how to configure AWS Chatbot to send findings from AWS Security Hub to Slack. Security Hub gives you a comprehensive view of your high-priority security alerts and security posture across your Amazon Web Services (AWS) accounts. AWS Chatbot is an interactive agent that makes it easy to monitor and interact with your AWS resources from your Slack channels and Amazon Chime chat rooms. This enables your security teams to receive alerts in familiar Slack channels, facilitating collaboration and quick response to events.

This post describes the preset formatting integration with AWS Chatbot, which is ideal if the default formatting of AWS Chatbot meets your needs. The second path is a customized integration, which is useful when you need the finding data to be transformed or enriched; for that, follow the guide in How to Enable Custom Actions in AWS Security Hub.

With AWS Chatbot you can receive alerts and run commands to return diagnostic information, invoke AWS Lambda functions, and create AWS support cases so that your team can collaborate and respond to events faster.

This post is a follow up to a previous post, How to Enable Custom Actions in AWS Security Hub, where custom actions in Security Hub enabled notifications to be sent to Slack. In this post, we simplify the workflow by using an AWS CloudFormation template to create the information flow in Figure 1 below. This configures the custom action in Security Hub, an Amazon EventBridge rule, and an Amazon Simple Notification Service (Amazon SNS) topic to tie them all together.

Figure 1: Information flow showing a Slack channel and Amazon Chime as options for AWS Chatbot integration

Configure AWS Chatbot and Security Hub

To get started you’ll need the following prerequisites:

  • An AWS account with GuardDuty and Security Hub enabled
  • A Slack account

We will now walk through configuring AWS Chatbot and Security Hub. Keep a scratch pad handy to note your Slack workspace ID and Slack channel ID; you will refer to them as you configure the integration.

Security Hub supports two types of integration with EventBridge, both of which are supported by AWS Chatbot:

  • Standard Amazon CloudWatch events. Security Hub automatically sends all findings to EventBridge. Use this method to automatically send all Security Hub findings, or a filtered subset of findings, to an Amazon SNS topic to which AWS Chatbot subscribes.
  • Security Hub custom actions. Define custom actions in Security Hub and configure CloudWatch events rules to respond to those actions. The event rule uses its Amazon SNS topic target to forward notifications to the SNS topic AWS Chatbot is subscribed to.

We are going to focus on Security Hub custom actions. You might not initially want to have all Security Hub findings appear in Slack, so we’re going to create a Security Hub custom action to send only relevant findings to Slack. This workflow gives your security team the ability to manually provide notifications to Slack channels through AWS Chatbot. At the end of this post, I share an EventBridge Rule for those users who want all Security Hub findings in a Slack channel. I also provide some filter examples which will help you select the findings you receive in Slack.
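The CloudFormation template later in this post creates this custom action for you. Purely as an illustration of what the template sets up, a boto3 sketch of the equivalent API call might look like the following; the name and ID mirror the template’s SendToSlack action.

import boto3

securityhub = boto3.client("securityhub")

# Create a custom action that analysts can trigger from the Security Hub console.
# Selecting it for a finding emits an event that the EventBridge rule in this post matches.
response = securityhub.create_action_target(
    Name="SendToSlack",                                        # shown in the Actions drop down
    Description="Send selected findings to Slack via AWS Chatbot",
    Id="SendToSlack",
)
print("Custom action ARN:", response["ActionTargetArn"])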

Configure a Slack client

To allow AWS Chatbot to send notifications to your Slack channel, you must configure AWS Chatbot to work with Slack. Owners of Slack workspaces can approve the use of AWS Chatbot, and any workspace user can configure the workspace to receive notifications.

  1. Log in to your AWS console and navigate to the AWS Chatbot console.
  2. Select Slack from the dropdown menu and then Configure client.

    Figure 2: Configure a chat client

  3. If you are not yet logged in to Slack, add your workspace name and log in to Slack.

    Figure 3: Slack workspace login

  4. On the next screen where AWS Chatbot is requesting permission to your Slack workspace, choose Allow.
  5. Copy and save the Workspace ID. You will need it for the CloudFormation Template.

    Figure 4: Console with workspace ID

  6. You can now leave the AWS Chatbot console and log in to your Slack workspace where we can get the channel ID.
    1. If you do not have a Slack channel in your organization for findings, or you want to test this integration before deploying it to production, please follow the steps from Slack for creating a channel.
    2. If you are using the Slack desktop client, right-click on the Slack channel name and select Copy Link.

      Figure 5: Copy link from the desktop client

    3. If you are using the Slack web UI, right click on your Slack channel name, select Additional options, and then select Copy link.

      Figure 6: Copy link from the web UI

  7. The last part of the resulting URL is your channel ID. For example, in the URL https://xxxxxxx.slack.com/archives/CSQRRLTHRT, CSQRRLTHRT is the channel ID. Write down your channel ID to use later.

Tie it all together

The CloudFormation template is going to create the following:

  • A Security Hub custom action named SendToSlack
  • An Amazon SNS topic named AWS Chatbot SNS Topic
  • An EventBridge rule to tie everything together

  1. Open the SecurityHub_to_AWSChatBot.yml CloudFormation template.
  2. Right-click and use Save As to save the template to your workstation.

    Note: The CloudFormation template is going to require your Slack workspace ID and channel ID from the previous step.

  3. Open the CloudFormation console.
  4. Select Create stack.
    Figure 7: Create a CloudFormation stack

    1. Select Upload a template file

      Figure 8: Upload CloudFormation template file

    2. Select Choose file and navigate to the saved CloudFormation template from step 2.
    3. Select Next.
    4. Enter a stack name, such as “SecurityHubToAWSChatBot.”
    5. Enter your Slack channel ID and Slack workspace ID (be careful not to transpose these IDs).
    6. Continue by selecting Next.
    7. On the Configure stack options screen, you can add tags if required by your organization. The rest of the default options will work; choose Next.
    8. Review stack details on the Review screen and scroll to the bottom.
    9. Select the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create stack.

      Figure 9: IAM capabilities acknowledgment

  5. After the CloudFormation stack has been created successfully, you will see CREATE_COMPLETE in the CloudFormation console.

To test the configuration perform the following steps:

  1. Open the AWS Security Hub console, select a finding and choose the Actions drop down. Select Send_To_Slack — the custom action that you just created.

    Figure 10: Security Hub custom action drop down

  2. Go to your Slack workspace and channel to verify receipt of the notification.

    Figure 11: Example Security Hub notification in Slack

Bonus: Send all critical findings to Slack

You can also use this workflow to send all critical Security Hub findings to Slack.

To do this, configure an additional EventBridge rule to use in conjunction with the custom action that we’ve already deployed. For example, your security team may require that all critical severity findings go to your team’s Slack channel, while keeping the ability to manually send other interesting or relevant findings to Slack.

  1. Go to the EventBridge console.
  2. Underneath Events, select Rules.
  3. Select Create Rule.
  4. Give the rule a name, for example “All_SecurityHub_Findings_to_Slack.”
  5. In the Define Pattern section, select Event pattern and Custom pattern.

    Figure 12: EventBridge event pattern dialogue

  6. Paste the following code into the Event pattern field and select Save.

    Note: You can edit this filter to fit your needs.

    
    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "ProductFields": {
            "aws/securityhub/SeverityLabel": [
              "CRITICAL"
            ]
          }
        }
      }
    }
    

  7. Leave the event bus as “AWS default event bus.”
  8. Under Select Targets, select SNS Topic from the drop down.
  9. Choose the Topic with “SNSTopicAWSChatBot” in the name.
  10. Configure any required tags.
  11. Select Create.

When Security Hub creates findings, any finding with a severity label of CRITICAL will now be sent to your Slack channel.

Note: Depending on the volume of critical findings in your Security Hub console, the notifications in Slack might be too noisy to be actionable. Consider automating the response and remediation of critical findings by following the best practice guidance in the Security Hub console.

Summary

In this post we showed how to send findings from Security Hub to Slack using AWS Chatbot. This can help your team collaborate and respond faster to operational events. In addition, AWS Chatbot provides an easy way to interact with your AWS resources; the documentation topic Running AWS CLI commands from Slack channels includes a list of the commands you can run.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ross Warren

Ross Warren is a Solution Architect at AWS based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Ross has worked at a handful of startups and has enjoyed the transition to AWS because he can continue to build solutions for customers on today’s most innovative platform.

Author

Jose Ruiz

Jose Ruiz is a Sr. Solutions Architect – Security Specialist at AWS. He often enjoys “the road less traveled” and knows each technology has a security story often not spoken of. He takes this perspective when working with customers on highly complex solutions and driving security at the beginning of each build.

AWS Foundational Security Best Practices standard now available in Security Hub

Post Syndicated from Rima Tanash original https://aws.amazon.com/blogs/security/aws-foundational-security-best-practices-standard-now-available-security-hub/

AWS Security Hub offers a new security standard, AWS Foundational Security Best Practices

This week AWS Security Hub launched a new security standard called AWS Foundational Security Best Practices. This standard implements security controls that detect when your AWS accounts and deployed resources do not align with the security best practices defined by AWS security experts. By enabling this standard, you can monitor your security posture and ensure that you are following AWS security best practices. These controls closely align with the Top 10 Security Best Practices outlined by AWS Chief Information Security Officer Stephen Schmidt at AWS re:Invent 2019.

In the initial release, this standard consists of 31 fully-automated security controls in supported AWS Regions, and 27 controls in AWS GovCloud (US-West) and AWS GovCloud (US-East).

This standard is enabled by default when you enable Security Hub in a new account, so no extra steps are necessary to enable it. If you are an existing Security Hub user, when you open the Security Hub console, you will see a pop-up message recommending that you enable this standard. For more information, see AWS Foundational Security Best Practices standard in the AWS Security Hub User Guide.

As an example, let’s look at one of the new security controls for Amazon Relational Database Service (Amazon RDS), [RDS.1] RDS snapshots should be private. This control checks the resource types AWS::RDS::DBSnapshot and AWS::RDS::DBClusterSnapshot. The relevant AWS Config rule is rds-snapshots-public-prohibited, which checks whether Amazon RDS snapshots are public. The control fails if Security Hub identifies that any existing or new Amazon RDS snapshots are configured to be publicly accessible. The severity label is CRITICAL when the security check fails. The severity indicates the potential impact of not enforcing this rule.
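For a rough approximation of what this control looks for, the sketch below uses the RDS API to test whether a single manual DB snapshot is shared publicly. The snapshot identifier is a placeholder, and the actual control relies on the Config rule rather than this call.

import boto3

rds = boto3.client("rds")

def snapshot_is_public(snapshot_id):
    # A DB snapshot is public when its "restore" attribute contains the value "all"
    attrs = rds.describe_db_snapshot_attributes(DBSnapshotIdentifier=snapshot_id)
    for attribute in attrs["DBSnapshotAttributesResult"]["DBSnapshotAttributes"]:
        if attribute["AttributeName"] == "restore" and "all" in attribute.get("AttributeValues", []):
            return True
    return False

print(snapshot_is_public("my-example-snapshot"))  # placeholder snapshot identifier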

You can find additional details about all the security controls, including the remediation instructions of the misconfigured resource, in the AWS Foundational Security Best Practices standard section of the Security Hub User Guide.

Getting started

In this post, we will cover:

  • How to enable the new AWS Foundational Security Best Practices standard.
  • An overview of the security controls.
  • An explanation of the security control details.
  • How to disable and enable specific security controls.
  • How to navigate to the remediation instructions for a failed security control.

Prerequisites

For the security standards to be functional in Security Hub, when you enable Security Hub in a particular account and AWS Region, you must also enable AWS Config in that account and Region. This is because Security Hub is a regional service.

Enable the new AWS Foundational Security Best Practices Security standard

After you enable AWS Config in your account and Region, you can enable the AWS Foundational Security Best Practices standard in Security Hub. We recommend that you enable Security Hub and this standard in all accounts and in all Regions where you have activity. For a script to enable AWS Security Hub across multi-account and Regions, see the AWS Security Hub multi-account scripts page on GitHub.

If you are a new user of Security Hub, when you open the Security Hub console, you are prompted to enable Security Hub. When you enable Security Hub, the AWS Foundational Security Best Practices standard is selected by default, as shown in the following screen shot. Leave the default selection and choose Enable Security Hub to enable the AWS Foundational Security Best Practices standard, as well as the other security standards you select, in your AWS account in your selected AWS Region.

Figure 1: Welcome to AWS Security Hub page

If you are an existing user of Security Hub, when you open the Security Hub console, you are presented with a pop-up to enable the new security standard. You will see the number of new controls that are available in your AWS Region and the number of AWS services and resources that are associated with those controls, as shown in the following screen shot. Choose Enable standard to enable the AWS Foundational Security Best Practices standard in your AWS account in your selected AWS Region.

Figure 2: AWS Foundational Security Best Practices confirmation page

You also have the option to enable the new AWS Foundational Security Best Practices Security standard by using the command line, which we will describe later in this post.

View the security controls

Now that you have successfully enabled the standard, on the Security standards page, you see the new AWS Foundational Security Best Practices v1.0.0 standard displayed alongside the other security standards, CIS AWS Foundations and PCI DSS.

Figure 3: Security standards page in AWS Security Hub

View security findings

Within two hours after you enable the standard, Security Hub begins to evaluate related resources in the current AWS account and Region against the available AWS controls within the AWS Foundational Security Best Practices standard. The scope of the assessment is the AWS account.

To view security findings, on the Security standards page, for the AWS Foundational Security Best Practices standard, choose View results. The following image shows an example of the dashboard page, which displays all of the available controls in the standard and the status of each control within the current AWS account and Region.

Figure 4: AWS Foundational Security Best Practices controls page

At a glance, each control card provides you with the following high-level information:

  • Title and unique identifier of the AWS control. This provides you with a synopsis of the purpose and functionality of the control.
  • The current status of the AWS control evaluation. The possible values are Passed, Failed, or Unknown (evaluation is still in progress and not finished).
  • Severity information associated with the AWS control. The possible values are CRITICAL, HIGH, MEDIUM, and LOW. For Passed findings associated with the controls, the severity is INFORMATIONAL. To learn more about how Security Hub determines the severity score, see Determining the severity of security standards findings.
  • A count of AWS resources that passed or failed the check for this particular AWS control.

You can use the Filter controls to search for specific AWS controls based on their evaluation status and severity. For example, you can search for all controls that have a check status of Failed and a severity of CRITICAL.
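You can apply the same kind of filter programmatically with the Security Hub GetFindings API; the following is a minimal sketch for listing active, failed, critical-severity findings, not part of the console walkthrough.

import boto3

securityhub = boto3.client("securityhub")

# Fetch active findings that failed their check and carry CRITICAL severity
response = securityhub.get_findings(
    Filters={
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=50,
)

for finding in response["Findings"]:
    print(finding["Title"], "-", finding["Resources"][0]["Id"])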

Inspect the security finding

To see detailed information about a specific security control and its findings, choose the security control card. Choosing the control displays a page that contains detailed information about the control, including a list of the findings for the security control. The page also indicates whether the resources for the security control are Passed, Failed, or if the compliance evaluation is still in progress (Unknown).

Figure 5: RDS.1 control findings view

For business reasons, you may sometimes need to suppress a particular finding against a particular resource using the workflow status. Setting the Workflow status to SUPPRESSED means that the finding will not be reviewed again and will not be acted upon. If you suppress a FAILED finding, it will stay suppressed as long as it remains failed. However, if the finding moves from FAILED to PASSED, a new passed finding will be generated and the workflow status will be NEW. You can’t un-suppress a finding. If you suppress all findings for a control, the control status will be Unknown until any new finding is generated.

To suppress a finding

  1. In the Findings list, select the control you want to suppress, for example [RDS.1] RDS snapshot should be private.
  2. For Change workflow status, choose Suppressed.

 

Figure 6: RDS.1 control showing change workflow status options

You will no longer see the finding that you suppressed.
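You can also suppress a finding programmatically with the BatchUpdateFindings API. A minimal sketch follows; the finding ID and product ARN shown are placeholders you would replace with the values from the finding you want to suppress.

import boto3

securityhub = boto3.client("securityhub")

# Identify the finding by its Id and ProductArn (placeholders here),
# then set its workflow status to SUPPRESSED
securityhub.batch_update_findings(
    FindingIdentifiers=[
        {
            "Id": "arn:aws:securityhub:us-east-1:123456789012:subscription/aws-foundational-security-best-practices/v/1.0.0/RDS.1/finding/EXAMPLE",
            "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/securityhub",
        }
    ],
    Workflow={"Status": "SUPPRESSED"},
)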

If you do not want to generate any findings for a specific control, you can instead choose to disable the control using the Disable feature, described in the next section.

Disable a security control

You can also disable the security check for a particular security control until you manually re-enable it. This disables the control check for all resources in the context of Security Hub in your AWS account and AWS Region. This may be helpful if a particular security control is not applicable for your environment. To disable a security control, on the AWS Foundational Security Best Practices standard dashboard page, on the specific control card, choose Disable. You can always re-enable the control when you need it in the future.

Figure 7: Control cards Disable option

When you disable a particular control, you are required to enter a reason in the Reason for disabling field, so that you or someone else looking into it in the future have a clear record of why the control is not being used.

Figure 8: Reason for disabling a control page – Disabling control ACM.1 example

On the AWS Foundational Security Best Practices controls page, disabled controls are marked with a Disabled badge, as shown in the following screenshot. The cards also display the date when the control was disabled, and the reason that was provided. To re-enable a disabled control, on the control card, choose Enable.

Figure 9: Disabled control example – ACM.1

You can enable the control any time without providing a reason. The evaluation for the control starts from the point in time when the control is re-enabled.
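Disabling and re-enabling a control can also be done with the UpdateStandardsControl API. In the sketch below, the control ARN follows the general pattern for this standard but is a placeholder; look up the actual ARN of the control in your account before running anything like this.

import boto3

securityhub = boto3.client("securityhub")

control_arn = (
    "arn:aws:securityhub:us-east-1:123456789012:"
    "control/aws-foundational-security-best-practices/v/1.0.0/ACM.1"  # placeholder ARN
)

# Disable the control with a recorded reason...
securityhub.update_standards_control(
    StandardsControlArn=control_arn,
    ControlStatus="DISABLED",
    DisabledReason="ACM certificates are not used in this account",
)

# ...and later re-enable it (no reason required)
securityhub.update_standards_control(
    StandardsControlArn=control_arn,
    ControlStatus="ENABLED",
)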

Remediate a failed security control

You can get the remediation instructions for a failed control from within the Security Hub console. On the AWS Foundational Security Best Practices standard dashboard page, choose the specific control card, then in the list of findings for a control, choose the finding you want to remediate. In the finding details, expand the Remediation section, and then choose the For directions on how to fix this issue link, as shown in the following screen shot.

Figure 10: Finding remediation link in the console

You can also get to these step-by-step remediation instructions directly from the user guide. Go to the AWS Foundational Security Best Practices controls page and scroll down to the name of the specific control that generated the finding.

Use the AWS CLI to enable or disable the standard

To use the AWS Command Line Interface (AWS CLI) to enable the AWS Foundational Security Best Practices standard in Security Hub programmatically without using the Security Hub console, use the following command. Be sure you are running AWS CLI version 2.0.7 or later, and replace REGION-NAME with your AWS Region:


aws securityhub batch-enable-standards --standards-subscription-requests '{"StandardsArn":"arn:aws:securityhub:<REGION-NAME>::standards/aws-foundational-security-best-practices/v/1.0.0"}' --region <REGION-NAME>

To check the status, run the get-enabled-standards command. Be sure to replace REGION-NAME with your AWS Region:


aws securityhub get-enabled-standards --region <REGION-NAME> 

You should see "StandardsStatus": "READY" in the output, indicating that the AWS Foundational Security Best Practices standard is enabled and ready:


{
    "StandardsSubscriptions": [
        {
            "StandardsSubscriptionArn": "arn:aws:securityhub:us-east-1:012345678912:subscription/aws-foundational-security-best-practices/v/1.0.0",
            "StandardsArn": "arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0",
            "StandardsInput": {},
            "StandardsStatus": "READY"
        }
    ]
}

To use the AWS CLI to disable the AWS Foundational Security Best Practices standard in Security Hub, use the following command. Be sure to replace ACCOUNT_ID with your account ID, and replace REGION-NAME with your AWS Region:


aws securityhub batch-disable-standards --standards-subscription-arns "arn:aws:securityhub:<REGION-NAME>:<ACCOUNT_ID>:subscription/aws-foundational-security-best-practices/v/1.0.0" --region <REGION-NAME>

Conclusion

In this post, you learned about how to implement the new AWS Foundational Security Best Practices standard in Security Hub, and how to interpret the findings. You also learned how to enable the standard by using the Security Hub console and AWS CLI, how to disable and enable specific controls within the standard, and how to follow remediation steps for failed findings. For more information, see the AWS Foundational Security Best Practices standard in the AWS Security Hub User Guide.

If you have comments about this post, submit them in the Comments section below. If you have questions, please start a new thread on the Security Hub forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Karthikeyan Vasuki Balasubramaniam

Karthikeyan is a Software Development Engineer on the Amazon Security Hub service team. At Amazon Web Services, he works on the infrastructure that supports the development and functioning of various security standards. He has a background in computer networking and operating systems. In his free time, you can find him learning Indian classical music and cooking.

Author

Rima Tanash

Rima Tanash is the Lead Security Engineer on the Amazon Security Hub service team. At Amazon Web Services, she applies automated technologies to audit various access and security configurations. She has a research background in data privacy using graph properties and machine learning.

Building a Scalable Document Pre-Processing Pipeline

Post Syndicated from Joel Knight original https://aws.amazon.com/blogs/architecture/building-a-scalable-document-pre-processing-pipeline/

In a recent customer engagement, Quantiphi, Inc., a member of the Amazon Web Services Partner Network, built a solution capable of pre-processing tens of millions of PDF documents before sending them for inference by a machine learning (ML) model. While the customer’s use case—and hence the ML model—was very specific to their needs, the pipeline that does the pre-processing of documents is reusable for a wide array of document processing workloads. This post will walk you through the pre-processing pipeline architecture.

Figure: Document pre-processing pipeline architecture

Architectural goals

Quantiphi established the following goals prior to starting:

  • Loose coupling to enable independent scaling of compute components, flexible selection of compute services, and agility as the customer’s requirements evolved.
  • Work backwards from business requirements when making decisions affecting scale and throughput and not simply because “fastest is best.” Scale components only where it makes sense and for maximum impact.
  •  Log everything at every stage to enable troubleshooting when something goes wrong, provide a detailed audit trail, and facilitate cost optimization exercises by identifying usage and load of every compute component in the architecture.

Document ingestion

The documents are initially stored in a staging bucket in Amazon Simple Storage Service (Amazon S3). The processing pipeline is kicked off when the “trigger” Amazon Lambda function is called. This Lambda function passes parameters such as the name of the staging S3 bucket and the path(s) within the bucket which are to be processed to the “ingestion app.”

The ingestion app is a simple application that runs a web service to enable triggering a batch, and it lists the documents from the S3 bucket path(s) received via the web service. As the app processes the list of documents, it feeds the document path, S3 bucket name, and some additional metadata to the “ingest” Amazon Simple Queue Service (Amazon SQS) queue. The ingestion app also starts the audit trail for the document by writing a record to the Amazon Aurora database. As the document moves downstream, additional records are added to the database. Records are joined together by a unique ID that the ingestion app assigns to each document and that is passed along throughout the pipeline.
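A simplified sketch of this ingestion step is shown below: it lists the objects under a given prefix and enqueues one message per document, tagged with a unique ID. The queue URL, bucket, and message shape are illustrative, and the audit-trail write to Aurora is reduced to a placeholder comment.

import json
import uuid
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

INGEST_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"  # placeholder

def ingest_batch(bucket, prefix):
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            # Unique ID that follows the document through the whole pipeline
            document_id = str(uuid.uuid4())

            # TODO: write the first audit-trail record for document_id to the Aurora database here

            sqs.send_message(
                QueueUrl=INGEST_QUEUE_URL,
                MessageBody=json.dumps({
                    "document_id": document_id,
                    "bucket": bucket,
                    "key": obj["Key"],
                    "size_bytes": obj["Size"],
                }),
            )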

Chunking the documents

To maximize control, the architecture is built to submit single-page files to the ML model. This makes it possible to correlate an inference failure to a specific page instead of a whole document (which may be many pages long). It also makes identifying the location of features within the inference results an easier task. Since the documents being processed can have varied sizes, resolutions, and page counts, a big part of the pre-processing pipeline is chunking a document into its component pages prior to sending it for inference.

The “chunking orchestrator” app repeatedly pulls a message from the ingest queue and retrieves the document named therein from the S3 bucket. The PDF document is then classified along two metrics:

  • File size
  • Number of pages

We use these metrics to determine which chunking queue the document is sent to:

  • Large: Greater than 10MB in size or greater than 10 pages
  • Small: Less than or equal to 10MB and less than or equal to 10 pages
  • Single page: Less than or equal to 10MB and exactly one page

Each of these queues is serviced by an appropriately sized compute service that breaks the document down into smaller pieces, and ultimately, into individual pages.

  • Amazon Elastic Compute Cloud (Amazon EC2) instances process large documents, primarily because of the high memory footprint needed to read large, multi-gigabyte PDF files into memory. The output from these workers is a set of smaller PDF documents that are stored in Amazon S3. The names and locations of these smaller documents are submitted to the “small documents” queue.
  • Small documents are processed by a Lambda function that decomposes the document into single pages that are stored in Amazon S3. The names and locations of these single-page files are sent to the “single page” queue.
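The routing decision that feeds these queues is simple. A sketch of the classification logic, using the thresholds above and hypothetical queue names, might look like this.

def choose_chunking_queue(size_bytes, page_count):
    """Return the name of the chunking queue a document should be routed to,
    based on its size and page count (thresholds from the pipeline above)."""
    ten_mb = 10 * 1024 * 1024
    if size_bytes > ten_mb or page_count > 10:
        return "large-documents-queue"
    if page_count == 1:
        return "single-page-queue"
    return "small-documents-queue"

# Example: a 2 MB, 4-page PDF goes to the small documents queue
print(choose_chunking_queue(2 * 1024 * 1024, 4))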

The Dead Letter Queues (DLQs) are used to hold messages from their respective size queue which are not successfully processed. If messages start landing in the DLQs, it’s an indication that there is a problem in the pipeline. For example, if messages start landing in the “small” or “single page” DLQ, it could indicate that the Lambda function processing those respective queues has reached its maximum run time.

An Amazon CloudWatch Alarm monitors the depth of each DLQ. Upon seeing DLQ activity, a notification is sent via Amazon Simple Notification Service (Amazon SNS) so an administrator can then investigate and make adjustments such as tuning the sizing thresholds to ensure the Lambda functions can finish before reaching their maximum run time.

In order to ensure no documents are left behind in the active run, there is a failsafe in the form of an Amazon EC2 worker that retrieves and processes messages from the DLQs. This failsafe app breaks a PDF all the way down into individual pages and then does image conversion.

Documents that don’t fall into a DLQ make it to the “single page” queue. This queue drives each page through the “image conversion” Lambda function, which converts the single-page file from PDF to PNG format. These PNG files are stored in Amazon S3.

Sending for inference

At this point, the documents have been chunked up and are ready for inference.

When the single-page image files land in Amazon S3, an S3 Event Notification is fired which places a message in a “converted image” SQS queue which in turn triggers the “model endpoint” Lambda function. This function calls an API endpoint on an Amazon API Gateway that is fronting the Amazon SageMaker inference endpoint. Using API Gateway with SageMaker endpoints avoided throttling during Lambda function execution due to high volumes of concurrent calls to the Amazon SageMaker API. This pattern also resulted in a 2x inference throughput speedup. The Lambda function passes the document’s S3 bucket name and path to the API which in turn passes it to the auto scaling SageMaker endpoint. The function reads the inference results that are passed back from API Gateway and stores them in Amazon Aurora.

The inference results as well as all the telemetry collected as the document was processed can be queried from the Amazon Aurora database to build reports showing number of documents processed, number of documents with failures, and number of documents with or without whatever feature(s) the ML model is trained to look for.

Summary

This architecture is able to take PDF documents that range in size from single page up to thousands of pages or gigabytes in size, pre-process them into single page image files, and then send them for inference by a machine learning model. Once triggered, the pipeline is completely automated and is able to scale to tens of millions of pages per batch.

In keeping with the architectural goals of the project, Amazon SQS is used throughout in order to build a loosely coupled system that promotes agility, scalability, and resiliency. Loose coupling also provides a high degree of control over the system, making it easier to respond to changes in business needs and to focus tuning efforts for maximum impact. And with every compute component logging everything it does, the system provides a high degree of auditability and introspection, which facilitates performance monitoring and detailed cost optimization.

AWS Architecture Monthly Magazine: Automotive

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/aws-architecture-monthly-magazine-automotive/

Connected, autonomous, shared, and electric vehicle trends are converging to revolutionize the automotive industry. In this unprecedented age of innovation, automotive companies rely on AWS to fuel their digital transformation efforts and get their products to market faster, while retaining ownership and control of their data and brand experience.

AWS provides the broadest and deepest set of capabilities, including artificial intelligence (AI), Internet of Things (IoT), HPC, and data analytics, the highest performance and security, the largest customer and partner community, and a relentless pace of innovation.

In this month’s issue:

For April’s Automotive issue, we spoke with Dean Phillips, AWS Automotive Tech Leader, about some of the architecture pattern trends of the industry as well as what customers should ask themselves before considering AWS. Dean also talks about different trends within the industry in cloud versus on-premises.

We also look back at the Amazon Automotive exhibit at the Consumer Electronics Show (CES 2020), review a Toyota case study, and provide information on the AWS Connected Vehicle Solution.

  • Ask an Expert: Dean Phillips, AWS Tech Leader, Automotive
  • Blog: 5 Automotive Trends at CES 2020
  • Case Study: Toyota Research Institute
  • Solution: AWS Connected Vehicle Solution
  • Whitepaper: Connected Vehicles and the Cloud

How to Access the Magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and a comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

IAM Access Analyzer flags unintended access to S3 buckets shared through access points

Post Syndicated from Andrea Nedic original https://aws.amazon.com/blogs/security/iam-access-analyzer-flags-unintended-access-to-s3-buckets-shared-through-access-points/

Customers use Amazon Simple Storage Service (S3) buckets to store critical data and manage access to data at scale. With Amazon S3 Access Points, customers can easily manage shared data sets by creating separate access points for individual applications. Access points are unique hostnames attached to a bucket and customers can set distinct permissions using access point policies. To help you identify buckets that can be accessed publicly or from other AWS accounts or organizations, AWS Identity and Access Management (IAM) Access Analyzer mathematically analyzes resource policies. Now, Access Analyzer analyzes access point policies in addition to bucket policies and bucket ACLs. This helps you find unintended access to S3 buckets that use access points. Access Analyzer makes it easier to identify and remediate unintended public, cross-account, or cross-organization sharing of your S3 buckets that use access points. This enables you to restrict bucket access and adhere to the security best practice of least privilege.

In this post, first I review Access Analyzer and how to enable it. Then I walk through an example of how to use Access Analyzer to identify an S3 bucket that is shared through an access point. Finally, I show you how to view Access Analyzer bucket findings in the S3 Management Console.

IAM Access Analyzer overview

Access Analyzer helps you determine which resources can be accessed publicly or from other accounts or organizations. Access Analyzer determines this by mathematically analyzing access control policies attached to resources. This form of analysis, called automated reasoning, applies logic and mathematical inference to determine all possible access paths allowed by a resource policy. This is how IAM Access Analyzer uses provable security to deliver comprehensive findings for unintended bucket access. You can enable Access Analyzer by navigating to the IAM console. From there, select Access Analyzer to create an analyzer for an account or an organization.
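If you prefer to work programmatically, the same steps can be sketched with the Access Analyzer API in boto3, as below. The analyzer name is arbitrary, and the filter shape used to scope results to S3 buckets is my assumption, so verify it against the API reference before relying on it.

import boto3

access_analyzer = boto3.client("accessanalyzer")

# Create an analyzer scoped to the current account
# (use type="ORGANIZATION" from a management account for an organization-wide analyzer)
analyzer = access_analyzer.create_analyzer(
    analyzerName="example-account-analyzer",
    type="ACCOUNT",
)
analyzer_arn = analyzer["arn"]

# List findings for S3 buckets shared outside the zone of trust
findings = access_analyzer.list_findings(
    analyzerArn=analyzer_arn,
    filter={"resourceType": {"eq": ["AWS::S3::Bucket"]}},
)
for finding in findings["findings"]:
    print(finding["resource"], finding["status"])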

How to use IAM Access Analyzer to identify an S3 bucket shared through an access point

Once you’ve created your analyzer, you can view findings for resources that can be accessed publicly or from other AWS accounts or organizations. For your S3 bucket findings, the Shared through column indicates whether a bucket is shared through its S3 bucket policy, one of its access points, or the bucket ACL. Looking at the Shared through column in the image below, we see the first finding is shared through an Access point.

Figure 1: IAM Access Analyzer report of findings for resources shared outside of my account

If you use access points to manage bucket access and one of your buckets is shared through an access point, the bucket finding will indicate Access point. In this example, I select the first finding to learn more. In the detail image below, you can see that the Shared through field lists the Amazon Resource Name (ARN) of the access point that grants access to the bucket, along with the details of the resources and principals. If this access wasn’t your intent, you can review the access point details in the S3 console. There you can modify the access point policy to remove access.

Figure 2: IAM Access Analyzer finding details for a bucket shared through an access point

How to use Access Analyzer for S3 to identify an S3 bucket shared through an access point

You can also view Access Analyzer findings for S3 buckets in the S3 Management Console with Access Analyzer for S3. This view reports S3 buckets that are configured to allow access to anyone on the internet or to other AWS accounts, including accounts outside of your AWS organization. For each public or shared bucket, Access Analyzer for S3 displays whether the bucket is shared through the bucket policy, access points, or the bucket ACL. In the example below, we see that my-test-public-bucket is set to public access using a bucket policy and bucket ACL. Additionally, my-test-bucket is shared with other AWS accounts using a bucket policy and one or more access points. After you identify a bucket with unintended access using Access Analyzer for S3, you can block public access to the bucket. Amazon S3 block public access settings override the bucket policies that are applied to the bucket. The settings also override the access point policies applied to the bucket’s access points.
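If you decide to block public access to a bucket after reviewing a finding, one way to do so is with the S3 API, as in this sketch; the bucket name is a placeholder taken from the example above.

import boto3

s3 = boto3.client("s3")

# Turn on all four block public access settings for the bucket.
# These settings override the bucket policy and any access point policies on the bucket.
s3.put_public_access_block(
    Bucket="my-test-public-bucket",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)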

Figure 3: Access Analyzer for S3 findings report in the S3 Management Console

Next steps

To turn on IAM Access Analyzer at no additional cost, head over to the IAM console. IAM Access Analyzer is available in the IAM console and through APIs in all commercial AWS Regions and AWS GovCloud (US). To learn more about IAM Access Analyzer, visit the feature page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM Forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Andrea Nedic

Andrea is a Senior Technical Program Manager in the AWS Automated Reasoning Group. She enjoys hearing from customers about how they build on AWS. Outside of work, Andrea likes to ski, dance, and be outdoors. She holds a PhD from Princeton University.

Deploying an ASP.NET Core web application to Amazon ECS using an Azure DevOps pipeline

Post Syndicated from John Formento original https://aws.amazon.com/blogs/devops/deploying-a-asp-net-core-web-application-to-amazon-ecs-using-an-azure-devops-pipeline/

For .NET developers, leveraging Team Foundation Server (TFS) has been the cornerstone for CI/CD over the years. As more and more .NET developers start to deploy onto AWS, they have been asking questions about using the same tools to deploy to the AWS cloud. By configuring a pipeline in Azure DevOps to deploy to the AWS cloud, you can easily use familiar Microsoft development tools to build great applications.

Solution overview

This blog post demonstrates how to create a simple Azure DevOps project, repository, and pipeline to deploy an ASP.NET Core web application to Amazon ECS using Azure DevOps. The following screenshot shows a high-level architecture diagram of the pipeline:

 

Solution Architecture Diagram

In this example, you perform the following steps:

  1. Create an Azure DevOps project, clone the project repo, and push the ASP.NET Core web application.
  2. Create a pipeline in Azure DevOps.
  3. Build an Amazon ECS cluster, task, and service.
  4. Kick off deployment of the ASP.NET Core web application using the newly created Azure DevOps pipeline.

 

Prerequisites

Ensure you have the following prerequisites set up:

  • An Amazon ECR repository
  • An IAM user with permissions for Amazon ECR and Amazon ECS (the user will need an access key and secret access key)

 

Create an Azure DevOps Project, clone project repo, and push ASP.NET Core web application

Follow these steps to deploy a .NET Core app onto your Amazon ECS cluster using the Azure DevOps (ADO) repository and pipeline:

 

  1. Log in to dev.azure.com and navigate to the marketplace.
  2. Go to Visual Studio, search for “AWS”, and add the AWS Tools for Microsoft Visual Studio Team Services.
  3. Create a project in ADO: Provide a project name and choose Create.
  4. On the Project Summary page, choose Project Settings.
  5. In the Project Settings pane, navigate to the Service Connections page.
  6. Choose Create service connection, select AWS, and choose Next.
  7. Input an Access Key ID and Secret Access Key. (You’ll need an IAM user with permissions for Amazon ECR and Amazon ECS in order to deploy via the Azure DevOps pipeline.) Choose Save.
  8. Choose Repos in the left pane, then Clone in Visual Studio under Clone to your computer.
  9. Create an ASP.NET Core web application in Visual Studio, set the location to the locally cloned repository, and check Enable Docker support.
  10. Once you’ve created the new project, perform an initial commit and push to the repository in Azure DevOps.

 

Creating a pipeline in Azure DevOps

Now that you have synced the repository, create a pipeline in Azure DevOps.

  1. Go to the pipeline page within Azure DevOps and choose Create Pipeline.
  2. Choose Use the classic editor.
  3. Select Azure Repos Git for the location of your code and select the repository you created earlier.
  4. On the Choose a Template page, select Docker Container and choose Apply.
  5. Remove the Push an image step.
  6. Add an Amazon ECR Push task by choosing the + symbol next to Agent job 1. You can search for “AWS” in the Add tasks pane to filter for all AWS tasks.

 

Now, configure each task:

  1. Choose the Build an image task and ensure that the action is set to Build an image. Additionally, you can modify the Image Name to match your standards.
  2. Choose the Push Image task and provide the following:
    • Enter a name under Display Name.
    • Select the AWS Credentials that you created in Service Connections.
    • Select the AWS Region.
    • Provide the source image name, which you can find in the settings for the Build an image task.
    • Enter the name of the repository in Amazon ECR to which the image is pushed.
  3. Choose Save and queue.

Build Amazon ECS Cluster, Task, and Service

The goal here is to test the pipeline up to the point where the Docker image is built and pushed to Amazon ECR. Once the Docker image is in Amazon ECR, you can create the Amazon ECS cluster, task definition, and service that use the newly created Docker image (a boto3 sketch of these calls follows the list below).

  1. Create an Amazon ECS cluster.
  2. Create an Amazon ECS task definition. When you create the task definition and configure the container, use the Amazon ECR URI for the Docker image that was just pushed to Amazon ECR.
  3. Create an Amazon ECS service.
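
If you prefer to script these three steps, the following boto3 sketch shows roughly what they amount to. It is only a sketch: it assumes a Fargate launch type, and the region, names, IAM execution role, image URI, subnet, and security group are placeholders you would replace with your own values.

import boto3

ecs = boto3.client('ecs', region_name='us-east-1')

# 1. Create the cluster.
ecs.create_cluster(clusterName='aspnetcore-cluster')

# 2. Register a task definition that points at the image pushed to Amazon ECR.
task_def = ecs.register_task_definition(
    family='aspnetcore-web',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    executionRoleArn='arn:aws:iam::<account-id>:role/ecsTaskExecutionRole',
    containerDefinitions=[
        {
            'name': 'aspnetcore-web',
            'image': '<account-id>.dkr.ecr.us-east-1.amazonaws.com/<ecr-repository>:latest',
            'portMappings': [{'containerPort': 80, 'protocol': 'tcp'}],
            'essential': True,
        }
    ],
)

# 3. Create a service that runs the task definition.
ecs.create_service(
    cluster='aspnetcore-cluster',
    serviceName='aspnetcore-service',
    taskDefinition=task_def['taskDefinition']['taskDefinitionArn'],
    desiredCount=1,
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['<subnet-id>'],
            'securityGroups': ['<security-group-id>'],
            'assignPublicIp': 'ENABLED',
        }
    },
)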

Go back and edit the pipeline:

  1. Add the last step by choosing the + symbol next to Agent job 1.
  2. Search for “AWS CLI” in the search bar and add the task.
  3. Choose AWS CLI and configure the task.
  4. Enter a name under Display Name, such as Update ECS Service.
  5. Select the AWS Credentials that you created in Service Connections.
  6. Select the AWS Region.
  7. Input the following command, which updates the Amazon ECS service after a new image is pushed to Amazon ECR. Replace <clustername> and <servicename> with your Amazon ECS cluster and service names (the boto3 sketch after this list shows the equivalent API call).
    • Command: ecs
    • Subcommand: update-service
    • Options and parameters: --cluster <clustername> --service <servicename> --force-new-deployment
  8. Now choose the Triggers tab and select Enable continuous integration with the repository you created.
  9. Choose Save and queue.
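
For reference, the AWS CLI task configured in step 7 performs the same operation as the following boto3 call. This sketch only illustrates what the pipeline step does; the cluster and service names are placeholders.

import boto3

ecs = boto3.client('ecs')

# Force the service to start a new deployment so that running tasks are replaced
# with tasks that pull the image just pushed to Amazon ECR.
ecs.update_service(
    cluster='<clustername>',
    service='<servicename>',
    forceNewDeployment=True,
)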

 

At this point, your build pipeline kicks off and builds a Docker image from the source code in the repository you created, pushes the image to Amazon ECR, and updates the Amazon ECS service with the new image.

You can verify this by viewing the build: choose Pipelines in Azure DevOps, select the entry for the latest run, and then select the icon under the status column. Once the run completes successfully, you can log in to the AWS Management Console and view the updated image in Amazon ECR and the updated service in Amazon ECS.

Every time you commit and push your code through Visual Studio, this pipeline kicks off and builds and deploys your application to Amazon ECS.

Cleanup

When you've completed all the steps and are finished testing, follow these steps to stop or delete resources so that you don't continue to incur costs:

  1. Go to the Amazon ECS console within the AWS Console.
  2. Navigate to the cluster you created, then choose the Tasks tab.
  3. Choose Stop all to turn off the tasks.

Conclusion

This blog post reviewed how to create a CI/CD pipeline in Azure DevOps that deploys a Docker image to Amazon ECR and a container to Amazon ECS. It walked through setting up a basic CI/CD pipeline using tools that .NET developers already know, along with the steps needed to integrate with Amazon ECR and Amazon ECS.

I hope this post was informative and has helped you learn the basics of how to integrate Amazon ECR and Amazon ECS with Azure DevOps to create a robust CI/CD pipeline.

About the Authors

John Formento

 

 

John Formento is a Solution Architect at Amazon Web Services. He helps large enterprises achieve their goals by architecting secure and scalable solutions on the AWS Cloud.

How to track changes to secrets stored in AWS Secrets Manager using AWS Config and AWS Config Rules

Post Syndicated from Jerry Hayes original https://aws.amazon.com/blogs/security/how-to-track-changes-to-secrets-stored-in-aws-secrets-manager-using-aws-config-and-aws-config-rules/

On April 20th, AWS Config announced support for AWS Secrets Manager, making it easier to track configuration changes to the secrets you manage in AWS Secrets Manager. You can now use AWS Config to track changes to a secret's metadata (such as its description and rotation configuration), its relationships to other AWS resources (such as the AWS KMS key used for secret encryption and the Lambda function used for secret rotation), and attributes such as the tags associated with the secret.

You can also leverage two new AWS Managed Config Rules to evaluate if your secrets’ configuration is in compliance with your organization’s security and compliance requirements, identify secrets that don’t conform to these standards, and receive notifications about them via Amazon Simple Notification Service (SNS). Once enabled, these rules will trigger every time a secret’s configuration changes.

  • secretsmanager-rotation-enabled-check: Checks whether secrets present in AWS Secrets Manager are configured for rotation. This rule also supports the maximumAllowedRotationFrequency parameter which, if specified, will compare the secret's configured rotation frequency to the value set in the parameter.
  • secretsmanager-scheduled-rotation-success-check: Checks whether secrets present in AWS Secrets Manager that are configured for rotation have been rotated within their rotation schedule.

In this blog post, I walk you through two ways to use AWS Config rules to determine if your organization’s secrets are being managed in compliance with your security requirements:

  • Example 1: Drive rotation adoption by identifying secrets in a single account that aren’t configured for rotation. This maps to the first managed rule listed above.
  • Example 2: Drive compliance with your security standards across multiple AWS accounts by creating an AWS Config Aggregator, which allows you to collect configuration and compliance data from multiple accounts across multiple regions.

Example 1: Drive rotation adoption by identifying secrets that aren’t configured for rotation in a single account and region

Many organizations require regular secret rotation. Use the new managed rule secretsmanager-rotation-enabled-check to verify whether your secrets are configured for automatic rotation.

  1. From the AWS Config console, navigate to Settings and ensure that Recording is on. Under Resource types to record, turn on recording for all resources by checking the All resources box next to Record all resources supported in this region, as shown in Figure 1 below.

    Figure 1: Enable Recording

  2. To set up the rule, go to the Rules page in the AWS Config console and select Add rule, as shown in Figure 2.

    Figure 2: Add Rule

  3. Search for secretsmanager-rotation-enabled-check in the search bar and select the rule that appears, as shown in Figure 3.

    Figure 3: Search for rule

  4. In Figure 4, I use secretsmanager-rotation-enabled-check as the name of my rule. Trigger type is set to run upon changes to the resource's configuration. For Scope of changes, you can monitor all applicable resources for this rule type or only resources with specific tags. In my example, I am monitoring all secrets where the ENVIRONMENT tag is set to PRODUCTION. Finally, under Rule Parameters, I set maximumAllowedRotationFrequency to 30 days.

    Figure 4: Add managed rule

  5. In my example, I specify AWS-PublishSNSNotification as my Remediation action and enter the parameters for AutomationAssumeRole, Message, and TopicArn so that I can receive notifications from an Amazon SNS topic about non-compliant secrets, as shown in Figure 5 below. Setting a Remediation action is optional. Once the rule is set up the way you want it, select Save.
    Figure 5: Choose remediation action

    Once you’ve saved the rule, it will evaluate your secrets every time there’s a change in the secret metadata, and you’ll receive an Amazon SNS notification about non-compliant secrets.

  6. In the AWS Config console, view your results by selecting Rules from the menu on the left. In Figure 6, secretsmanager-rotation-enabled-check shows that it has detected 1 noncompliant resource.

    Figure 6: View rule evaluation results

  7. Select secretsmanager-rotation-enabled-check and it provides a link to the Resource ID of the non-compliant secret, as shown in Figure 7.

    Figure 7: Detailed view of rule with noncompliant secret
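
Once you've identified a noncompliant secret, you can bring it into compliance by enabling rotation on it. A minimal boto3 sketch is shown below; the secret name, rotation Lambda function ARN, and 30-day schedule are placeholders that you would replace with your own values.

import boto3

secretsmanager = boto3.client('secretsmanager')

# Enable rotation on the noncompliant secret using an existing rotation Lambda function.
secretsmanager.rotate_secret(
    SecretId='<your-secret-name>',
    RotationLambdaARN='arn:aws:lambda:us-east-1:<account-id>:function:<your-rotation-function>',
    RotationRules={'AutomaticallyAfterDays': 30},
)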

Example 2: Drive security compliance across multiple AWS accounts in your AWS Organization by creating an AWS Config Aggregator

Next, I’ll show you how to use the AWS Config Aggregator to review how secrets are configured across all accounts and regions in your AWS Organization so you can see whether they’re in compliance with your organization’s security and compliance requirements. AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.

NOTE: You must enable AWS Config and the AWS Config managed rules specific to secrets in all accounts and regions that you want to monitor before creating the aggregator. You can use AWS CloudFormation StackSets to enable AWS Config and provision rules across accounts and regions as described here.

  1. In this example, I create the aggregator in my AWS Organization’s master account. From the AWS Config console, select Aggregators from the left menu, then select Add aggregator, as shown in Figure 8.

    Figure 8: Add aggregator

  2. Select the check box next to Allow data replication, as shown in Figure 9 below. This provides the permission for your AWS Organization’s master account to access the resource configuration and compliance details for all the accounts and regions in your AWS Organization.

    Figure 9: Allow data replication

  3. Provide a name for the aggregator. In Figure 10, I've named mine MyOrganizationsSecrets. Select Add my organization; you'll then choose an IAM role, which allows AWS Config to get the list of accounts in your AWS Organization.
    Figure 10: Enable data replication and configure aggregator

    NOTE: If you do not have an organization configured in AWS Organizations, you can select Add individual account IDs and then either add account IDs manually or update a comma separated list of accounts.

  4. Select Choose IAM role. Ensure Create a role is selected and enter a unique name. In Figure 11, I’ve named my role aws-config-aggregator-role. Select Choose IAM role again to create the role and again to continue.

    Figure 11: Choose IAM role

  5. Select the Regions from which you want to aggregate data. In Figure 12, I've selected the two regions in which my AWS Organization uses Secrets Manager.
    Figure 12: Select target regions for aggregation

    Once you’ve selected your regions, click Save.

  6. Select the aggregator you just created to see the Aggregated view. In Figure 13, I select MyOrganizationsSecrets. As noted on the console, an aggregator is an AWS Config resource type that collects AWS Config data from multiple accounts and regions. The data displayed in the dashboard comes from multiple aggregation sources, is refreshed at different intervals, and might be delayed by a few minutes.

    Figure 13: Select aggregator

  7. In the Aggregated view shown in Figure 14 below, you can now see a dashboard view of all resources in your Organization, across all accounts and regions. On the top right, the Config rule compliance status shows that this organization has 11 compliant and 7 non-compliant rules. Below that, the Top 5 non-compliant rules table lists each rule name, the region, the account number, and the number of non-compliant resources.
    Figure 14: Aggregated view

    You can drill down into this data to view all compliant and non-compliant secrets in all of your organization's accounts and regions, and you can work with individual account or secret owners to drive security compliance: ensuring that all secrets are configured for rotation, that they meet your organization's standard for rotation frequency, and that they are rotated successfully.

  8. In Figure 15, I select secretsmanager-rotation-enabled-check for us-east-1 from the Top 5 non-compliant rules.

    Figure 15: Top 5 noncompliant rules

  9. The detail view in Figure 16 below shows the 5 non-compliant resources and their corresponding Resource IDs.

    Figure 16: Compliant and non-compliant secrets

Summary

In this post, I showed you how to track and evaluate secret configuration with AWS Config and AWS Config Rules by using the AWS Management Console. You can also do this using the AWS Config APIs or the AWS Command Line Interface (CLI).
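
For example, a boto3 sketch like the following returns the compliance summary and the noncompliant secrets for the rule created in Example 1; the rule name is an assumption that should match whatever you entered in the console.

import boto3

config = boto3.client('config')

# Summary of compliance for the rule.
summary = config.describe_compliance_by_config_rule(
    ConfigRuleNames=['secretsmanager-rotation-enabled-check']
)
print(summary['ComplianceByConfigRules'])

# The individual noncompliant secrets.
details = config.get_compliance_details_by_config_rule(
    ConfigRuleName='secretsmanager-rotation-enabled-check',
    ComplianceTypes=['NON_COMPLIANT'],
)
for result in details['EvaluationResults']:
    print(result['EvaluationResultIdentifier']['EvaluationResultQualifier']['ResourceId'])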

This enables you to drive secrets management best practices in an individual account or across your AWS Organization. To get started managing secrets, open the Secrets Manager console. To learn more, read How to Store, Distribute, and Rotate Credentials Securely with Secrets Manager or refer to the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jerry Hayes

Jerry Hayes is a Solutions Architect Manager on the World Wide Public Sector (WWPS) Solutions Architect (SA) team where he manages a high-performing team of Specialist SAs supporting National Security customers. He holds a Master’s degree from George Washington University and a Bachelor’s degree from Virginia Tech (Go Hokies!). Outside of work, Jerry enjoys spending time with his family, watching football, running, and traveling to new and exciting places.

AWS Online Tech Talks for April 2020

Post Syndicated from Jimmy Cooper original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-for-april-2020/

Join us for live, online presentations led by AWS solutions architects and engineers. AWS Online Tech Talks cover a range of topics and expertise levels, and feature technical deep dives, demonstrations, customer examples, and live Q&A with AWS experts.

Note – All sessions are free and in Pacific Time. Can’t join us live? Access webinar recordings and slides on our On-Demand Portal.

Tech talks this month are:

April 20, 2020 | 9:00 AM – 10:00 AM PT – Save Costs Running Kubernetes Clusters with EC2 Spot Instances – ​Learn how you can lower costs and improve application resiliency by running Kubernetes workloads on Amazon EKS with Spot Instances.​

April 20, 2020 | 11:00 AM – 12:00 PM PT – Hadoop 3.0 and Docker on Amazon EMR 6.0 – A deep dive into what’s new in EMR 6.0 including Apache Hadoop 3.0, Docker containers & Apache Hive performance improvements​.

​April 20, 2020 | 1:00 PM – 2:00 PM PT – Infrastructure as Code on AWS – ​Join this tech talk to learn how to use AWS CloudFormation and AWS CDK to provision and manage infrastructure, deploy code, and automate your software-release processes.

April 21, 2020 | 9:00 AM – 10:00 AM PT – How to Maximize Results with a Cloud Contact Center, Featuring Aberdeen Research – ​Learn how to maximize results with a cloud contact center, featuring Aberdeen Research and Amazon Connect​.

April 21, 2020 | 11:00 AM – 12:00 PM PT – Connecting Microcontrollers to the Cloud for IoT Applications – ​Learn how you can connect microcontrollers to the cloud for IoT applications​.

April 21, 2020 | 1:00 PM – 2:00 PM PT – Reducing Machine Learning Inference Cost for PyTorch Models – ​Join us for a tech talk to learn about deploying your PyTorch models for low latency at low cost.​

April 22, 2020 | 11:00 AM – 12:00 PM PT – Top 10 Security Items to Improve in Your AWS Account – Learn about the top 10 security items to improve in your AWS environment and how you can automate them.​

April 22, 2020 | 1:00 PM – 2:00 PM PT – Building Your First Application with AWS Lambda – ​Learn how to build your first serverless application with AWS Lambda, including basic design patterns and best practices.​

April 23, 2020 | 9:00 AM – 10:00 AM PT – Persistent Storage for Containers with Amazon EFS – ​Learn how to securely store your containers in the cloud with Amazon EFS​.

April 23, 2020 | 11:00 AM – 12:00 PM PT – Build Event Driven Graph Applications with AWS Purpose-Built Databases – ​Learn how to build event driven graph applications using AWS purpose-built database services including Amazon Neptune, Amazon DynamoDB, and Amazon ElastiCache.​

April 23, 2020 | 1:00 PM – 2:00 PM PT – Migrate with AWS – An introduction to the best-practice-driven process for migrations to AWS, developed from the experience of helping thousands of enterprises migrate.

April 27, 2020 | 9:00 AM – 10:00 AM PT – Best Practices for Modernizing On-Premise Big Data Workloads Using Amazon EMR – ​Learn about best practices to migrate from on-premises big data (Apache Spark and Hadoop) to Amazon EMR.​

April 27, 2020 | 11:00 AM – 12:00 PM PT – Understanding Game Changes and Player Behavior with Graph Databases – ​Learn how to solve problems with highly connected data in game datasets with Amazon Neptune.

​​April 27, 2020 | 1:00 PM – 2:00 PM PT – Assess, Migrate, and Modernize from Legacy Databases to AWS: Oracle to Amazon Aurora PostgreSQL Migration – ​Punitive licensing and high cost of on-premises legacy databases could hold you back. Join this tech talk to learn how to assess, migrate, and modernize your Oracle workloads over to Amazon Aurora PostgreSQL, using Amazon Database Migration Service (DMS).​

April 28, 2020 | 9:00 AM – 10:00 AM PT – Implementing SAP in the Cloud with AWS Tools and Services – ​This tech talk will help architects and administrators to understand the automation capabilities available that can assist your SAP migration.​

April 28, 2020 | 11:00 AM – 12:00 PM PT – Choosing Events, Queues, Topics, and Streams in Your Serverless Application – ​Learn how to choose between common Lambda event sources like EventBridge, SNS, SQS, and Kinesis Data Streams.​

April 30, 2020 | 9:00 AM – 10:00 AM PT – Inside Amazon DocumentDB: The Makings of a Managed Non-relational Database – Join Rahul Pathak, GM of Emerging Databases and Blockchain at AWS, to learn about the inner workings of Amazon DocumentDB and how it provides better performance, scalability, and availability while reducing operational overhead for managing your own non-relational databases.

Enable automatic logging of web ACLs by using AWS Config

Post Syndicated from Mike George original https://aws.amazon.com/blogs/security/enable-automatic-logging-of-web-acls-by-using-aws-config/

In this blog post, I will show you how to use AWS Config, with its auto-remediation functionality, to ensure that all web ACLs have logging enabled. The AWS CloudFormation template included in this blog post facilitates this solution and gives you a starting point for managing web ACL logging at scale.

AWS Firewall Manager can automatically deploy an AWS Web Application Firewall (WAF) rule to protect your applications when your organization creates new Application Load Balancers, API Gateways, and CloudFront distributions. However, you still have to enable logging for web ACLs on an individual basis. Information contained in web ACL logs includes the time that AWS WAF received the request for your AWS resource, detailed information about the request, and the action for the rule that each request matched. This data can be extremely important for compliance and auditing needs, debugging, or forensic research.

Web ACL logging is a best practice, and is a business requirement within many organizations. Rather than leaving logging as a manual step in a deployment process, I will show you how to use automated mechanisms to enable logging, so that your business can meet its security and compliance requirements.

Prerequisites

The solution in this blog post assumes that you are already using AWS Web Application Firewall (AWS WAF) and AWS Firewall Manager to manage your firewall rules at scale. The other AWS services used in this blog post are:

  • AWS Config
  • AWS Lambda
  • AWS Systems Manager
  • Amazon Kinesis Data Firehose
  • Amazon Simple Storage Service (Amazon S3)
  • AWS CloudFormation
  • AWS Identity and Access Management (IAM)

Using AWS Config to ensure automatic logging

AWS Config is a service that enables you to evaluate the configurations of the AWS resources in your account. AWS Config continuously monitors and records resource configuration changes. AWS Config can alert you and perform actions when resources get added, removed, or change state. AWS Config has a set of built-in rules that it can evaluate your AWS resources against, or you can build your own AWS Config rules.

In fact, when you enable AWS Firewall Manager to automatically apply AWS WAF rules to your Application Load Balancers, API Gateways, or CloudFront distributions, AWS Firewall Manager creates AWS Config rules behind the scenes. These AWS Config rules are designed so that the correct web ACLs are automatically applied whenever new Application Load Balancers, API Gateways, or CloudFront distributions are created. Enterprises use AWS Config rules to ensure consistent compliance with their internal organizational policies. You can use AWS Config to ensure that your AWS WAF rules have logging enabled.

When creating custom AWS Config rules, you associate each custom rule with an AWS Lambda function, which contains the logic that evaluates whether your AWS resource complies with the rule. You can configure the custom AWS Config rule to invoke the Lambda function in response to a configuration change, or to run periodically. After the Lambda function executes, it evaluates whether your resource complies with your rule, and it then sends the results back to AWS Config. If the resource violates the conditions of the rule, then AWS Config flags the resource as noncompliant. For more information, see How AWS Config Works in the AWS Developer Documentation.

You can also perform auto-remediation on non-compliant resources by using the built-in remediation functionality in AWS Config. When AWS Config detects a noncompliant resource, it can invoke an automation function that is defined as a Systems Manager Automation document. Systems Manager has a number of pre-built Automation documents that can do things such as create an Amazon Machine Image (AMI), create a Jira issue, and create a ServiceNow incident. For the full list of built-in Automation documents, see Systems Manager Automation Document Details Reference.

You can also create your own Automation documents to support business cases not covered by the built-in Systems Manager Automation documents. Systems Manager Automation documents can run scripts, call AWS API functions, call custom Lambda functions, execute a CloudFormation stack, and more.

Overview of the solution

The following is a high-level overview diagram of the solution described in this post, when an AWS WAF web ACL has a configuration change:
 

Figure 1: High-level solution overview

When an AWS WAF web ACL has a configuration change, the following steps will occur:

  1. The creation of the AWS WAF web ACL generates a ConfigurationItemChangeNotification, which is sent to AWS Config (step 1).
  2. AWS Config in turn sends the notification on to an AWS Lambda function (step 2), which determines if the web ACL in question is “compliant”. In this case, compliant means that the web ACL has logging configured.
  3. Lambda queries the web ACL (step 3) to determine if logging is enabled.
  4. The Lambda query results are then reported back to AWS Config (step 4).
  5. If logging is not enabled, the web ACL is seen as noncompliant and AWS Config kicks off an auto-remediation step (step 5) by executing a Systems Manager Automation document.
  6. The Automation document calls a Lambda function (step 6).
  7. The Lambda function attempts to enable logging on the web ACL (step 7).
  8. If logging is successfully enabled, then the web ACL automatically sends logs through a Kinesis Data Firehose delivery stream (step 8).
  9. The Kinesis Data Firehose delivery stream stores the data in an S3 bucket (step 9).
  10. After the Lambda function has completed enabling logging functionality, it reports back to Systems Manager (step 10).
  11. Systems Manager reports back to AWS Config (step 11).
  12. At this point, the web ACL compliance status still hasn’t been updated. AWS Config still believes the web ACL is noncompliant, so AWS Config calls the Lambda function (step 2) to determine if the compliance status has changed.
  13. Lambda checks the web ACL again (step 3), determines that it is compliant, and returns the results to AWS Config (step 4).

Because AWS Config stores the compliance history of the web ACL configuration, compliance team members will be able to go into AWS Config and see the history of the web ACL, as shown in the following screenshot. You will be able to see that the configuration state was noncompliant when the web ACL was created, and that it became compliant after logging was enabled.
 

Figure 2: Web ACL compliance history in AWS Config

Using the CloudFormation template

To automatically enable logging on all web ACLs, I created a CloudFormation template for you to use to set up all the necessary components. The CloudFormation template creates the following:

  • An S3 bucket to store the logs.
  • A Kinesis Data Firehose delivery stream.
  • An AWS Config rule.
  • A Systems Manager Automation document.
  • Two Lambda functions. The first Lambda function is used by AWS Config to evaluate whether the web ACL has logging enabled. The second Lambda function is used by the Systems Manager Automation document to automatically enable logging.
  • AWS IAM policies and roles to ensure that everything works correctly.

I designed this CloudFormation template to be executed in an AWS account that already has AWS Firewall Manager enabled; however, it will not prevent you from running it in an AWS account that does not. Accounts without AWS Firewall Manager won't benefit from the central configuration and management that AWS Firewall Manager provides, but this stack will still allow you to ensure that existing and new web ACLs have logging enabled.

To deploy the template

  1. Copy the CloudFormation template file that follows these instructions, and save it to your computer.
  2. Sign in to the AWS account where you want to deploy this stack.
  3. Choose Services, choose CloudFormation, and then choose Stacks.
  4. In the upper right, choose Create stack, and then choose With new resources (standard).
  5. In the Specify template section, choose Upload a template file, and then select Choose file.
  6. Navigate to the file that you saved in step 1. Choose Next.
  7. In the Stack name field, enter a stack name that is meaningful to you. Choose Next, and choose Next again.
  8. Select the checkbox that says I acknowledge that AWS CloudFormation might create IAM resources and choose the Create stack button.
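
If you would rather deploy the stack programmatically, the console steps above correspond roughly to the following boto3 sketch; the stack name and the local file name for the template are placeholders.

import boto3

cloudformation = boto3.client('cloudformation')

with open('webacl-logging.yaml') as f:
    template_body = f.read()

# CAPABILITY_IAM is required because the template creates IAM roles and policies.
cloudformation.create_stack(
    StackName='enable-webacl-logging',
    TemplateBody=template_body,
    Capabilities=['CAPABILITY_IAM'],
)

# Wait until the stack and all of its resources finish creating.
cloudformation.get_waiter('stack_create_complete').wait(StackName='enable-webacl-logging')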

CloudFormation template file


#
# This CloudFormation template enables auto-logging of web ACLs through the use of 
# AWS Config and Systems Manager Automation documents.
#
# This solution creates an S3 bucket, a Kinesis Data Firehose, an AWS Config rule, 
# a Systems Manager Automation document, and two Lambda functions to evaluate and 
# remediate when web ACLs are not configured for logging.
#

Outputs:
  S3BucketName:
    Value: !Ref S3Bucket
  FirehoseName:
    Value: !Ref Firehose

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket

  Firehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamName:
        !Join
          - ''
          - - aws-waf-logs-
            - !Ref AWS::StackName
      ExtendedS3DestinationConfiguration:
        RoleARN: !GetAtt DeliveryRole.Arn
        BucketARN: !GetAtt S3Bucket.Arn
        BufferingHints:
          IntervalInSeconds: 300
          SizeInMBs: 5
        CompressionFormat: UNCOMPRESSED

  DeliveryRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: ''
            Effect: Allow
            Principal:
              Service: firehose.amazonaws.com
            Action: 'sts:AssumeRole'
            Condition:
              StringEquals:
                'sts:ExternalId': !Ref 'AWS::AccountId'


  DeliveryPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: 'firehose_delivery_policy'
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - 's3:AbortMultipartUpload'
              - 's3:GetBucketLocation'
              - 's3:GetObject'
              - 's3:ListBucket'
              - 's3:ListBucketMultipartUploads'
              - 's3:PutObject'
            Resource:
              - !GetAtt S3Bucket.Arn
              - !Join
                - ''
                - - !GetAtt S3Bucket.Arn
                  - '*'
      Roles:
        - !Ref DeliveryRole

  AutomationDoc:
    Type: "AWS::SSM::Document"
    Properties:
      Content:
        schemaVersion: "0.3"
        description: "Adds logging to non-compliant WebACLs"
        assumeRole: "{{ AutomationAssumeRole }}"
        parameters:
          WebACLId:
            type: "String"
            description: "(Required) The WebACLId of the WebACL"
          AutomationAssumeRole:
            type: "String"
            description: "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf"
        mainSteps:
          - name: performRemediation
            action: aws:invokeLambdaFunction
            inputs:
              FunctionName: !GetAtt WafLambda.Arn
              Payload: '{"webAclName":"{{ WebACLId }}"}'
      DocumentType: Automation


  WafLambda:
    Type: AWS::Lambda::Function
    Properties:
      # The AmazonSSMAutomationRole role expects the Lambda function name to begin with Automation*
      FunctionName: !Sub Automation-${AWS::StackName}-EnableWafLogging
      Code:
        ZipFile:
          !Sub |
            #CODE GOES HERE
            import boto3
            import json
            import os

            #
            # This Lambda function ensures that all WAF web ACLs have logging enabled.
            #
            # Trigger Type: SSM Automation
            # Scope of Automation: AWS::WAF::WebACL & AWS::WAFRegional::WebACL
            #

            FIREHOSE_ARN = os.environ['FIREHOSE_ARN']
            CONFIG_RULE_NAME = os.environ['CONFIG_RULE_NAME']

            def evaluate_compliance(webAclName):
              hasConfig = False

              #Setting up variables
              client = ''
              response = ''
              wafArn = ''

              #Check if this is a WAFv2. The ResourceId passed in is already the ARN
              if webAclName.find('arn:aws:wafv2:') >= 0:
                wafArn = webAclName
                client = boto3.client('wafv2')
              else:

                isWebAcl = True
                #Test if this is AWS::WAF::WebACL
                try:
                  print('Testing for WAF::WebACL')
                  client = boto3.client('waf')
                  response = client.get_web_acl(WebACLId=webAclName)
                except:
                  isWebAcl = False
                  pass

                if not isWebAcl:
                  #Test if this is AWS::WAFRegional::WebACL
                  try:
                    print('Testing for WAFRegional::WebACL')
                    client = boto3.client('waf-regional')
                    response = client.get_web_acl(WebACLId=webAclName)
                  except:
                    pass

                wafArn = response['WebACL']['WebACLArn']

              try:
                response = client.get_logging_configuration(ResourceArn=wafArn)
                hasConfig = True
              except:
                print('Attempting to fix non-compliance')
                print('WAF ARN: ' + wafArn)
                response = client.put_logging_configuration(LoggingConfiguration={'ResourceArn': wafArn,'LogDestinationConfigs': [ FIREHOSE_ARN ]})

            def regen_compliance():
              try:
                print("Attempting to re-run AWS Config rule to update compliance status")
                client = boto3.client('config')
                response = client.start_config_rules_evaluation(ConfigRuleNames=[CONFIG_RULE_NAME])
              except:
                pass

            def handler(event, context):
              aclName = event['webAclName']
              evaluate_compliance(aclName)

              regen_compliance()

      Handler: "index.handler"
      Environment:
        Variables:
          FIREHOSE_ARN: !GetAtt Firehose.Arn
          CONFIG_RULE_NAME: !Ref ConfigRule
      Runtime: python3.7
      Timeout: 30
      Role: !GetAtt LambdaExecutionRole.Arn

  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName:
        !Join
          - ''
          - - Enable-WebACL-Logging-
            - !Ref AWS::StackName
      Description: 'Ensures that all new web ACLs have logging enabled'
      Scope:
        ComplianceResourceTypes:
          - AWS::WAF::WebACL
          - AWS::WAFv2::WebACL
          - AWS::WAFRegional::WebACL
      Source:
        Owner: "CUSTOM_LAMBDA"
        SourceDetails:
        - EventSource: "aws.config"
          MessageType: ConfigurationItemChangeNotification
        - EventSource: "aws.config"
          MessageType: OversizedConfigurationItemChangeNotification
        SourceIdentifier: !GetAtt Lambda.Arn
    DependsOn: PermissionToCallLambda

  WebACLRemediation:
    Type: "AWS::Config::RemediationConfiguration"
    Properties:
      # AutomationAssumeRole, MaximumAutomaticAttempts and RetryAttemptSeconds are Required if Automatic is true
      Automatic: true
      ConfigRuleName: !Ref ConfigRule
      MaximumAutomaticAttempts: 1
      Parameters:
        AutomationAssumeRole:
          StaticValue:
            Values:
              - !GetAtt AutoRemediationIamRole.Arn
        WebACLId:
          ResourceValue:
            Value: RESOURCE_ID
      RetryAttemptSeconds: 60
      TargetId: !Ref AutomationDoc
      TargetType: SSM_DOCUMENT


  AutoRemediationIamRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ssm.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole'
      Policies: []

  PermissionToCallRemediationLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt WafLambda.Arn
      Action: "lambda:InvokeFunction"
      Principal: "ssm.amazonaws.com"

  PermissionToCallLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt Lambda.Arn
      Action: "lambda:InvokeFunction"
      Principal: "config.amazonaws.com"

  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile:
          !Sub |
            import boto3
            import json
            #
            # This Lambda function determines if WAF web ACLs have logging enabled
            #
            # Trigger Type: Config: Change Triggered
            # Scope of Changes: AWS::WAF::WebACL, AWS::WAFv2::WebACL & AWS::WAFRegional::WebACL
            #

            def is_applicable(config_item, event):
              status = config_item['configurationItemStatus']
              event_left_scope = event['eventLeftScope']
              test = ((status in ['OK', 'ResourceDiscovered']) and
                event_left_scope == False)
              return test

            def evaluate_compliance(config_item):
              wafArn = config_item['ARN']
              hasConfig = False

              client = ''
              if (config_item['resourceType'] == 'AWS::WAF::WebACL'):
                client = boto3.client('waf')
              elif (config_item['resourceType'] == 'AWS::WAFRegional::WebACL'):
                client = boto3.client('waf-regional')
              elif (config_item['resourceType'] == 'AWS::WAFv2::WebACL'):
                client = boto3.client('wafv2')

              try:
                response = client.get_logging_configuration(ResourceArn=wafArn)
                hasConfig = True
              except:
                pass

              if not hasConfig:
                return 'NON_COMPLIANT'
              else:
                return 'COMPLIANT'

            def handler(event, context):
              invoking_event = json.loads(event['invokingEvent'])
              compliance_value = 'NOT_APPLICABLE'

              if is_applicable(invoking_event['configurationItem'], event):
                compliance_value = evaluate_compliance(invoking_event['configurationItem'])

              config = boto3.client('config')
              response = config.put_evaluations(
                Evaluations=[
                  {
                  'ComplianceResourceType': invoking_event['configurationItem']['resourceType'],
                  'ComplianceResourceId': invoking_event['configurationItem']['resourceId'],
                  'ComplianceType': compliance_value,
                  'OrderingTimestamp': invoking_event['configurationItem']['configurationItemCaptureTime']
                  },
                ],
                ResultToken=event['resultToken'])
      Handler: "index.handler"
      Runtime: python3.7
      Timeout: 30
      Role: !GetAtt LambdaExecutionRole.Arn

  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: lambda-logging
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:*
            Resource: arn:aws:logs:*:*:*
      - PolicyName: waf-config
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - waf:PutLoggingConfiguration
            - waf:GetLoggingConfiguration
            - waf:GetWebACL
            - wafv2:PutLoggingConfiguration
            - wafv2:GetLoggingConfiguration
            - wafv2:GetWebACL
            - waf-regional:PutLoggingConfiguration
            - waf-regional:GetLoggingConfiguration
            - waf-regional:GetWebACL
            Resource:
            - arn:aws:waf::*:*
            - arn:aws:wafv2:*:*:*/*/*
            - arn:aws:waf-regional:*:*:*
      - PolicyName: config-evaluate
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - config:PutEvaluations
            - config:StartConfigRulesEvaluation
            Resource: '*'
      - PolicyName: allow-lambda-servicelinkedrole
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - iam:CreateServiceLinkedRole
            Resource: arn:aws:iam::*:role/aws-service-role/*

How the CloudFormation template works

To enable logging on a web ACL, the web ACL expects a Kinesis Data Firehose delivery stream that has a name that starts with aws-waf-logs-. You typically configure a Kinesis Data Firehose delivery stream to deliver data to an S3 bucket. This CloudFormation template creates a Kinesis Data Firehose delivery stream with a name that the web ACL is expecting and is configured to deliver data to an S3 bucket. The Kinesis Data Firehose delivery stream has the name of aws-waf-logs-StackName, where StackName is the name you provided when you created this CloudFormation stack.

The CloudFormation template also creates an AWS Config rule with the name Enable-WebACL-Logging-StackName. This AWS Config rule is configured to monitor resources of type AWS::WAF::WebACL (typically a CloudFront distribution), AWS::WAFRegional::WebACL (typically an API Gateway or an Application Load Balancer), and AWS::WAFv2::WebACL, which is the latest version of the AWS WAF API. When AWS Config detects a change to one of your web ACLs (for example, an AWS WAF rule being added to an Application Load Balancer), the event is sent off to a Lambda function for evaluation against your rule.

The Lambda function is where all the heavy lifting is performed. When the Lambda function is invoked, control is passed to the handler method, which calls the evaluate_compliance method. That method uses the Boto3 Python library to pull the logging configuration of the web ACL in question: if it can pull a logging configuration, logging is enabled; if it cannot, logging is not enabled. The Lambda function then reports back a status of COMPLIANT (logging is enabled) or NON_COMPLIANT (logging is not enabled) to AWS Config.

This AWS Config rule is configured to auto-remediate noncompliant web ACLs. When a noncompliant web ACL is identified, AWS Config executes a Systems Manager Automation document, which calls a Lambda function to enable logging. This Lambda function is configured with an environment variable called FIREHOSE_ARN, which is the ARN of the Kinesis Data Firehose delivery stream that is created as part of this CloudFormation stack. In this Lambda function, if it cannot pull a logging configuration, it creates a new logging configuration using the Kinesis Data Firehose delivery stream that has already been configured. The Lambda function then attempts to call a method on AWS Config to re-evaluate compliance for this rule.

When you view the details of this rule within the AWS Config console, you’ll see all web ACLs listed under the Resource ID column. The Resource compliance status column will show as Compliant, meaning that these web ACLs comply with your AWS Config rule. Because the AWS Config rule enforces logging on web ACLs, you can be confident that logging is properly enabled.
 

Figure 3: Compliance status of web ACLs in AWS Config

The remaining parts of the CloudFormation template are in place to ensure that the system has sufficient permissions to work correctly. The Kinesis Data Firehose delivery stream is assigned to an IAM role, which has a policy assigned that grants it appropriate permissions to write to your S3 bucket. The AWS Config rule is granted permission to call the first Lambda function, and then Systems Manager is granted permission to call the second Lambda function. Finally, the Lambda functions are assigned to an IAM role that has permissions to request and modify the logging configurations of the web ACLs, and to update AWS Config with the results of those actions.

The CloudFormation template in this post provides a simple solution for automatically enabling logging of all web ACLs within an AWS Region. If your organization is looking for additional operational control, you can extend this example CloudFormation template to verify that all web ACLs are using the same logging configuration. This change could be accomplished by modifying the Lambda functions to ensure that the web ACL has both a logging configuration and is using the same Kinesis Data Firehose delivery stream that is defined within the CloudFormation template. If a logging configuration exists for a web ACL, but it is using the wrong Kinesis Data Firehose delivery stream, a Lambda function can delete that logging configuration and re-create it using the correct Kinesis Data Firehose delivery stream.
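
A sketch of that extra check for a WAFv2 web ACL might look like the following. It assumes the expected delivery stream ARN is available through the same FIREHOSE_ARN environment variable used in the template, and it is meant only to illustrate the extension described above, not as a drop-in replacement for the template's Lambda code.

import boto3
import os

FIREHOSE_ARN = os.environ['FIREHOSE_ARN']
wafv2 = boto3.client('wafv2')

# Illustrative helper (not part of the template above).
def ensure_correct_destination(web_acl_arn):
    try:
        config = wafv2.get_logging_configuration(ResourceArn=web_acl_arn)['LoggingConfiguration']
    except wafv2.exceptions.WAFNonexistentItemException:
        config = None

    # If logging is missing, or points at the wrong delivery stream, (re)create it.
    if config is None or config['LogDestinationConfigs'] != [FIREHOSE_ARN]:
        wafv2.put_logging_configuration(
            LoggingConfiguration={
                'ResourceArn': web_acl_arn,
                'LogDestinationConfigs': [FIREHOSE_ARN],
            }
        )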

While the solution described in this blog post uses custom AWS Config rules and Automation documents to enable logging on web ACLs, the approach can be generalized to use custom AWS Config rules in other contexts and for other resource types. For example, you can use this same approach to ensure that your Amazon Elastic Compute Cloud (Amazon EC2) instances comply with your internal IT security policies.

Cost Considerations

For customers who already use AWS WAF and AWS Firewall Manager, this solution adds additional costs for the use of AWS Config, Amazon Kinesis Data Firehose, and Amazon S3.

With AWS Config, you pay per configuration item recorded in your AWS account per AWS Region and the number of active rule evaluations recorded. For more information, see AWS Config pricing.

With AWS Systems Manager, you pay for the number of Automation steps initiated and for the duration of each step, per second. I expect that my usage for this solution would fall under the free tier, but your usage may vary. For more information, see AWS Systems Manager pricing.

With AWS Lambda, you pay for the number of requests and the duration of those requests. However, because I don’t expect a lot of requests to Lambda in this solution, I expect that my usage would fall under the free tier, but your usage may vary. For more information, see AWS Lambda pricing.

With Amazon Kinesis Data Firehose, you pay only for the volume of data you ingest into the service. For more information, see Amazon Kinesis Data Firehose pricing.

For customers who want managed distributed denial of service (DDoS) protection, AWS Shield Advanced may be a good solution. Additionally, AWS Shield Advanced customers get AWS WAF and AWS Firewall Manager at no additional cost for usage on their resources that are protected by AWS Shield Advanced. For more information, see AWS Shield Pricing.

Conclusion

AWS Firewall Manager is a powerful solution for managing web ACLs at scale. By using a custom AWS Config rule—the same underlying technology used by AWS Firewall Manager—you can create a scalable approach to verify that all your web ACLs within an AWS Region have logging enabled. The CloudFormation template included in this blog post gives your organization a good starting point for being able to manage web ACL logging at scale.

Find out more:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mike George

Mike is a Senior Solutions Architect based out of Salt Lake City, Utah. He enjoys helping customers solve their technology problems. His interests include software engineering, security, and AI/ML.

Monitoring and management with Amazon QuickSight and Athena in your CI/CD pipeline

Post Syndicated from Umair Nawaz original https://aws.amazon.com/blogs/devops/monitoring-and-management-with-amazon-quicksight-and-athena-in-your-ci-cd-pipeline/

One of the many ways to monitor and manage required CI/CD metrics is to use Amazon QuickSight to build customized visualizations. Additionally, by applying Lean management to software delivery processes, organizations can deliver features faster, pivot when needed, respond to compliance and security changes, and take advantage of instant feedback to improve the customer delivery experience. This blog post demonstrates how AWS resources and tools can provide monitoring of, and insight into, your CI/CD pipelines.

There are three principles in Lean management that this solution enables and to which it contributes:

  • Limiting work in progress by establishing constraints that drive process improvement and increase throughput.
  • Creating and maintaining dashboards displaying key quality information, productivity metrics, and current status of work (including defects).
  • Using data from development performance and operations monitoring tools to enable business decisions more frequently.

Overview

The following architectural diagram shows how to use AWS services to collect metrics from a CI/CD pipeline and deliver insights through Amazon QuickSight dashboards.

Architecture diagram showing an overview of how CI/CD metrics are extracted and transformed to create a dynamic QuickSight dashboard

In this example, the orchestrator for the CI/CD pipeline is AWS CodePipeline with the entry point as an AWS CodeCommit Git repository for source control. When a developer pushes a code change into the CodeCommit repository, the change goes through a series of phases in CodePipeline. AWS CodeBuild is responsible for performing build actions and, upon successful completion of this phase, AWS CodeDeploy kicks off the actions to execute the deployment.

For each action in CodePipeline, the following series of events occurs:

  • An Amazon CloudWatch rule creates a CloudWatch event containing the action’s metadata.
  • The CloudWatch event triggers an AWS Lambda function.
  • The Lambda function extracts relevant reporting data and writes it to a CSV file in an Amazon S3 bucket.
  • Amazon Athena queries the Amazon S3 bucket and loads the query results into SPICE (an in-memory engine for Amazon QuickSight).
  • Amazon QuickSight obtains data from SPICE to build dashboard displays for the management team.

Note: This solution is for an AWS account with one or more existing CodePipelines. If you do not have any pipelines, no metrics will be collected.

Getting started

To get started, follow these steps:

  • Create a Lambda function and copy the following code snippet. Be sure to replace the bucket name with the one used to store your event data. This Lambda function takes the payload from a CloudWatch event, extracts the fields pipeline, time, state, execution, stage, and action, and transforms them into a CSV file.

Note: Athena's performance can be improved by compressing, partitioning, or converting data into columnar formats such as Apache Parquet. In this use case, the dataset size is negligible; therefore, a transformation from CSV to Parquet is not required.


import boto3
import datetime

# Analyze the payload from the CloudWatch (CodePipeline) event
def pipeline_execution(data):
    print(data)
    # Specify the data fields to deliver to S3; the first entry is the CSV header row
    row = ['pipeline,time,state,execution,stage,action']

    if "stage" in data['detail'].keys():
        stage = data['detail']['stage']
    else:
        stage = 'NA'

    if "action" in data['detail'].keys():
        action = data['detail']['action']
    else:
        action = 'NA'
    row.append(data['detail']['pipeline'] + ',' + data['time'] + ',' + data['detail']['state'] + ',' + data['detail']['execution'] + ',' + stage + ',' + action)
    values = '\n'.join(str(v) for v in row)
    return values

# Upload the CSV file to the S3 bucket
def upload_data_to_s3(data):
    s3 = boto3.client('s3')
    runDate = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S:%f")
    csv_key = runDate + '.csv'
    s3.put_object(
        Body=data,
        Bucket='<example-bucket>',
        Key=csv_key
    )

def lambda_handler(event, context):
    upload_data_to_s3(pipeline_execution(event))
  • Create an Athena table to query the data stored in the Amazon S3 bucket. Execute the following SQL in the Athena query console and provide the bucket name that will hold the data.
CREATE EXTERNAL TABLE `devops`(
   `pipeline` string, 
   `time` string, 
   `state` string, 
   `execution` string, 
   `stage` string, 
   `action` string)
 ROW FORMAT DELIMITED 
   FIELDS TERMINATED BY ',' 
 STORED AS INPUTFORMAT 
   'org.apache.hadoop.mapred.TextInputFormat' 
 OUTPUTFORMAT 
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
 LOCATION
   's3://<example-bucket>/'
 TBLPROPERTIES (
   'areColumnsQuoted'='false', 
   'classification'='csv', 
   'columnsOrdered'='true', 
   'compressionType'='none', 
   'delimiter'=',', 
   'skip.header.line.count'='1',  
   'typeOfData'='file')  
  • Create a CloudWatch event rule that passes events to the Lambda function created in Step 1. In the event rule configuration, set the Service Name as CodePipeline and, for Event Type, select All Events.
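
The same rule can also be created programmatically. The boto3 sketch below is one way to do it; the rule name, function name, and function ARN are placeholders, and the add_permission call grants CloudWatch Events permission to invoke the Lambda function.

import boto3
import json

events = boto3.client('events')
lambda_client = boto3.client('lambda')

LAMBDA_ARN = 'arn:aws:lambda:us-east-1:<account-id>:function:<pipeline-metrics-function>'

# Match every CodePipeline event (pipeline, stage, and action state changes).
events.put_rule(
    Name='codepipeline-all-events',
    EventPattern=json.dumps({'source': ['aws.codepipeline']}),
)

# Send matching events to the Lambda function created earlier.
events.put_targets(
    Rule='codepipeline-all-events',
    Targets=[{'Id': 'pipeline-metrics-lambda', 'Arn': LAMBDA_ARN}],
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName='<pipeline-metrics-function>',
    StatementId='AllowCloudWatchEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
)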

Sample Dataset view from Athena.

Sample Athena query and the results
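
If you want to run a similar aggregate query outside the console, a boto3 sketch such as the following works against the devops table created above; the database name, results location, and query itself are assumptions you would adapt to your environment.

import boto3

athena = boto3.client('athena')

# Count events per pipeline and state from the collected CodePipeline data.
query = """
    SELECT pipeline, state, COUNT(*) AS events
    FROM devops
    GROUP BY pipeline, state
    ORDER BY pipeline, state
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'default'},
    ResultConfiguration={'OutputLocation': 's3://<example-bucket>/athena-results/'},
)
print('Query execution ID:', execution['QueryExecutionId'])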

Amazon QuickSight visuals

After the initial setup is done, you are ready to create your QuickSight dashboard. Be sure to check that the Athena permissions are properly set before creating an analysis to be published as an Amazon QuickSight dashboard.

Below are diagrams and figures from Amazon QuickSight that can be generated using the event data queried from Athena. In this example, you can see how many executions happened in the account and how many were successful.

The following screenshot shows that most pipeline executions are failing. A manager might be concerned that this points to a significant issue and prompt an investigation in which they can allocate resources to improve delivery and efficiency.

QuickSight Dashboard showing total execution successes and failures

The visual for this solution is dynamic in nature. In case the pipeline has more or fewer actions, the visual will adjust automatically to reflect all actions. After looking at the success and failure rates for each CodePipeline action in Amazon QuickSight, as shown in the following screenshot, users can take targeted actions quickly. For example, if the team sees a lot of failures due to vulnerability scanning, they can work on improving that problem area to drive value for future code releases.

QuickSight Dashboard showing the successes and failures of pipeline actions

Day-over-day visuals reflect date-specific activity and enable teams to see their progress over a period of time.

QuickSight Dashboard showing day over day results of successful CI/CD executions and failures

Amazon QuickSight offers controls that can be configured to apply filters to visuals. For example, the following screenshot demonstrates how users can toggle between visuals for different applications.

QuickSight's control function to switch between different visualization options

Cleanup (optional)

In order to avoid unintended charges, delete the following resources:

  • Amazon CloudWatch event rule
  • Lambda function
  • Amazon S3 Bucket (the location in which CSV files generated by the Lambda function are stored)
  • Athena external table
  • Amazon QuickSight data sets
  • Analysis and dashboard

Conclusion

In this blog, we showed how metrics can be derived from a CI/CD pipeline. Utilizing Amazon QuickSight to create visuals from these metrics allows teams to continuously deliver updates on the deployment process to management. The aggregation of the captured data over time allows individual developers and teams to improve their processes. That is the goal of creating a Lean DevOps process: to oversee the meta-delivery pipeline and optimize all future releases by identifying weak spots and points of risk during the entire release process.

___________________________________________________________

About the Authors

Umair Nawaz is a DevOps Engineer at Amazon Web Services in New York City. He works on building secure architectures and advises enterprises on agile software delivery. He is motivated to solve problems strategically by utilizing modern technologies.
Christopher Flores is an Engagement Manager at Amazon Web Services in New York City. He leads AWS developers, partners, and client teams in using the customer engagement accelerator framework. Christopher expedites stakeholder alignment, enterprise cohesion and risk mitigation while ensuring feedback loops to close the engagement lifecycle.
Carol Liao is a Cloud Infrastructure Architect at Amazon Web Services in New York City. She enjoys designing and developing modern IT solutions in the cloud where there is always more to learn, more problems to solve, and more to build.

 

Manage your AWS KMS API request rates using Service Quotas and Amazon CloudWatch

Post Syndicated from Raj Copparapu original https://aws.amazon.com/blogs/security/manage-your-aws-kms-api-request-rates-using-service-quotas-and-amazon-cloudwatch/

AWS Key Management Service (KMS) publishes API usage metrics to Amazon CloudWatch and Service Quotas, allowing you to both monitor and manage your AWS KMS API request rate quotas. This functionality helps you understand trends in your usage of AWS KMS and can help prevent API request throttling as you grow your use of AWS KMS.

When you surpass your AWS KMS API request rate quotas, you receive an error “You have exceeded the rate at which you may call KMS. Reduce the frequency of your calls.” Such errors can also be caused by an increased use of AWS services that encrypt your data under keys managed in AWS KMS. For example, if you are using Amazon Redshift Spectrum, you might encounter this error – “HTTP response error code: 503 Message: SlowDown. Please reduce your request rate for operations involving AWS KMS.” Historically, in order to understand how close to a request rate quota you were, you had to perform three tasks: (i) send AWS CloudTrail events generated by AWS KMS to Amazon CloudWatch Logs; (ii) write queries in Amazon CloudWatch Logs Insights to track your API request usage; and (iii) submit an AWS Support case to request a quota increase. Now, you can view your AWS KMS API usage and request quota increases within the AWS Service Quotas console itself without doing any special configuration.

In this post, we will show you how to 1) view your KMS API utilization within Service Quotas, and 2) create a CloudWatch alarm that alerts you to an approaching quota so you can request quota increases before you are throttled.

View your AWS KMS API utilization

Background

API utilization is the percentage rate at which you are calling a particular API compared to that API’s request rate quota in your account. For AWS KMS, the default request rate for cryptographic operations using symmetric keys is 10,000 requests per second in 6 specific AWS Regions*, aggregated across all requesting clients in an account. AWS KMS aggregates your API requests every minute and sends the count to CloudWatch, where it is consumed by AWS Service Quotas for you to see. Because quota usage is aggregated by the minute, your effective quota is 600,000 requests per minute.

*See Request Quotas for Each AWS KMS API Operation for the specific quotas in the AWS Region in which you operate.

Scenario

Imagine that all the applications in your account using AWS KMS collectively made 100,000 requests to the Decrypt API, 100,000 requests to the GenerateDataKey API, and 100,000 requests to the Encrypt API in a minute. AWS KMS sends a count of 300,000 requests to Amazon CloudWatch for that particular minute. Your utilization for that minute will be 50% of your quota (300,000 divided by 600,000, which is 60 seconds times your quota of 10,000 requests per second). Within the Service Quotas console, you can view utilization across several time frames, from the most recent hour up to a week.
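
As a quick check of that arithmetic, the small helper below computes utilization from a per-minute request count and a per-second quota; it simply restates the numbers in the scenario above.

def kms_utilization_percent(requests_per_minute, quota_per_second=10_000):
    """Utilization is measured against the per-second quota aggregated over 60 seconds."""
    return requests_per_minute / (quota_per_second * 60) * 100

# 100,000 Decrypt + 100,000 GenerateDataKey + 100,000 Encrypt calls in one minute
print(kms_utilization_percent(300_000))  # 50.0 (percent)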

Here are the steps to view your AWS KMS API Utilization within Service Quotas:

  1. Sign in to the AWS Management Console.
  2. Click the “Services” dropdown in the top left corner, search for “Service Quotas”, and select it from the results.
  3. Click on the AWS Key Management Service (AWS KMS) tile on the Service Quotas dashboard.
  4. Search for “symmetric” and click on the link for “Cryptographic operations (symmetric) request rate”.
  5. The Monitoring section will display the combined utilization percentage for the following APIs – Decrypt, Encrypt, GenerateDataKey, GenerateDataKeyWithoutPlaintext, GenerateRandom, and ReEncrypt. All these APIs are grouped under the shared “Cryptographic operations (symmetric) request rate”.
  6. Adjust the graph to view the utilization trend over a week by selecting “1w” from the top right corner of the graph.

You can view the utilization for any of the other available AWS KMS APIs from the Service Quotas dashboard in a similar fashion.

The API utilization provides you the overall trend of your API usage. Because the requests sent from AWS KMS are aggregated per minute, you could still experience throttling errors at less than 100% utilization, especially if your usage is spiky and you do not have exponential backoff built into your applications’ error handling logic. For example, you might have surpassed the requests per second quota between the 12th second and the 15th second of the minute, but you were below the quota for the other 57 seconds of that minute.
 
Customizable CloudWatch graph

The utilization shown is across your entire AWS account in a given region, so if you are introducing a new application, you can monitor and see how it impacts your overall utilization. If you need a request rate quota increase before deploying your new application to production, you can request a quota increase at the top right portion of the Details section of the AWS Service Quotas page.
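
As noted above, spiky traffic can be throttled even below 100% utilization, so your applications should retry throttled KMS calls with exponential backoff. The following is a minimal sketch with boto3; the retry values and the key alias are illustrative assumptions, and the built-in standard and adaptive retry modes already back off on throttling errors.

import boto3
from botocore.config import Config

# Retry throttled calls automatically with exponential backoff and jitter.
kms = boto3.client(
    "kms",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

# Calls made with this client are retried if AWS KMS returns a throttling error.
response = kms.generate_data_key(
    KeyId="alias/my-app-key",  # placeholder alias for a symmetric key in your account
    KeySpec="AES_256",
)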

Create a CloudWatch Alarm

In the previous section we described how you can view historical utilization of API request rates from the Monitoring section of the AWS Service Quotas console. What if you want to be alerted when you have reached a predetermined utilization percentage so you can request a quota increase before you begin to experience extended throttling?

Here are the steps to do so:

  1. Click on the API of your interest from the Service Quotas console. In this example, let’s select Cryptographic operations (symmetric) request rate.
  2. In the Amazon CloudWatch alarms section (under the Monitoring section), click Create in the right-hand corner.
  3. From the Alarm threshold dropdown select “80% of applied quota value”.
  4. Enter “80threshold” as the Alarm name and click the orange Create button on the right side.
  5. Click on the “80threshold” link that now appears in the table. A new browser window will appear that takes you to the Amazon CloudWatch console.
  6. Click Edit on the top right corner.
  7. Leave all the default values selected on the Specify metrics and condition page and click Next on the bottom right.
  8. Click Add notification and, under the Select an SNS topic section, select Create new topic. Enter “SNS-Topic” as the topic name and add the email address that should receive notifications when the alarm fires. Click Create topic.
  9. Click Update alarm.
  10. Confirm your SNS subscription by clicking View SNS Subscriptions.
  11. Select your email address endpoint and click Request confirmation.
  12. You will receive an email asking you to confirm your subscription. Once you confirm the subscription, you are all set to receive email notifications for the new alarm.
     
    User interface after CloudWatch alarm created

Here are more details on creating CloudWatch alarms if you want to make additional modifications to your alarms. We recommend 80% as a good threshold to set your alarm to begin with. When you are testing a new application, you can start with this threshold, run your application for a period of time, and monitor its utilization. When an alarm fires, you can proactively request a quota increase at the top right portion of the Details section of the AWS Service Quotas page.
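
If you would rather create the alarm with code than in the console, the sketch below uses boto3 and the CloudWatch SERVICE_QUOTA metric math function. The AWS/Usage dimension values and the SNS topic ARN are assumptions; copy the exact dimensions from the metric behind the Service Quotas monitoring graph in your account.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="80threshold",
    AlarmDescription="KMS symmetric cryptographic operations at 80% of the applied quota",
    Metrics=[
        {
            "Id": "usage",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "CallCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "KMS"},
                        {"Name": "Type", "Value": "API"},
                        {"Name": "Resource", "Value": "CryptographicOperationsSymmetric"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {
            "Id": "utilization",
            "Expression": "(usage / SERVICE_QUOTA(usage)) * 100",
            "Label": "Quota utilization (%)",
            "ReturnData": True,
        },
    ],
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
    EvaluationPeriods=1,
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:SNS-Topic"],  # placeholder topic ARN
)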

Conclusion

We’ve explored how to view your AWS KMS API request usage, how to add alarms on the most critical items in your application’s use of AWS KMS, and how to request quota increases. These items provide visibility and control over how your applications interact with AWS KMS.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the AWS Key Management Service forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Raj Copparapu

Raj Copparapu is a Senior Product Manager Technical on the AWS KMS team who focuses on defining the product roadmap to satisfy customer requirements. Raj has spent over 5 years innovating to deliver products that help customers secure their data in the cloud. In his spare time, he enjoys yoga and spending time with family.

How to improve LDAP security in AWS Directory Service with client-side LDAPS

Post Syndicated from Dave Martinez original https://aws.amazon.com/blogs/security/how-to-improve-ldap-security-in-aws-directory-service-with-client-side-ldaps/

You can now better protect your organization’s identity data by encrypting Lightweight Directory Access Protocol (LDAP) communications between AWS Directory Service products (AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, and AD Connector) and self-managed Active Directory. Client-side secure LDAP (LDAPS) support enables applications that integrate with AWS Directory Service, such as Amazon WorkSpaces and AWS Single Sign-On, to connect to AD using Secure Sockets Layer/Transport Layer Security (SSL/TLS).

Note: In 2017, AWS Directory Service released server-side LDAPS support in AWS Managed Microsoft AD. This update adds client-side LDAPS support to both AWS Managed Microsoft AD and AD Connector.

In this post, I’ll step through configuring client-side LDAPS to enable encrypted communications between Amazon WorkSpaces and an Amazon Elastic Compute Cloud (Amazon EC2)-based self-managed AD.

Solution architecture

When you have completed the steps outlined in this post, your solution will look like Figure 1:

Figure 1: Solution architecture

To build the solution, you will follow a three-step process:

  1. Prepare all prerequisites, including the setup of certificate-based security in the self-managed AD environment.
  2. Register your certificate authority (CA) certificate into AWS Directory Service and enable client-side LDAPS (purple arrow in diagram above).
  3. Test client-side LDAPS using Amazon WorkSpaces and AWS Directory Service (yellow arrows in diagram above).

Step one: Set up prerequisites

To follow the steps described in this blog, you will need:

  1. A self-managed AD deployment to store your user identities. You can find setup guidance in “Step 1: Set Up Your Environment for Trusts” of the Tutorial: Creating a Trust from AWS Managed Microsoft AD to a Self-Managed Active Directory Installation on Amazon EC2.
  2. A server authentication certificate installed on your self-managed AD domain controller. Creating the certificate is typically done one of two ways:
    1. Using Active Directory Certificate Services (AD CS) in Windows Server to deploy an in-house CA for issuing server certificates. For help with setting up an AD CS deployment that supports LDAPS, see Microsoft’s LDAP over SSL (LDAPS) Certificate.
    2. Purchasing SSL certificates from a commercial CA like Verisign or AWS Certificate Manager. For help using commercial certificates with AD, see How to enable LDAP over SSL with a third-party certification authority.
  3. An AWS Directory Service directory, either AWS Managed Microsoft AD or AD Connector, to act as a bridge from AWS to your self-managed AD. See the documentation for AWS Managed Microsoft AD or AD Connector for detailed steps and tutorials. If you’re using AWS Managed Microsoft AD, also set up a two-way trust with your self-managed AD using Tutorial: Creating a Trust from AWS Managed Microsoft AD to a Self-Managed Active Directory Installation on Amazon EC2.
  4. Amazon WorkSpaces connected to your AWS Directory Service directory to look up and authenticate users. See the WorkSpaces documentation for detailed steps on using AWS Managed Microsoft AD with a Trusted Domain or AD Connector.

The remainder of this post assumes you have:

  1. Created an AWS Managed Microsoft AD instance called corp.example.com
  2. Connected corp.example.com via two-way trust to an EC2-based self-managed AD called example.local
  3. Deployed an AD CS enterprise root certificate authority in example.local with the common name Example SelfManaged CA.

When you perform the steps described below, you should replace these names with the names you selected.

Step two: Configure client-side LDAPS in AWS Directory Service

Now you’ll retrieve the CA certificate (the certificate of the issuing certificate authority) from your self-managed AD and use it to enable client-side LDAPS in AWS Directory Service. To review CA certificate requirements for AWS Directory Service, see the client-side LDAPS documentation for AWS Managed Microsoft AD or AD Connector. If you prefer to script the registration and enablement, a boto3 sketch follows the console steps below.

  1. Export the CA certificate from the example.local CA:
    1. To open the Certification Authority MMC snap-in, on the example.local server hosting AD CS, right-click the Windows icon, select Run, type certsrv.msc, and select OK.
    2. Right-click the name of the CA (in this case, Example SelfManaged CA) and select Properties.
    3. In the Properties window, on the General tab, under CA certificates, select the CA certificate listed, and then select View Certificate.
       
      Figure 2: View the CA certificate

    4. In the Certificate window, on the Details tab, select Copy to File.
    5. In the Certificate Export Wizard, select Next.
    6. In the Export File Format screen, select Base-64 encoded X.509 (.CER), and then select Next. This saves the file in the format required by AWS.
       
      Figure 3: Select the base-64 encoded export file format

    7. Select Browse, and then select a file name and save location for the CA certificate.
    8. Select Save, and then click Next.
    9. Select Finish, then select OK to complete the export process.
    10. Copy the file to a location accessible by the machine where you will be performing the AWS Directory Service configuration.
  2. Register the example.local CA certificate in AWS Directory Service:
    1. In the AWS Management Console, select Directory Service, and then select the Directory ID link for the AWS Directory Service directory connected to example.local (in this case, corp.example.com).
       
      Figure 4: Select the Directory ID

    2. On the Directory details page, in the Networking & security tab, in the Client-side LDAPS section (shown in Figure 5), select the Actions menu, and then select Register certificate.
       
      Figure 5: Select “Register certificate”

    3. In the Register a CA certificate dialog box, select Browse, navigate to the location where you stored the CA certificate for your AD CS certificate authority, select Open, and then select Register certificate.
       
      Figure 6: Register a CA certificate

  3. Enable client-side LDAPS in AWS Directory Service:
    1. In the Client-side LDAPS section, once the Registration status field for the certificate reads Registered, select the Enable button. Click the Refresh button for updated status.
       
      Figure 7: Check the “Registration status” and then select “Enable”

    2. In the Enable client-side LDAPS dialog box, select Enable.
    3. In the Client-side LDAPS section, under Status, when the status field changes to Enabled, LDAPS is successfully configured. Click the Refresh button for updated status.
       
      Figure 8: LDAPS successfully configured
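
If you would rather script these registration and enablement steps, the following is a minimal boto3 sketch of the same flow. The directory ID and the certificate file name are placeholders for your own values.

import time
import boto3

ds = boto3.client("ds")
directory_id = "d-1234567890"  # the AWS Directory Service directory connected to example.local

# Register the base64-encoded CA certificate exported from the example.local CA.
with open("example-selfmanaged-ca.cer") as f:
    cert = ds.register_certificate(DirectoryId=directory_id, CertificateData=f.read())

# Wait for the certificate to reach the Registered state.
while True:
    state = ds.describe_certificate(
        DirectoryId=directory_id, CertificateId=cert["CertificateId"]
    )["Certificate"]["State"]
    if state == "Registered":
        break
    time.sleep(10)

# Enable client-side LDAPS for the directory.
ds.enable_ldaps(DirectoryId=directory_id, Type="Client")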

Step three: Test client-side LDAPS with Amazon WorkSpaces

The last step is to test client-side LDAPS with an AWS application. Now that client-side LDAPS has been configured, all LDAP traffic to the self-managed AD will be encrypted and travel over port 636.

Note: Ensure that AWS security group, network firewall, and Windows firewall settings applied to the AWS Directory Service directory (outbound) and self-managed AD (inbound) allow TCP communications on port 636.

To test your client-side LDAPS configuration, perform a WorkSpaces user look up:

  1. In the AWS Management Console, choose WorkSpaces, and then click Launch WorkSpaces.
  2. On the Select a Directory screen, pick corp.example.com and then select Next Step.
  3. On the Identify Users screen, in the Select trust from forest menu, select example.local, and then select Show All Users (see Figure 9 for an example). This search will be executed over LDAPS.
     
    Figure 9: Searching users from a trusted domain with client-side LDAPS

Summary

In this post, we’ve explored how client-side LDAPS support in AWS Managed Microsoft AD and AD Connector improves LDAP security for AWS applications and services like Amazon WorkSpaces, AWS Single Sign-On, and Amazon QuickSight by encrypting sensitive network traffic between AWS and Active Directory.

To learn more about using AWS Managed Microsoft AD or AD Connector, visit the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page. If you have comments about this blog post, submit a comment in the Comments section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dave Martinez

Dave is a Senior Product Manager working on AWS Directory Service. Outside of work he enjoys Seattle sports and coaching his son’s Little League baseball team.

Transforming DevOps at Broadridge on AWS

Post Syndicated from Som Chatterjee original https://aws.amazon.com/blogs/devops/transforming-devops-for-a-fintech-on-aws/

by Tom Koukourdelis (Broadridge – Vice President, Head of Global Cloud Platform Development and Engineering), Sreedhar Reddy (Broadridge – Vice President, Enterprise Cloud Architecture)

We have seen large enterprises in all industry segments meaningfully utilizing AWS to build new capabilities and deliver business value. While doing so, enterprises have to balance existing systems, processes, tools, and culture while innovating at pace with industry disruptors. Broadridge Financial Solutions, Inc. (NYSE: BR) is no exception. Broadridge is a $4 billion global FinTech leader and a leading provider of investor communications and technology-driven solutions to banks, broker-dealers, asset and wealth managers, and corporate issuers.

This blog post explores how we adopted AWS at scale while staying secure and compliant, and while delivering a high degree of productivity for our builders on AWS. It also describes the steps we took to create technical (a cloud solution as a foundation based on AWS) and procedural (organizational) capabilities by leveraging AWS cloud adoption constructs. The improvement in our builder productivity and agility directly contributes to rolling out differentiated business capabilities addressing our customer needs in a timely manner. In this post, we share real-life learnings and takeaways to adopt AWS at scale, transform business and application team experiences, and deliver customer delight.

Background

At Broadridge we have a number of distributed and mainframe systems supporting multiple financial services domains and sub-domains such as post trade, proxy communications, financial and regulatory reporting, portfolio management, and financial operations. The majority of these systems were built and deployed years ago at on-premises data centers all over the US and abroad.

Builder personas at Broadridge are diverse in terms of location, culture, and the technology stack they use to build and support applications (we use a number of front-end JS frameworks; .NET; Java; ColdFusion for web development; ORMs for data entity relational mapping; IBM MQ; Apache Camel for messaging; databases like SQL, Oracle, Sybase, and other open source stacks for transaction management; databases, and batch processing on virtualized and bare metal instances). With more than 200 on-premises distributed applications and mainframe systems across front-, mid-, and back-office ecosystems, we wanted to leverage AWS to improve efficiency and build agility, and to reduce costs. The ability to reach customers at new geographies, reduced time to market, and opportunities to build new business competencies were key parameters as well.

Broadridge’s core tenets for cloud adoption

When AWS adoption within Broadridge attained a critical mass (known as the Foundation stage of adoption), the business and technology leadership teams defined our cloud adoption posture and shared it with teams across the organization using the following tenets. Enterprises looking to adopt AWS at scale should define similar tenets fit for their organizations in plain language understandable by everyone across the board.

  • Iterate: Understanding that we cannot disrupt ongoing initiatives, a small, iterative approach of moving workloads to the cloud in waves (rinse and repeat) was to be adopted. Long-drawn, capital-intensive big bangs were to be avoided.
  • Fully automate: Starting from infrastructure deployment to application build, test, and release, we decided early on that automation and no-touch deployment are the right approach both to leverage cloud capabilities and to fuel a shift toward a matured DevOps culture.
  • Trust but verify only exceptions: Security and regulatory compliance are paramount for an organization like Broadridge. Guardrails (such as service control policies, managed AWS Config rules, multi-account strategy) and controls (such as PCI, NIST control frameworks) are iteratively developed to baseline every AWS account and AWS resource deployed. Manual security verification of workloads isn’t needed unless an exception is raised. Defense in depth (distancing attack surface from sensitive data and resources using multi-layered security) strategies were to be applied.
  • Go fast; re-hosting is acceptable: Not every workload needs to go through years of rewriting and refactoring before it is deemed suitable for the cloud. Minor tweaking (light touch re-platforming) to go fast (such as on-premises Oracle to RDS for Oracle) is acceptable.
  • Timeliness and small wins are key: Organizations spend large sums of capital to completely rewrite applications and by the time they are done, the business goal and customer expectation will have changed. That leads to material dissatisfaction with customers. We wanted to avoid that by setting small, measurable targets.
  • Cloud fluency: Investment in training and upskilling builders and leaders across the organization (developers, infra-ops, sec-ops, managers, sales force, HR, and executive leadership) was to be made to build fluency on the cloud.

The first milestone

The first milestone in our adoption journey was synonymous with the Project stage of adoption and had the following characteristics.

A controlled sprawl of shadow IT

We first gave small teams with little to no exposure to critical business functions (such as customer data and SLA-oriented workloads) sandboxes to test out proofs of concept (PoCs) on AWS. We created the cloud sandboxes with least privilege, and added additional privileges upon request after verification. During this time, our key AWS usage characteristics were:

  • Manual AWS account setup with least privilege
  • Manual IAM role creation with role boundaries and authentication and authorization from the existing enterprise Active Directory
  • Integration with existing Security Information and Event Management (SIEM) tools to audit role sprawl and config changes
  • Proofs of concepts only
  • Account tagging for chargeback and tracking purposes
  • No automated build, test, deploy, or integration with existing delivery pipeline
  • Small and definitive timeframes for PoCs with defined goals

A typical AWS environment at this stage will resemble that shown in the following diagram:

Representative AWS usage during first milestone

As shown above, at this time the corporate assets were connected to a highly restrictive AWS environment through VPN. Access to the AWS environment was set up based on AWS identity primitives, or IAM roles mapped to and federated with the on-premises Active Directory. There was a single VPC set up for a sandbox account with no egress to the internet. No customer data was hosted in this AWS environment, and the AWS environment was connected with our SIEM of choice.

Early adopters became first educators and mentors

Members of the first teams to carry out proofs of concept on AWS shared learnings with each other and with the leadership team within Broadridge. This helped build communities of practices (CoPs) over time. Initial CoPs established were for networking and security, and were later extended to various practices like Terraform, Chef, and Jenkins.

Tech PMO team within Broadridge as the quasi-central cloud team

Ownership is vital no matter how small the effort and insignificant the impact of risky experimentation. The ownership of account setup, role creation, integration with on-premises AD and SIEM, and oversight to ensure that the experimentation does not pose any risk to the brand led us to build a central cloud team with experienced AWS and infrastructure practitioners. This team created a process for cloud migration with first manual guardrails of allowed and disallowed actions, manual interventions, and checkpoints built in every step.

At this stage, a representative pattern of work products across teams resembles what is shown below.

Work products across teams during initial stages of AWS usage

As the diagram suggests, individual application teams built overlapping—and, in many cases, identical—technical building blocks across the teams. This was acceptable as the teams were experimenting and running PoCs on AWS. In an actual production application delivery, the blocks marked with a * would be considered technical and functional waste—that is, undifferentiated lift which increases the cost of doing business.

The second milestone

In hindsight, this is perhaps the most important milestone in our cloud adoption journey. This step was marked with the following key characteristics:

  • Every new team doing PoCs rebuilds the same building blocks: This includes networking (VPCs and security groups), identity primitives (account, roles, and policies), monitoring (Amazon CloudWatch setup and custom metrics), and compute (images with org-mandated security patches).
  • The teams usually ask the same fundamental questions first: These include questions such as: What is an ideal CIDR block range? How do we integrate with SIEM? How do we spin up web servers on Amazon EC2? How do we secure access to data? How do we set up workload monitoring?
  • Security reviews rarely find new security gaps but add time to the process: A central security group as part of the central cloud team reviewed every new account request and every new service usage request without finding new security gaps when the application team used the baseline guardrails.
  • Manual effort is spent on tagging, chargeback, and other approvals: A portion of the application PoC/minimum viable product (MVP) lifecycle was spent on housekeeping. While housekeeping was necessary, the effort spent was undifferentiated.

The following diagram represents the effort for every team during the first phase.

Team wise efforts showing duplicative work

As shown above, every application team spent effort on building nearly the same capabilities before they could begin developing their team-specific application functionalities and assets. The common blocks of work are undifferentiated and lead to duplicated effort, which also varies depending on the efficiency of the team.

During this step, learnings from the PoCs led us to establish the tenets shared earlier in this post. To address the learnings, Broadridge established a cloud platform team. The cloud platform team, also referred to as the cloud enablement engine (CEE), is a team of builders who create the foundational building blocks on AWS that address common infrastructure, security, monitoring, auditing, and break-glass controls. At the same time, we established a cloud business office (CBO) as a liaison between the application and business teams and the CEE. CBO exists to manage and prioritize foundational requirements from multiple application teams as they go online on AWS and helps create the product backlog for CEE.

Cloud Enablement Engine Responsibilities:

  • Build out foundational building blocks utilizing AWS multi-account strategy
  • Build security guardrails, compliance controls, infrastructure as code automation, auditing and monitoring controls
  • Implement the cloud platform backlog that funnels from the CBO as common asks from app teams
  • Work with our AWS team to understand service roadmap, future releases, and provide feedback

Cloud Business Office Responsibilities:

  • Identify and prioritize repeating technical building blocks that cut across multiple teams
  • Establish acceptable architecture patterns based on application use cases
  • Manage cloud programs to ensure CEE deliverables and business expectations align
  • Identify skilling needs, budget, and track spend
  • Contribute to the cloud platform backlog
  • Work with AWS team to understand service roadmap, future releases, and provide feedback

These teams were set up to scale AWS adoption, put building blocks into the hands of the applications teams, and ultimately deliver differentiated capabilities to Broadridge’s business teams and end customers. The following diagram translates the relationship and modus operandi among the teams:

CEE and CBO working model

Upon establishing the conceptual working model, the CBO and CEE teams looked at solutions from AWS to enable them to achieve the working model quickly. The starting point was AWS Landing Zone (ALZ). ALZ is an AWS solution based on the AWS multi-account strategy. It is a set of vetted constructs and best practices that we use as mechanisms to accelerate AWS adoption.

AWS multi-account strategy

The multi-account strategy employs best practices around separation of concerns, reduction of blast radius, account setup based on Software Development Life Cycle (SDLC) phases, and base operational roles for auditing, monitoring, security, and compliance, as shown in the above diagram. This strategy defines the need for centralized shared or core accounts, which work as the master accounts for monitoring, governance, security, and auditing. A number of AWS services like Amazon GuardDuty, AWS Security Hub, and AWS Config configurations are set in these centralized accounts. Spoke or child accounts are vended per a team’s requirements and are spun up with these governance, monitoring, and security defaults connected to the centralized accounts for log capturing, threat detection, configuration management, and security management.

The third milestone

The third milestone is synonymous with the Foundation stage of adoption.

Using the ALZ construct, our CEE team developed a core set of principles to be used by every application team. Based on our core tenets, the CEE team built out an entry point (a web-based UI workflow application). This web UI was the entry point for any application team requesting an environment within AWS for experimentation or to begin the application development life cycle. Simplistically, the web UI sat on top of an automation engine built using APIs from AWS, ALZ components (Account Vending Machine, Shared Services Account, Logging Account, Security Account, default security groups, default IAM roles, and AD groups), and Terraform-based code. The CBO team helped establish the common architecture patterns that were codified into this engine.

Team on-boarding workflow using foundational building blocks on AWS

An Angular-based web UI is the starting point for an application team to request AWS accounts. The web UI entry point asks a number of questions validating the type of account requested along with its intended purpose, ingress/egress requirements, high availability and disaster recovery requirements, and the business unit for chargeback and ownership purposes. Once all information is entered, it sends out a notification based on a preset organization dispatch matrix rule. Upon receiving the request, the approver can either approve it or ask further clarifying questions. Once these are satisfactorily answered, the approver approves the account vending request and Terraform code is kicked off to create the default account.

When an account is created through this process, the following defaults are set up for a secure environment for development, testing, and staging (a minimal sketch of the first few steps follows the list). Similar guardrails are deployed in the production accounts as well.

  • Creates a new account under an existing AWS Organizational Unit (OU) based on the input parameters. Tags the account with chargeback codes and custom tags, and integrates the resources with the existing CMDB
  • Connects the new account to the master shared services and logging account as per the AWS Landing Zone constructs
  • Integrates with the CloudWatch event bus as a sender account
  • Runs stsAssumeRole commands on the new account to create infosec cross-account roles
  • Defines actions, conditions, role limits, and account policies
  • Creates environment variables related to the account in the parameter store within AWS Systems Manager
  • Connects the new account to TrendMicro for AV purposes
  • Attaches the default VPC of the new account to an existing AWS Transit Gateway
  • Generates a Splunk key for the account to store in the Splunk KV store
  • Uses AWS APIs to attach Enterprise support to the new account
  • Creates or amends a new AD group based on the IAM role
  • Integrates as an Amazon Macie member account
  • Enables AWS Security Hub for the account by running an enable-security-hub call
  • Sets up Chef runner for the new account
  • Runs account setting lock procedures to set the Amazon S3 public access settings and the EBS default encryption setting
  • Enables the firewall by setting AWS WAF rules for the account
  • Integrates the newly created account with CloudHealth and Dome9
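
To make the first few items in this list concrete, here is a minimal boto3 sketch of creating an account, placing it under an OU, and tagging it for chargeback. The email address, parent IDs, and tag values are placeholders, and the remaining integrations in the list are specific to Broadridge's environment, so they are not shown.

import time
import boto3

org = boto3.client("organizations")

# Create the account and wait for the asynchronous request to complete.
status = org.create_account(
    Email="app-team-dev@example.com",
    AccountName="app-team-dev",
    IamUserAccessToBilling="DENY",
)["CreateAccountStatus"]

while status["State"] == "IN_PROGRESS":
    time.sleep(10)
    status = org.describe_create_account_status(
        CreateAccountRequestId=status["Id"]
    )["CreateAccountStatus"]

if status["State"] != "SUCCEEDED":
    raise RuntimeError(f"Account creation failed: {status.get('FailureReason')}")

account_id = status["AccountId"]

# Move the new account under the requested OU and tag it for chargeback and ownership.
org.move_account(
    AccountId=account_id,
    SourceParentId="r-examplerootid",        # placeholder organization root ID
    DestinationParentId="ou-exam-pledevou",  # placeholder target OU ID
)
org.tag_resource(
    ResourceId=account_id,
    Tags=[
        {"Key": "chargeback-code", "Value": "CC-1234"},
        {"Key": "business-unit", "Value": "app-team"},
    ],
)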

Deploying all these guardrails in any new accounts removes the need for manual setup and intervention. This gives application developers the needed freedom to stop worrying about infrastructure and access provisioning while giving them a higher speed to value.

Using these technical and procedural cloud adoption constructs, we have been able to reduce application onboarding time. This has led to quicker delivery of business capability, with the application teams focusing only on what differentiates their business rather than repeatedly building undifferentiated work products. This has also led to the creation of mature building blocks over time for use by the application teams. Using these building blocks, the teams are also modernizing applications by iteratively replacing old application blocks.

Conclusion

In summary, we are able to deliver better business outcomes and differentiated customer experience by:

  • Building common asks as reusable and automated enterprise assets and improving the overall enterprise-wide maturity by indexing on and growing these assets.
  • Depending on an experienced team to deliver baseline operational controls and guardrails.
  • Improving our security posture with higher-level and managed AWS security services instead of rebuilding everything from the ground up.
  • Using the Cloud Business Office to improve funneling of common asks. This helps the next team on AWS to benefit from a readily available set of approved services and application blueprints.

We will continue to build on and mature these reusable building blocks by using AWS services and new feature releases.

 

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

AWS Architecture Monthly Magazine: Manufacturing

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/aws-architecture-monthly-magazine-manufacturing/

Architecture Monthly Magazine - Nov-Dec 2019

For more than 25 years, Amazon has designed and manufactured smart products and distributed billions of products through its globally connected distribution network using cutting edge automation, machine learning and AI, and robotics, with AWS at its core. From product design to smart factory and smart products, AWS helps leading manufacturers transform their manufacturing operations with the most comprehensive and advanced set of cloud solutions available today, while taking advantage of the highest level of security.

In this Manufacturing-themed end-of-year issue of the AWS Architecture Monthly magazine, Steve Blackwell, AWS Manufacturing Tech Leader, talks about how manufacturers can experiment with and take advantage of emerging technologies using three main architectural patterns: demand forecasting, smart factories, and extending the manufacturing value chain with smart products.

In This Issue

We’ve assembled architectural best practices about Manufacturing from all over AWS, and we’ve made sure that a broad audience can appreciate it. Note that this will be our last issue of the year. We’ll be back in January with highlights and insights about AWS re:Invent 2019 (December 2-6 in Las Vegas).

  • Case Study: iRobot Ready to Unlock the Next Generation of Smart Homes Using the AWS Cloud
  • Ask an Expert: Steve Blackwell, Manufacturing Tech Leader
  • Blog Post: Reinventing the IoT Platform for Discrete Manufacturers
  • Solution: Smart Product Solution
  • AWS Coffee Break: IoT Helps Manufacturing Hit the Right Note
  • Whitepaper: Practical Ways To Achieve Smarter, Faster, and More Responsive Operations
  • Reference Architecture: EDA on AWS with IBM Spectrum LSF

How to Access the Magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you: leave us a star rating and comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

How to set up Sign in with Apple for Amazon Cognito

Post Syndicated from Jason Cai original https://aws.amazon.com/blogs/security/how-to-set-up-sign-in-with-apple-for-amazon-cognito/

Amazon Cognito user pools enables you to add user sign-in and sign-up to your mobile and web applications using a secure and scalable user directory. With Amazon Cognito user pools, your end users can sign in using a user name or password, or with a third-party identity service, such as Facebook or Google. The process of using a third-party identity service is called federation. With federation, you can build applications that retrieve information about your end users that they have provided to another service and have consented to give to your applications.

Amazon Cognito user pools now supports Sign in with Apple as an identity provider (IdP). You can now federate users using the Sign in with Apple service, map these users to a user directory, and retrieve standard authentication tokens from a user pool after the user authenticates with Apple using their Apple ID credentials.

Much like login with Facebook or Google, Sign in with Apple acts as an authorization server and verifies an end user with their Apple ID credentials. Sign in with Apple is built on the OpenID Connect (OIDC) protocol. As of this writing, there are a few notable differences between Sign in with Apple and other OpenID Providers.

  • Using Sign in with Apple, an end user can choose whether to share the email linked to their Apple ID or use a generated one provided by Apple. The generated email will be of the form “<randomstring>@privaterelay.appleid.com”.
  • Unlike other identity providers, Sign in with Apple only honors the scopes requested for an end user on their first authentication through the service for the app configured on Apple’s developer portal. In other words, if you start requesting name after an end user has authenticated, for example, that information will not be returned.
  • Sign in with Apple returns the requested scopes in the initial return from their authorization endpoint for the first user authentication; however, only the email associated with the Apple ID is returned in a trusted form via the ID token.

How to set up Sign in with Apple and associate it with an Amazon Cognito user pool

The prerequisites for setting up the IdP end-to-end are:

  • An Amazon Cognito user pool with an application client
  • A domain that is associated with the user pool
  • An Apple ID with two-factor authentication enabled

Step 1: Set up Sign in with Apple service in Apple’s Developer portal

  1. Enroll in the Apple Developer Program with an Apple ID and then sign in using it.
  2. On the main developer portal page, select Certificates, IDs, & Profiles.
  3. On the left navigation bar, select Identifiers.
  4. On the Identifiers page, select the + icon.
  5. On the Register a New Identifier page, select App IDs.
  6. On the Register an App ID page, under App ID Prefix, take note of the Team ID value.
  7. Select the operating system the app will be run on (choose macOS for web-based apps).
  8. Provide a description in the Description text box.
  9. Provide a string for identifying the app under Bundle ID.
  10. Under Capabilities, select Sign in with Apple, and then select either Enable as a primary App ID (default) for use in a single Apple app or Group with an existing primary App ID for use in multiple Apple apps.
  11. Select Continue, review the configuration, and then select Register.
  12. On the Identifiers page, on the right, select App IDs, and then select Services ID.
  13. Select the + icon and, on the Register a New Identifier page, select Services IDs.
  14. On the Register a Services ID page, select the Sign in with Apple checkbox to enable the service, and then select Configure.
  15. Select the App ID that you created in step 1.1.
  16. Under Web Domain, put the domain associated with your user pool.

    NOTE: You do not have to verify the domain because the verification is required for a transaction method that Amazon Cognito does not use.

  17. Under Return URLs, type https://<your domain>/oauth2/idpresponse, select Add, and then select Save.
  18. Provide a description in the Description text box.
  19. Provide an identifier in the Identifier text box.

    Important: Make a note of this identifier because you will need it later.

    Figure 1: Provide an identifier

  20. Select Continue, review the information, and then select Register.
  21. On the left navigation bar, select Keys, and on the new page, select the + icon.
  22. On the Register a New Key page, select the check box next to Sign in with Apple.
  23. Select the App ID you created in 1.1 and then select Save.
  24. Provide a key name (can be anything).
  25. Click Continue, review the information, then select Register.
  26. On the page you are redirected to take note of the Key ID and download the .p8 file containing the private key.

Step 2: Set up the Sign in with Apple IdP in Amazon Cognito user pools console

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and then select the user pool that you will be using with Sign in with Apple.
  2. Under Federation, under the Identity providers tab, select Sign in with Apple.
  3. Provide the Apple Services ID, Team ID, Key ID, and private key for the Sign in with Apple application along with the desired scopes.

    Note: The private key is provided in the .p8 file; the contents are plain text. You can provide either the file or the contents within the file for the private key.

  4. Select the Attribute mapping tab, and then select the Apple tab.
  5. Select the checkboxes under Capture next to the Apple attributes, and select the user pool attribute under User pool attribute that will receive the value from the Apple attribute and that you would like to receive in the tokens from Amazon Cognito.

    Figure 2: Select checkboxes and user pool attribute

  6. To enable your app client to allow federation through the Sign in with Apple IdP, go to the App client settings tab under App integration, find the app client that you want to allow Sign in with Apple, and select the Sign in with Apple check box. (If you prefer to configure the provider programmatically, a boto3 sketch follows these steps.)
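
The following is a minimal boto3 sketch of registering the Sign in with Apple provider on a user pool. The user pool ID, file name, and Apple values are placeholders taken from the steps above, and you should verify the ProviderDetails key names against the current CreateIdentityProvider documentation.

import boto3

cognito = boto3.client("cognito-idp")
user_pool_id = "us-east-1_EXAMPLE"  # placeholder user pool ID

# Contents of the .p8 private key file downloaded in step 1.26.
with open("AuthKey_EXAMPLEKEYID.p8") as f:
    private_key = f.read()

cognito.create_identity_provider(
    UserPoolId=user_pool_id,
    ProviderName="SignInWithApple",
    ProviderType="SignInWithApple",
    ProviderDetails={
        "client_id": "com.example.webapp",  # the Services ID identifier from step 1.19
        "team_id": "EXAMPLETEAM",           # the Team ID from step 1.6
        "key_id": "EXAMPLEKEYID",           # the Key ID from step 1.26
        "private_key": private_key,
        "authorize_scopes": "email name",
    },
    AttributeMapping={"email": "email"},
)

# The app client still needs Sign in with Apple enabled under App client settings (step 6).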

Step 3: Get started with your application

  1. To test that you have everything configured correctly, under the configured app client, select the Launch Hosted UI link to bring you to a sample login page. Your configured Sign in with Apple provider will be displayed on this page through a button labeled Continue with Apple.

    Figure 3: "Continue with Apple" button

    Figure 3: “Continue with Apple” button

  2. (Optional) Perform a test authentication to ensure you have everything configured correctly on Apple’s and the Amazon Cognito side.

When a user federates using Sign in with Apple, the interactions between the end user, the Amazon Cognito app client, and Sign in with Apple look like this:

Figure 4: Federation flow

  1. New user goes to app and selects Sign in with Apple
  2. App redirects to Apple authentication web page
  3. Apple requests Apple ID credentials
  4. User provides credentials
  5. Apple requests consent for information
  6. User chooses share/don’t share email (if requested)
  7. Redirect back to Cognito app with Authorization code
  8. Requests ID token using Authorization code, client ID, and generated client secret
  9. ID token response containing requested scopes

Tips for using Sign in with Apple in your application

  • If you want to revoke the private key associated with the Sign in with Apple service, create a new private key in the Apple developer portal and provide it to Amazon Cognito prior to revoking the old key. Doing so will ensure that you do not invalidate any ongoing end-user authentication on Apple’s side.
  • If you decide to increase the requested scopes and want the additional information from existing users, those users will have to go to appleid.apple.com and, under Apps & Websites Using Apple ID, select the application, select Stop using Apple ID, and then federate again using Sign in with Apple.
  • The name provided by Sign in with Apple is not verified in any manner and should only be used for non-essential features; for example, a welcome message on the landing UI of your app after an end user logs in.
  • If you get an “invalid redirect_url” error message on Apple’s authentication page and the redirect URL in the request is correct, check that you’ve provided the Service Identifier and not the Application Identifier for the Sign in with Apple IdP settings in Amazon Cognito user pools.

For more information, see Adding Social Identity Providers to a User Pool in the Amazon Cognito Developer Guide. You can reach us by posting to the Amazon Cognito forums. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Jason Cai

Jason is a Software Development Engineer for Amazon Cognito. He holds a Master of Science degree in Electrical and Computer Engineering with a focus in Software Engineering and Systems from The University of Texas at Austin.

Use attribute-based access control with AD FS to simplify IAM permissions management

Post Syndicated from Louay Shaat original https://aws.amazon.com/blogs/security/attribute-based-access-control-ad-fs-simplify-iam-permissions-management/

AWS Identity and Access Management (IAM) allows customers to provide granular access control to resources in AWS. One approach to granting access to resources is to use attribute-based access control (ABAC) to centrally govern and manage access to your AWS resources across accounts. ABAC enables you to simplify and scale your authorization strategy by granting access to groups of resources, as specified by tags, as opposed to managing long lists of individual resources. The new ability to include tags in sessions—combined with the ability to tag IAM users and roles—means that you can now incorporate user attributes from your AD FS environment as part of your tagging and authorization strategy.

In other words, you can use ABAC to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, as an administrator you can use a single IAM policy that grants developers in your organization access to AWS resources that match the developers’ project tag. As the team of developers adds resources to projects, permissions are automatically applied based on attributes (tags, in this case). As a result, each new resource that gets added requires no update to the IAM permissions policy.

In this blog post, I walk you through how to enable AD FS to pass tags as part of the SAML 2.0 token, so that you can enable ABAC for your AWS resources.

AD FS federated authentication process

The following diagram describes the process that a user follows to authenticate to AWS by using Active Directory and AD FS as the identity provider:
 

Figure 1: AD FS federation to AWS

  1. A corporate user accesses the corporate Active Directory Federation Services (AD FS) portal sign-in page and provides their Active Directory authentication credentials.
  2. AD FS authenticates the user against Active Directory.
  3. Active Directory returns the user’s information, including Active Directory group membership information.
  4. AD FS dynamically builds a list of Amazon Resource Names (ARNs) for IAM Roles in one or more AWS accounts; these mappings are defined in advance by the administrator and rely on user attributes and Active Directory group memberships.
  5. AD FS sends a signed SAML 2.0 token to the user’s browser with a redirect to post the token to AWS Security Token Service (STS), including the attributes that you define in the claim rules.
  6. Temporary credentials are returned using STS AssumeRoleWithSAML (a minimal sketch of this call is shown after this list).
  7. The user is authenticated and provided access to the AWS Management Console.
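
In the AD FS flow, step 6 is performed by the AWS sign-in endpoint on the user's behalf, but the same call can be made directly. The following is a minimal boto3 sketch for illustration; the ARNs are placeholders, and the SAML assertion would be the base64-encoded response posted by the browser.

import boto3

sts = boto3.client("sts")

# Placeholder: the base64-encoded SAML 2.0 response returned by AD FS.
saml_assertion = "PHNhbWxwOlJlc3BvbnNlPi4uLjwvc2FtbHA6UmVzcG9uc2U+"

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/ADFS-Developers",
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/ADFS",
    SAMLAssertion=saml_assertion,
    DurationSeconds=3600,
)

# Temporary AccessKeyId, SecretAccessKey, and SessionToken; any PrincipalTag
# attributes carried in the assertion become session tags on these credentials.
credentials = response["Credentials"]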

Prerequisites

Attribute-based access control in AWS relies on the use of tags for access-control decisions. Therefore, it’s important to have a tagging strategy in place for your resources. See AWS Tagging Strategies.

Implementing ABAC enables organizations to enhance the use of tags from an operational and billing construct to a security construct. Ensuring that tagging is enforced and secure is essential to an enterprise-wide strategy.

For more information about enforcing a tagging policy, see the blog post Enforce Centralized Tag Compliance Using AWS Service Catalog, DynamoDB, Lambda, and CloudWatch Events.

AD FS session tagging setup

After you’ve set up AD FS federation to AWS, you can enable additional attributes to be sent as part of the SAML token. For information about how to enable AD FS, see the blog post AWS Federated Authentication with Active Directory Federation Services (AD FS).

Follow these steps to send standard Active Directory attributes to AWS in the SAML token:

  1. Open Server Manager, choose Tools, then choose AD FS Management.
  2. Under Relying Party Trusts, choose AWS.
  3. Choose Edit Claim Issuance Policy, choose Add Rule, choose Send LDAP Attributes as Claims, then choose Next.
  4. On the Edit Rule page, add the requested details.
    For example, to create a Department Attribute claim rule, add the following details:

    • Claim rule name: Department Attribute
    • Attribute Store: Active Directory
    • LDAP Attribute: Department
    • Outgoing Claim Type:
      https://aws.amazon.com/SAML/Attributes/PrincipalTag:<department> (where <department> is the tag that will be passed in the session)

     

    Figure 2: Claim Rule

  5. Repeat the previous step for each attribute you want to send, modifying the details as necessary. For more information about how to configure claim rules, see Configuring Claim Rules in the Windows Server 2012 AD FS Deployment Guide.

Send custom attributes to AWS as part of a federated session

Session tags in AWS can be derived from Active Directory custom attributes as well as standard attributes, as we demonstrated in the example above. For more information on custom attributes, see How to Create a Custom Attribute in Active Directory.

To ensure that your setup is correct, you should confirm that AD FS is configured correctly:

  1. Perform an identity provider (IdP) initiated authentication. Go to the following URL: https://<your.domain.name>/adfs/ls/idpinitiatedsignon.aspx, where <your.domain.name> is the DNS name of your AD FS server.
  2. Select Sign in to one of the following sites, then choose AWS from the dropdown:
     
    Figure 3: Choose AWS

  3. Choose Sign in, then enter your credentials.
  4. After you’ve successfully logged into AWS, navigate to AWS CloudTrail events within 15 minutes of your login event.
  5. Filter on AssumeRoleWithSAML, then locate your login event and look under principalTags to ensure you see the tags that you configured.
     
    Figure 4: CloudTrail – AssumeRoleWithSAML Event

    After you see the tags were sent, you’re ready to use the tags to build your IAM policies.

The following is an example policy that uses multiple tags.

Example: grant IAM users access to your AWS resources by using tags

In this example I will assume that two tags will be passed from AD FS to enable you to build your ABAC tagging strategy: Project and Department. This example assumes that you have multiple teams of developers who need permissions to start and stop specific Amazon Elastic Compute Cloud (Amazon EC2) instances, based on their project allocation. Because these EC2 instances are part of AWS Auto Scaling groups, they come and go depending on scaling conditions; therefore, it would be impractical to write policies that refer to lists of individual EC2 instances, and writing policies against the tags on the EC2 instances is a more manageable approach. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element, and all resources in the Resource element of the policy. In the Condition element of the policy, I use two conditions:

  • A matching statement where the resource tag ec2:ResourceTag/project matches the principal tag aws:PrincipalTag/project.
  • A matching statement where the resource tag ec2:ResourceTag/department matches the principal tag aws:PrincipalTag/department.

This ensures that the principal is able to start and stop an instance only if the instance’s project and department tags match the values of the corresponding tags on the principal. Attaching this policy to your developer roles or groups simplifies permissions management, because you only need to manage a single policy for all your development teams that require permissions to start and stop instances, and you can rely on tag values to specify the resources.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/project": "${aws:PrincipalTag/project}",
          "ec2:ResourceTag/department": "${aws:PrincipalTag/department}"
        }
      }
    }
  ]
}

This policy will ensure that the user can only start and stop EC2 instances for the resources that are assigned to their department and project.
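
As a usage example, the sketch below creates this policy and attaches it to the role that your AD FS users federate into. The file name, policy name, and role name are placeholders for your own.

import boto3

iam = boto3.client("iam")

# The JSON policy document shown above, saved to a local file.
with open("start-stop-by-project-and-department.json") as f:
    policy_document = f.read()

policy = iam.create_policy(
    PolicyName="StartStopInstancesByProjectAndDepartment",
    PolicyDocument=policy_document,
)

iam.attach_role_policy(
    RoleName="ADFS-Developers",  # placeholder: the role assumed via AssumeRoleWithSAML
    PolicyArn=policy["Policy"]["Arn"],
)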

Conclusion

In this post, I’ve shown how you can enable AD FS to pass tags as part of the SAML token, so that you can enable ABAC for your AWS resources to simplify permissions management at scale.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security news? Follow us on Twitter.

Louay Shaat

Louay is a Solutions Architect with AWS based out of Melbourne. He spends his days working with customers, from startups to the largest of enterprises helping them build cool new capabilities and accelerating their cloud journey. He has a strong focus on Security and Automation helping customers improve their security, risk, and compliance in the cloud.