Tag Archives: Intermediate (200)

Predictive User Engagement using Amazon Pinpoint and Amazon Personalize

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/predictive-user-engagement-using-amazon-pinpoint-and-amazon-personalize/

Note: This post was written by John Burry, a Solution Architect on the AWS Customer Engagement team.


Predictive User Engagement (PUE) refers to the integration of machine learning (ML) and customer engagement services. By implementing a PUE solution, you can combine ML-based predictions and recommendations with real-time notifications and analytics, all based on your customers’ behaviors.

This blog post shows you how to set up a PUE solution by using Amazon Pinpoint and Amazon Personalize. Best of all, you can implement this solution even if you don’t have any prior machine learning experience. By completing the steps in this post, you’ll be able to build your own model in Personalize, integrate it with Pinpoint, and start sending personalized campaigns.

Prerequisites

Before you complete the steps in this post, you need to set up the following:

  • Create an admin user in AWS Identity and Access Management (IAM). For more information, see Creating Your First IAM Admin User and Group in the IAM User Guide. You need to specify the credentials of this user when you set up the AWS Command Line Interface.
  • Install Python 3 and the pip package manager. Python 3 is installed by default on recent versions of Linux and macOS. If it isn’t already installed on your computer, you can download an installer from the Python website.
  • Use pip to install the following modules:
    • awscli
    • boto3
    • jupyter
    • matplotlib
    • sklearn
    • sagemaker

    For more information about installing modules, see Installing Python Modules in the Python 3.X Documentation.

  • Configure the AWS Command Line Interface (AWS CLI). During the configuration process, you have to specify a default AWS Region. This solution uses Amazon SageMaker to build a model, so the Region that you specify has to be one that supports Amazon SageMaker. For a complete list of Regions where SageMaker is supported, see AWS Service Endpoints in the AWS General Reference. For more information about setting up the AWS CLI, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
  • Install Git. Git is installed by default on most versions of Linux and macOS. If Git isn’t already installed on your computer, you can download an installer from the Git website.

Step 1: Create an Amazon Pinpoint Project

In this section, you create and configure a project in Amazon Pinpoint. This project contains all of the customers that we will target, as well as the recommendation data that’s associated with each one. Later, we’ll use this data to create segments and campaigns.

To set up the Amazon Pinpoint project

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint/.
  2. On the All projects page, choose Create a project. Enter a name for the project, and then choose Create.
  3. On the Configure features page, under SMS and voice, choose Configure.
  4. Under General settings, select Enable the SMS channel for this project, and then choose Save changes.
  5. In the navigation pane, under Settings, choose General settings. In the Project details section, copy the value under Project ID. You’ll need this value later.

Step 2: Create an Endpoint

In Amazon Pinpoint, an endpoint represents a specific method of contacting a customer, such as their email address (for email messages) or their phone number (for SMS messages). Endpoints can also contain custom attributes, and you can associate multiple endpoints with a single user. In this example, we use these attributes to store the recommendation data that we receive from Amazon Personalize.

In this section, we create a new endpoint and user by using the AWS CLI. We’ll use this endpoint to test the SMS channel, and to test the recommendations that we receive from Personalize.

To create an endpoint by using the AWS CLI

  1. At the command line, enter the following command:
    aws pinpoint update-endpoint --application-id <project-id> \
    --endpoint-id 12456 --endpoint-request "Address='<mobile-number>', \
    ChannelType='SMS',User={UserAttributes={recommended_items=['none']},UserId='12456'}"

    In the preceding example, replace <project-id> with the Amazon Pinpoint project ID that you copied in Step 1. Replace <mobile-number> with your phone number, formatted in E.164 format (for example, +12065550142).

Note that this endpoint contains hard-coded UserId and EndpointId values of 12456. These IDs match a user ID that we’ll create later when we generate the Personalize dataset.
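
If you’d rather script this step than use the AWS CLI, the following sketch does the same thing with the AWS SDK for Python (Boto3). The project ID and phone number are placeholders that you replace with your own values.

import boto3

pinpoint = boto3.client('pinpoint')

# Placeholders: your Pinpoint project ID and a phone number in E.164 format.
PROJECT_ID = '<project-id>'
MOBILE_NUMBER = '+12065550142'

# Create (or update) the SMS endpoint, using the same hard-coded IDs as above.
pinpoint.update_endpoint(
    ApplicationId=PROJECT_ID,
    EndpointId='12456',
    EndpointRequest={
        'Address': MOBILE_NUMBER,
        'ChannelType': 'SMS',
        'User': {
            'UserId': '12456',
            'UserAttributes': {'recommended_items': ['none']}
        }
    }
)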

Step 3: Create a Segment and Campaign in Amazon Pinpoint

Now that we have an endpoint, we need to add it to a segment so that we can use it within a campaign. By sending a campaign, we can verify that our Pinpoint project is configured correctly, and that we created the endpoint correctly.

To create the segment and campaign

  1. Open the Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created in Step 1.
  2. In the navigation pane, choose Segments, and then choose Create a segment.
  3. Name the segment “No recommendations”. Under Segment group 1, on the Add a filter menu, choose Filter by user.
  4. On the Choose a user attribute menu, choose recommended_items. Set the value of the filter to “none”.
  5. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
  6. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  7. Name the campaign “SMS to users with no recommendations”. Under Choose a channel for this campaign, choose SMS, and then choose Next.
  8. On the Choose a segment page, choose the “No recommendations” segment that you just created, and then choose Next.
  9. In the message editor, type a test message, and then choose Next.
  10. On the Choose when to send the campaign page, keep all of the default values, and then choose Next.
  11. On the Review and launch page, choose Launch campaign. Within a few seconds, you receive a text message at the phone number that you specified when you created the endpoint.

Step 4: Load sample data into Amazon Personalize

At this point, we’ve finished setting up Amazon Pinpoint. Now we can start loading data into Amazon Personalize.

To load the data into Amazon Personalize

  1. At the command line, enter the following command to clone the sample data and Jupyter Notebooks to your computer:
    git clone https://github.com/markproy/personalize-car-search.git

  2. At the command line, change into the directory that contains the data that you just cloned. Enter the following command:
    jupyter notebook

    A new window opens in your web browser.

  3. In your web browser, open the first notebook (01_generate_data.ipynb). On the Cell menu, choose Run all. Wait for the commands to finish running.
  4. Open the second notebook (02_make_dataset_group.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete. Make sure that all of the commands have run successfully before you proceed to the next step.
  5. Open the third notebook (03_make_campaigns.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete. Make sure that all of the commands have run successfully before you proceed to the next step.
  6. Open the fourth notebook (04_use_the_campaign.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete.
  7. After the fourth notebook is finished running, choose Quit to terminate the Jupyter Notebook. You don’t need to run the fifth notebook for this example.
  8. Open the Amazon Personalize console at http://console.aws.amazon.com/personalize. Verify that Amazon Personalize contains one dataset group named car-dg.
  9. In the navigation pane, choose Campaigns. Verify that it contains all of the following campaigns, and that the status for each campaign is Active:
    • car-popularity-count
    • car-personalized-ranking
    • car-hrnn-metadata
    • car-sims
    • car-hrnn

Step 5: Create the Lambda function

We’ve loaded the data into Amazon Personalize, and now we need to create a Lambda function to update the endpoint attributes in Pinpoint with the recommendations provided by Personalize.

The version of the AWS SDK for Python that’s included with Lambda doesn’t include the libraries for Amazon Personalize. For this reason, you need to download these libraries to your computer, put them in a .zip file, and upload the entire package to Lambda.

To create the Lambda function

  1. In a text editor, create a new file. Paste the following code.
    # Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #
    # This file is licensed under the Apache License, Version 2.0 (the "License").
    # You may not use this file except in compliance with the License. A copy of the
    # License is located at
    #
    # http://aws.amazon.com/apache2.0/
    #
    # This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
    # OF ANY KIND, either express or implied. See the License for the specific
    # language governing permissions and limitations under the License.
    
    AWS_REGION = "<region>"
    PROJECT_ID = "<project-id>"
    CAMPAIGN_ARN = "<car-hrnn-campaign-arn>"
    USER_ID = "12456"
    endpoint_id = USER_ID
    
    from datetime import datetime
    import json
    import boto3
    import logging
    from botocore.exceptions import ClientError
    
    DATE = datetime.now()
    
    personalize           = boto3.client('personalize')
    personalize_runtime   = boto3.client('personalize-runtime')
    personalize_events    = boto3.client('personalize-events')
    pinpoint              = boto3.client('pinpoint')
    
    def lambda_handler(event, context):
        itemList = get_recommended_items(USER_ID,CAMPAIGN_ARN)
        response = update_pinpoint_endpoint(PROJECT_ID,endpoint_id,itemList)
    
        return {
            'statusCode': 200,
            'body': json.dumps('Lambda execution completed.')
        }
    
    def get_recommended_items(user_id, campaign_arn):
        response = personalize_runtime.get_recommendations(campaignArn=campaign_arn, 
                                                           userId=str(user_id), 
                                                           numResults=10)
        itemList = response['itemList']
        return itemList
    
    def update_pinpoint_endpoint(project_id, endpoint_id, itemList):
        itemlistStr = []

        for item in itemList:
            itemlistStr.append(item['itemId'])

        pinpoint.update_endpoint(
            ApplicationId=project_id,
            EndpointId=endpoint_id,
            EndpointRequest={
                'User': {
                    'UserAttributes': {
                        'recommended_items': itemlistStr
                    }
                }
            }
        )

        return
    

    In the preceding code, make the following changes:

    • Replace <region> with the name of the AWS Region that you want to use, such as us-east-1.
    • Replace <project-id> with the ID of the Amazon Pinpoint project that you created earlier.
    • Replace <car-hrnn-campaign-arn> with the Amazon Resource Name (ARN) of the car-hrnn campaign in Amazon Personalize. You can find this value in the Amazon Personalize console.
  2. Save the file as pue-get-recs.py.
  3. Create and activate a virtual environment. In the virtual environment, use pip to download the latest versions of the boto3 and botocore libraries. For complete procedures, see Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide. Also, add the pue-get-recs.py file to the .zip file that contains the libraries.
  4. Open the IAM console at http://console.aws.amazon.com/iam. Create a new role. Attach the following policy to the role:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:DescribeLogGroups",
                    "logs:CreateLogGroup",
                    "logs:PutLogEvents",
                    "personalize:GetRecommendations",
                    "mobiletargeting:GetUserEndpoints",
                    "mobiletargeting:GetApp",
                    "mobiletargeting:UpdateEndpointsBatch",
                    "mobiletargeting:GetApps",
                    "mobiletargeting:GetEndpoint",
                    "mobiletargeting:GetApplicationSettings",
                    "mobiletargeting:UpdateEndpoint"
                ],
                "Resource": "*"
            }
        ]
    }
    
  5. Open the Lambda console at http://console.aws.amazon.com/lambda, and then choose Create function.
  6. Create a new Lambda function from scratch. Choose the Python 3.7 runtime. Under Permissions, choose Use an existing role, and then choose the IAM role that you just created. When you finish, choose Create function.
  7. Upload the .zip file that contains the Lambda function and the boto3 and botocore libraries.
  8. Under Function code, change the Handler value to pue-get-recs.lambda_handler. Save your changes.

Step 6: Test the Lambda function

When you finish creating the function, you can test it to make sure it was set up correctly.

To test the Lambda function

  1. On the Select a test event menu, choose Configure test events. On the Configure test events window, specify an Event name, and then choose Create.
  2. Choose the Test button to execute the function.
  3. If the function executes successfully, open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint.
  4. In the navigation pane, choose Segments, and then choose the “No recommendations” segment that you created earlier. Verify that the number under total endpoints is 0. This is the expected value; the segment is filtered to only include endpoints with no recommendation attributes, but when you ran the Lambda function, it added recommendations to the test endpoint.
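
You can also confirm the update by retrieving the endpoint and inspecting its user attributes. This is a minimal sketch; replace the placeholder project ID with your own.

import boto3

pinpoint = boto3.client('pinpoint')

# Placeholder: the Amazon Pinpoint project ID that you copied in Step 1.
PROJECT_ID = '<project-id>'

# Retrieve the endpoint that the Lambda function updated.
response = pinpoint.get_endpoint(ApplicationId=PROJECT_ID, EndpointId='12456')

# After the Lambda function runs, recommended_items should contain item IDs
# from Amazon Personalize instead of the original placeholder value of 'none'.
print(response['EndpointResponse']['User']['UserAttributes']['recommended_items'])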

Step 7: Create segments and campaigns based on recommended items

In this section, we’ll create a targeted segment based on the recommendation data provided by our Personalize dataset. We’ll then use that segment to create a campaign.

To create a segment and campaign based on personalized recommendations

  1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. On the All projects page, choose the project that you created earlier.
  2. In the navigation pane, choose Segments, and then choose Create a segment. Name the new segment “Recommendations for product 26304”.
  3. Under Segment group 1, on the Add a filter menu, choose Filter by user. On the Choose a user attribute menu, choose recommended_items. Set the value of the filter to “26304”. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
  4. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  5. Name the campaign “SMS to users with recommendations for product 26304”. Under Choose a channel for this campaign, choose SMS, and then choose Next.
  6. On the Choose a segment page, choose the “Recommendations for product 26304” segment that you just created, and then choose Next.
  7. In the message editor, type a test message, and then choose Next.
  8. On the Choose when to send the campaign page, keep all of the default values, and then choose Next.
  9. On the Review and launch page, choose Launch campaign. Within a few seconds, you receive a text message at the phone number that you specified when you created the endpoint.

Next steps

Your PUE solution is now ready to use. From here, there are several ways that you can make the solution your own:

  • Expand your usage: If you plan to continue sending SMS messages, you should request a spending limit increase.
  • Extend to additional channels: This post showed the process of setting up an SMS campaign. You can add more endpoints—for the email or push notification channels, for example—and associate them with your users. You can then create new segments and new campaigns in those channels.
  • Build your own model: This post used a sample data set, but Amazon Personalize makes it easy to provide your own data. To start building a model with Personalize, you have to provide a data set that contains information about your users, items, and interactions. To learn more, see Getting Started in the Amazon Personalize Developer Guide.
  • Optimize your model: You can enrich your model by sending your mobile, web, and campaign engagement data to Amazon Personalize. In Pinpoint, you can use event streaming to move data directly to S3, and then use that data to retrain your Personalize model. To learn more about streaming events, see Streaming App and Campaign Events in the Amazon Pinpoint User Guide.
  • Update your recommendations on a regular basis: Use the create-campaign API to create a new recurring campaign. Rather than sending messages, include the hook property with a reference to the ARN of the pue-get-recs function. By completing this step, you can configure Pinpoint to retrieve the most up-to-date recommendation data each time the campaign recurs. For more information about using Lambda to modify segments, see Customizing Segments with AWS Lambda in the Amazon Pinpoint Developer Guide.
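
The following Boto3 sketch shows what such a recurring campaign could look like. It is illustrative only: the segment ID, Lambda function ARN, start time, and message are placeholders, and you should confirm the CampaignHook settings against the Amazon Pinpoint API reference for your own use case.

import boto3

pinpoint = boto3.client('pinpoint')

# Placeholders: your project ID, a segment ID, and the ARN of the
# pue-get-recs Lambda function that you created earlier.
PROJECT_ID = '<project-id>'
SEGMENT_ID = '<segment-id>'
HOOK_LAMBDA_ARN = '<pue-get-recs-function-arn>'

# Create a recurring campaign that invokes the Lambda hook each time it runs,
# so the endpoint attributes are refreshed with the latest recommendations.
pinpoint.create_campaign(
    ApplicationId=PROJECT_ID,
    WriteCampaignRequest={
        'Name': 'Daily recommendation refresh',
        'SegmentId': SEGMENT_ID,
        'Schedule': {
            'StartTime': '2019-10-01T00:00:00Z',  # placeholder start time
            'Frequency': 'DAILY',
            'Timezone': 'UTC'
        },
        'Hook': {
            'LambdaFunctionName': HOOK_LAMBDA_ARN,
            'Mode': 'FILTER'
        },
        # Adjust or remove the message configuration depending on whether the
        # recurring campaign should also send messages.
        'MessageConfiguration': {
            'SMSMessage': {'Body': "Check out today's recommendations!"}
        }
    }
)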

Forward Incoming Email to an External Destination

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/forward-incoming-email-to-an-external-destination/

Note: This post was written by Vesselin Tzvetkov, an AWS Senior Security Architect, and by Rostislav Markov, an AWS Senior Engagement Manager.


Amazon SES has included support for incoming email for several years now. However, some customers have told us that they need a solution for forwarding inbound emails to domains that aren’t managed by Amazon SES. In this post, you’ll learn how to use Amazon SES, Amazon S3, and AWS Lambda to create a solution that forwards incoming email to an email address that’s managed outside of Amazon SES.

Architecture

This solution uses several AWS services to forward incoming emails to a single, external email address. The following diagram shows the flow of information in this solution.

The following actions occur in this solution:

  1. A new email is sent from an external sender to your domain. Amazon SES handles the incoming email for your domain.
  2. An Amazon SES receipt rule saves the incoming message in an S3 bucket.
  3. An Amazon SES receipt rule triggers the execution of a Lambda function.
  4. The Lambda function retrieves the message content from S3, and then creates a new message and sends it to Amazon SES.
  5. Amazon SES sends the message to the destination server.

Limitations

This solution works in all AWS Regions where Amazon SES is currently available. For a complete list of supported Regions, see AWS Service Endpoints in the AWS General Reference.

Prerequisites

In order to complete this procedure, you need to have a domain that receives incoming email. If you don’t already have a domain, you can purchase one through Amazon Route 53. For more information, see Registering a New Domain in the Amazon Route 53 Developer Guide. You can also purchase a domain through one of several third-party vendors.

Procedures

Step 1: Set up Your Domain

  1. In Amazon SES, verify the domain that you want to use to receive incoming email. For more information, see Verifying Domains in the Amazon SES Developer Guide.
  2. Add the following MX record to the DNS configuration for your domain:
    10 inbound-smtp.<regionInboundUrl>.amazonaws.com

    Replace <regionInboundUrl> with the URL of the email receiving endpoint for the AWS Region that you use Amazon SES in. For a complete list of URLs, see AWS Service Endpoints – Amazon SES in the AWS General Reference.

  3. If your account is still in the Amazon SES sandbox, submit a request to have it removed. For more information, see Moving Out of the Sandbox in the Amazon SES Developer Guide.

Step 2: Configure Your S3 Bucket

  1. In Amazon S3, create a new bucket. For more information, see Create a Bucket in the Amazon S3 Getting Started Guide.
  2. Apply the following policy to the bucket:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowSESPuts",
                "Effect": "Allow",
                "Principal": {
                    "Service": "ses.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<bucketName>/*",
                "Condition": {
                    "StringEquals": {
                        "aws:Referer": "<awsAccountId>"
                    }
                }
            }
        ]
    }

  3. In the policy, make the following changes:
    • Replace <bucketName> with the name of your S3 bucket.
    • Replace <awsAccountId> with your AWS account ID.

    For more information, see Using Bucket Policies and User Policies in the Amazon S3 Developer Guide.

Step 3: Create an IAM Policy and Role

  1. Create a new IAM Policy with the following permissions:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:CreateLogGroup",
                    "logs:PutLogEvents"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "ses:SendRawEmail"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucketName>/*",
                    "arn:aws:ses:<region>:<awsAccountId>:identity/*"
                ]
            }
        ]
    }

    In the preceding policy, make the following changes:

    • Replace <bucketName> with the name of the S3 bucket that you created earlier.
    • Replace <region> with the name of the AWS Region that you created the bucket in.
    • Replace <awsAccountId> with your AWS account ID. For more information, see Create a Customer Managed Policy in the IAM User Guide.
  2. Create a new IAM role. Attach the policy that you just created to the new role. For more information, see Creating Roles in the IAM User Guide.

Step 4: Create the Lambda Function

  1. In the Lambda console, create a new Python 3.7 function from scratch. For the execution role, choose the IAM role that you created earlier.
  2. In the code editor, paste the following code.
    # Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #
    # This file is licensed under the Apache License, Version 2.0 (the "License").
    # You may not use this file except in compliance with the License. A copy of the
    # License is located at
    #
    # http://aws.amazon.com/apache2.0/
    #
    # This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
    # OF ANY KIND, either express or implied. See the License for the specific
    # language governing permissions and limitations under the License.
    
    import os
    import boto3
    import email
    import re
    from botocore.exceptions import ClientError
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.application import MIMEApplication
    
    region = os.environ['Region']
    
    def get_message_from_s3(message_id):
    
        incoming_email_bucket = os.environ['MailS3Bucket']
        incoming_email_prefix = os.environ['MailS3Prefix']
    
        if incoming_email_prefix:
            object_path = (incoming_email_prefix + "/" + message_id)
        else:
            object_path = message_id
    
        object_http_path = (f"http://s3.console.aws.amazon.com/s3/object/{incoming_email_bucket}/{object_path}?region={region}")
    
        # Create a new S3 client.
        client_s3 = boto3.client("s3")
    
        # Get the email object from the S3 bucket.
        object_s3 = client_s3.get_object(Bucket=incoming_email_bucket,
            Key=object_path)
        # Read the content of the message.
        file = object_s3['Body'].read()
    
        file_dict = {
            "file": file,
            "path": object_http_path
        }
    
        return file_dict
    
    def create_message(file_dict):
    
        sender = os.environ['MailSender']
        recipient = os.environ['MailRecipient']
    
        separator = ";"
    
        # Parse the email body.
        mailobject = email.message_from_string(file_dict['file'].decode('utf-8'))
    
        # Create a new subject line.
        subject_original = mailobject['Subject']
        subject = "FW: " + subject_original
    
        # The body text of the email.
        body_text = ("The attached message was received from "
                  + separator.join(mailobject.get_all('From'))
                  + ". This message is archived at " + file_dict['path'])
    
        # The file name to use for the attached message. Uses regex to remove all
        # non-alphanumeric characters, and appends a file extension.
        filename = re.sub('[^0-9a-zA-Z]+', '_', subject_original) + ".eml"
    
        # Create a MIME container.
        msg = MIMEMultipart()
        # Create a MIME text part.
        text_part = MIMEText(body_text, _subtype="html")
        # Attach the text part to the MIME message.
        msg.attach(text_part)
    
        # Add subject, from and to lines.
        msg['Subject'] = subject
        msg['From'] = sender
        msg['To'] = recipient
    
        # Create a new MIME object.
        att = MIMEApplication(file_dict["file"], filename)
        att.add_header("Content-Disposition", 'attachment', filename=filename)
    
        # Attach the file object to the message.
        msg.attach(att)
    
        message = {
            "Source": sender,
            "Destinations": recipient,
            "Data": msg.as_string()
        }
    
        return message
    
    def send_email(message):
        aws_region = os.environ['Region']

        # Create a new SES client.
        client_ses = boto3.client('ses', region_name=aws_region)
    
        # Send the email.
        try:
            #Provide the contents of the email.
            response = client_ses.send_raw_email(
                Source=message['Source'],
                Destinations=[
                    message['Destinations']
                ],
                RawMessage={
                    'Data':message['Data']
                }
            )
    
        # Display an error if something goes wrong.
        except ClientError as e:
            output = e.response['Error']['Message']
        else:
            output = "Email sent! Message ID: " + response['MessageId']
    
        return output
    
    def lambda_handler(event, context):
        # Get the unique ID of the message. This corresponds to the name of the file
        # in S3.
        message_id = event['Records'][0]['ses']['mail']['messageId']
        print(f"Received message ID {message_id}")
    
        # Retrieve the file from the S3 bucket.
        file_dict = get_message_from_s3(message_id)
    
        # Create the message.
        message = create_message(file_dict)
    
        # Send the email and print the result.
        result = send_email(message)
        print(result)
  3. Create the following environment variables for the Lambda function:

    • MailS3Bucket: The name of the S3 bucket that you created earlier.
    • MailS3Prefix: The path of the folder in the S3 bucket where you will store incoming email.
    • MailSender: The email address that the forwarded message will be sent from. This address has to be verified.
    • MailRecipient: The address that you want to forward the message to.
    • Region: The name of the AWS Region that you want to use to send the email.
  4. Under Basic settings, set the Timeout value to 30 seconds.

(Optional) Step 5: Create an Amazon SNS Topic

You can optionally create an Amazon SNS topic. This step is helpful for troubleshooting purposes, or if you just want to receive additional notifications when you receive a message.

  1. Create a new Amazon SNS topic. For more information, see Creating a Topic in the Amazon SNS Developer Guide.
  2. Subscribe an endpoint, such as an email address, to the topic. For more information, see Subscribing an Endpoint to a Topic in the Amazon SNS Developer Guide.

Step 6: Create a Receipt Rule Set

  1. In the Amazon SES console, create a new Receipt Rule Set. For more information, see Creating a Receipt Rule Set in the Amazon SES Developer Guide.
  2. In the Receipt Rule Set that you just created, add a Receipt Rule. In the Receipt Rule, add an S3 Action. Set up the S3 Action to send your email to the S3 bucket that you created earlier.
  3. Add a Lambda action to the Receipt Rule. Configure the Receipt Rule to invoke the Lambda function that you created earlier.

For more information, see Setting Up a Receipt Rule in the Amazon SES Developer Guide.
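
If you prefer to set up the receipt rule programmatically, the following Boto3 sketch shows one way to do it. The rule set name, recipient domain, bucket name, prefix, and Lambda function ARN are placeholders for the resources you created in the earlier steps.

import boto3

ses = boto3.client('ses')

# Placeholders for the resources that you created in the earlier steps.
RULE_SET_NAME = 'forward-incoming-email-rule-set'
RECIPIENT_DOMAIN = 'example.com'
BUCKET_NAME = '<bucketName>'
PREFIX = 'incoming/'
LAMBDA_ARN = '<forwarding-function-arn>'

# Create a rule set, then add a rule that stores each incoming message in S3
# and invokes the forwarding Lambda function.
ses.create_receipt_rule_set(RuleSetName=RULE_SET_NAME)

ses.create_receipt_rule(
    RuleSetName=RULE_SET_NAME,
    Rule={
        'Name': 'forward-incoming-email',
        'Enabled': True,
        'Recipients': [RECIPIENT_DOMAIN],
        'Actions': [
            {'S3Action': {'BucketName': BUCKET_NAME, 'ObjectKeyPrefix': PREFIX}},
            {'LambdaAction': {'FunctionArn': LAMBDA_ARN, 'InvocationType': 'Event'}}
        ],
        'ScanEnabled': True
    }
)

# Make this rule set the active one for your account.
ses.set_active_receipt_rule_set(RuleSetName=RULE_SET_NAME)

Note that the Amazon SES console automatically adds the permission that allows SES to invoke your Lambda function; if you create the rule through the API instead, you might need to add that permission yourself with the Lambda AddPermission API.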

Step 7: Test the Function

  • Send an email to an address that corresponds with an address in the Receipt Rule you created earlier. Make sure that the email arrives in the correct S3 bucket. In a minute or two, the email arrives in the inbox that you specified in the MailRecipient variable of the Lambda function.

Troubleshooting

If you send a test message, but it is never forwarded to your destination email address, do the following:

  • Make sure that the Amazon SES Receipt Rule is active.
  • Make sure that the email address that you specified in the MailRecipient variable of the Lambda function is correct.
  • Subscribe an email address or phone number to the SNS topic. Send another test email to your domain. Make sure that SNS sends a Received notification to your subscribed email address or phone number.
  • Check the CloudWatch Log for your Lambda function to see if any errors occurred.

If you send a test email to your receiving domain, but you receive a bounce notification, do the following:

  • Make sure that the verification process for your domain completed successfully.
  • Make sure that the MX record for your domain specifies the correct Amazon SES receiving endpoint.
  • Make sure that you’re sending to an address that is handled by the receipt rule.

Costs of using this solution

The cost of implementing this solution is minimal. If you receive 10,000 emails per month, and each email is 2KB in size, you pay $1.00 for your use of Amazon SES. For more information, see Amazon SES Pricing.

You also pay a small charge to store incoming emails in Amazon S3. The charge for storing 1,000 emails that are each 2KB in size is less than one cent. Your use of Amazon S3 might qualify for the AWS Free Usage Tier. For more information, see Amazon S3 Pricing.

Finally, you pay for your use of AWS Lambda. With Lambda, you pay for the number of requests you make, for the amount of compute time that you use, and for the amount of memory that you use. If you use Lambda to forward 1,000 emails that are each 2KB in size, you pay no more than a few cents. Your use of AWS Lambda might qualify for the AWS Free Usage Tier. For more information, see AWS Lambda Pricing.

Note: These cost estimates don’t include the costs associated with purchasing a domain, since many users already have their own domains. The cost of obtaining a domain is the most expensive part of implementing this solution.

Conclusion

This solution makes it possible to forward incoming email from one of your Amazon SES verified domains to an email address that isn’t necessarily verified. It’s also useful if you have multiple AWS accounts, and you want incoming messages to be sent from each of those accounts to a single destination. We hope you’ve found this tutorial to be helpful!

New Issue of Architecture Monthly: Games

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/new-issue-of-architecture-monthly-games/

This month’s Architecture Monthly magazine is all about games—not Scrabble, not Uno, not Twister, and certainly not hide-and-seek.

No, we’re talking the big business of online, multiplayer games. And did you know that approximately 90% of large, public game companies are running on the AWS cloud? Yep, I’m talking Epic (ever heard of Fortnite?), Ubisoft, Nintendo, and more. I had the opportunity to sit down with a senior tech leader for AWS Games, who talked about why companies are moving to the cloud from on-premises environments, and it’s about a whole lot more than just games for entertainment. We got into the big-money world of competitive eSports as well as the gamification of learning processes and economics.

Consider Twitch, often defined as Amazon’s live streaming platform for gamers. But Twitch is much more than a gaming platform. For example, AWS Live Video on Twitch offers live streaming about everything from how to develop serverless apps and robots to interactive quiz shows that help you prepare for AWS Certification exams. And of course, you can also learn about the technology that powers your favorite video games.

September’s Issue

For September’s issue, we’ve assembled architectural best practices about games from all over AWS, and we’ve made sure that a broad audience can appreciate it.

How to Access the Magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and comment on the Amazon Kindle page, or contact us anytime at [email protected].

Automating notifications when AMI permissions change

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/automating-notifications-when-ami-permissions-change/

This post is courtesy of Ernes Taljic, Solutions Architect, and Sudhanshu Malhotra, Solutions Architect

This post demonstrates how to automate alert notifications when users modify the permissions of an Amazon Machine Image (AMI). You can use it as a blueprint for a wide variety of alert notifications by making simple modifications to the events that you want to receive alerts about. For example, updating the specific operation in Amazon CloudWatch allows you to receive alerts on any activity that AWS CloudTrail captures.

This post walks you through how to configure an event rule in CloudWatch that triggers an AWS Lambda function. The Lambda function uses Amazon SNS to send an email when an AMI is made public or private, or is shared or unshared with one or more AWS accounts.

Solution overview

The following diagram describes the solution at a high level:

  1. A user changes an attribute of an AMI.
  2. CloudTrail logs the change as a ModifyImageAttribute API event.
  3. A CloudWatch Events rule captures this event.
  4. The CloudWatch Events rule triggers a Lambda function.
  5. The Lambda function publishes a message to the defined SNS topic.
  6. SNS sends an email alert to the topic’s subscribers.

Deployment walkthrough

To implement this solution, you must create:

  • An SNS topic
  • An IAM role
  • A Lambda function
  • A CloudWatch Events rule

Step 1: Creating an SNS topic

To create an SNS topic, complete the following steps:

  1. Open the SNS console.
  2. Under Create topic, for Topic name, enter a name and choose Create topic. You can now see the MySNSTopic page. The Details section displays the topic’s Name, ARN, Display name (optional), and the AWS account ID of the Topic owner.
  3. In the Details section, copy the topic ARN to the clipboard, for example:
    arn:aws:sns:us-east-1:123456789012:MySNSTopic
  4. On the left navigation pane, choose Subscriptions, Create subscription.
  5. On the Create subscription page, do the following:
    1. Enter the topic ARN of the topic you created earlier:
      arn:aws:sns:us-east-1:123456789012:MySNSTopic
    2. For Protocol, select Email.
    3. For Endpoint, enter an email address that can receive notifications.
    4. Choose Create subscription.
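
You can also create the topic and subscription with a few lines of Boto3, as in the sketch below. The topic name and email address are placeholders.

import boto3

sns = boto3.client('sns')

# Create the topic and capture its ARN for the later steps.
topic_arn = sns.create_topic(Name='MySNSTopic')['TopicArn']

# Subscribe an email address; SNS sends a confirmation message that the
# recipient must accept before notifications are delivered.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol='email',
    Endpoint='alerts@example.com'  # placeholder address
)

print(topic_arn)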

Step 2: Creating an IAM role

To create an IAM role, complete the following steps. For more information, see Creating an IAM Role.

  1. In the IAM console, choose Policies, Create Policy.
  2. On the JSON tab, enter the following IAM policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ],
            "Effect": "Allow",
            "Sid": "LogStreamAccess"
        },
        {
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:*:*:*"
            ],
            "Effect": "Allow",
            "Sid": "SNSPublishAllow"
        },
        {
            "Action": [
                "iam:ListAccountAliases"
            ],
            "Resource": "*",
            "Effect": "Allow",
            "Sid": "ListAccountAlias"
        }
    ]
}

  3. Choose Review policy.
  4. Enter a name (MyCloudWatchRole) for this policy and choose Create policy. Note the name of this policy for later steps.
  5. In the left navigation pane, choose Roles, Create role.
  6. On the Select role type page, choose Lambda and the Lambda use case.
  7. Choose Next: Permissions.
  8. Filter policies by the policy name that you just created, and select the check box.
  9. Choose Next: Tags, and give it an appropriate tag.
  10. Choose Next: Review. Give this IAM role an appropriate name, and note it for future use.
  11. Choose Create role.

Step 3: Creating a Lambda function

To create a Lambda function, complete the following steps. For more information, see Create a Lambda Function with the Console.

  1. In the Lambda console, choose Author from scratch.
  2. For Function Name, enter the name of your function.
  3. For Runtime, choose Python 3.7.
  4. For Execution role, select Use an existing role, then select the IAM role created in the previous step.
  5. Choose Create Function, remove the default function, and copy the following code into the Function Code window:
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file.
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific language governing permissions
# and limitations under the License.
#
# Description: This Lambda function sends an SNS notification to a given AWS SNS topic when an API event of \"Modify Image Attribute\" is detected.
#              The SNS subject is- "API call-<insert call event> by < insert user name> detected in Account-<insert account alias>, see message for further details". 
#              The JSON message body of the SNS notification contains the full event details.
# 
#
# Author: Sudhanshu Malhotra


import json
import boto3
import logging
import os
import botocore.session
from botocore.exceptions import ClientError
session = botocore.session.get_session()

logging.basicConfig(level=logging.DEBUG)
logger=logging.getLogger(__name__)

import ipaddress
import traceback

def lambda_handler(event, context):
	logger.setLevel(logging.DEBUG)
	eventname = event['detail']['eventName']
	snsARN = os.environ['snsARN']          #Getting the SNS Topic ARN passed in by the environment variables.
	user = event['detail']['userIdentity']['type']
	srcIP = event['detail']['sourceIPAddress']
	imageId = event['detail']['requestParameters']['imageId']
	launchPermission = event['detail']['requestParameters']['launchPermission']
	imageAction = list(launchPermission.keys())[0]
	accnt_num =[]
	
	if imageAction == "add":
		if "userId" in launchPermission['add']['items'][0].keys():
			accnt_num = [li['userId'] for li in launchPermission['add']['items']]      # Get the AWS account numbers that the image was shared with
			imageAction = "Image shared with AWS account: " + str(accnt_num)[1:-1]
		else:
			imageAction = "Image made Public"
	
	elif imageAction == "remove":
		if "userId" in launchPermission['remove']['items'][0].keys():
			accnt_num = [li['userId'] for li in launchPermission['remove']['items']]
			imageAction = "Image Unshared with AWS account: " + str(accnt_num)[1:-1]    # Get the AWS account numbers that the image was unshared with
		else:
			imageAction = "Image made Private"
	
	
	logger.debug("Event is --- %s" %event)
	logger.debug("Event Name is--- %s" %eventname)
	logger.debug("SNSARN is-- %s" %snsARN)
	logger.debug("User Name is -- %s" %user)
	logger.debug("Source IP Address is -- %s" %srcIP)
	
	client = boto3.client('iam')
	snsclient = boto3.client('sns')
	response = client.list_account_aliases()
	logger.debug("List Account Alias response --- %s" %response)
	
	# Check if the source IP is a valid IP or AWS service DNS name.
	# If DNS name then we ignore the API activity as this is internal AWS operation
	# For more information check - https://aws.amazon.com/premiumsupport/knowledge-center/cloudtrail-root-action-logs/
	try:
	    validIP = ipaddress.ip_address(srcIP)
	    logger.debug("IP addr is-- %s" %validIP)
	except Exception as e:
	    logger.error("Catching the traceback error: %s" %traceback.format_exc())
	    logger.debug("Seems like the root API activity was caused by an internal operation. IP address is internal service DNS name")
	    return
	try:
		if not response['AccountAliases']:
			accntAliase = (boto3.client('sts').get_caller_identity()['Account'])
			logger.info("Account alias is not defined. Account ID is %s" %accntAliase)
		else:
			accntAliase = response['AccountAliases'][0]
			logger.info("Account alias is : %s" %accntAliase)
	
	except ClientError as e:
		logger.error("Client error occurred: %s" %e)
	
	try: 
		publish_message = ""
		publish_message += "\nImage Attribute change summary" + "\n\n"
		publish_message += "##########################################################\n"
		publish_message += "# Event Name- " +str(eventname) + "\n" 
		publish_message += "# Account- " +str(accntAliase) +  "\n"
		publish_message += "# AMI ID- " +str(imageId) +  "\n"
		publish_message += "# Image Action- " +str(imageAction) +  "\n"
		publish_message += "# Source IP- " +str(srcIP) +   "\n"
		publish_message += "##########################################################\n"
		publish_message += "\n\n\nThe full event is as below:- \n\n" +str(event) +"\n"
		logger.debug("MESSAGE- %s" %publish_message)
		
		#Sending the notification...
		snspublish = snsclient.publish(
						TargetArn= snsARN,
						Subject=(("Image Attribute change API call-\"%s\" detected in Account-\"%s\"" %(eventname,accntAliase))[:100]),
						Message=publish_message
						)
	except ClientError as e:
		logger.error("An error occured: %s" %e)

6. In the Environment variables section, enter the following key-value pair:

  • Key= snsARN
  • Value= the ARN of the MySNSTopic created earlier

7. Choose Save.

Step 4: Creating a CloudWatch Events rule

To create a CloudWatch Events rule, complete the following steps. This rule catches when a user performs a ModifyImageAttribute API event and triggers the Lambda function (set as a target).

1. In the CloudWatch console, choose Rules, Create rule.

  • On the Step 1: Create rule page, under Event Source, select Event Pattern.
  • Copy the following event into the preview pane:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "ModifyImageAttribute"
    ]
  }
}
  • For Targets, select Lambda function, and select the Lambda function created in Step 3.

  • Choose Configure details.

2. On the Step 2: Configure rule details page, enter a name and description for the rule.

3. For State, select Enabled.

4. Choose Create rule.
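
If you script this solution, the same rule can be created with Boto3, as in the sketch below. The rule name and Lambda function ARN are placeholders, and CloudWatch Events also needs permission to invoke the function, which the console normally adds for you.

import json
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

RULE_NAME = 'ami-permission-change-rule'      # placeholder rule name
FUNCTION_ARN = '<notification-function-arn>'  # placeholder Lambda function ARN

# Event pattern that matches the ModifyImageAttribute API call recorded by CloudTrail.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["ModifyImageAttribute"]
    }
}

rule_arn = events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps(pattern),
    State='ENABLED'
)['RuleArn']

# Point the rule at the Lambda function, and allow the rule to invoke it.
events.put_targets(Rule=RULE_NAME, Targets=[{'Id': '1', 'Arn': FUNCTION_ARN}])
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='AllowCloudWatchEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn
)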

Solution validation

Confirm that the solution works by changing an AMI:

  1. Open the Amazon EC2 console. From the menu, select AMIs under the Images heading.
  2. Select one of the AMIs, and choose Actions, Modify Image Permissions. To create an AMI, see How do I create an AMI that is based on my EBS-backed EC2 instance?
  3. Choose Private.
  4. For AWS Account Number, choose an account with which to share the AMI.
  5. Choose Add Permission, Save.
  6. Check your inbox to verify that you received an email from SNS. The email contains a summary message with the following information, followed by the full event:
  • Event Name
  • Account
  • AMI ID
  • Image Action
  • Source IP

For Image Action, the email lists one of the following events:

  • Image made Public
  • Image made Private
  • Image shared with AWS account
  • Image Unshared with AWS account

The message also includes the account ID of any shared or unshared accounts.
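
You can also trigger the rule from code instead of the console. A call like the following, with placeholder AMI and account IDs, produces the same ModifyImageAttribute event:

import boto3

ec2 = boto3.client('ec2')

# Placeholders: an AMI that you own and the account you want to share it with.
AMI_ID = 'ami-0123456789abcdef0'
TARGET_ACCOUNT_ID = '111122223333'

# Share the AMI with another account. CloudTrail records this as a
# ModifyImageAttribute event, which triggers the notification rule.
ec2.modify_image_attribute(
    ImageId=AMI_ID,
    LaunchPermission={'Add': [{'UserId': TARGET_ACCOUNT_ID}]}
)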

Creating other alert notifications

This post shows you how to automate notifications when AMI permissions change, but the solution is a blueprint for a wide variety of use cases that require alert notifications. You can use the CloudTrail event history to find the API associated with the event that you want to receive notifications about, and create a new CloudWatch Events rule for that event.

  1. Open the CloudTrail console and choose Event history.
  2. Explore the CloudTrail event history and locate the event name associated with the actions that you performed in your AWS account.
  3. Make a note of the event name and modify the eventName parameter in the event pattern from Step 4 to configure alerts for that particular event.

Automating Notifications - CloudTrail console

Conclusion

This post demonstrated how you can use CloudWatch to create automated notifications when AMI permissions change. Additionally, you can use the CloudTrail event history to find the API for other events, and use the preceding walkthrough to create other event alerts.

Wag!: Why Even Your Dog Loves a Canary Deployment

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/wag-why-even-your-dog-loves-a-canary-deployment/

Since August 26 was National Dog Day, we thought it might be a great time to talk about why Wag!, an on-demand dog-walking, boarding, and pet-sitting service that’s available in 43 states and 100 cities, deployed a blue-green (or canary) architecture for increased availability and reduced risk using Amazon ECS.

Last June, Dave Bullock, Director of Engineering from Wag Labs Inc., talked with AWS Senior Solutions Architect Peter Tilsen about how the company needed to find a faster solution for updates and rollbacks than their previous solution, AWS OpsWorks, could provide. What used to take up to 10 minutes for a rolling deployment across all of their cluster instances is now done in a few minutes.

Listen to Dave as he shows us how Wag! runs canary deployments (a technique for releasing applications by shifting traffic between two identical environments running different versions of the application) of containerized applications with ECS.

Check out more videos in the This Is My Architecture series.

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

Improve Productivity and Reduce Overhead Expenses with Red Hat OpenShift Dedicated on AWS

Post Syndicated from Ryan Niksch original https://aws.amazon.com/blogs/architecture/improve-productivity-and-reduce-overhead-expenses-with-red-hat-openshift-dedicated-on-aws/

Red Hat OpenShift on AWS helps you develop, deploy, and manage container-based applications across on-premises and cloud environments. A recent case study from Cathay Pacific Airways proved that the use of the Red Hat OpenShift application platform can significantly improve developer productivity and reduce operational overhead by automating infrastructure, application deployment, and scaling. In this post, I explore how the architectural implementation and customization options of Red Hat OpenShift dedicated on AWS can cater to a variety of customer needs.

Red Hat OpenShift is a turnkey solution providing a container runtime, Kubernetes orchestration, container image repositories, pipelines, build processes, monitoring, logging, role-based access control, granular policy-based control, and abstractions to simplify functions. Deploying a single turnkey solution, instead of building and integrating a collection of independent solutions or services, allows you to invest more time and effort in building meaningful applications for your business.

In the past, Red Hat OpenShift was deployed on Amazon EC2 using an automated provisioning process with an open source solution, like the Red Hat OpenShift on AWS Quick Start. The Red Hat OpenShift Quick Start is an infrastructure-as-code solution that accelerates customer provisioning of Red Hat OpenShift on AWS. The OpenShift Quick Start adheres to the reference architecture to deploy Red Hat OpenShift on AWS in a resilient, scalable, well-architected manner. This reference architecture sees the control plane as a collection of load-balanced master nodes for traffic routing, session state, scheduling, and monitoring. It also contains the application nodes where the customer’s containerized workloads run. This solution allowed customers to get up and running within three hours; however, it did not reduce management overhead because customers were required to monitor and maintain the infrastructure of the Red Hat OpenShift cluster.

Red Hat and AWS listened to customer feedback and created Red Hat OpenShift dedicated, a fully managed OpenShift implementation running exclusively on AWS. This implementation monitors the layers and functions, scales the layers to cater to consumption needs, and addresses operational concerns.

Customers now have access to a platform that helps manage control planes for business-critical solutions, like their developer and operational platforms.

Red Hat OpenShift Dedicated Infrastructure on AWS

You can purchase Red Hat OpenShift dedicated through the Red Hat account team. Red Hat OpenShift dedicated comes in two varieties: the Standard edition and the Cloud Choice edition (bring your own cloud).


Figure 1: Red Hat OpenShift architecture illustrating master and infrastructure nodes spread over three Availability Zones and placed behind elastic load balancers.

Red Hat OpenShift dedicated adheres to the reference architecture defined by AWS and Red Hat. Master and infrastructure layers are spread across three AWS availability zones providing resilience within the OpenShift solution, as well as the underlying infrastructure.

Red Hat OpenShift Dedicated Standard Edition

In the Red Hat OpenShift dedicated standard edition, Red Hat deploys the OpenShift cluster into an AWS account owned and managed by Red Hat. Red Hat provides an aggregated bill for the OpenShift subscription fees, management fees, and AWS billing. This edition is ideal for customers who want everything to be managed for them. The Red Hat site reliability engineering (SRE) team will monitor and manage healing, scaling, and patching of the cluster.

Red Hat OpenShift Cloud Choice Edition

The cloud choice edition allows customers to create their own AWS account, and then have the Red Hat OpenShift dedicated infrastructure provisioned into their existing account. The Red Hat SRE team provisions the Red Hat OpenShift cluster into the customer owned AWS account and manages the solution via IAM roles.

Figure 2: Red Hat OpenShift Cloud Choice IAM role separation

Red Hat provides billing for the Red Hat OpenShift Cloud Choice subscription and management fees, and AWS provides billing for the AWS resources. Keeping the Red Hat OpenShift infrastructure within your AWS account allows better cost controls.

Red Hat OpenShift Cloud Choice provides visibility into the resources running in your account, which is desirable if you have regulatory and auditing concerns. You can inspect, monitor, and audit resources within the AWS account, taking advantage of the rich AWS service set (AWS CloudTrail, AWS Config, Amazon CloudWatch, and AWS Cost Explorer).

You can also take advantage of cost management solutions like AWS Organizations and consolidated billing. Customers with multiple business units using AWS can combine the usage across their accounts to share the volume pricing discounts, resulting in cost savings for projects, departments, and companies.

Red Hat OpenShift Cloud Choice dedicated cannot be deployed into an account currently hosting other applications and resources. In order to maintain separation of control with the managed service, Red Hat OpenShift Cloud Choice dedicated requires an AWS account dedicated to the managed Red Hat OpenShift solution.

You can take advantage of cost reductions of up to 70% by using Reserved Instances for instances that run continuously. This is ideal for the master and infrastructure nodes of the Red Hat OpenShift solution running in your account. The reference architecture for Red Hat OpenShift on AWS recommends spanning nodes over three availability zones, which translates to three master instances. The master and infrastructure nodes scale differently, so there will be three additional instances for the infrastructure nodes. Purchasing Reserved Instances to offset the costs of the master nodes and the infrastructure nodes can free up funds for your next project.

Interactions

DevOps teams using either edition of Red Hat OpenShift dedicated have a rich console experience providing control over networking between application workloads, storage, and monitoring. Granular drill down consoles enable operations teams to focus on what is most critical to their organization.

Each interface is controlled through granular role-based access control. Teams have visibility into high-level cluster overviews, where they can see visualizations of the overall health of the cluster, and they have access to more granular views of hosts, nodes, and containers. Application owners, key stakeholders, and operations teams have access to a customizable dashboard displaying the running state. Teams can drill down to the underlying nodes, and further into the pods and containers, should they wish to explore the status or overall health of the containerized microservices. The cluster-wide event stream provides the same drill-down experience for logging events.

The drill down console menu options are illustrated in the screenshots below:

In summary, the partnership of Red Hat and AWS created a fully managed solution which directly answers customer feedback requests for a fully managed application platform running on the availability, scalability, and cost benefits of AWS. The solution allows visibility and control whenever and wherever you need it.

About the author

Ryan Niksch

Ryan Niksch is a Partner Solutions Architect focusing on application platforms, hybrid application solutions, and modernization. Ryan has worn many hats in his life and has a passion for tinkering and a desire to leave everything he touches a little better than when he found it.

Things to Consider When You Build REST APIs with Amazon API Gateway

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/things-to-consider-when-you-build-rest-apis-with-amazon-api-gateway/

A few weeks ago, we kicked off this series with a discussion on REST vs GraphQL APIs. This post will dive deeper into the things an API architect or developer should consider when building REST APIs with Amazon API Gateway.

Request Rate (a.k.a. “TPS”)

Request rate is the first thing you should consider when designing REST APIs. By default, API Gateway allows for up to 10,000 requests per second. You should use the built-in Amazon CloudWatch metrics to review how your API is being used. The Count metric in particular can help you review the total number of API requests in a given period.

It’s important to understand the actual request rate that your architecture is capable of supporting. For example, consider this architecture:

REST API 1

This API accepts GET requests to retrieve a user's cart by using a Lambda function to perform SQL queries against a relational database managed in RDS. If you receive a large burst of traffic, both API Gateway and Lambda will scale in response to the traffic. However, relational databases typically have limited memory/CPU capacity and will quickly exhaust the total number of connections.

As an API architect, you should design your APIs to protect your downstream applications. You can start by defining API Keys and requiring your clients to deliver a key with incoming requests. This lets you track each application or client that is consuming your API. It also lets you create Usage Plans and throttle your clients according to the plan you define. For example, if you know your architecture is capable of sustaining 200 requests per second, you should define a Usage Plan that sets a rate of 200 RPS and optionally configure a quota to allow a certain number of requests per day, week, or month.
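
As a rough sketch of how this could look with the AWS SDK for Python (boto3), the snippet below creates an API key and a Usage Plan with a 200 RPS rate limit and a daily quota; the API ID, stage name, and limits are placeholder values you would replace with your own.

import boto3

apigw = boto3.client("apigateway")

# Placeholder values -- substitute your own API ID, stage, and limits.
API_ID = "a1b2c3d4e5"
STAGE = "PROD"

# Create an API key that clients must send with each request.
key = apigw.create_api_key(name="order-service-client", enabled=True)

# Create a usage plan that throttles clients to 200 RPS (burst 400)
# and caps them at 10,000 requests per day.
plan = apigw.create_usage_plan(
    name="standard-plan",
    apiStages=[{"apiId": API_ID, "stage": STAGE}],
    throttle={"rateLimit": 200.0, "burstLimit": 400},
    quota={"limit": 10000, "period": "DAY"},
)

# Associate the API key with the usage plan.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)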

Additionally, API Gateway lets you define throttling settings for the whole stage or per method. If you know that a GET operation is less resource-intensive than a POST operation, you can override the stage settings and set different throttling settings for each resource.

Integrations and Design patterns

The example above describes a synchronous, tightly coupled architecture where the request must wait for a response from the backend integration (RDS in this case). This results in system scaling characteristics that are the lowest common denominator of all components. Instead, you should look for opportunities to design an asynchronous, loosely coupled architecture. A decoupled architecture separates the data ingestion from the data processing and allows you to scale each system separately. Consider this new architecture:

REST API 2

This architecture enables ingestion of orders directly into a highly scalable and durable data store such as Amazon Simple Queue Service (SQS). Your backend can process these orders at any speed that is suitable for your business requirements and system capacity. Most importantly, the health of the backend processing system does not impact your ability to continue accepting orders.
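
As a minimal sketch of the producer side of this pattern (assuming a hypothetical queue URL and order payload), accepting an order can be as simple as writing it to SQS with the AWS SDK for Python (boto3):

import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL -- replace with the URL of your own orders queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"

def accept_order(order: dict) -> str:
    """Durably enqueue an order; backend consumers process it at their own pace."""
    response = sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(order),
    )
    return response["MessageId"]

# Example usage:
# accept_order({"cartId": "abcd123", "items": ["sku-1", "sku-2"]})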

Security

Security with API Gateway falls into three major buckets, and I’ll outline them below. Remember, you should enable all three options to combine multiple layers of security.

Option 1 (Application Firewall)

You can enable AWS Web Application Firewall (WAF) for your entire API. WAF will inspect all incoming requests and block requests that fail your inspection rules. For example, WAF can inspect requests for SQL injection or cross-site scripting, or match requests against an allow list of IP addresses.

Option 2 (Resource Policy)

You can apply a Resource Policy that protects your entire API. This is an IAM policy that is applied to your API, and you can use it to allow or deny specific client IP ranges, or to grant specific AWS accounts and AWS principals access to your API.

Option 3 (AuthZ)

  1. IAM: This AuthZ option requires clients to sign requests with the AWS Signature Version 4 signing process. The associated IAM role or user must have permission to perform the execute-api:Invoke action against the API.
  2. Cognito: This AuthZ option requires clients to sign in to Cognito and then pass the returned ID or access JWT token in the Authorization header.
  3. Lambda Authorizer: This AuthZ option is the most flexible and lets you execute a Lambda function to perform any custom auth strategy needed. A common use case for this is OpenID Connect; a minimal authorizer sketch follows this list.
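
As an illustrative sketch only (not a complete OpenID Connect integration), a Python TOKEN authorizer receives the caller's token and returns an IAM policy that allows or denies execute-api:Invoke. The validate_token helper below is a hypothetical placeholder for your own verification logic.

def validate_token(token: str) -> bool:
    # Hypothetical placeholder: verify the JWT signature, issuer, audience, expiry, etc.
    return token == "allow-me"

def handler(event, context):
    # For a TOKEN authorizer, API Gateway passes the token and the method ARN.
    token = event.get("authorizationToken", "")
    effect = "Allow" if validate_token(token) else "Deny"
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }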

A Couple of Tips

Tip #1: Use Stage variables to avoid hard coding your backend Lambda and HTTP integrations. For example, you probably have multiple stages such as “QA” and “PROD” or “V1” and “V2.” You can define the same variable in each stage and specify different values. For example, you might have an API that executes a Lambda function. In each stage, define the same variable called functionArn. You can reference this variable as your Lambda ARN during your integration configuration using this notation: ${stageVariables.functionArn}. API Gateway will inject the corresponding value for the stage dynamically at runtime, allowing you to execute different Lambda functions by stage.
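
A rough sketch of setting that variable per stage with boto3 (the API ID, stage names, and function ARNs are placeholders) might look like this:

import boto3

apigw = boto3.client("apigateway")

API_ID = "a1b2c3d4e5"  # placeholder API ID

# Point each stage's functionArn stage variable at a different Lambda function.
stage_functions = {
    "QA": "arn:aws:lambda:us-east-1:123456789012:function:cart-qa",
    "PROD": "arn:aws:lambda:us-east-1:123456789012:function:cart-prod",
}

for stage, arn in stage_functions.items():
    apigw.update_stage(
        restApiId=API_ID,
        stageName=stage,
        patchOperations=[
            {"op": "replace", "path": "/variables/functionArn", "value": arn}
        ],
    )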

Tip #2: Use Path and Query variables to inject dynamic values into your HTTP integrations. For example, your cart API may define a userId path variable that is used to look up a user’s cart: /cart/profile/{userId}. You can inject this variable directly into your backend HTTP integration URL settings like this: http://myapi.someds.com/cart/profile/{userId}

Summary

This post covered strategies you should use to ensure your REST API architectures are scalable and easy to maintain. I hope you’ve enjoyed this post; our next post will cover GraphQL API architectures with AWS AppSync.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

Architecture Monthly Magazine for July: Machine Learning

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-for-july-machine-learning/

Every month, AWS publishes the AWS Architecture Monthly Magazine (available for free on Kindle and Flipboard) that curates some of the best technical and video content from around AWS.

In the June edition, we offered several pieces of content related to Internet of Things (IoT). This month we’re talking about artificial intelligence (AI), namely machine learning.

Machine Learning: Let’s Get it Started

Alan Turing, the British mathematician whose life and work was documented in the movie The Imitation Game, was a pioneer of theoretical computer science and AI. He was the first to put forth the idea that machines can think.

Jump ahead 80 years to this month, when researchers asked four-time World Poker Tour title holder Darren Elias to play Texas Hold’em with Pluribus, a poker-playing bot (actually, five of these bots were at the table). Pluribus learns by playing against itself over and over and remembering which strategies worked best. The bot became a world-class poker player in a matter of days. Read about it in the journal Science.

If AI is about making a machine more human, then its subset, machine learning, covers the techniques that allow these machines to make sense of the data we feed them. Machine learning mimics how humans learn, and Pluribus is actually learning from itself.

From self-driving cars, medical diagnostics, and facial recognition to our helpful (and sometimes nosy) pals Siri, Alexa, and Cortana, all these smart machines are constantly improving from the moment we unbox them. We humans are teaching the machines to think like us.

For July’s magazine, we assembled architectural best practices about machine learning from all over AWS, and we’ve made sure that a broad audience can appreciate it.

  • Interview: Mahendra Bairagi, Solutions Architect, Artificial Intelligence
  • Training: Getting in the Voice Mindset
  • Quick Start: Predictive Data Science with Amazon SageMaker and a Data Lake on AWS
  • Blog post: Amazon SageMaker Neo Helps Detect Objects and Classify Images on Edge Devices
  • Solution: Fraud Detection Using Machine Learning
  • Video: Viz.ai Uses Deep Learning to Analyze CT Scans and Save Lives
  • Whitepaper: Power Machine Learning at Scale

We hope you find this edition of Architecture Monthly useful, and we’d like your feedback. Please give us a star rating and your comments on Amazon. You can also reach out to [email protected] anytime. Check back in a month to discover what the August magazine will offer.

How to deploy CloudHSM to securely share your keys with your SaaS provider

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/security/how-to-deploy-cloudhsm-securely-share-keys-with-saas-provider/

If your organization is using software as a service (SaaS), your data is likely stored and protected by the SaaS provider. However, depending on the type of data that your organization stores and the compliance requirements that it must meet, you might need more control over how the encryption keys are stored, protected, and used. In this post, I’ll show you two options for deploying and managing your own CloudHSM cluster to secure your keys, while still allowing trusted third-party SaaS providers to securely access your HSM cluster in order to perform cryptographic operations. You can also use this architecture when you want to share your keys with another business unit or with an application that’s running in a separate AWS account.

AWS CloudHSM is one of several cryptography services provided by AWS to help you secure your data and keys in the AWS cloud. AWS CloudHSM provides single-tenant HSMs based on third-party FIPS 140-2 Level 3 validated hardware, under your control, in your Amazon Virtual Private Cloud (Amazon VPC). You can generate and use keys on your HSM using CloudHSM command line tools or standards-compliant C, Java, and OpenSSL SDKs.

A related, more widely used service is AWS Key Management Service (KMS). KMS is generally easier to use, cheaper to operate, and is natively integrated with most AWS services. However, there are some use cases for which you may choose to rely on CloudHSM to meet your security and compliance requirements.

Solution Overview

There are two ways you can set up your VPC and CloudHSM clusters to allow trusted third-party SaaS providers to use the HSM cluster for cryptographic operations. The first option is to use VPC peering to allow traffic to flow between the SaaS provider’s HSM client VPC and your CloudHSM VPC, and to use a custom application to interact with the HSM.

The second option is to use KMS to manage the keys, specifying a custom key store to generate and store the keys. AWS KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage.

Decision Criteria: VPC Peering vs Custom Key Store

The right solution for you will depend on factors like your VPC configuration, security requirements, network setup, and the type of cryptographic operations you need. The following table provides a high-level summary of how these two options compare. Later in this post, I’ll go over both options in detail and explain the design considerations you need to be aware of before deploying the solution in your environment.

| Technical Consideration | VPC Peering | Custom Key Store |
|---|---|---|
| Are you able to peer or connect your HSM VPC with your SaaS provider? | ✔ | |
| Is your SaaS provider sensitive to costs from KMS usage in their AWS account? | ✔ | |
| Do you need CloudHSM-specific cryptographic tasks like signing, HMAC, or random number generation? | ✔ | |
| Does your SaaS provider need to encrypt your data directly with the Master Key? | ✔ | |
| Does your application rely on a PKCS#11-compliant or JCE-compliant SDK? | ✔ | |
| Does your SaaS provider need to use the keys in AWS services? | | ✔ |
| Do you need to log all key usage activities when SaaS providers use your HSM keys? | | ✔ |

Option 1: VPC Peering

 


Figure 1: Architecture diagram showing VPC peering between the SaaS provider’s HSM client VPC and the customer’s HSM VPC

Figure 1 shows how you can deploy a CloudHSM cluster in a dedicated HSM VPC and peer this HSM VPC with your service provider’s VPC to allow them to access the HSM cluster through the client/application. I recommend that you deploy the CloudHSM cluster in a separate HSM VPC to limit the scope of resources running in that VPC. Since VPC peering is not transitive, service providers will not have access to any resources in your application VPCs or any other VPCs that are peered with the HSM VPC.

It’s possible to leverage the HSM cluster for other purposes and applications, but you should be aware of the potential drawbacks before you do. This approach could make it harder for you to find non-overlapping CIDR ranges for use with your SaaS provider. It would also mean that your SaaS provider could accidentally overwrite HSM account credentials or lock out your HSMs, causing an availability issue for your other applications. For these reasons, I recommend that you dedicate a CloudHSM cluster for use with your SaaS providers and use small VPC and subnet sizes, like /27, so that you’re not wasting IP space and it’s easier to find non-overlapping IP addresses with your SaaS provider.

If you’re using VPC peering, your HSM VPC CIDR cannot overlap with your SaaS provider’s VPC. Deploying the HSM cluster in a separate VPC gives you flexibility in selecting a suitable CIDR range that is non-overlapping with the service provider since you don’t have to worry about your other applications. Also, since you’re only hosting the HSM Cluster in this VPC, you can choose a CIDR range that is relatively small.

Design considerations

Here are additional considerations to think about when deploying this solution in your environment:

  • VPC peering allows resources in either VPC to communicate with each other as long as security groups, NACLs, and routing allow it. To improve security, place only resources that are meant to be shared in the VPC, and secure communication at the port/protocol level by using security groups.
  • If you decide to revoke the SaaS provider’s access to your CloudHSM, you have two choices:
    • At the network layer, you can remove connectivity by deleting the VPC peering or by modifying the CloudHSM security groups to disallow the SaaS provider’s CIDR ranges.
    • Alternately, you can log in to the CloudHSM as Crypto Officer (CO) and change the password or delete the Crypto user that the SaaS provider is using.
  • If you’re deploying CloudHSM across multiple accounts or VPCs within your organization, you can also use AWS Transit Gateway to connect the CloudHSM VPC to your application VPCs. Transit Gateway is ideal when you have multiple application VPCs that need CloudHSM access, as it scales easily and you don’t have to worry about VPC peering limits or the number of peering connections to manage.
  • If you’re the SaaS provider, and you have multiple clients who might be interested in this solution, you must make sure that no customer’s IP space overlaps with yours. You must also make sure that each customer’s HSM VPC doesn’t overlap with any of the others. One solution is to dedicate one VPC per customer, keep the client/application dedicated to that customer, and peer this VPC with your application VPC. This reduces the overlapping CIDR dependency among all your customers.

Option 2: Custom Key Store

As the AWS KMS documentation explains, KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage. When you use a CMK in a custom key store, the cryptographic operations are performed in the HSMs in the cluster. This feature combines the convenience and widespread integration of AWS KMS with the added control of an AWS CloudHSM cluster in your AWS account. This option allows you to keep your master key in the CloudHSM cluster but allows your SaaS provider to use your master key securely by using KMS.

Each custom key store is associated with an AWS CloudHSM cluster in your AWS account. When you connect the custom key store to its cluster, AWS KMS creates the network infrastructure to support the connection. Then it logs in to the cluster using the credentials of a dedicated crypto user in the cluster. All of this is automatically set up, with no need to peer VPCs or connect to your SaaS provider’s VPC.

You create and manage your custom key stores in AWS KMS, and you create and manage your HSM clusters in AWS CloudHSM. When you create CMKs in an AWS KMS custom key store, you view and manage the CMKs in AWS KMS. But you can also view and manage their key material in AWS CloudHSM, just as you would do for other keys in the cluster.
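
As a brief sketch with boto3 (the custom key store ID is a placeholder, and this assumes the key store is already created and connected to its cluster), creating and using a CMK backed by your CloudHSM cluster looks much like using any other KMS key:

import boto3

kms = boto3.client("kms")

# Placeholder ID for an already-connected CloudHSM-backed custom key store.
CUSTOM_KEY_STORE_ID = "cks-1234567890abcdef0"

# Create a CMK whose key material lives in your CloudHSM cluster.
key = kms.create_key(
    CustomKeyStoreId=CUSTOM_KEY_STORE_ID,
    Origin="AWS_CLOUDHSM",
    Description="CMK backed by a customer-managed CloudHSM cluster",
)
key_id = key["KeyMetadata"]["KeyId"]

# Once granted access, the SaaS provider encrypts and decrypts through KMS APIs only.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"example payload")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]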

The following diagram shows how some keys can be located in a CloudHSM cluster but be visible through AWS KMS. These are the keys that AWS KMS can use for crypto operations performed through KMS.
 


Figure 2: High level overview of KMS custom key store

While this option eliminates many of the networking components you need to set up for Option 1, it does limit the type of cryptographic operations that your SaaS provider can perform. Since the SaaS provider doesn’t have direct access to CloudHSM, the crypto operations are limited to the encrypt and decrypt operations supported by KMS, and your SaaS provider must use KMS APIs for all of their operations. This is easy if they’re using AWS services which use KMS already, but if they’re performing operations within their application before storing the data in AWS storage services, this approach could be challenging, because KMS doesn’t support all the same types of cryptographic operations that CloudHSM supports.

Figure 3 illustrates the various components that make up a custom key store and shows how a CloudHSM cluster can connect to KMS to create a customer controlled key store.
 


Figure 3: A cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store

Design Considerations

  • Note that when using custom key store, you’re creating a kmsuser CU account in your AWS CloudHSM cluster and providing the kmsuser account credentials to AWS KMS.
  • This option requires your service provider to be able to use KMS as the key management option within their application. Because your SaaS provider cannot communicate directly with the CloudHSM cluster, they must instead use KMS APIs to encrypt the data. If your SaaS provider is encrypting within their application without using KMS, this option may not work for you.
  • When deploying a custom key store, you must not only control access to the CloudHSM cluster, you must also control access to AWS KMS.
  • Because the custom key store and KMS are located in your account, you must give the SaaS provider permission to use certain KMS keys. You can do this by enabling cross-account access (a minimal key policy sketch follows this list). For more information, please refer to the blog post “Share custom encryption keys more securely between accounts by using AWS Key Management Service.”
  • I recommend dedicating an AWS account to the CloudHSM cluster and custom key store, as this simplifies setup. For more information, please refer to Controlling Access to Your Custom Key Store.
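
The boto3 sketch below illustrates the general shape of a key policy that grants a second account permission to use the CMK. The account IDs, key ID, and the decision to replace the default key policy wholesale are assumptions you would adapt to your own environment.

import json
import boto3

kms = boto3.client("kms")

KEY_ID = "REPLACE_WITH_KEY_ID"   # the CMK in your custom key store (placeholder)
OWNER_ACCOUNT = "111111111111"   # your account ID (placeholder)
SAAS_ACCOUNT = "222222222222"    # the SaaS provider's account ID (placeholder)

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Keep administrative control in the key-owning account.
            "Sid": "EnableIAMUserPermissions",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{OWNER_ACCOUNT}:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Allow the SaaS provider's account to use (not manage) the key.
            "Sid": "AllowSaaSProviderUse",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{SAAS_ACCOUNT}:root"},
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey",
            ],
            "Resource": "*",
        },
    ],
}

kms.put_key_policy(KeyId=KEY_ID, PolicyName="default", Policy=json.dumps(policy))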

Network architecture that is not supported by CloudHSM


Figure 4: Diagram showing the network anti-pattern for deploying CloudHSM

Figure 4 shows various networking technologies, like AWS PrivateLink, Network Address Translation (NAT), and AWS Load Balancers, that cannot be used with CloudHSM when placed between the CloudHSM cluster and the client/application. All of these methods mask the real IPs of the HSM cluster nodes from the client, which breaks the communication between the CloudHSM client and the HSMs.

When the CloudHSM client successfully connects to the HSM cluster, it downloads a list of HSM IP addresses, which it then stores and uses for subsequent connections. When one of the HSM nodes is unavailable, the client/application automatically tries the IP addresses of the other HSM nodes it knows about. When HSMs are added to or removed from the cluster, the client is automatically reconfigured. Because the client relies on a current list of IP addresses to transparently handle high availability and failover within the cluster, masking the real IP address of an HSM node breaks the communication between the cluster and the client.

You can read more about how the CloudHSM client works in the AWS CloudHSM User Guide.

Summary

In this blog post, I’ve shown you two options for deploying CloudHSM to store your key material while allowing your SaaS provider to access and use those keys on your behalf. This allows you to remain in control of your encryption keys and use a SaaS solution without compromising security.

It’s important to understand the security requirements, network setup, and type of cryptographic operation for each approach, and to choose the option that aligns the best with your goals. As a best practice, it’s also important to understand how to secure your CloudHSM and KMS deployment and to use necessary role-based access control with minimum privilege. Read more about AWS KMS Best Practices and CloudHSM Best Practices.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Vinod Madabushi

Vinod is an Enterprise Solutions Architect with AWS. He works with customers on building highly available, scalable, and secure applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

Introducing the “Preparing for the California Consumer Privacy Act” whitepaper

Post Syndicated from Julia Soscia original https://aws.amazon.com/blogs/security/introducing-the-preparing-for-the-california-consumer-privacy-act-whitepaper/

AWS has published a whitepaper, Preparing for the California Consumer Privacy Act, to provide guidance on designing and updating your cloud architecture to follow the requirements of the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020.

The whitepaper is intended for engineers and solution builders, but it also serves as a guide for qualified security assessors (QSAs) and internal security assessors (ISAs) so that you can better understand the range of AWS products and services that are available for you to use.

The CCPA was enacted into law on June 28, 2018 and grants California consumers certain privacy rights. The CCPA grants consumers the right to request that a business disclose the categories and specific pieces of personal information collected about the consumer, the categories of sources from which that information is collected, the “business purposes” for collecting or selling the information, and the categories of third parties with whom the information is shared. This whitepaper looks to address the three main subsections of the CCPA: data collection, data retrieval and deletion, and data awareness.

To read the text of the CCPA please visit the website for California Legislative Information.

If you have questions or want to learn more, contact your account executive or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Julia Soscia

Julia is a Solutions Architect at Amazon Web Services based out of New York City. Her main focus is to help customers create well-architected environments on the AWS cloud platform. She is an experienced data analyst with a focus in Big Data and Analytics.


Anthony Pasquarielo

Anthony is a Solutions Architect at Amazon Web Services. He’s based in New York City. His main focus is providing customers technical guidance and consultation during their cloud journey. Anthony enjoys delighting customers by designing well-architected solutions that drive value and provide growth opportunity for their business.


Justin De Castri

Justin is a Manager of Solutions Architecture at Amazon Web Services based in New York City. His primary focus is helping customers build secure, scalable, and cost-optimized solutions that are aligned with their business objectives.

How to Architect APIs for Scale and Security

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/how-to-architect-apis-for-scale-and-security/

We hope you’ve enjoyed reading our posts on best practices for your serverless applications. This series of posts will focus on best practices and concepts you should be familiar with when you architect APIs for your applications. We’ll kick this first post off with a comparison between REST and GraphQL API architectures.

Introduction

Developers have been creating RESTful APIs for a long time, typically using HTTP methods such as GET, POST, and DELETE to perform operations against the API. Amazon API Gateway is designed to make it easy for developers to create APIs at any scale without managing any servers. API Gateway will handle all of the heavy lifting needed, including traffic management, security, monitoring, and version/environment management.

GraphQL APIs are relatively new, with a primary design goal of allowing clients to define the structure of the data that they require. AWS AppSync allows you to create flexible APIs that access and combine multiple data sources.

REST APIs

Architecting a REST API is structured around creating combinations of resources and methods. Resources are paths that are present in the request URL, and methods are HTTP actions that you take against the resource. For example, you may define a resource called “cart”: http://myapi.somecompany.com/cart. The cart resource can respond to HTTP POSTs for adding items to a shopping cart or HTTP GETs for retrieving the items in your cart. With API Gateway, you would implement the API like this:

Behind the scenes, you can integrate with nearly any backend to provide the compute logic, data persistence, or business workflows. For example, you can configure an AWS Lambda function to perform the addition of an item to a shopping cart (HTTP POST). You can also use API Gateway to directly interact with AWS services like Amazon DynamoDB. An example is using API Gateway to retrieve items in a cart from DynamoDB (HTTP GET).

RESTful APIs tend to use Path and Query parameters to inject dynamic values into APIs. For example, if you want to retrieve a specific cart with an id of abcd123, you could design the API to accept a query or path parameter that specifies the cartId:

/cart?cartId=abcd123 or /cart/abcd123

Finally, when you need to add functionality to your API, the typical approach would be to add additional resources.  For example, to add a checkout function, you could add a resource called /cart/checkout.

GraphQL APIs

Architecting GraphQL APIs is not structured around resources and HTTP verbs; instead, you define your data types and configure where each operation retrieves its data through resolvers. An operation is either a query or a mutation. Queries simply retrieve data, while mutations are used when you want to modify data. If we use the same example from above, you could define a Cart data type as follows:

type Cart {
  cartId: ID!
  customerId: String
  name: String
  email: String
  items: [String]
}

Next, you configure the fields in the Cart type to map to specific data sources. AppSync is then responsible for executing resolvers to obtain the appropriate information. Your client sends an HTTP POST to the AppSync endpoint with the exact shape of the data it requires. AppSync executes all configured resolvers to obtain the requested data and returns a single response to the client.

Rest API

With GraphQL, the client can change its query to specify the exact data that is needed. The example above shows two queries that ask for different sets of information. The first getCart query asks for all of the customer’s static information (customerId, name, email) and the list of items in the cart. The second query asks only for the customer’s static information. Based on the incoming query, AppSync executes the correct resolver(s) to obtain the data. The client submits the payload via an HTTP POST to the same endpoint in both cases; only the payload of the POST body changes.
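
As an illustrative sketch (the endpoint URL, API key, and exact query field names are assumptions, since the schema’s query type isn’t shown above), a Python client could submit the two getCart variations to the same AppSync endpoint like this:

import requests

# Hypothetical AppSync endpoint and API key.
APPSYNC_URL = "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-exampleapikey"

FULL_CART_QUERY = """
query getCart {
  getCart(cartId: "abcd123") {
    customerId
    name
    email
    items
  }
}
"""

CUSTOMER_ONLY_QUERY = """
query getCart {
  getCart(cartId: "abcd123") {
    customerId
    name
    email
  }
}
"""

def run_query(query: str) -> dict:
    # Same endpoint, same HTTP POST -- only the body changes.
    response = requests.post(
        APPSYNC_URL,
        json={"query": query},
        headers={"x-api-key": API_KEY},
    )
    return response.json()

# Example usage:
# full_cart = run_query(FULL_CART_QUERY)
# customer_only = run_query(CUSTOMER_ONLY_QUERY)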

As we saw above, a REST based implementation would require the API to define multiple HTTP resources and methods or path/query parameters to accomplish this.

AppSync also provides other powerful features that are not possible with REST APIs such as real-time data synchronization and multiple methods of authentication at the field and operation level.

Summary

As you can see, these are two different approaches to architecting your API. In our next few posts, we’ll cover specific features and architecture details you should be aware of when choosing between API Gateway (REST) and AppSync (GraphQL) APIs. In the meantime, you can read more about working with API Gateway and AppSync.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

Ten Things Serverless Architects Should Know

Post Syndicated from Justin Pirtle original https://aws.amazon.com/blogs/architecture/ten-things-serverless-architects-should-know/

Building on the first three parts of the AWS Lambda scaling and best practices series where you learned how to design serverless apps for massive scale, AWS Lambda’s different invocation models, and best practices for developing with AWS Lambda, we now invite you to take your serverless knowledge to the next level by reviewing the following 10 topics to deepen your serverless skills.

1: API and Microservices Design

With the move to microservices-based architectures, decomposing monolithic applications and decoupling dependencies is more important than ever. Learn more about how to design and deploy your microservices with Amazon API Gateway:

Get hands-on experience building out a serverless API with API Gateway, AWS Lambda, and Amazon DynamoDB powering a serverless web application by completing the self-paced Wild Rydes web application workshop.

Figure 1: WildRydes serverless web application workshop

2: Event-driven Architectures and Asynchronous Messaging Patterns

When building event-driven architectures, whether you’re looking for simple queueing and message buffering or a more intricate event-based choreography pattern, it’s valuable to learn about the mechanisms to enable asynchronous messaging and integration. These are enabled primarily through the use of queues or streams as a message buffer and topics for pub/sub messaging. Understand when to use each and the unique advantages and features of all three:

Get hands-on experience building a real-time data processing application using Amazon Kinesis Data Streams and AWS Lambda by completing the self-paced Wild Rydes data processing workshop.

3: Workflow Orchestration in a Distributed, Microservices Environment

In distributed microservices architectures, you must design coordinated transactions in different ways than traditional database-based ACID transactions, which are typically implemented using a monolithic relational database. Instead, you must implement coordinated sequenced invocations across services along with rollback and retry mechanisms. For workloads where significant orchestration logic is required and you want to use more of an orchestrator pattern than the event choreography pattern mentioned above, AWS Step Functions enables you to build complex workflows and distributed transactions through integration with a variety of AWS services, including AWS Lambda. Learn about the options you have to build your business workflows and keep orchestration logic out of your AWS Lambda code:

Get hands-on experience building an image processing workflow using computer vision AI services with Amazon Rekognition and AWS Step Functions to orchestrate all logic and steps with the self-paced Serverless image processing workflow workshop.

Figure 2: Several AWS Lambda functions managed by an AWS Step Functions state machine

4: Lambda Computing Environment and Programming Model

Though AWS Lambda is a service that is quick to get started with, there is value in learning more about the AWS Lambda computing environment and how to take advantage of deeper performance and cost optimization strategies with the AWS Lambda runtime. Take your understanding and skills of AWS Lambda to the next level:

5: Serverless Deployment Automation and CI/CD Patterns

When dealing with a large number of microservices or smaller components, such as AWS Lambda functions all working together as part of a broader application, it’s critical to integrate automation and code management into your application early on to efficiently create, deploy, and version your serverless architectures. AWS offers several first-party deployment tools and frameworks for serverless architectures, including the AWS Serverless Application Model (SAM), the AWS Cloud Development Kit (CDK), AWS Amplify, and AWS Chalice. Additionally, there are several third-party deployment tools and frameworks available, such as the Serverless Framework, Claudia.js, Sparta, or Zappa. You can also build your own homegrown framework. The important thing is to ensure your automation strategy works for your use case and team, and supports your planned data source integrations and development workflow. Learn more about the available options:

Learn how to build a full CI/CD pipeline and other DevOps deployment automation with the following workshops:

6: Serverless Identity Management, Authentication, and Authorization

Modern application developers need to plan for and integrate identity management into their applications while implementing robust authentication and authorization functionality. With Amazon Cognito, you can deploy serverless identity management and secure sign-up and sign-in directly into your applications. Beyond authentication, Amazon API Gateway also allows developers to granularly manage authorization logic at the gateway layer and authorize requests directly, using several types of native authorization, without exposing their backend systems.

Learn more about the options and benefits of each:

Get hands-on experience working with Amazon Cognito, AWS Identity and Access Management (IAM), and Amazon API Gateway with the Serverless Identity Management, Authentication, and Authorization Workshop.

Figure 3: Serverless Identity Management, Authentication, and Authorization Workshop

7: End-to-End Security Techniques

Beyond identity and authentication/authorization, there are many other areas to secure in a serverless application. These include:

  • Input and request validation
  • Dependency and vulnerability management
  • Secure secrets storage and retrieval
  • IAM execution roles and invocation policies
  • Data encryption at-rest/in-transit
  • Metering and throttling access
  • Regulatory compliance concerns

Thankfully, there are AWS offerings and integrations for each of these areas. Learn more about the options and benefits of each:

Get hands-on experience adding end-to-end security with the techniques mentioned above into a serverless application with the Serverless Security Workshop.

8: Application Observability with Comprehensive Logging, Metrics, and Tracing

Before taking your application to production, it’s critical that you ensure your application is fully observable, both at a microservice or component level and overall, through comprehensive logging, metrics at various granularities, and tracing to understand distributed system performance and end user experiences end-to-end. With many different components making up modern architectures, having centralized visibility into all of your key logs, metrics, and end-to-end traces will make it much easier to monitor and understand your end users’ experiences. Learn more about the options for observability of your AWS serverless application:

9. Ensuring Your Application is Well-Architected

Adding onto the considerations mentioned above, we suggest architecting your applications more holistically to the AWS Well-Architected framework. This framework includes the five key pillars: security, reliability, performance efficiency, cost optimization, and operational excellence. Additionally, there is a serverless-specific lens to the Well-Architected framework, which more specifically looks at key serverless scenarios/use cases such as RESTful microservices, Alexa skills, mobile backends, stream processing, and web applications, and how they can implement best practices to be Well-Architected. More information:

10. Continuing your Learning as Serverless Computing Continues to Evolve

As we’ve discussed, there are many opportunities to dive deeper into serverless architectures in a variety of areas. Though the resources shared above should be helpful in familiarizing yourself with key concepts and techniques, there’s nothing better than continued learning from others over time as new advancements come out and patterns evolve.

Finally, we encourage you to check back often as we’ll be continuing further blog post series on serverless architectures, with the next series focusing on API design patterns and best practices.

About the author

Justin Pirtle is a specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. He’s responsible for helping customers design, deploy, and scale serverless applications using services such as AWS Lambda, Amazon API Gateway, Amazon Cognito, and Amazon DynamoDB. He is a regular speaker at AWS conferences, including re:Invent, as well as other AWS events. Justin holds a bachelor’s degree in Management Information Systems from the University of Texas at Austin and a master’s degree in Software Engineering from Seattle University.

Intuit: Serving Millions of Global Customers with Amazon Connect

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/intuit-serving-millions-of-global-customers-with-amazon-connect/

Recently, Bill Schuller, Intuit Contact Center Domain Architect, met with AWS’s Simon Elisha to discuss how Intuit manages its customer contact centers with Amazon Connect.

As a 35-year-old company with an international customer base, Intuit is widely known as the maker of QuickBooks and TurboTax, among other software products. Its 50 million customers can access its global contact centers not just for password resets and feature explanations, but for detailed tax interpretation and advice. As you can imagine, this presents a challenge of scale.

Using Amazon Connect, a self-service, cloud-based contact center service, Intuit has been able to provide a seamless call-in experience to Intuit customers from around the globe. When a customer calls in to Amazon Connect, Intuit is able to do a “data dip” through AWS Lambda out to the company’s CRM system (in this case, Salesforce) in order to get more information about the customer. At this point, Intuit can leverage other services like Amazon Lex for natural language feedback and then get the customer to the right person who can help. When the call is over, instead of having that important recording of the call locked up in a proprietary system, the audio is moved into an S3 bucket, where Intuit can do some post-call processing. The recording can also be sent out to third parties for analysis, or Intuit can use Amazon Transcribe or Amazon Comprehend to get a transcription or sentiment analysis to understand more about what happened during that particular call.

Watch the video below to understand the reasons why Intuit decided on this set of AWS services (hint: it has to do with the ability to experiment with speed and scale but without the cost overhead).

Check out more videos in the This Is My Architecture series.

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

Best Practices for Developing on AWS Lambda

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/best-practices-for-developing-on-aws-lambda/

In our previous post we discussed the various ways you can invoke AWS Lambda functions. In this post, we’ll provide some tips and best practices you can use when building your AWS Lambda functions.

One of the benefits of using Lambda is that you don’t have to worry about server and infrastructure management. This means AWS will handle the heavy lifting needed to execute your Lambda functions. You should take advantage of this architecture with the tips below.

Tip #1: When to VPC-Enable a Lambda Function

Lambda functions always operate from an AWS-owned VPC. By default, your function has full ability to make network requests to any public internet address, including any of the public AWS APIs. For example, your function can interact with Amazon DynamoDB APIs to PutItem or Query for records. You should only enable your functions for VPC access when you need to interact with a private resource located in a private subnet. An RDS instance is a good example.

RDS instance: When to VPC enable a Lambda function

Once your function is VPC-enabled, all network traffic from your function is subject to the routing rules of your VPC/Subnet. If your function needs to interact with a public resource, you will need a route through a NAT gateway in a public subnet.

Tip #2: Deploy Common Code to a Lambda Layer (i.e. the AWS SDK)

If you intend to reuse code in more than one function, consider creating a Layer and deploying it there. A great candidate would be a logging package that your team is required to standardize on. Another great example is the AWS SDK. AWS will include the AWS SDK for Node.js and Python functions (and update the SDK periodically). However, you should bundle your own SDK and pin your functions to a version of the SDK you have tested.

Tip #3: Watch Your Package Size and Dependencies

Lambda functions require you to package all needed dependencies (or attach a Layer); the bigger your deployment package, the slower your function will cold-start. Remove all unnecessary items, such as documentation and unused libraries. If you are using Java functions with the AWS SDK, only bundle the module(s) that you actually need to use, not the entire SDK.

Good:

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>dynamodb</artifactId>
    <version>2.6.0</version>
</dependency>

Bad:

<!-- https://mvnrepository.com/artifact/software.amazon.awssdk/aws-sdk-java -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>aws-sdk-java</artifactId>
    <version>2.6.0</version>
</dependency>

Tip #4: Monitor Your Concurrency (and Set Alarms)

Our first post in this series talked about how concurrency can affect your downstream systems. Since Lambda functions can scale extremely quickly, you should have controls in place to notify you when you have a spike in concurrency. A good idea is to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed your threshold. You should also create an AWS Budget so you can monitor costs on a daily basis. Here is a great example of how to set up automated cost controls.
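
A rough boto3 sketch of such an alarm (the threshold and SNS topic ARN are placeholders, and it uses the account-level AWS/Lambda ConcurrentExecutions metric) might look like this:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder SNS topic that notifies your team.
ALARM_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:lambda-concurrency-alerts"

cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-spike",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Statistic="Maximum",
    Period=60,                      # evaluate one-minute windows
    EvaluationPeriods=1,
    Threshold=150,                  # placeholder threshold for your workload
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALARM_TOPIC_ARN],
    TreatMissingData="notBreaching",
)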

Tip #5: Over-Provision Memory (in some use cases) but Not Function Timeout

Lambda allocates compute power in proportion to the memory you allocate to your function. This means you can over-provision memory to run your functions faster and potentially reduce your costs. You should benchmark your use case to determine where the break-even point is between running faster with more memory and running slower with less memory.

However, we recommend you do not over-provision your function timeout settings. Always understand your code performance and set a function timeout accordingly. Over-provisioning function timeout often results in Lambda functions running longer than expected and unexpected costs.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

Understanding the Different Ways to Invoke Lambda Functions

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/understanding-the-different-ways-to-invoke-lambda-functions/

In our first post, we talked about general design patterns to enable massive scale with serverless applications. In this post, we’ll review the different ways you can invoke Lambda functions and what you should be aware of with each invocation model.

Synchronous Invokes

Synchronous invocations are the most straightforward way to invoke your Lambda functions. In this model, your functions execute immediately when you perform the Lambda Invoke API call. This can be accomplished through a variety of options, including using the CLI or any of the supported SDKs.

Here is an example of a synchronous invoke using the CLI:

aws lambda invoke --function-name MyLambdaFunction --invocation-type RequestResponse --payload "[JSON string here]" response.json

The invocation-type flag specifies a value of “RequestResponse”. This instructs AWS to execute your Lambda function and wait for the function to complete. When you perform a synchronous invoke, you are responsible for checking the response, determining whether there was an error, and deciding whether to retry the invoke.
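
The same call from the AWS SDK for Python (boto3) might look like the sketch below, with a hypothetical function name and payload; note the check of the FunctionError field, since the caller owns error handling and retries for synchronous invokes:

import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="MyLambdaFunction",            # placeholder function name
    InvocationType="RequestResponse",           # synchronous: wait for the result
    Payload=json.dumps({"orderId": "abcd123"}),
)

# For synchronous invokes, the caller checks for errors and decides whether to retry.
if "FunctionError" in response:
    error_details = json.loads(response["Payload"].read())
    print("Function returned an error:", error_details)
else:
    result = json.loads(response["Payload"].read())
    print("Success:", result)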

Many AWS services can emit events that trigger Lambda functions. Here is a list of services that invoke Lambda functions synchronously:

Asynchronous Invokes

Here is an example of an asynchronous invoke using the CLI:

aws lambda invoke --function-name MyLambdaFunction --invocation-type Event --payload "[JSON string here]" response.json

Notice that the invocation-type flag specifies “Event.” If your function returns an error, AWS will automatically retry the invoke twice, for a total of three invocations.

Here is a list of services that invoke Lambda functions asynchronously:

Asynchronous invokes place your invoke request in the Lambda service queue, and we process the requests as they arrive. You should use AWS X-Ray to review how long your request spent in the service queue by checking the “dwell time” segment.

Poll based Invokes

This invocation model is designed to allow you to integrate with AWS stream- and queue-based services with no code or server management. Lambda will poll the following services on your behalf, retrieve records, and invoke your functions. The following are supported services:

AWS will manage the poller on your behalf and perform synchronous invokes of your function with this type of integration. The retry behavior for this model is based on data expiration in the data source. For example, Kinesis Data Streams stores records for 24 hours by default (up to 168 hours). The specific details of each integration are linked above.

Conclusion

In our next post, we’ll provide some tips and best practices for developing Lambda functions. Happy coding!

 

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

How to Design Your Serverless Apps for Massive Scale

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/how-to-design-your-serverless-apps-for-massive-scale/

Serverless is one of the hottest design patterns in the cloud today, allowing you to focus on building and innovating, rather than worrying about the heavy lifting of server and OS operations. In this series of posts, we’ll discuss topics that you should consider when designing your serverless architectures. First, we’ll look at architectural patterns designed to achieve massive scale with serverless.

Scaling Considerations

In general, developers in a “serverful” world need to be worried about how many total requests can be served throughout the day, week, or month, and how quickly their system can scale. As you move into the serverless world, the most important question you should understand becomes: “What is the concurrency that your system is designed to handle?”

The AWS Serverless platform allows you to scale very quickly in response to demand. Below is an example of a serverless design that is fully synchronous throughout the application. During periods of extremely high demand, Amazon API Gateway and AWS Lambda will scale in response to your incoming load. This design places extremely high load on your backend relational database because Lambda can easily scale from thousands to tens of thousands of concurrent requests. In most cases, your relational databases are not designed to accept the same number of concurrent connections.

Serverless at scale-1

This design risks bottlenecks at your relational database and may cause service outages. This design also risks data loss due to throttling or database connection exhaustion.

Cloud Native Design

Instead, you should consider decoupling your architecture and moving to an asynchronous model. In this architecture, you use an intermediary service to buffer incoming requests, such as Amazon Kinesis or Amazon Simple Queue Service (SQS). You can configure Kinesis or SQS as out-of-the-box event sources for Lambda. In the design below, AWS will automatically poll your Kinesis stream or SQS resource for new records and deliver them to your Lambda functions. You can control the batch size per delivery and further place throttles on a per-function basis.

Serverless at scale - 2

This design allows you to accept an extremely high volume of requests, store the requests in a durable data store, and process them at the speed your system can handle.
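
As a minimal sketch of wiring up this pattern with boto3 (the queue ARN and function name are placeholders), you can create the event source mapping that tells Lambda to poll SQS on your behalf:

import boto3

lambda_client = boto3.client("lambda")

# Placeholder values for the buffering queue and the processing function.
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:orders-queue"
FUNCTION_NAME = "process-orders"

# Lambda polls the queue on your behalf and invokes the function with batches of records.
lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName=FUNCTION_NAME,
    BatchSize=10,       # up to 10 messages per invocation for SQS
    Enabled=True,
)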

Conclusion

Serverless computing allows you to scale much more quickly than with server-based applications, but that means application architects should always consider the effects of scaling on downstream services. Always keep cost, speed, and reliability in mind when you’re building your serverless applications.

Our next post in this series will discuss the different ways to invoke your Lambda functions and how to design your applications appropriately.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

New! Set permission guardrails confidently by using IAM access advisor to analyze service-last-accessed information for accounts in your AWS organization

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/set-permission-guardrails-using-iam-access-advisor-analyze-service-last-accessed-information-aws-organization/

You can use AWS Organizations to centrally govern and manage multiple accounts as you scale your AWS workloads. With AWS Organizations, central security administrators can use service control policies (SCPs) to establish permission guardrails that all IAM users and roles in the organization’s accounts adhere to. When teams and projects are just getting started, administrators may allow access to a broader range of AWS services to inspire innovation and agility. However, as developers and applications settle into common access patterns, administrators need to set permission guardrails to remove permissions for services that have not or should not be accessed by their accounts. Whether you’re just getting started with SCPs or have existing SCPs, you can now use AWS Identity and Access Management (IAM) access advisor to help you restrict permissions confidently.

IAM access advisor uses data analysis to help you set permission guardrails confidently by providing service-last-accessed information for the accounts in your organization. By analyzing last-accessed information, you can determine the services not used by IAM users and roles. You can implement permission guardrails using SCPs that restrict access to those services. For example, you can identify services not accessed in an organizational unit (OU) for the last 90 days, create an SCP that denies access to these services, and attach it to the OU to restrict access for all IAM users and roles across the accounts in the OU. You can view service-last-accessed information for your accounts, OUs, and your organization using the IAM console in the account you used to create your organization. You can access this information programmatically using the IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.
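
A short boto3 sketch of pulling that report programmatically (the organizations entity path is a placeholder, and the polling loop is simplified) could look like this:

import time
import boto3

iam = boto3.client("iam")

# Placeholder entity path for an OU: organization ID / root ID / OU ID.
ENTITY_PATH = "o-exampleorgid/r-exampleroot/ou-exampleouid"

# Kick off report generation for every account under the OU.
job_id = iam.generate_organizations_access_report(EntityPath=ENTITY_PATH)["JobId"]

# Simplified polling loop; production code should bound retries and handle failures.
while True:
    report = iam.get_organizations_access_report(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(5)

# List services that were never authenticated against in the tracking period.
for detail in report.get("AccessDetails", []):
    if "LastAuthenticatedTime" not in detail:
        print(detail["ServiceNamespace"], "was not accessed")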

In this post, I first review the service-last-accessed information provided by IAM access advisor using the IAM console. Next, I walk through an example to demonstrate how you can use this information to remove permissions for services not accessed by IAM users and roles within your production OU by creating an SCP.

Use IAM access advisor to view service-last-accessed information using the AWS management console

Access advisor provides an access report that displays a list of services and the last-accessed timestamps for when an IAM principal accessed each service. To view the access report in the console, sign in to the IAM console using the account you used to create your organization. Additionally, you need to enable SCPs on your organization root to view the access report. You can view the service-last-accessed information in two ways. First, you can use the Organization activity view to review the service-last-accessed information for an organizational entity such as an account or OU. Second, you can use the SCP view to review the service-last-accessed information for services allowed by existing SCPs attached to your organizational entities.

The Organization activity view lists your OUs and accounts. You can select an OU or account to view the services that the entity is allowed to access and the service-last-accessed information for those services. This tells you which services have not been accessed in an organizational entity. Using this information, you can remove permissions for these services by creating a new SCP and attaching it to the organizational entity, or by updating an existing SCP attached to the entity.

The SCP view lists all the SCPs in your organization. You can select an SCP to view the services allowed by the SCP and the service-last-accessed information for those services. The service-last-accessed information is the last-accessed timestamp across all the organizational entities that the SCP is attached to. This tells you which services are allowed by the SCP but have not been accessed. Using this information, you can refine your existing permission guardrails to remove permissions for services not accessed under your existing SCPs.

Figure 1 shows an example of the access report for an OU. You can see the service-last-accessed information for all services that IAM users and roles can access in all the accounts in ProductionOU. You can see that services such as AWS Ground Station and Amazon GameLift have not been used in the last year. You can also see that Amazon DynamoDB was last accessed in account Application1 10 days ago.
 

Figure 1: An example access report for an OU

Now that I’ve described how to view service-last-accessed information, I will walk through an example.

Example: Restrict access to services not accessed in production by creating an SCP

For this example, assume ExampleCorp uses AWS Organizations to organize their development, test, and production environments into organizational units (OUs). Alice is a central security administrator responsible for managing the accounts in the production OU for ExampleCorp. She wants to ensure that her production OU called ProductionOU has permissions to only the services that are required to run existing workloads. Currently, Alice hasn’t set any permission guardrails on her production OU. I will show you how you can help Alice review the service-last-accessed information for her production OU and set a permission guardrail confidently using an SCP to restrict access to services not accessed by ExampleCorp developers and applications in production.

Prerequisites

  1. Ensure that the SCP policy type is enabled for the organization. If you haven’t enabled SCPs, you can enable it for your organization root by following the steps mentioned in Enabling and Disabling a Policy Type on a Root.
  2. Ensure that your IAM roles or users have appropriate permissions to view the access report; you can do so by attaching the IAMAccessAdvisorReadOnly managed policy.

How to review service-last-accessed information for ProductionOU in the IAM console

In this section, you’ll review the service-last-accessed information using IAM access advisor to determine the services that have not been accessed across all the accounts in ProductionOU.

  1. Start by signing in to the IAM console in the account that you used to create the organization.
  2. In the left navigation pane, under the AWS Organizations section, select the Organization activity view.

    Note: Enabling the SCP policy type does not set any permission guardrails for your organization unless you start attaching SCPs to accounts and OUs in your organization.

  3. In the Organization activity view, select ProductionOU from the organization structure displayed on the console so you can review the service-last-accessed information across all accounts in that OU.
     
    Figure 2: Select 'ProductionOU' from the organizational structure

  4. Selecting ProductionOU opens the Details and activity tab, which displays the access report for this OU. In this example, I have no permission guardrail set on the ProductionOU, so the default FullAWSAccess SCP is attached, allowing the ProductionOU to have access to all services. The access report displays all AWS services along with their last-accessed timestamps across accounts in the OU.
     
    Figure 3: The service access report

  5. Review the access report for ProductionOU to determine services that have not been accessed across accounts in this OU. In this example, there are multiple accounts in ProductionOU. Based on the report, you can identify that AWS Ground Station and Amazon GameLift have not been used in the last 365 days. Using this information, you can confidently set a permission guardrail by creating and attaching a new SCP that removes permissions for these services from ProductionOU. Based on your preference, you can use a different time period, such as 90 days or 6 months, to decide whether a service counts as unused.
     
    Figure 4: Amazon GameLift and AWS Ground Station are not accessed

Create and attach a new SCP to ProductionOU in the AWS Organizations console

In this section, you’ll use the access insights you gained from using IAM access advisor to create and attach a new SCP to ProductionOU that removes permissions to Ground Station and GameLift.

  1. In the AWS Organizations console, select the Policies tab, and then select Create policy.
  2. In the Create new policy window, give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: ProductionGuardrail
    • Description: Restricts permissions to services not accessed in ProductionOU.
  3. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.
     
    Figure 5: SCP editor tool

  4. Next, add the services you want to restrict. Using the left panel, select the Ground Station and GameLift services. Denying access to services with SCPs is a powerful action, so take care if those services might still be in use. From the service-last-accessed information I reviewed in step 5 of the previous section, I know these services haven’t been used for more than 365 days, so it is safe to remove access to them. In this example, I’m not adding any resource or condition to my policy statement.
     
    Figure 6: Add the services you want to restrict

  5. Next, use the Resource policy element, which allows you to provide specific resources. In this example, I select the resource type as All Resources.
     
    Figure 7: Select resource type as “All Resources”

  6. Select the Create Policy button to create your policy. You can see the new policy in the Policies tab.
     
    Figure 8: The new policy on the “Policies” tab

  7. Finally, attach the policy to ProductionOU where you want to apply the permission guardrail.

Alice can now review the service-last-accessed information for ProductionOU and set permission guardrails for her production accounts, ensuring that the guardrails provide permissions to only the services that are required to run existing workloads.
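
If you prefer to script these last two sections rather than use the console, the following boto3 sketch creates and attaches an equivalent SCP. The OU ID is a placeholder, and the two service prefixes correspond to the unused services identified earlier; adjust both to match your own findings.

import json
import boto3

org = boto3.client("organizations")

# Deny the services that access advisor showed as unused in ProductionOU.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["gamelift:*", "groundstation:*"],
            "Resource": "*"
        }
    ]
}

policy = org.create_policy(
    Name="ProductionGuardrail",
    Description="Restricts permissions to services not accessed in ProductionOU.",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document)
)

# Attach the SCP to the OU; the OU ID below is a placeholder.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-productionou"
)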

Summary

In this post, I reviewed how access advisor provides service-last-accessed information for AWS organizations. Then, I demonstrated how you can use the Organization activity view to review service-last-accessed information and set permission guardrails to restrict access only to the services that are required to run existing workloads. You can also retrieve service-last-accessed information programmatically. To learn more, visit the documentation for retrieving service-last-accessed information using APIs.

If you have comments about using IAM access advisor for your organization, submit them in the Comments section below. For questions related to reviewing the service last accessed information through the console or programmatically, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

How to host and manage an entire private certificate infrastructure in AWS

Post Syndicated from Josh Rosenthol original https://aws.amazon.com/blogs/security/how-to-host-and-manage-an-entire-private-certificate-infrastructure-in-aws/

AWS Certificate Manager (ACM) Private Certificate Authority (CA) now offers the option for managing online root CAs and a full online PKI hierarchy. You can now host and manage your organization’s entire private certificate infrastructure in AWS. Supporting a full hierarchy expands AWS Certificate Manager (ACM) Private Certificate Authority capabilities.

CA administrators can use ACM Private CA to create a complete CA hierarchy, including root and subordinate CAs, with no need for external CAs. Customers can create secure and highly available CAs in any one of the AWS Regions in which ACM Private CA is available, without building and maintaining their own on-premises CA infrastructure. ACM Private CA provides essential security for operating a CA in accordance with your internal compliance rules and security best practices. ACM Private CA is secured with AWS-managed hardware security modules (HSMs), removing the operational and cost burden from customers.

An overview of CA hierarchy

Certificates are used to establish identity and secure connections. A resource presents a certificate to a server to establish its identity. If the certificate is valid, and a chain can be constructed from the certificate to a trusted root CA, the server can positively identify and trust the resource.

A CA hierarchy provides strong security and restrictive access controls for the most-trusted root CA at the top of the trust chain, while allowing more permissive access and bulk certificate issuance for subordinate CAs lower in the chain.

The root CA is a cryptographic building block (root of trust) upon which certificates can be issued. It consists of a private key for signing (issuing) certificates and a root certificate that identifies the root CA and binds the private key to the name of the CA. The root certificate is distributed to the trust stores of each entity in an environment. When resources attempt to connect with one another, they check the certificates that each entity presents. If the certificates are valid and a chain can be constructed from the certificate to a root certificate installed in the trust store, a “handshake” occurs between resources that cryptographically proves the identity of each entity to the other. This creates an encrypted communication channel (TLS/SSL) between them.

How to configure a CA hierarchy with ACM Private CA

You can use root CAs to create a CA hierarchy without the need for an external root CA, and start issuing certificates to identify resources within your organizations. You can create root and subordinate CAs in nearly any configuration you want, including defining a CA structure to fit your needs or replicating an existing CA structure.

To get started, you can use the ACM Private CA console, APIs, or CLI to create a root and subordinate CA and issue certificates from the subordinate CA.
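
For example, here is a minimal boto3 sketch that creates a root CA; the key algorithm, signing algorithm, and subject values are illustrative placeholders. After this call, you would still retrieve the CSR, issue the root CA certificate, and import it into the CA, steps that the console wizard performs for you.

import boto3

acmpca = boto3.client("acm-pca")

# Create a root CA; the configuration values shown here are illustrative choices.
response = acmpca.create_certificate_authority(
    CertificateAuthorityConfiguration={
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {
            "Organization": "Example Corp",
            "CommonName": "Example Corp Root CA"
        }
    },
    CertificateAuthorityType="ROOT",
    IdempotencyToken="example-root-ca-1"
)
print(response["CertificateAuthorityArn"])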
 

Figure 1: Issue certificates after creating a root and subordinate CA

You can create a two-level CA hierarchy in less than 10 minutes with the ACM Private CA console wizard, which walks you through each step of creating a root or subordinate CA. When you create a subordinate CA, the wizard prompts you to chain the subordinate to a parent CA.
 

Figure 2: The “Install subordinate CA certificate” page

After creating a new root CA, you need to distribute the new root to the trust stores in your servers’ operating systems and browsers. If you want a simple, one-level CA hierarchy for development and testing, you can create a root certificate authority and start issuing private certificates directly from the root CA.

Note: The trade-off of this approach is that you can’t revoke the root CA certificate because the root CA certificate is installed in your trust stores. To effectively “untrust” the root CA in this scenario, you would need to replace the root CA in your trust stores with a new root CA.

Offline versus online root CAs

Some organizations, and all public CAs, keep their root CAs offline (that is, disconnected from the network) in a physical vault. In contrast, most organizations have root CAs that are connected to the network only when they’re used to sign the certificates of CAs lower in the chain. For example, customers might create a root CA with a 20-year lifetime, and disable it under normal circumstances to prevent it from being used except when enabled by a privileged administrator when it’s necessary to sign CA certificates for a child CA. Because using the root CA to issue a certificate is a rare and carefully controlled operation, customers monitor logs and audit reports, and generate alarms notifying them when their root CA is used to issue a certificate. Subordinate issuing CAs are the lowest in the hierarchy. They are typically used for bulk issuance of certificates that identify devices and resources. Subordinate issuing CAs typically have shorter lifetimes (1-2 years) and fewer policy controls and monitors.

With ACM Private CA, you can create a trusted root CA with a lifetime of 10 or more years. All CA private keys are protected by FIPS 140-2 level 3 HSMs. You can verify the CA is used only for authorized purposes by reviewing AWS CloudTrail logs and audit reports. You can further protect against mis-issuance by configuring AWS Identity and Access Management (IAM) permissions that limit access to your CA. With an ACM Private CA, you can revoke certificates issued from your CA and use the certificate revocation list (CRL) generated by ACM Private CA to provide revocation information to clients. This simplifies configuration and deployment.

Customer use cases for root CA hierarchy

There are three common use cases for root CA hierarchy.

The most common use case is customers who are advanced PKI users and already have an offline root CA protected by an HSM. However, when it comes to development and network staging, they don’t want to use the same root CA and certificate. The new root CA hierarchy feature allows them to easily stand up a PKI for their test environment that mimics production, but uses a separate root of trust.

The second use case is customers who are interested in using a private CA but don’t have strong knowledge of PKI, nor have they invested in HSMs. These customers have gotten by generating a root CA using OpenSSL. With the offering of root CA hierarchy, they’re now able to stand up a root CA within ACM Private CA that is protected by an HSM and restricted by IAM access policy. This increases the security of their hierarchy and simplifies their deployment.

The third use case is customers who are evaluating an internal PKI and also looking at managing an offline HSM. These customers recognize the significant process, management, cost, and training investments to stand up the full infrastructure required. Customers can remove these costs by managing their organization’s entire private certificate infrastructure in AWS.

How to get started

With ACM Private CA root CA hierarchy feature, you can create a PKI hierarchy and begin issuing private certificates for identity and securing TLS communication. To get started, open the ACM Private CA console. To learn more, read getting started with AWS Certificate Manager and getting started in the ACM Private CA user guide.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Josh Rosenthol

Josh is a Product Manager who helps solve customer problems with public and private certificate and CAs from AWS. He enjoys listening to customers describe their use cases and translate them into improvements to AWS Certificate Manager and ACM Private CA.

Author

Todd Cignetti

Todd Cignetti is a Principal Product Manager at Amazon Web Services. He is responsible for AWS Certificate Manager (ACM) and ACM Private CA. He focuses on helping AWS customers identify and secure their resources and endpoints with public and private certificates.

How to prompt users to reset their AWS Managed Microsoft AD passwords proactively

Post Syndicated from Tekena Orugbani original https://aws.amazon.com/blogs/security/how-to-prompt-users-to-reset-their-aws-managed-microsoft-ad-passwords-proactively/

If you’re an AWS Directory Service administrator, you can reset your directory users’ passwords from the AWS console or the CLI when their passwords expire. However, you can improve your efficiency by reducing the number of requests for password resets. You can also help improve the security of your organization by having your users proactively reset their directory passwords before they expire. In this post, I describe the steps you can take to set up a solution to send regular reminders to your AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) users to prompt them to change their password before it expires. This will help prevent users from being locked out when their passwords expire and also reduce the number of reset requests sent to administrators.

Solution Overview

When users’ passwords expire, they typically contact their directory service administrator to help them reset their password. For security reasons, they then need to reset their password again on their computer so that the administrator has no knowledge of the new password. This process is time-consuming and impacts productivity. In this post, I present a solution to remind users automatically to reset AWS Managed Microsoft AD passwords. The following diagram and description explains how the solution works.
 

Figure 1: Solution architecture

  1. A script running on an AWS Managed Microsoft AD domain-joined Amazon Elastic Compute Cloud (Amazon EC2) instance (Notification Server) searches the AWS Managed Microsoft AD for all enabled user accounts and retrieves their names, email addresses, and password expiry dates.
  2. Using the permissions of the IAM role attached to the Notification Server, the script obtains the SES SMTP credentials stored in AWS Secrets Manager.
  3. With the SMTP credentials obtained in Step 2, the script then securely connects to Amazon Simple Email Service (Amazon SES).
  4. Based on your preferences, Amazon SES sends domain password expiry notifications to the users’ mailboxes.

A separate process for updating the SES credentials stored in AWS Secrets Manager occurs as follows:

  1. A CloudWatch rule triggers a Lambda function.
  2. The Lambda function generates new SES SMTP credentials from the SES IAM Username.
  3. The Lambda function then updates AWS Secrets Manager with the new SES credentials.
  4. The Lambda function then deletes the previous IAM access key.

Prerequisites

The instructions in this post assume that you’re familiar with how to create Amazon EC2 for Windows Server instances, use Remote Desktop Protocol (RDP) to log in to the instances, and have completed the following tasks:

  1. Create an AWS Microsoft AD directory.
  2. Join an Amazon EC2 for Windows Server instance to the AWS Microsoft AD domain to use as your Notification Server.
  3. Sign up for Amazon Simple Email Service (Amazon SES).
  4. Remove Amazon EC2 throttling on port 25 for your EC2 instance.
  5. Remove your Amazon SES account from the Amazon SES sandbox so you can also send email to unverified recipients.

Note: You can use your AWS Managed Microsoft AD management instance as the Notification Server. For the steps below, use any account that is a member of the AWS delegated Administrators’ group.

Summary of the steps

  1. Verify an Amazon SES email address.
  2. Create Amazon SES SMTP credentials.
  3. Store the Amazon SES SMTP credentials in AWS Secrets Manager.
  4. Create an IAM role with read permissions to the secret in AWS Secrets Manager.
  5. Set up and test the notification script.
  6. Set up Windows Task Scheduler.
  7. Configure automatic rotation of the SES Credentials stored in Secrets Manager.

STEP 1: Verify an Amazon SES email address

To prevent unauthorized use, Amazon SES requires that you verify the email address that you use as a “From,” “Source,” “Sender,” or “Return-Path”.

To verify the email address you will use as the sending address, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation pane, under Identity Management, select Email Addresses.
  3. Select Verify a New Email Address, and then enter the email address.
  4. Select Verify This Email Address.

An email will be sent to the specified email address with a link to verify the email address. Once you verify the email, you’ll see the Verification Status as verified in the SES console.
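
If you prefer to trigger the verification email programmatically, a minimal boto3 sketch follows; the Region and email address are placeholders.

import boto3

ses = boto3.client("ses", region_name="eu-west-1")

# Sends the same verification email that the console sends (placeholder address).
ses.verify_email_identity(EmailAddress="notifications@example.com")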

In the image below, I have four verified email addresses:
 

Figure 2: Verified email addresses

STEP 2: Create Amazon SES SMTP credentials

You must create an Amazon SES SMTP user name and password to access the Amazon SES SMTP interface and send email using the service. To do this, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation bar, select SMTP Settings.
  3. In the content pane, make a note of the Server Name as you will use this when sending the email in Step 5. Select Create My SMTP Credentials.
     
    Figure 3: Make a note of the SES SMTP Server Name

  4. Specify a value for the IAM User Name field. Make a note of this IAM User Name as you will need it in Step 7 later. In this post, I use the placeholder ses-smtp-user-eu-west-1 as the user name (as shown below):
     
    Figure 4: Make a note of SES IAM User Name

  5. Select Create.

Make a note of the SMTP Username and SMTP Password you created because you’ll use these in later steps, as shown below in my example.
 

Figure 5: Make a note of the SES SMTP Username and SMTP Password

STEP 3: Store the Amazon SES SMTP credentials in AWS Secrets Manager

In this step, use AWS Secrets Manager to store the Amazon SES SMTP credentials created in Step 2. You will reference this credential when you execute the script in the Notification Server.

Complete the following steps to store the Amazon SES SMTP credentials in AWS Secrets Manager:

  1. Sign in to the AWS Secrets Manager Console.
  2. Select Store a new secret, and then select Other types of secrets.
  3. Under Secret Key/value, enter the Amazon SES SMTP Username in the left box and the Amazon SES SMTP Password in the right box, and then select Next.
     
    Figure 6: Enter the Amazon SES SMTP user name and password

  4. In the next screen, enter the string AWS-SES as the name of the secret. Enter an optional description for the secret, add an optional tag, and then select Next.

    Note: I recommend using AWS-SES as the name of your secret. If you choose to use some other name, you will have to update the PowerShell script in Step 5. I also recommend creating the secret in the same Region as the Notification Server. If you create your secret in a different Region, you will also have to update the PowerShell script in Step 5.

     

    Figure 7: Enter "AWS-SES" as the secret name

  5. On the next screen, leave the default setting as Disable automatic rotation and select Next. You will come back to this in Step 7, where you will use a Lambda function to rotate the secret at specified intervals.
  6. To store the secret, in the last screen, select Store. Now select the secret and make a note of the ARN of the secret as shown in Figure 8.
     
    Figure 8: Make a note of the Secret ARN
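
If you would rather create this secret programmatically, here is a minimal boto3 sketch. The SMTP user name and password are placeholders; as noted above, keep the secret name AWS-SES and create it in the same Region as the Notification Server unless you plan to update the script.

import json
import boto3

sm = boto3.client("secretsmanager", region_name="eu-west-1")

# The SMTP user name is the key and the SMTP password is the value,
# mirroring the key/value pair entered in the console (placeholder values).
response = sm.create_secret(
    Name="AWS-SES",
    Description="Amazon SES SMTP credentials for password expiry notifications",
    SecretString=json.dumps({"AKIAIOSFODNN7EXAMPLE": "example-smtp-password"})
)
print(response["ARN"])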

Step 4: Create an IAM role with permissions to read the secret

Create an IAM role that grants permissions to read the secret created in Step 3. Then, attach this role to the Notification Server to enable your script to read this secret. Complete the following steps:

  1. Log in to the IAM Console.
  2. In the navigation bar, select Policies.
  3. In the content pane, select Create Policy, and then select JSON.
  4. Replace the content with the following snippet while specifying the ARN of the secret you created earlier in step 3:
    
        {
            "Version": "2012-10-17",
            "Statement": {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "<arn-of-the-secret-created-in-step-3>"
            }
        }                
        

    Here is how it looks in my example after I replace with the ARN of my Secrets Manager secret:
     

    Figure 9: Example policy

  5. Select Review policy.
  6. On the next screen, specify a name for the policy. In my example, I have specified Access-Ses-Secret as the name of the policy. Also specify a description for the policy, and then select Create policy.
  7. In the navigation pane, select Roles.
  8. In the content pane, select Create role.
  9. On the next page, select EC2, and then select Next: Permissions.
  10. Select the policy you created, and then select Next: Tags.
  11. Select Next: Review, and provide a name for the role. In my example, I have specified SecretsManagerReadAccessRole as the name. Select Create role.

Now, complete the following steps to attach the role to the Notification Server:

  1. From the Amazon EC2 Console, select the Notification Server instance.
  2. Select Actions, select Instance Settings, and then select Attach/Replace IAM Role.
     
    Figure 10: Select "Attach/Replace IAM Role"

  3. On the Attach/Replace IAM Role page, choose the role to attach from the drop-down list. For this post, I choose SecretsManagerReadAccessRole and select Apply.

    Here is how it looks in my example:
     

    Figure 11: Example "Attach/Replace IAM Role"

STEP 5: Set up and test the notification script

In this section, you’re going to test the script by sending a sample notification email to an end user to remind the user to change their password. To test the script, log in to your Notification Server using your AWS Managed Microsoft AD default Admin account. Then, complete the following steps:

  1. Install the PowerShell Module for Active Directory by opening PowerShell as Administrator and running the following command:

    Install-WindowsFeature -Name RSAT-AD-PowerShell

  2. Download the script to the Notification Server. In my example, I downloaded the script and stored it in the location

    c:\scripts\PasswordExpiryNotify.ps1

  3. Create a new user in Active Directory and ensure you enter a valid email address for the new user.

    Note: Make sure to clear the User must change password at next logon check box when creating the user; otherwise, you will get an invalid output from the command in the next step.

    For this example, I created a test user named RandomUser in Active Directory.

  4. In the PowerShell window, execute the following command to determine the number of days remaining before a user’s password expires. In this example, I check the RandomUser account:

    (New-TimeSpan -Start ((Get-Date).ToLongDateString()) -End ((Get-ADUser -Identity 'RandomUser' -Properties "msDS-UserPasswordExpiryTimeComputed" | Select @{Name="exp";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed").ToLongDateString()}}) | Select -ExpandProperty exp)).Days

    In my example, I get “15” as the output.

  5. To test the script, navigate to the location of the script on your Notification Server and execute the following:

    .\PasswordExpiryNotify.ps1 -smtpServer "<SES-SMTP-SERVER-NAME-NOTED-IN-STEP 2>" -from "<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>" -NotifyDays <NUMBER OF DAYS>

    In this example, I navigate to c:\scripts\ and execute:

    .\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 15

A new email will be sent to the user’s mailbox. Verify the user has received the email.

Note: You can also send multiple email reminders to users. For example, if I want to notify users on three occasions (the first notification 15 days before password expiration, then 7 days before, and once more when there is only 1 day left), I would execute the following:

.\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 1,7,15

Step 6: Set up a Windows Task Scheduler

Now that you have tested the script and confirmed that the solution is working as expected, you can set up a Windows Scheduled Task to execute the script daily. To do this:

  1. Open Task Scheduler.
  2. Right-click Task Scheduler Library, and then select Create Task.
  3. Specify a name for the task.
  4. On the Triggers tab, select New.
  5. Select Daily, and then select OK.
  6. On the Actions tab, select New.
  7. Inside Program/Script, type PowerShell.exe
  8. In the Add arguments (optional) box, type the following command, including the full path to the script.

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer '<SES-SMTP-SERVER-NAME-NOTED-IN-STEP 2>' -from '<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>' -NotifyDays <DAY,DAY,DAY>"

    In my example, I type the following:

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer 'email-smtp.eu-west-1.amazonaws.com' -from 'IT Servicedesk [email protected]' -NotifyDays 1,7,15"

  9. Select OK twice, and then enter your password when prompted to complete the steps.

The script will now run daily at the specified time and will send password expiration email notifications to your AWS Managed Microsoft AD users. In my example, a password expiration reminder email is sent to my AWS Managed Microsoft AD users 15 days before expiration, 7 days before expiration, and then 1 day before expiration.

Here is a sample email:
 

Figure 12: Sample password expiration email

Note: You can edit the script to change the notification message to suit your requirements.

Step 7: Configure automatic update of the SES credentials

In this final section, you’re going to set up the configuration to automatically update the secret (that is, the SES credentials stored in AWS Secrets Manager) at regular intervals. To achieve this, you will use an AWS Lambda function that will do the following:

  1. Create a new access key using the IAM user you used to create the SES SMTP Credentials in Step 2 (ses-smtp-user-eu-west-1 in my example).
  2. Generate a new SES SMTP User password from the created IAM secret access key.
  3. Update the SES credentials stored in AWS Secrets Manager.
  4. Delete the old IAM access key.

Complete the following steps to enable automatic update of the SES credentials:

First, you will create the IAM policy that you will attach to a role assumed by the Lambda function. This policy provides the permissions to create new access keys for the SES IAM user and to update the SES credentials stored in AWS Secrets Manager.

  1. Log in to the IAM Console, and in the navigation bar, select Policies.
  2. In the content pane, select Create Policy, and then select JSON.
  3. Replace the content with the following script while specifying the ARN of the IAM user you used to create the SES SMTP credentials in Step 2 and the ARN of the secret stored in Secrets Manager that you noted in Step 3.
    
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "iam:*AccessKey*",
                    "Resource": "<arn-of-iam-user-created-in-step-2>"
                },
                {
                    "Effect": "Allow",
                    "Action": "secretsmanager:UpdateSecret",
                    "Resource": "<arn-of-secret-stored-in-secret-manager>"
                }
                ]
        }              
        

    Here is the JSON for the policy in my example:
     

    Figure 13: Example policy

  4. Select Review Policy, and then specify a name and a description for the policy. In my example, I have specified the name of the policy as iam-secretsmanager-access-for-lambda.

    Here is how it looks in my example:
     

    Figure 14: Specify a name and description for the policy

  5. Select Create Policy

Now, create an IAM role and attach this policy.

  1. In the navigation bar, select Roles and select Create Role.
  2. Under the Choose the service that will use this role, select Lambda, and then select Next: Permissions.
  3. On the next page, select the policy you just created and select Next: Tags. Add an optional tag and select Next: Review.
  4. Specify a name for the role and description, and then select Create role. In my example, I have named the role: LambdaRoleRotatateSesSecret.

Now, you will create a Lambda function that will assume the created role:

  1. Log on to the AWS Lambda console and select Create function.
  2. Specify a name for the function, and then, under Runtime, select Python 3.7.
  3. Under Execution role, select Use an existing role, and then select the role you created earlier.

    Here are the settings I used in my example:
     

    Figure 15: Settings on the "Create function" page

  4. Select Create function, copy the following Python code, and then paste it in the Function Code section.
    
        import boto3
        import os      #required to fetch environment variables
        import hmac    #required to compute the HMAC key
        import hashlib #required to create a SHA256 hash
        import base64  #required to encode the computed key
        import sys     #required for system functions
        
        iam = boto3.client('iam')
        sm = boto3.client('secretsmanager')
        
        SES_IAM_USERNAME = os.environ['SES_IAM_USERNAME']
        SECRET_ID = os.environ['SECRET_ID']
        
        def lambda_handler(event, context):
            print("Getting current credentials...")
            old_key = iam.list_access_keys(UserName=SES_IAM_USERNAME)['AccessKeyMetadata'][0]['AccessKeyId']
        
            print("Creating new credentials...")
            new_key = iam.create_access_key(UserName=SES_IAM_USERNAME)
            print("New credentials created...")
            
            smtp_username = '%s' % (new_key['AccessKey']['AccessKeyId'])
            iam_sec_access_key = '%s' % (new_key['AccessKey']['SecretAccessKey'])
            
             
            # These variables are used when calculating the SMTP password.
            message = 'SendRawEmail'
            version = '\x02'
            
            # Compute an HMAC-SHA256 key from the AWS secret access key.
            signatureInBytes = hmac.new(iam_sec_access_key.encode('utf-8'),message.encode('utf-8'),hashlib.sha256).digest()
            # Prepend the version number to the signature.
            signatureAndVersion = version.encode('utf-8') + signatureInBytes
            # Base64-encode the string that contains the version number and signature.
            smtpPassword = base64.b64encode(signatureAndVersion)
            # Decode the string and print it to the console.
            ses_smtp_pass = smtpPassword.decode('utf-8')
            secret_string = '{"%s": "%s"}' % (new_key['AccessKey']['AccessKeyId'], ses_smtp_pass)
            print("Updating credentials in SecretsManager...")
            sm_res = sm.update_secret(
                SecretId=SECRET_ID,
                SecretString=secret_string
                )
            print(sm_res)
            
            print("Deleting old key")
            del_res = iam.delete_access_key(
                UserName=SES_IAM_USERNAME,
                AccessKeyId=old_key
                )
            print(del_res)
        

    Here is what it will look like:
     

    Figure 16: The Python code pasted in the "Function Code" section

  5. In the Environment variables section, specify the two environment variables required by the Lambda Python code as follows:
    
                SECRET_ID: AWS-SES
                SES_IAM_USERNAME: <SES-IAM-USERNAME-NOTED-IN-STEP 2>  
        

    Here is how my environment variables look:
     

    Figure 17: Environment variables for the Lambda function

  6. Select Save.

    You have now created a Lambda function that can update the SES credentials stored in AWS Secrets Manager.

    You will now set up CloudWatch to trigger the Lambda function at scheduled intervals.

  7. Open the Amazon CloudWatch Console.
  8. In the navigation pane, select Rules and, in the content pane, select Create Rule.
  9. Under Event Source, select Schedule, and then select Fixed rate of. Specify how often you would like CloudWatch to trigger the Lambda function. In my example, I have chosen to update the SES credentials every 30 days.
  10. Under Targets, select Add Target, and then select Lambda Function.
  11. In Function, select the Lambda function you just created, and then select Configure details.
     
    Figure 18: Create new CloudWatch rule

  12. Specify a name for the rule, enter a description, make sure the State check box is selected, and then select Create rule.

The SES credentials stored in AWS Secrets Manager will now be updated based on the scheduled intervals you specified in CloudWatch.
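
If you want to script this schedule instead of using the console, the following boto3 sketch creates the rule, grants CloudWatch Events permission to invoke the function, and registers the function as the target. The function name is a placeholder for whatever you named your Lambda function.

import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

function_name = "RotateSesSmtpCredentials"  # placeholder: use your Lambda function's name
function_arn = lam.get_function(FunctionName=function_name)["Configuration"]["FunctionArn"]

# Trigger the rotation function every 30 days.
rule = events.put_rule(
    Name="RotateSesCredentialsEvery30Days",
    ScheduleExpression="rate(30 days)",
    State="ENABLED"
)

# Allow CloudWatch Events to invoke the function, then register the function as the target.
lam.add_permission(
    FunctionName=function_name,
    StatementId="AllowCloudWatchEventsInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"]
)
events.put_targets(
    Rule="RotateSesCredentialsEvery30Days",
    Targets=[{"Id": "1", "Arn": function_arn}]
)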

Conclusion

In this post, I showed how you can set up a solution to remind your AWS Directory Service for Microsoft Active Directory users to change their passwords before expiration. I demonstrated how you can achieve this using a combination of a script and Amazon SES. I also showed you how you can configure rotation of the Amazon SES credentials on your preferred schedule.

If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, please start a new thread on the Amazon SES forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tekena Orugbani

Tekena is a Cloud Support Engineer at the AWS Cape Town office. He has many years of experience working with Windows Systems, virtualization/cloud technologies, and directory services. When he’s not helping customers make the most of their cloud investments, he enjoys hanging out with his family and watching Premier League football (soccer).

Working backward: From IAM policies and principal tags to standardized names and tags for your AWS resources

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/working-backward-from-iam-policies-and-principal-tags-to-standardized-names-and-tags-for-your-aws-resources/

When organizations first adopt AWS, they have to make many decisions that will lay the foundation for their future footprint in the cloud. Part of this includes making decisions about the number of AWS accounts you choose to operate, but another fundamental task is constructing practical access control policies so that your application teams can’t affect each other’s resources within the same account. With AWS Identity and Access Management (IAM), you can customize granular access control policies that are appropriate for your organization, helping you follow security best practices such as separation-of-duties and least-privilege. As every organization is different, you’ll need to carefully consider what your cloud security policies are and how they relate to your cloud engineering teams. Things to consider include who should be authorized to perform which actions, how your teams operate with one another, and which IAM mechanisms are suitable for ensuring that only authorized access is allowed.

In this blog post, I’ll show you an approach that works backwards, starting with a set of customer requirements, then utilizing AWS features such as IAM conditions and principal tagging. Combined with an AWS resource naming and tagging strategy, this approach can help you meet your access control objectives. AWS recently enabled tags on IAM principals (users and roles), which allows you to create a single reusable policy that provides access based on the tags of the IAM principal. When you combine this feature with a standardized resource naming and tagging convention, you can craft a set of IAM roles and policies suitable for your organization.

AWS features used in this approach

To follow along, you should have a working knowledge of IAM and tagging, and familiarity with the following concepts:

Introducing Example Corporation

To illustrate the strategies I discuss, I’ll refer to a fictitious customer throughout my post: Example Corporation is a large organization that wants to use their existing Microsoft Active Directory (AD) as their identity store, with Active Directory Federation Services (AD FS) as the means to federate into their AWS accounts. They also have multiple business projects, some of which will need their own AWS accounts, and others that will share AWS accounts due to the dependencies of the applications within those projects. Each project has multiple application teams who do not need to access each other’s AWS resources.

Example Corporation’s access control requirements

Example Corporation doesn’t always dedicate a single AWS account to one team or one environment. Sometimes, multiple project teams work within the same account, and sometimes they have more than one environment in an account. Figure 1 shows how the Website Marketing and Customer Marketing project teams (each of which has multiple application teams) share two AWS accounts: a development and staging AWS account and a production AWS account. Although production has a dedicated AWS account, Example Corporation has decided that a shared development and staging account is acceptable.
 

Figure 1: AWS accounts shared by Example Corp's teams

The development and staging environments share an AWS account, and the two teams do work closely together. All projects within an account will be allowed access to the read-only metadata of other resources, such as EC2 instance names, tags, and IAM information. However, each project team wants to prevent their application resources from being modified by the other team’s members.

Initial decisions for supporting shared account access control

Example Corporation decides to continue using their existing identity federation solution for access to AWS, as the existing processes for handling joiners, movers, and leavers can be extended to manage identities within AWS. They will enable this via Security Assertion Markup Language (SAML) provided by ADFS to allow Example Corporation’s AD users to access AWS by assuming IAM roles. Initially, they will create three IAM roles—project administrator, application administrator, and application operator—with additional roles to come later.

The company knows they need to implement access controls through IAM, and they’ve created an initial list of AWS services (EC2, RDS, S3, SNS, and Amazon CloudWatch) to secure. Infrastructure as code (IaC) is a new concept at Example Corporation, so they want to keep initial IAM roles and policies as simple as possible. IAM principal tags will help them reuse standard policies across accounts. Principal tags are global condition keys assigned to a user or role. They can be used within a condition to ensure that a new resource is tagged on creation with a value that matches your principal. They can also be used to verify that an existing resource has a matching tag prior to allowing an action against that resource.

Many, but not all, AWS services support tag-based authorization of AWS resources. For services that don’t support tag-based authorization, Example Corporation will enable access control by utilizing ARN paths with wildcards (ARN matching). The name of the resource and its ARN path will explicitly state which projects, applications, and operators have access to that resource. This will require the company to design and enforce a mandatory naming convention.

Please see the IAM user guide for an up-to-date list of resources that support tag-based authorization.

Using multiple tags to meet access control requirements

The web and marketing teams have settled on three common roles and have decided their access levels as follows:

  • Project administrator: Able to access and modify all resources for a specific project, including all the resources belonging to application teams under the project.
  • Application administrator: Able to access and modify only the resources owned by a particular application team.
  • Application operator: Able to access and modify only the resources owned by a specific application team, plus those that reside within one of three environments: development, staging, or production.

 

Figure 2: Example Corp's teams - administrators and operators with AWS access

As for the principal tags, there will be three unique tags named with the prefix access-, with tag values that differentiate the roles and their resources from other projects, applications, and environments.

Finally, because the AWS account is shared, Example Corporation needs to account for the service usage costs of the two teams. By adding a mandatory tag for “cost center,” they can determine the costs of the web team’s resources versus the marketing team’s resources in AWS Cost Explorer and AWS Cost and Usage Report.

Below is an example of the web team’s tags.

IAM principal tags used for the website project administrator role:

Tag name              Tag value
access-project        web
cost-center           123456

Tags for the website application administrator role:

Tag name              Tag value
access-project        web
access-application    nginx
cost-center           123456

Tags for the website application operator role—specifically for developer access to the dev environment:

Tag name              Tag value
access-project        web
access-application    nginx
access-environment    dev
cost-center           123456
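
These tags have to be applied to the IAM principals themselves before the policies that follow can evaluate them. As a sketch, the following boto3 call tags the website application operator role for the dev environment; the role name is a placeholder that follows Example Corporation's naming convention.

import boto3

iam = boto3.client("iam")

# Apply the website application operator principal tags to its role (placeholder role name).
iam.tag_role(
    RoleName="web-nginx-dev-application-operator",
    Tags=[
        {"Key": "access-project", "Value": "web"},
        {"Key": "access-application", "Value": "nginx"},
        {"Key": "access-environment", "Value": "dev"},
        {"Key": "cost-center", "Value": "123456"}
    ]
)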

Access control for AWS services and resources that support tag-based authorization

Example Corporation now needs to write IAM policies for their targeted resources. They begin with EC2, as that will be their most widely used service. The IAM documentation for EC2 shows that most write actions (create, modify, delete) support tag-based authorization, allowing the principal to execute the action only if the resource’s tag matches a predefined value.

For example, the following policy statement will only allow EC2 instances to be started or stopped if the resource tag value matches the “web” project name:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"web"
        }
    }
}         

However, if Example Corporation uses a policy variable instead of hardcoding the project name, the company can reuse the policy by taking advantage of the aws:PrincipalTag condition key:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"${aws:PrincipalTag/access-project}"
        }
    }
}    

Without policy variables, every IAM policy for every project would need a unique value to control access to the resource. Because the text of every policy document would be different, Example Corporation wouldn’t be able to reuse policies from one account to another or from one environment to another. Variables allow them to deploy the same policy file to all of their accounts, while allowing the effect of the policy to differ based on the tags that are used in each account.

As a result, Example Corporation will base the right to manipulate resources like EC2 on resource tags as much as possible. It is important, then, for their teams to tag each resource at the time of creation, if the resource supports it. Untagged resources won’t be manageable, but resources tagged properly will become automatically manageable. The company will use the aws:RequestTag IAM condition key to ensure that the requested access tags and cost allocation tags are assigned at the time of EC2 creation. The IAM policy associated with the application-operator role will therefore be:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "ec2:CreateAction": "RunInstances"
        }
    }
}
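
As an illustration of a request that satisfies these conditions, here is a boto3 sketch of a RunInstances call that tags the instance and its volumes at creation time. The AMI ID, instance type, and tag values are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Tags required by the application-operator policy (placeholder values).
required_tags = [
    {"Key": "access-project", "Value": "web"},
    {"Key": "access-application", "Value": "nginx"},
    {"Key": "access-environment", "Value": "dev"},
    {"Key": "cost-center", "Value": "123456"}
]

# Launch an instance and tag both the instance and its volumes on creation,
# so the aws:RequestTag conditions in the policy are satisfied.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {"ResourceType": "instance", "Tags": required_tags},
        {"ResourceType": "volume", "Tags": required_tags}
    ]
)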

If someone tries to create an EC2 instance without setting proper tags, the RunInstances API call will fail. The application-administrator policy will be similar, with the added ability to create a resource in any environment:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-zone": [ "dev", "stg", "prd" ],   
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}    

And finally, the project-administrator policy will have the most access. Note that even though this policy is for a project administrator, the user is still limited to modifying resources only within three environments. In addition, to ensure that all resources have the required access-application tag, Example Corporation has added a null condition to verify that the tag value is non-empty:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        },
        "Null": {
            "aws:RequestTag/access-application": false
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}

Access control for AWS services and resources without tag-based authorization

Some services don’t support tag-based authorization. In those cases, Example Corporation will use ARN pattern matching. Many AWS resources use ARNs that contain a user-created name. Therefore, the company’s proposal is to name resources following a naming convention. A name will look like: [project]-[application]-[environment]-myresourcename. For resources that are globally unique, such as S3, Example Corporation additionally requires its abbreviated name, “exco,” to be at the beginning of the resource so as to avoid a naming collision with another corporation’s buckets:


arn:aws:s3:::exco-web-nginx-dev-staticassets

To enforce this naming convention, they craft a reusable IAM policy that ensures that only intended users with matching access-project, access-application, and access-environment tag values can modify their resources. In addition, using * wildcard matches, they are able to allow for custom resource name suffixes such as staticassets in the above example. Using an AWS SNS topic as an example, a snippet of the IAM policy associated with the application-operator role will look like this:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*"
    ]
} 

And here’s an IAM policy for an application-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],            
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-dev-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-stg-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-prd-*"
    ]
}

And finally, here’s the snippet for a project-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:*" 
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-*"
    ]
}

The above policies have two caveats, however. First, they require that the principal tag values not include a hyphen, because the hyphen is used as the delimiter in the resource naming convention that the tag values feed into. Second, a forward slash cannot be used, because it appears within the ARNs of many AWS resources, such as S3 object ARNs:


arn:aws:s3:::awsexamplebucket/exampleobject.png

It is important that the company doesn’t let users create resources with disallowed or invalid tags. The following application-admin permissions boundary policy uses conditions to permit IAM users and roles to be tagged, but only with appropriate tag values. Please note that these are just snippets of the boundary policy, for the sake of illustration:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*"
    ]
}

And likewise, this permissions boundary policy attached to the project admin will do the same:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
}
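
The boundary policies also need to govern the creation of new IAM principals, not just their tagging, because the company requires that every user and role an admin creates carries a permissions boundary of its own. One common way to express that requirement is the iam:PermissionsBoundary condition key. The following is a minimal sketch for the project admin, assuming a hypothetical exco-standard-boundary managed policy and a placeholder account ID; it is not the exact statement Example Corporation uses:


{
    "Sid": "AllowIamCreateUserOrRoleWithBoundary",
    "Action": [
        "iam:CreateUser",
        "iam:CreateRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/exco-standard-boundary"
        }
    },
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
}

With a statement like this, a project admin can only create principals whose names begin with the project prefix and that arrive with the designated boundary attached; a CreateUser or CreateRole call without that boundary doesn’t match the allow statement and is therefore denied.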

Note that the above boundary policies can also be crafted using allow statements combined with multiple explicit deny statements.
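
As an illustrative sketch of that approach (not the policy Example Corporation actually adopted), the tagging statements could pair a broader allow with explicit denies for invalid values. Note that negated condition operators in deny statements treat missing tag keys differently from the allow-only approach above, so the exact behavior should be verified before adopting it:


{
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Effect": "Allow",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
},
{
    "Sid": "DenyTagsWithInvalidEnvironment",
    "Effect": "Deny",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Resource": "*",
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        }
    }
},
{
    "Sid": "DenyTagsContainingDelimiters",
    "Effect": "Deny",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Resource": "*",
    "Condition": {
        "StringLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ]
        }
    }
}

Additional deny statements following the same pattern could reject an access-project value that doesn’t match the caller’s principal tag, or an access-application value that contains a delimiter character.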

Example Corporation’s resource naming convention requirements

As shown in the above examples, Example Corporation has given project teams the ability to create resources with name-based access control for services that currently do not support tag-based authorization (such as Amazon SQS and Amazon S3). Through the use of wildcards, teams can still provide custom names to their resources to differentiate them from other resources created within the same team.
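
As a hypothetical sketch of how the naming convention translates into a policy for one of these services, an application-operator statement for the company’s S3 buckets might combine the global “exco” prefix shown earlier with the same tag-variable prefix used for the SNS examples (the specific S3 actions listed here are illustrative, not the company’s actual action set):


{
    "Sid": "AllowS3AccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "s3:CreateBucket",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
    ],
    "Resource": [
        "arn:aws:s3:::exco-${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*",
        "arn:aws:s3:::exco-${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*/*"
    ]
}

The first ARN pattern matches the buckets themselves (for bucket-level actions such as s3:CreateBucket and s3:ListBucket), and the second matches objects within those buckets (for object-level actions such as s3:GetObject and s3:PutObject).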

AWS resources have various limits on the structure and composition of names, so the company restricts the character length of its access tag values. For example, Amazon ElastiCache cluster names must be 20 alphanumeric characters or less, including hyphens. Most AWS resources have higher character limits, but Example Corporation limits its [project]-[application]-[environment] prefix to a 3-character project ID, a 5-character application ID, and a 3-character maximum environment name. Together with the three delimiting hyphens, the prefix totals 14 characters (for example, web-nginx-prd-), which leaves 6 characters for the user-specified cluster name.

Summary of Key Decisions

  • Services that support tag-based authorization (TBA) must have resources that follow a tagging convention for access control. Tagging on resource creation will be enforced where possible.
  • Services that do not support TBA must have resources that follow a naming convention. The cost center tag will still be required and will be applied after resource creation.
  • Services that do not support TBA, and cannot have user-specified names in their ARN (less common), will be addressed on a case-by-case basis. They will either need to allow access for all projects and application teams sharing the same account, or allow access via a custom IAM policy so that only the desired team can access the resource. Each IAM role should stay a few policies short of the maximum number of policies allowed per role, in order to leave room for these custom policies.
  • It is acceptable to allow basic List* and Describe* IAM permissions for AWS resources for all users who log in to the account, as the company’s project teams work closely together.
  • IAM user and role names created by project and application admins must adhere to the approved resource naming conventions. Admins themselves will have a permissions boundary policy applied to their roles. This policy, in turn, will require that all users and roles the admins create have a permissions boundary policy. This is especially important for roles associated with compute resources that can potentially create or modify IAM resources, such as EC2 instances and Lambda functions.
  • Active Directory users who need access to AWS resources must assume different IAM roles in order to utilize the different levels of access that the project admin, application admin, and application operator each provide. Users must also assume a different role if they need access to a different project. This is because each role’s tag has a single value. In this scheme, a single role cannot be assigned to multiple projects or application teams.

Conclusion

Example Corporation was able to let its project teams share the same AWS account while still limiting access to the majority of the account’s AWS resources. Through the use of IAM principal tagging, combined with a resource naming and tagging convention, they created a reusable set of IAM policies that restrict access not only between project admins, application admins, and application operators, but also between users of the different development, stage, and production environments.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this solution or issues implementing it, please start a new thread on the IAM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael is a Professional Services Consultant who has assisted commercial and Federal customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.