Integrating AWS CloudFormation Guard into CI/CD pipelines

Post Syndicated from Sergey Voinich original https://aws.amazon.com/blogs/devops/integrating-aws-cloudformation-guard/

In this post, we discuss and build a managed continuous integration and continuous deployment (CI/CD) pipeline that uses AWS CloudFormation Guard to automate and simplify pre-deployment compliance checks of your AWS CloudFormation templates. This enables your teams to define a single source of truth for what constitutes a valid infrastructure definition, stay compliant with your company guidelines, and streamline the deployment lifecycle of AWS resources.

We use the following AWS services and open-source tools to set up the pipeline:

Solution overview

The CI/CD workflow includes the following steps:

  1. A code change is committed and pushed to the CodeCommit repository.
  2. CodePipeline automatically triggers a CodeBuild job.
  3. CodeBuild spins up a compute environment and runs the phases specified in the buildspec.yml file:
     • Clone the code from the CodeCommit repository (CloudFormation template, rule set for CloudFormation Guard, buildspec.yml file).
     • Clone the code from the CloudFormation Guard repository on GitHub.
     • Provision the build environment with the necessary components (Rust, Cargo, Git, build-essential).
     • Download the CloudFormation Guard release from GitHub.
     • Run a validation check of the CloudFormation template.
  4. If the validation is successful, pass control over to CloudFormation and deploy the stack. If the validation fails, stop the build job and print a summary to the build job log.

The following diagram illustrates this workflow.

Architecture Diagram of CI/CD Pipeline with CloudFormation Guard

Prerequisites

For this walkthrough, complete the following prerequisites:

Creating your CodeCommit repository

Create your CodeCommit repository by running a create-repository command in the AWS CLI:

aws codecommit create-repository --repository-name cfn-guard-demo --repository-description "CloudFormation Guard Demo"

The following screenshot indicates that the repository has been created.

CodeCommit Repository has been created

Populating the CodeCommit repository

Populate your repository with the following artifacts:

  1. A buildspec.yml file. Modify the following code as per your requirements:
version: 0.2
env:
  variables:
    # Defining CloudFormation template and rule set as variables - part of the code repo
    CF_TEMPLATE: "cfn_template_file_example.yaml"
    CF_ORG_RULESET:  "cfn_guard_ruleset_example"
phases:
  install:
    commands:
      - apt-get update
      - apt-get install build-essential -y
      - apt-get install cargo -y
      - apt-get install git -y
  pre_build:
    commands:
      - echo "Setting up the environment for AWS CloudFormation Guard"
      - echo "More info https://github.com/aws-cloudformation/cloudformation-guard"
      - echo "Install Rust"
      - curl https://sh.rustup.rs -sSf | sh -s -- -y
  build:
    commands:
       - echo "Pull GA release from github"
       - echo "More info https://github.com/aws-cloudformation/cloudformation-guard/releases"
       - wget https://github.com/aws-cloudformation/cloudformation-guard/releases/download/1.0.0/cfn-guard-linux-1.0.0.tar.gz
       - echo "Extract cfn-guard"
       - tar xvf cfn-guard-linux-1.0.0.tar.gz
  post_build:
    commands:
       - echo "Validate CloudFormation template with cfn-guard tool"
       - echo "More information https://github.com/aws-cloudformation/cloudformation-guard/blob/master/cfn-guard/README.md"
       - cfn-guard-linux/cfn-guard check --rule_set $CF_ORG_RULESET --template $CF_TEMPLATE --strict-checks
artifacts:
  files:
    - cfn_template_file_example.yaml
  name: guard_templates
  2. An example of a rule set file (cfn_guard_ruleset_example) for CloudFormation Guard. Modify the following code as per your requirements:
#CFN Guard rules set example

#List of multiple references
let allowed_azs = [us-east-1a,us-east-1b]
let allowed_ec2_instance_types = [t2.micro,t3.nano,t3.micro]
let allowed_security_groups = [sg-08bbcxxc21e9ba8e6,sg-07b8bx98795dcab2]

#EC2 Policies
AWS::EC2::Instance AvailabilityZone IN %allowed_azs
AWS::EC2::Instance ImageId == ami-0323c3dd2da7fb37d
AWS::EC2::Instance InstanceType IN %allowed_ec2_instance_types
AWS::EC2::Instance SecurityGroupIds == ["sg-07b8xxxsscab2"]
AWS::EC2::Instance SubnetId == subnet-0407a7casssse558

#EBS Policies
AWS::EC2::Volume AvailabilityZone == us-east-1a
AWS::EC2::Volume Encrypted == true
AWS::EC2::Volume Size == 50 |OR| AWS::EC2::Volume Size == 100
AWS::EC2::Volume VolumeType == gp2
  3. An example of a CloudFormation template file (.yaml). Modify the following code as per your requirements:
AWSTemplateFormatVersion: "2010-09-09"
Description: "EC2 instance with encrypted EBS volume for AWS CloudFormation Guard Testing"

Resources:

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: 'ami-0323c3dd2da7fb37d'
      AvailabilityZone: 'us-east-1a'
      KeyName: "your-ssh-key"
      InstanceType: 't3.micro'
      SubnetId: 'subnet-0407a7xx68410e558'
      SecurityGroupIds:
        - 'sg-07b8b339xx95dcab2'
      Volumes:
        - Device: '/dev/sdf'
          VolumeId: !Ref EBSVolume
      Tags:
        - Key: Name
          Value: cfn-guard-ec2

  EBSVolume:
    Type: AWS::EC2::Volume
    Properties:
      Size: 100
      AvailabilityZone: 'us-east-1a'
      Encrypted: true
      VolumeType: gp2
      Tags:
        - Key: Name
          Value: cfn-guard-ebs
    DeletionPolicy: Snapshot

Outputs:
  InstanceID:
    Description: The Instance ID
    Value: !Ref EC2Instance
  Volume:
    Description: The Volume ID
    Value: !Ref EBSVolume

The following screenshot shows a potential CodeCommit repository structure.

Optional CodeCommit Repository Structure

Creating a CodeBuild project

Our CodeBuild project centers on CloudFormation Guard and runs validation checks against our CloudFormation templates as a phase of the CI process.

  1. On the CodeBuild console, choose Build projects.
  2. Choose Create build project.
  3. For Project name, enter your project name.
  4. For Description, enter a description.
Create CodeBuild Project

  5. For Source provider, choose AWS CodeCommit.
  6. For Repository, choose the CodeCommit repository you created in the previous step.
Define the source for your CodeBuild Project

To set up the CodeBuild environment, we use a managed image based on Ubuntu 18.04.

  7. For Environment image, select Managed image.
  8. For Operating system, choose Ubuntu.
  9. For Service role, select New service role.
  10. For Role name, enter your service role name.
Setup the environment, the OS image and other settings for the CodeBuild

  11. Leave the default settings for additional configuration, buildspec, batch configuration, artifacts, and logs.

You can also use CodeBuild with custom build environments to help you optimize billing and improve the build time.
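
If you prefer to script the project setup rather than use the console, the following is a minimal boto3 sketch of an equivalent build project. The project name, repository URL, image tag, and service role ARN are placeholders for the values you chose above, not values from this walkthrough:

import boto3

codebuild = boto3.client("codebuild")

codebuild.create_project(
    name="cfn-guard-demo-build",                 # placeholder project name
    description="Validates CloudFormation templates with CloudFormation Guard",
    source={
        "type": "CODECOMMIT",
        # Placeholder clone URL for the cfn-guard-demo repository
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/cfn-guard-demo",
    },
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:4.0",   # managed image based on Ubuntu 18.04; adjust to a current image
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::123456789012:role/cfn-guard-demo-codebuild-role",  # placeholder role ARN
)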

Creating IAM roles and policies

Our CI/CD pipeline needs two AWS Identity and Access Management (IAM) roles to run properly: one role for CodePipeline to work with other resources and services, and one role for AWS CloudFormation to run the deployments that passed the validation check in the CodeBuild phase.

Creating permission policies

Create your permission policies first. The following code is the policy in JSON format for CodePipeline:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "codecommit:UploadArchive",
                "codecommit:CancelUploadArchive",
                "codecommit:GetCommit",
                "codecommit:GetUploadArchiveStatus",
                "codecommit:GetBranch",
                "codestar-connections:UseConnection",
                "codebuild:BatchGetBuilds",
                "codedeploy:CreateDeployment",
                "codedeploy:GetApplicationRevision",
                "codedeploy:RegisterApplicationRevision",
                "codedeploy:GetDeploymentConfig",
                "codedeploy:GetDeployment",
                "codebuild:StartBuild",
                "codedeploy:GetApplication",
                "s3:*",
                "cloudformation:*",
                "ec2:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEqualsIfExists": {
                    "iam:PassedToService": [
                        "cloudformation.amazonaws.com",
                        "ec2.amazonaws.com"
                    ]
                }
            }
        }
    ]
}

To create your policy for CodePipeline, run the following CLI command:

aws iam create-policy --policy-name CodePipeline-Cfn-Guard-Demo --policy-document file://CodePipelineServiceRolePolicy_example.json

Capture the policy ARN that you get in the output to use in the next steps.

The following code is the policy in JSON format for AWS CloudFormation:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": [
                        "autoscaling.amazonaws.com",
                        "ec2scheduled.amazonaws.com",
                        "elasticloadbalancing.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAcl",
                "s3:GetObject",
                "cloudwatch:*",
                "ec2:*",
                "autoscaling:*",
                "s3:List*",
                "s3:HeadBucket"
            ],
            "Resource": "*"
        }
    ]
}

Create the policy for AWS CloudFormation by running the following CLI command:

aws iam create-policy --policy-name CloudFormation-Cfn-Guard-Demo --policy-document file://CloudFormationRolePolicy_example.json

Capture the policy ARN that you get in the output to use in the next steps.

Creating roles and trust policies

The following code is the trust policy for CodePipeline in JSON format:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codepipeline.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create your role for CodePipeline with the following CLI command:

aws iam create-role --role-name CodePipeline-Cfn-Guard-Demo-Role --assume-role-policy-document file://RoleTrustPolicy_CodePipeline.json

Capture the role name for the next step.

The following code is the trust policy for AWS CloudFormation in JSON format:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudformation.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create your role for AWS CloudFormation with the following CLI command:

aws iam create-role --role-name CF-Cfn-Guard-Demo-Role --assume-role-policy-document file://RoleTrustPolicy_CloudFormation.json

Capture the role name for the next step.

Finally, attach the permissions policies created in the previous step to the IAM roles you created:

aws iam attach-role-policy --role-name CodePipeline-Cfn-Guard-Demo-Role --policy-arn "arn:aws:iam::<AWS Account Id>:policy/CodePipeline-Cfn-Guard-Demo"

aws iam attach-role-policy --role-name CF-Cfn-Guard-Demo-Role  --policy-arn "arn:aws:iam::<AWS Account Id>:policy/CloudFormation-Cfn-Guard-Demo"

Creating a pipeline

We can now create our pipeline to assemble all the components into one managed, continuous mechanism.

  1. On the CodePipeline console, choose Pipelines.
  2. Choose Create pipeline.
  3. For Pipeline name, enter a name.
  4. For Service role, select Existing service role.
  5. For Role ARN, choose the service role you created in the previous step.
  6. Choose Next.
Setting Up CodePipeline environment

  7. In the Source section, for Source provider, choose AWS CodeCommit.
  8. For Repository name, enter your repository name.
  9. For Branch name, choose master.
  10. For Change detection options, select Amazon CloudWatch Events.
  11. Choose Next.
Adding CodeCommit to CodePipeline

  12. In the Build section, for Build provider, choose AWS CodeBuild.
  13. For Project name, choose the CodeBuild project you created.
  14. For Build type, select Single build.
  15. Choose Next.
Adding Build Project to Pipeline Stage

Now we will create a deploy stage in our CodePipeline to deploy CloudFormation templates that passed the CloudFormation Guard inspection in the CI stage.

  16. In the Deploy section, for Deploy provider, choose AWS CloudFormation.
  17. For Action mode, choose Create or update stack.
  18. For Stack name, enter a stack name of your choice.
  19. For Artifact name, choose BuildArtifact.
  20. For File name, enter the name of the CloudFormation template in your CodeCommit repository (in our demo, cfn_template_file_example.yaml).
  21. For Role name, choose the role you created earlier for CloudFormation.
Adding deploy stage to CodePipeline

  22. In the next step, review your selections for the pipeline. The stages and the action providers in each stage are shown in the order in which they will be created. Choose Create pipeline. Our pipeline is now ready.
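
If you would rather assemble the pipeline programmatically, the following is a minimal boto3 sketch of the same three-stage pipeline. The pipeline name, artifact bucket, CodeBuild project name, stack name, and role ARNs are placeholders; in particular, the console wizard normally creates the artifact bucket for you, so supply your own when scripting:

import boto3

codepipeline = boto3.client("codepipeline")

pipeline = {
    "name": "cfn-guard-demo-pipeline",                     # placeholder pipeline name
    "roleArn": "arn:aws:iam::123456789012:role/CodePipeline-Cfn-Guard-Demo-Role",
    "artifactStore": {"type": "S3", "location": "cfn-guard-demo-artifact-bucket"},  # placeholder bucket
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "Source",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeCommit", "version": "1"},
                "configuration": {"RepositoryName": "cfn-guard-demo",
                                  "BranchName": "master"},
                "outputArtifacts": [{"name": "SourceArtifact"}],
                "runOrder": 1,
            }],
        },
        {
            "name": "Build",
            "actions": [{
                "name": "Build",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "configuration": {"ProjectName": "cfn-guard-demo-build"},   # placeholder project name
                "inputArtifacts": [{"name": "SourceArtifact"}],
                "outputArtifacts": [{"name": "BuildArtifact"}],
                "runOrder": 1,
            }],
        },
        {
            "name": "Deploy",
            "actions": [{
                "name": "Deploy",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CloudFormation", "version": "1"},
                "configuration": {
                    "ActionMode": "CREATE_UPDATE",
                    "StackName": "cfn-guard-demo-stack",                    # placeholder stack name
                    "TemplatePath": "BuildArtifact::cfn_template_file_example.yaml",
                    "RoleArn": "arn:aws:iam::123456789012:role/CF-Cfn-Guard-Demo-Role",
                },
                "inputArtifacts": [{"name": "BuildArtifact"}],
                "runOrder": 1,
            }],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline)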

Validating the CI/CD pipeline operation

Our CodePipeline has two basic flows and outcomes. If the CloudFormation template complies with our CloudFormation Guard rule set file, the resources in the template deploy successfully (in our use case, we deploy an EC2 instance with an encrypted EBS volume).

CloudFormation Console

If our CloudFormation template doesn’t comply with the policies specified in our CloudFormation Guard rule set file, our CodePipeline stops at the CodeBuild step and you see an error in the build job log indicating the resources that are non-compliant:

[EBSVolume] failed because [Encrypted] is [false] and the permitted value is [true]
[EC2Instance] failed because [t3.2xlarge] is not in [t2.micro,t3.nano,t3.micro] for [InstanceType]
Number of failures: 2

Note: To demonstrate this functionality, I changed my CloudFormation template to use an unencrypted EBS volume and switched the EC2 instance type to t3.2xlarge, neither of which adheres to the rules that we specified in the Guard rule set file.

Cleaning up

To avoid incurring future charges, delete the resources created during the walkthrough:

  • The CloudFormation stack resources that were deployed by the pipeline
  • The pipeline that we created
  • The CodeBuild project
  • The CodeCommit repository

Conclusion

In this post, we covered how to integrate CloudFormation Guard into CodePipeline and fully automate pre-deployment compliance checks of your CloudFormation templates. This gives your teams an end-to-end automated CI/CD pipeline with minimal operational overhead and helps them stay compliant with your organizational infrastructure policies.

Field Notes: Implementing Hardware-in-the-Loop for Autonomous Driving Development on AWS

Post Syndicated from Bryan Berezdivin original https://aws.amazon.com/blogs/architecture/field-notes-implementing-hardware-in-the-loop-for-autonomous-driving-development-on-aws/

Automotive customers use AWS as their platform for advanced driver assistance systems (ADAS) and autonomous driving (AD) development to accelerate their development cycles and experience faster time-to-market. In the blog post, Autonomous Vehicle and ADAS development on AWS Part 1: Achieving Scale, we illustrated how software in the loop (SiL) and hardware in the loop (HiL) simulations are part of the workflow used to develop and validate safe AD and ADAS functionality. In this post, I run through some of the more common questions and patterns for implementing HiL on AWS, while looking at some of the differences from running your development all on-premises.

HiL simulations leverage test drive data and derived synthetic data to develop and validate various functions in the AD software stack. Test drive log data is ingested and stored in Amazon S3 for use in HiL simulations, in parallel with other AD development workloads including visualization, processing, labeling, analysis, and model and algorithm development.

As such, we see customers operating in a hybrid context, with their HiL workloads running on-premises to support customized equipment. For ADAS and AD customers, this poses a few questions and considerations:

  • What are the recommendations to deploy HiL for AD on AWS?
  • How is this different from what customers were used to on-premises?

For hybrid customers, there are assumptions and misconceptions:

  • Do I need to replicate the test drive data locally, and if so, what are the considerations and consequences?

For brevity, the remainder of this blog post uses the term AD development to encompass both ADAS and AD unless specifically called out.

HiL Building Blocks

Simulations and validations make up an important aspect of AD development. According to RAND's analysis, there is a need to demonstrate safe driving over billions of miles for an autonomous vehicle to have a lower failure rate than a human driver. While this analysis is statistically derived for fully autonomous driving (SAE Level 5), it demonstrates a need for further validation over millions of miles. This pattern is reflected in most ADAS and AD development projects, where software in the loop (SiL) and HiL are used for verification and validation.

The following diagram illustrates the ISO 26262 V-Model, a product development approach for matching requirements with corresponding tests, where HiL simulation is required for much of the system-level testing and validation phases on the right side of the V-Model.

Figure 1: V-Model as defined by ISO-26262

HiL simulations require a few key elements. The main component is the device under test (DUT), such as one or more electronic control units (ECUs) running the AD software stack. HiL simulations allow customers to put the DUT under the rigor of real-world signals found in a vehicle. By providing more accurate environments and scenarios for the DUT, it can be fine-tuned for key performance indicators (KPIs) such as power utilization, response time, and accuracy.

The DUT is connected to a “HiL rig,” a high-performance server with multiple expansion boards to connect to various components of the AD system. The various interfaces, identified in Figure 2: High Level Hardware-in-the-loop (HiL) System and Interfaces, include Controller Area Network (CAN), Automotive Ethernet, Low-Voltage Differential Signaling (LVDS), and PCI Express. These interfaces emulate the vehicular topology for testing purposes and allow system-level validation of the DUT.

The HiL rigs and the corresponding software tooling are offered by companies like Elektrobit, dSPACE, National Instruments, and OPAL-RT. The HiL systems facilitate time-synchronized inputs and outputs to the DUT and measure system performance. These solutions have optimizations for large-scale operations aligned with faster validation cycles for customers. This latter point is relevant when a project requires validation across a large number of miles. Elektrobit provides the ability to orchestrate and deploy large HiL server farms that can work in parallel. The larger server farms allow parallel HiL simulations to reduce the time needed to validate thousands of miles of drive time and to assess the key performance indicators (KPIs) for the feature sets. Results can be acted on more quickly, reducing the overall development time.

Figure 2: High Level Hardware-in-the-loop (HiL) System and Interfaces

The HiL system loads sensor data derived from test drive logs. These logs vary in format, but are often captured and stored in MDF4, ADTF, rosbag, or other proprietary data logger formats. They are then processed for HiL simulations to implement open-loop and closed-loop simulations.

  • Open loop simulations refer to replay of log data from test drives.
  • Closed-loop simulations rely on the behavior of the system as inputs vary based on new outputs of the simulation.

Both open-loop and closed-loop simulations are part of autonomous driving development, but open-loop simulations require the largest datasets (multiple petabytes on average) due to their reliance on log data from test drives, which makes them the primary concern when deploying HiL in a hybrid manner.

Overview of Solution

Architectures for supporting HiL simulations with AWS for AD development vary based primarily on the networking available at HiL locations. A common pattern for AWS customers is to have the HiL systems directly interface with Amazon S3 over high-bandwidth network links leveraging AWS Direct Connect. This is the simplest approach to deploying HiL and avoids having to manage copies of the petabytes of data in Amazon S3 on a local storage system.

AWS Direct Connect provides customers options to deploy their HiL rigs at their data center or in AWS colocation facilities with low-latency connections. AWS has the largest number of Direct Connect locations and points of presence (POPs) to enable low-latency connectivity to any of the more than 24 AWS Regions. The following diagram illustrates a reference architecture leveraging a direct interface from the HiL systems to Amazon S3.

Figure 3: Reference Architecture for Hardware-in-the-Loop (HiL) Direct to Amazon S3

Figure 3 illustrates the common interfaces, topology, and AWS services used by autonomous driving customers.

  • Amazon S3 is used to store the test drive logs used by the HiL simulations, as well as the results from the simulation runs for further analysis.
  • Metadata of the test drive data is populated in various database and analytics services, referred to as the data catalogues, with metadata crawlers and processing pipelines that extract metadata from the drive log and test result data on Amazon S3.
  • The data catalogues provide flexible search interfaces for developers, validation engineers, and advanced analytics tools. These systems provide keyword search in Amazon Elasticsearch Service, SQL queries in Amazon Redshift or Amazon RDS, and NoSQL interfaces using Amazon DynamoDB. AWS Partner Network solutions for these database and analysis tools, such as those in AWS Marketplace, are common as well.
  • Validation engineers, data scientists, and developers use these data catalogues to find scenarios for testing. These personas also use the HiL management interfaces to configure and orchestrate the HiL simulation runs on the identified scenarios and ensure traceability.
  • HiL management systems control the HiL rigs that interface with the DUT and implement the HiL simulations using the test drive logs. The HiL management system then writes results back to Amazon S3 for further analysis via various tool chains.

A common question AWS customers have is how to determine an optimal hybrid architecture using this approach. The primary factors are properly sized network links to accommodate the datasets used by the HiL simulations, as well as low-latency network links between Amazon S3 and the HiL rigs. As a result, a key factor is choosing an AWS Region for your AWS storage that is in close proximity to your HiL testing site(s).

Based on current HiL implementations, open-loop simulations can sustain latencies of 30-50 ms RTT. AWS has numerous AWS Direct Connect locations in colocation facilities with latencies of less than 5 ms RTT. Sizing for these network links can be calculated based on the expected dataset sizes and the interval of time targeted for a simulation run. The following is a basic formula used for network sizing.

Average_Throughput (Gbps) = Average_Dataset_Size(GB)*8 / Time_Interval (seconds)

As an example, for a scenario where an average of 20 PB is needed by the HiL rig every two weeks, we require ~200 Gbps of AWS Direct Connect bandwidth.
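
As a rough check, the following Python sketch plugs the example values into the formula, assuming 1 PB equals 1,000,000 GB and the full 14-day interval is available for the transfer. The raw average works out to roughly 130 Gbps; we read the ~200 Gbps figure above as that number rounded up to leave headroom on the link, since the exact assumptions are not stated:

# Sizing the Direct Connect link for the 20 PB / two-week example.
average_dataset_size_gb = 20_000_000          # 20 PB expressed in GB (assumed 1 PB = 1,000,000 GB)
time_interval_seconds = 14 * 24 * 60 * 60     # two-week refresh interval

average_throughput_gbps = average_dataset_size_gb * 8 / time_interval_seconds
print(f"Required average throughput: {average_throughput_gbps:.0f} Gbps")  # ~132 Gbps sustained average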

Figure 4 shows an example of a high-level architecture supported by Elektrobit with multiple EB 9101 test racks grouped together. This architecture supports testing multiple ECUs at once, leveraging drive log data in Amazon S3. The system is controlled with central management software that orchestrates the racks to keep the Elektrobit HiL system running optimally.

Use cases include:

  • The automated replay of all relevant sensor data with high time precision to ECU
  • Capture of ECU responses including debug data
  • Integration of customer components inside the HiL rack for visualization or post-processing.

Another common question from AWS customers is whether this architecture is supported for their HiL implementation. Many HiL providers are adding AWS functionality to their software and hardware stacks in response to customers transitioning to the cloud for their development platforms. Some vendors still need to add Amazon S3 as a supported interface in their HiL rigs. The work needed to accommodate Amazon S3 is usually a small level of effort for any developer using the AWS SDKs on the HiL rig software stack. If there is a project where this is needed, contact the AWS account teams and your HiL vendors to ensure a successful and cost-efficient project implementation.

Figure 4: Elektrobit HiL Architecture with AWS

An alternative HiL solution, shown in Figure 5, uses Amazon S3 as the primary storage for drive log data, with a scale-out NAS storage system located on-premises operating as a cache for the HiL rigs. This is common when the networking options at the HiL site are limited in bandwidth or latency relative to the target datasets and time windows.

AWS customers calculate the size of the local cache needed to cover the portion of the dataset that cannot be transferred over the network link during the intended time interval. The following is a simple calculation to demonstrate this.

Cache_Size(GB) = Average_Dataset_Size(GB) - Average_Throughput (Gbps) /8 * Time_Interval (seconds)

In this example, a customer has 40 Gbps of AWS Direct Connect available and a 10 PB dataset needed for HiL simulations every two weeks. Using the preceding formula, there is a need for a local cache of about 4 PB capable of high read rates.
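
The following Python sketch applies the cache formula to this example (40 Gbps link, 10 PB dataset, two-week interval, 1 PB again taken as 1,000,000 GB) and reproduces the roughly 4 PB cache figure:

# Sizing the on-premises cache for the 40 Gbps / 10 PB / two-week example.
average_dataset_size_gb = 10_000_000           # 10 PB expressed in GB
average_throughput_gbps = 40                   # available AWS Direct Connect bandwidth
time_interval_seconds = 14 * 24 * 60 * 60      # two weeks

cache_size_gb = average_dataset_size_gb - (average_throughput_gbps / 8) * time_interval_seconds
print(f"Required local cache: {cache_size_gb / 1_000_000:.1f} PB")  # ~4.0 PB of high-read-rate cache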

Figure 5: Reference Architecture for Hardware-in-the Loop (HiL) with Local Cache to Amazon S3

In this hybrid architecture, there is a need to orchestrate the data movement in line with the needs of the HiL simulation dataset. This generally requires third-party software or built-in functionality in workflow orchestration tools like Apache Airflow. At CES 2019, Dell EMC and AWS illustrated a solution for this hybrid architecture, documented in this short solution brief, using Isilon as the scale-out NAS storage system and DataIQ as the data movement and orchestration mechanism.

Any of these architectures can be cost-optimized, and AWS has programs and pricing options for AWS Direct Connect as well as the other AWS services involved. There are Enterprise Agreements and the Migration Acceleration Program (MAP) in line with the holistic AD development platform needs, which reduce the costs of the hybrid architecture functionality needed in the HiL solutions. One common need is support for the AWS Direct Connect “flat rate” pricing option to accommodate the data transfer out (DTO) needs of the HiL workload. If you need details on these programs for your AD development project, contact your AWS account team.

Conclusion

In this blog post, we discussed two common architectural patterns for supporting HiL simulations for ADAS and Autonomous Driving development. These help customers decide on the right networking, storage, and hybrid topologies for these systems.

HiL systems directly interfacing with Amazon S3, as with the Elektrobit HiL solutions, is the most common pattern, but for customers with limited network links, the use of a local cache is an option. Autonomous driving customers looking to increase velocity in their SAE Level 2-5 development programs with HiL simulations have achieved success with AWS as the development platform using these patterns. AWS has a team dedicated to autonomous driving, so contact your AWS account team to get a more prescriptive solution for your HiL or related ADAS and AD development needs.

Also, check out the Automotive issue of the AWS Architecture Monthly Magazine.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Architecture Monthly Magazine: AWS Solutions

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-aws-solutions/

For October’s issue of AWS Architecture Monthly Magazine, we decided to do a deep dive into the AWS Solutions Library, a virtual treasure trove of cloud-based solutions for dozens of technical and business problems. Whether you want to combine pre-built, well-architected multi-service patterns to create your own solution, deploy vetted architecture directly into your AWS account, or get help deploying vetted architecture from AWS Competency Partners, we can help. Our expert runs us through the various offerings you can take advantage of, and some of our other guest writers will go more deeply into the individual options.

In this month’s AWS Solutions issue

  • Ask an Expert: Tom Begley, Manager, AWS Solutions Builder
  • Customer Success Story: App8: Helping Restaurants Succeed during COVID-19
  • AWS Solutions Implementations: Detailed architectures, a deployment guide, and instructions for both automated and manual deployment
  • AWS Solutions Constructs: Building faster and more confidently with vetted architecture patterns
  • AWS Solutions Consulting Offers: Enhancing the AWS Solutions Library to address customer needs
  • Related Videos: Watch what AWS Solutions can do for you

How to access the magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

Architecture Patterns for Red Hat OpenShift on AWS

Post Syndicated from Ryan Niksch original https://aws.amazon.com/blogs/architecture/architecture-patterns-for-red-hat-openshift-on-aws/

Editor’s note: Although this blog post and its accompanying code make use of the word “Master,” Red Hat is making open source code more inclusive by eradicating “problematic language.” Read more about this.

Introduction

Red Hat OpenShift is an application platform that provides customers with a turnkey application platform that is much more than simple Kubernetes orchestration.

OpenShift customers choose AWS as their cloud of choice because of the efficiency, security, reliability, scalability, and elasticity it provides. Customers seeking to modernize their business, processes, and application stacks are drawn to the rich AWS service and feature sets.

As such, we see some customers migrate from on-premises to AWS or exist in a hybrid context with application workloads running in various locations. For OpenShift customers, this poses a few questions and considerations:

  • What are the recommendations for the best way to deploy OpenShift on AWS?
  • How is this different from what customers were used to on-premises?
  • How does this ensure resilience and availability?
  • Do customers need a multi-region, multi-account approach?

For hybrid customers, there are assumptions and misconceptions:

  • Where does the control plane exist?
  • Is there replication, and if so, what are the considerations and ramifications?

In this post I will run through some of the more common questions and patterns for OpenShift on AWS, while looking at some of the terminology and conceptual differences of AWS. I’ll explore migration and hybrid use cases and address some misconceptions.

OpenShift building blocks

On AWS, OpenShift 4.x is the norm. To that effect, I will focus on OpenShift 4, but many of the considerations will apply to both OpenShift 3 and OpenShift 4.

Let’s unpack some of the OpenShift building blocks. An OpenShift cluster consists of Master, infrastructure, and worker nodes. The Master nodes form the control plane, and the infrastructure nodes cater to a routing layer and additional functions such as logging and monitoring. Worker nodes are the nodes that customer application container workloads run on.

When deployed on-premises, OpenShift nodes are placed in separate network subnets. Depending on distance, latency, and other factors, a single OpenShift cluster may span two data centers, with some nodes in a subnet in one data center and others in subnets in a different data center. This applies to customers with data centers within a few miles of each other with high-speed connectivity. An alternative would be an OpenShift cluster in each data center.

AWS concepts and terminology

At AWS, the concept of “region” is a geolocation, such as EMEA (Europe, Middle East, and Africa) or APAC (Asia Pacific), rather than a data center or specific building. An Availability Zone (AZ) is the closest construct on AWS that maps to a physical data center. Within each Region you will find multiple (typically three or more) AZs. Note that a single AZ can contain multiple physical data centers, but we treat it as a single point of failure. For example, an event that impacts an AZ would be expected to impact all the data centers within that AZ. To this effect, customers should deploy workloads spanning multiple AZs to protect against any event that would impact a single AZ.

Read more about Regions, Availability Zones, and Edge Locations.

Deploying OpenShift

When deploying an OpenShift cluster on AWS, we recommend starting with three Master nodes spread across three AWS AZs and three worker nodes spread across three AZs. This allows for the combination of resilience and availability constructs provided by AWS as well as Red Hat OpenShift. The OpenShift installer provides a means of deploying the underlying AWS infrastructure in two ways: installer-provisioned infrastructure (IPI) and user-provisioned infrastructure (UPI). Both Red Hat and AWS collect customer feedback and use it to drive recommended patterns that are then included in the OpenShift installer. As such, the OpenShift installer IPI mode becomes a living reference architecture for deploying OpenShift on AWS.

The installer requires inputs for the environment on which it’s being deployed. In this case, since I am deploying on AWS, I need to provide the AWS Region, the AZs or the subnets that relate to the AZs, and the EC2 instance type. From an install configuration like the following, the installer then generates the set of ignition files that are used during the deployment of OpenShift:

apiVersion: v1
baseDomain: example.com 
controlPlane: 
  hyperthreading: Enabled   
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: m5.xlarge 
  replicas: 3
compute: 
- hyperthreading: Enabled 
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 
      type: m5.xlarge
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 
    userTags:
      adminContact: jdoe
      costCenter: 7536
pullSecret: '{"auths": ...}' 
fips: false 
sshKey: ssh-ed25519 AAAA... 

What does this look like at scale?

For larger implementations, we would see additional worker nodes spread across three or more AZs. As more worker nodes are added, use of the control plane increases. Initially, scaling up the Amazon Elastic Compute Cloud (Amazon EC2) instance type to a larger instance type is an effective way of addressing this. It’s possible to add more Master nodes, and we recommend that an odd number of nodes is maintained. It is more common to see the infrastructure nodes scaled out before there is a need to scale Masters. For large-scale implementations, infrastructure functions such as the router, monitoring, and logging can be moved to EC2 instances separate from the Master nodes, as well as from each other. It is important to spread the routing layer across multiple AZs, which is critical to maintaining availability and resilience.

The process of resource separation is now controlled by infrastructure machine sets within OpenShift. An infrastructure machine set would need to be defined, then the infrastructure role edited to be moved from the default to this new infrastructure machine set. Read about this in greater detail.

OpenShift in a multi-account context

Using AWS accounts as a means of separation is a common well-architected pattern. AWS Organizations and AWS Control Tower are services that are commonly adopted as part of a multi-account strategy. This is very much the case when looking to enable teams to use their own accounts and when an account vending process is needed to cater for self-service account provisioning.

OpenShift clusters are deployed into multiple accounts. An OpenShift dev cluster is deployed into an AWS Dev account. This account would typically have AWS Developer Support associated with it. A separate production OpenShift cluster would be provisioned into an AWS production account with AWS Enterprise Support. Enterprise support provides for faster support case response times, and you get the benefit of dedicated resources such as a technical account manager and solutions architect.

CI/CD pipelines and processes are then used to control the application life cycle from code to dev to production. The pipelines push the code to different OpenShift cluster endpoints at different stages of the life cycle.

Hybrid use case implementation

A common misconception of hybrid implementations is that there is a single cluster or control plane that has worker nodes in various locations. For example, there could be a cluster where the Master and infrastructure nodes are deployed in one location, but worker nodes registered with this cluster exist on-premises as well as in the cloud.

Having a single control plane for a hybrid implementation, even if technically possible, introduces undesired risks.

There is the potential to take multiple environments with very different resilience characteristics and make them interdependent on each other. This can result in performance and reliability issues that increase not only the possibility of the risk manifesting, but also the impact or blast radius when it does.

Instead, hybrid implementations will see separate OpenShift clusters deployed into various locations. A customer may deploy clusters on-premises to cater for a workload that can’t be migrated to the cloud in the short term. Separate OpenShift clusters can then be deployed into accounts in AWS for workloads on the cloud. Customers can also deploy separate OpenShift clusters in different AWS Regions to cater for proximity to the consuming customer.

Though adding multiple clusters doesn’t add significant administrative overhead, there is a desire to gain visibility and telemetry into all the deployed clusters from a central location. This may see the OpenShift clusters registered with Red Hat Advanced Cluster Management for Kubernetes.

Summary

Take advantage of the IPI model, not only as a guide but also to save time. Make AWS Organizations, AWS Control Tower, and AWS Service Catalog part of your cloud and hybrid strategies. These will not only speed up migrations but also form building blocks for a modernized business with a focus on enabling prescriptive self-service. Consider Red Hat Advanced Cluster Management for Kubernetes for multi-cluster management.

Automated CloudFormation Testing Pipeline with TaskCat and CodePipeline

Post Syndicated from Raleigh Hansen original https://aws.amazon.com/blogs/devops/automated-cloudformation-testing-pipeline-with-taskcat-and-codepipeline/

Researchers at Academic Medical Centers (AMCs) use programs such as Observational Health Data Sciences and Informatics (OHDSI) and Research Electronic Data Capture (REDCap) to interact with healthcare data. Our internal team at AWS has provided solutions such as OHDSI-on-AWS and REDCap environments on AWS to help clinicians analyze healthcare data in the AWS Cloud. Occasionally, these solutions break due to a change in some portion of the solution (e.g. updated services). The Automated Solutions Testing Pipeline enables our team to take a proactive approach to discovering these breaks and their cause in order to expedite the repair process.

OHDSI-on-AWS provides these AMCs with the ability to store and analyze observational health data in the AWS cloud. REDCap is a web application for managing surveys and databases with HIPAA-compliant environments. Using our solutions, these programs can be spun up easily on the AWS infrastructure using AWS CloudFormation templates.

Updates to AWS services and other program libraries can cause the CloudFormation template to fail during deployment. Other times, the outputs may not be operating correctly, or the template may not work on every AWS region. This can create a negative customer experience. Some customers may discover this kind of break and decide to not move forward with using the solution. Other customers may not even realize the solution is broken, so they might be unknowingly working with an uncooperative environment. Furthermore, we cannot always provide fast support to the customers who contact us about broken solutions. To meet our team’s needs and the needs of our customers, we decided to focus our efforts on taking a CI/CD approach to maintain these solutions. We developed the Automated Testing Pipeline which regularly tests solution deployment and changes to source files.

This post shows the features of the Automated Testing Pipeline and provides resources to help you get started using it with your AWS account.

Overview of Automated Testing Pipeline Solution

The Automated Testing Pipeline solution as a whole is designed to automatically deploy CloudFormation templates, run tests against the deployed environments, send notifications if an issue is discovered, and allow for insightful testing data to be easily explored.

CloudFormation templates to be tested are stored in an Amazon S3 bucket. Custom test scripts and TaskCat deployment configuration are stored in an AWS CodeCommit repository.

The pipeline is triggered in one of three ways: an update to the CloudFormation template in S3, an Amazon CloudWatch Events rule, or an update to the testing source code repository. Once the pipeline has been triggered, AWS CodeBuild pulls the source code to deploy the CloudFormation template, test the deployed environment, and store the results in an S3 bucket. If any failures are discovered, subscribers to the failure topic are notified. The following diagram shows the overall architecture.

Diagram of Automated Testing Pipeline architecture

In order to create the Automated Testing Pipeline, two interns collaborated over the course of 5 weeks to produce the architecture and custom test scripts. We divided the work of constructing a serverless architecture and writing test scripts for the output URLs of the OHDSI-on-AWS and REDCap environments on AWS.

The following tasks were completed to build out the Automated Testing Pipeline solution:

  • Set up AWS IAM roles for accessing AWS resources securely
  • Create CloudWatch Events rules to trigger AWS CodePipeline
  • Set up CodePipeline and CodeBuild to run TaskCat and testing scripts
  • Configure TaskCat to deploy CloudFormation solutions in various AWS Regions
  • Write test scripts to interact with CloudFormation solutions’ deployed environments
  • Subscribe to receive emails detailing test results
  • Create a CloudFormation template for the Automated Testing Pipeline

The architecture can be extended to test any CloudFormation stack. For this particular use case, we wrote the test scripts specifically to test the URLs output by the CloudFormation solutions. The Automated Testing Pipeline has the following features:

  • Deployed in a single AWS Region, with the exception of the tested CloudFormation solution
  • Has a serverless architecture operating at the AWS Region level
  • Deploys a pipeline which can deploy and test the CloudFormation solution
  • Creates CloudWatch events to activate the pipeline on a schedule or when the solution is updated
  • Creates an Amazon SNS topic for notifying subscribers when there are errors
  • Includes code for running TaskCat and scripts to test solution functionality
  • Built automatically in minutes
  • Low in cost with free tier benefits

The pipeline is triggered automatically when an event occurs. These events include a change to the CloudFormation solution template, a change to the code in the testing repository, and an alarm set off by a regular schedule. Additional events can be added in the CloudWatch console.
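
As an illustration of the scheduled trigger, the following is a minimal boto3 sketch that creates a CloudWatch Events rule to start a pipeline on a weekly schedule. The rule name, pipeline ARN, and events role ARN are placeholders, and the role must allow codepipeline:StartPipelineExecution:

import boto3

events = boto3.client("events")

# Placeholder rule name; runs the target once every seven days.
events.put_rule(
    Name="automated-testing-pipeline-weekly",
    ScheduleExpression="rate(7 days)",
    State="ENABLED",
)

# Point the rule at the pipeline (placeholder ARNs).
events.put_targets(
    Rule="automated-testing-pipeline-weekly",
    Targets=[{
        "Id": "StartTestingPipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:123456789012:automated-testing-pipeline",
        "RoleArn": "arn:aws:iam::123456789012:role/cwe-start-pipeline-role",
    }],
)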

When the pipeline is triggered, the testing environment is set up by CodeBuild. CodeBuild uses a build specification file kept within our source repository to set up the environment and run the test scripts. We created a CodeCommit repository to host the test scripts alongside the build specification. The build specification includes commands to run TaskCat — an open-source tool for testing the deployment of CloudFormation templates. TaskCat provides the ability to test the deployment of the CloudFormation solution, but we needed custom test scripts to ensure that we can interact with the deployed environment as expected. If the template is successfully deployed, CodeBuild handles running the test scripts against the CloudFormation solution environment. In our case, the environment is accessed via URLs output by the CloudFormation solution.

We used a Selenium WebDriver for interacting with the web pages given by the output URLs. This allowed us to programmatically navigate a headless web browser in the serverless environment and gave us the ability to use text output by JavaScript functions to understand the state of the test. You can see this interaction occurring in the code snippet below.

from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec


def log_in(driver, user, passw, link, btn_path, title):
    """Enter username and password then submit to log in

        :param driver: webdriver for Chrome page
        :param user: username as String
        :param passw: password as String
        :param link: url for page being tested as String
        :param btn_path: xpath to submit button
        :param title: expected page title upon successful sign in
        :return: success String tuple if log in completed, failure description tuple String otherwise
    """
    try:
        # post username and password data
        driver.find_element_by_xpath("//input[ @name='username' ]").send_keys(user)
        driver.find_element_by_xpath("//input[ @name='password' ]").send_keys(passw)

        # click sign in button and wait for page update
        driver.find_element_by_xpath(btn_path).click()
    except NoSuchElementException:
        return 'FAILURE', 'Unable to access page elements'

    try:
        WebDriverWait(driver, 20).until(ec.url_changes(link))
        WebDriverWait(driver, 20).until(ec.title_is(title))
    except TimeoutException as e:
        print("Timeout occurred (" + e + ") while attempting to sign in to " + driver.current_url)
        if "Sign In" in driver.title or "invalid user" in driver.page_source.lower():
            return 'FAILURE', 'Incorrect username or password'
        else:
            return 'FAILURE', 'Sign in attempt timed out'

    return 'SUCCESS', 'Sign in complete'
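
The snippet below is a hedged sketch of how a helper like log_in might be driven from a test script, assuming a headless Chrome WebDriver inside the build container. The URL, credentials, XPath, and expected title are placeholders rather than values from the actual OHDSI or REDCap tests:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Configure a headless Chrome browser suitable for a build container.
options = Options()
options.add_argument("--headless")
options.add_argument("--no-sandbox")

driver = webdriver.Chrome(options=options)
try:
    url = "https://example.com/login"           # placeholder output URL from the tested stack
    driver.get(url)                             # log_in expects the driver to already be on the page
    status, detail = log_in(
        driver,
        user="test-user",                       # placeholder credentials
        passw="example-password",
        link=url,
        btn_path="//button[@type='submit']",    # placeholder XPath for the sign-in button
        title="Home",                           # placeholder expected page title after sign-in
    )
    print(status, detail)
finally:
    driver.quit()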

We store the test results in JSON format for ease of parsing. TaskCat generates a dashboard, which we customize to display these test results. We are able to insert our JSON results into the dashboard in order to make it easy to find errors and access log files. This dashboard is a static HTML file that can be hosted in an S3 bucket. In addition, messages that provide a link to this dashboard are published to SNS topics whenever an error occurs.

Customized TaskCat dashboard

In true CI/CD fashion, this end-to-end design automatically performs tasks that would otherwise be performed manually. We have shown how deploying solutions, testing solutions, notifying maintainers, and providing a results dashboard are all actions handled entirely by the Automated Testing Pipeline.

Getting Started with the Automated Testing Pipeline

Prerequisite tasks to complete before deploying the pipeline:

Once the prerequisite tasks are completed, the pipeline is ready to be deployed. Detailed information about deployment, altering the source code to fit your use case, and troubleshooting issues can be found at the GitHub page for the Automated Testing Pipeline.

For those looking to jump right into deployment, click the Launch Stack button below.

Button to click to deploy the Automated Testing Pipeline via CloudFormation

Tasks to complete after deployment:

  • Subscribe to SNS topic for error messages
  • Update the code to match the parameters and CloudFormation template that were chosen
  • Upload the desired CloudFormation template to the created source S3 bucket (skip this step if you are testing OHDSI-on-AWS)
  • Push the source code to the created CodeCommit Repository

After the code is pushed to the CodeCommit repository and the CloudFormation template has been uploaded to S3, the pipeline will run automatically. You can visit the CodePipeline console to confirm that the pipeline is running with an “in progress” status.

You may desire to alter various aspects of the Automated Testing Pipeline to better fit your use case. Listed below are some actions you can take to modify the solution to fit your needs:

  • Go to CloudWatch Events and update the rules for automatically starting the pipeline.
  • Scale out testing by providing custom testing scripts or altering the existing ones.
  • Test a different CloudFormation template by uploading it to the source S3 bucket created and configuring the pipeline accordingly. Custom test scripts will likely be required for this use case.

Challenges Addressed by the Automated Testing Pipeline

The Automated Testing Pipeline directly addresses the challenges we faced with maintaining our OHDSI and REDCap solutions. Additionally, the pipeline can be used whenever there is a need to test CloudFormation templates that are being used on a regular basis or are distributed to other users. Listed below is the set of specific challenges we faced maintaining CloudFormation solutions and how the pipeline addresses them.

Table describing challenges faced with their direct solution offered by Testing Pipeline

The desire to better serve our customers guided our decision to create the Automated Testing Pipeline. For example, we know that source code used to build the OHDSI-on-AWS environment changes on occasion. Some of these changes have caused the environment to stop functioning correctly. This left us with cases where our customers had to either open an issue on GitHub or reach out to AWS directly for support. Our customers depend on OHDSI-on-AWS functioning properly, so fixing issues is of high priority to our team. The ability to run tests regularly allows us to take action without depending on notice from our customers. Now, we can be the first ones to know if something goes wrong and get to fixing it sooner.

“This automation will help us better monitor the CloudFormation-based projects our customers depend on to ensure they’re always in working order.” — James Wiggins, EDU HCLS SA Manager

Cleaning Up

If you decide to quit using the Automated Testing Pipeline, follow the steps below to get rid of the resources associated with it in your AWS account.

  • Delete the CloudFormation solution root stack
  • Delete the pipeline CloudFormation stack
  • Delete the ATLAS S3 bucket if OHDSI-on-AWS was chosen

Deleting the pipeline CloudFormation stack handles removing the resources associated with its architecture. Depending on the CloudFormation template chosen for testing, additional resources associated with it may need to be removed. Visit our GitHub page for more information on removing resources.

Conclusion

The ability to continuously test preexisting solutions on AWS has great benefits for our team and our customers. The automated nature of this testing frees up time for us and our customers, and the dashboard makes issues more visible and easier to resolve. We believe that sharing this story can benefit anyone facing challenges maintaining CloudFormation solutions in AWS. Check out the Getting Started with the Automated Testing Pipeline section of this post to deploy the solution.

Additional Resources

More information about the key services and open-source software used in our pipeline can be found at the following documentation pages:

About the Authors

Raleigh Hansen is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. She is passionate about solving problems and improving upon existing systems. She also adores spending time with her two cats.

Dan Le is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. He is passionate about technology and enjoys doing art and music.

Fundbox: Simplifying Ways to Query and Analyze Data by Different Personas

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/fundbox-simplifying-ways-to-query-and-analyze-data-by-different-personas/

Fundbox is a leading technology platform focused on disrupting the $21 trillion B2B commerce market by building the world’s first B2B payment and credit network. With Fundbox, sellers of all sizes can quickly increase average order volumes (AOV) and improve close rates by offering more competitive net terms and payment plans to their SMB buyers. With heavy investments in machine learning and the ability to quickly analyze the transactional data of SMBs, Fundbox is reimagining B2B payments and credit products in new category-defining ways.

Learn how the company simplified the way different personas in the organization query and analyze data by building a self-service data orchestration platform. The platform architecture is entirely serverless, which simplifies the ability to scale and adapt to unpredictable demand. The platform was built using AWS Step Functions, AWS Lambda, Amazon API Gateway, Amazon DynamoDB, AWS Fargate, and other AWS serverless managed services.

For more content like this, subscribe to our YouTube channels This is My Architecture, This is My Code, and This is My Model, or visit This is My Architecture on AWS, which has search functionality and the ability to filter by industry, language, and service.

AWS Architecture Monthly Magazine: Agriculture

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/aws-architecture-monthly-magazine-agriculture/

In this month’s issue of AWS Architecture Monthly, Worldwide Tech Lead for Agriculture Karen Hildebrand (who’s also a fourth-generation farmer) refers to agriculture as “the connective tissue our world needs to survive.” As our expert for August’s Agriculture issue, she also talks about what role cloud will play in future development efforts in this industry and why developing personal connections with our AWS agriculture customers is one of the most important aspects of our jobs.

You’ll also buzz through the world of high tech beehives, milk the information about data analytics-savvy cows, and see what the reference architecture of a Smart Farm looks like.

In August’s issue Agriculture issue

  • Ask an Expert: Karen Hildebrand, AWS WW Agriculture Tech Leader
  • Customer Success Story: Tine & Crayon: Revolutionizing the Norwegian Dairy Industry Using Machine Learning on AWS
  • Blog Post: Beewise Combines IoT and AI to Offer an Automated Beehive
  • Reference Architecture: Smart Farm: Enabling Sensor, Computer Vision, and Edge Inference in Agriculture
  • Customer Success Story: Farmobile: Empowering the Agriculture Industry Through Data
  • Blog Post: The Cow Collar Wearable: How Halter benefits from FreeRTOS
  • Related Videos: DuPont, mPrest & Netafirm, and Veolia

Survey opportunity

This month, we’re also asking you to take a 10-question survey about your experiences with this magazine. The survey is hosted by an external company (Qualtrics), so the below survey button doesn’t lead to our website. Please note that AWS will own the data gathered from this survey, and we will not share the results we collect with survey respondents. Your responses to this survey will be subject to Amazon’s Privacy Notice. Please take a few moments to give us your opinions.

How to access the magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us star rating and comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

BBVA: Architecture for Large-Scale Macie Implementation

Post Syndicated from Neel Sendas original https://aws.amazon.com/blogs/architecture/bbva-architecture-for-large-scale-macie-implementation/

This post was co-written by Andrew Alaniz, Technical Information Security Officer, and Brady Pratt, Cloud Security Engineer, both at BBVA USA.

Introduction

Data Loss Prevention (DLP) is a common topic among companies that work with any type of sensitive data. One of the challenges is that many people either don’t fully understand what DLP is, or rather, have their own definition of what it is. Regardless of one’s interpretation of DLP, one thing is certain: before you can control data loss, you need to find the data sources.

If an organization can’t identify its data, it can’t protect it. BBVA USA, a bank holding company, turned to AWS for advice, and decided to use Amazon Macie to accomplish this in Amazon Simple Storage Service (Amazon S3). Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. This blog post will share some of the design and architecture we used to deploy Macie using services, such as AWS Lambda and Amazon CloudWatch.

Data challenges in Amazon S3

Although all S3 buckets are private by default, everyone is aware of the challenges of unsecured S3 buckets exposing data publicly. Amazon has provided a way to prevent that by removing the ability to make buckets public. As with other data storage mechanisms, this doesn’t stop anyone from storing sensitive data within AWS and exposing it in another way. With Macie, you can classify the data stored in S3 centrally, across accounts, through AWS Organizations.

Recommended architecture

We can break the Macie architecture into two main parts: S3 discovery and evaluation, and S3 sensitive data discovery:

Macie architecture

The setup of discovery and evaluation is simple and straightforward, and should be enabled through AWS Organizations and across all accounts. The cost of this piece is minimal, and it provides valuable insights into the compliance state of S3 buckets.

Once setup of discovery and evaluation is completed, we are ready to move to the next step and configure discovery jobs for our S3 buckets. The architecture includes the use of S3, Amazon CloudWatch Events, Amazon EventBridge, and Lambda. All of the execution should happen in a centralized account, but the event triggers should come from each individual account.

Architectural considerations

When determining the architectural design of the solution, consider a few main components:

Centralization

Utilize AWS Organizations: Macie integrates natively with AWS Organizations, which is a significant advantage. Additionally, within AWS Organizations, you can delegate the Macie master account to a subordinate account. This enables centralized management while allowing for the compartmentalization of roles.

Ease of management

One of the most challenging things to manage is non-conforming configurations. It’s much easier to manage a standard way to create, name, and configure settings. Once the classification jobs were ready to be created, we had to take into consideration the following when deploying Macie for our use case:

  1. Macie classifies content in a single job across one account.
  2. If you submit multiple jobs that contain the same bucket, Macie will scan the objects multiple times.
  3. Macie jobs are immutable.

Due to these considerations, we decided to create one job per S3 bucket. This allows administrators to search more easily for jobs related to findings.

Cost considerations

Macie plays an essential role, not only in identifying data and improving data collection, but also in compliance. We needed to make a decision about how to determine if an S3 bucket would be included in a classification job. Initially, we considered including all buckets no matter what. The logic here was that even if we make an assumption that a bucket would never have sensitive data in it, an entity with the right role could always add something at a later date.

Finally, we implemented a solution to tag specific buckets that were known to have immutable properties and which would never allow sensitive data to be added. We could do this because we knew exactly what data was in the bucket, who or what created the bucket, and exactly who or what had access to the bucket.

An example of this type of bucket is the S3 bucket used to store VPC Flow Logs. We know that this bucket is only created by provisioning scripts and is only going to store VPC flow logs that contain no sensitive data based on data classification standards. Also, only VPC services and specific security services can access this bucket for anything other than READ. This is controlled organizationally and can be tagged with a simple ignore key/value pair upon creation.

Deploying Macie at BBVA USA

BBVA USA developed an approach to working within AWS that allows guardrails to be applied as accounts are created. One of those guardrails identifies if developers have stored sensitive data in an account. BBVA needed to be able to do this, and do it at scale. If there is a roadblock or a challenge with AWS services, the first place BBVA looks is to support, but the second place is the Technical Account Manager.

After initiating conversations with its account team, BBVA determined that Amazon Macie was the tool to help them with this challenge.

With the help of its technical account manager (TAM), BBVA was able to meet with the Macie Product team and discuss the best options for deploying at scale. Through these conversations, they were even able to influence the Macie product roadmap.

Getting Macie ready to deploy at scale was actually quite simple once the architectural pattern was designed.

Initial job creation

In order to set up jobs for each existing bucket in the organization, it’s a matter of scripting the job creation and adding each bucket from each account into its own job, which is pretty straightforward.

Job creation for new buckets

The recommended architecture and implementation for new buckets is as follows (a minimal sketch of the step 3 function appears after the list):

  1. Whenever a new S3 bucket is added to Organization accounts, trigger a CloudWatch Event in the target account.
  2. Set up a cross-account EventBridge rule to consume the event. Using EventBridge allows for a simpler configuration and centralized management of both events and Lambda functions.
  3. Trigger a Lambda function in a delegated Macie admin account, which creates classification jobs to apply Macie to all the newly created S3 buckets.
  4. Repeat the same process when a bucket is deleted by triggering a cancel job.
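The Lambda function in step 3 can be a thin wrapper around the Macie CreateClassificationJob API. The following is a minimal sketch only, assuming the AWS SDK for JavaScript Macie2 client and a CloudTrail CreateBucket event delivered through EventBridge; the event fields, job naming, and idempotency token are illustrative and not BBVA’s actual implementation.

    // Minimal sketch (not BBVA's code): create one Macie classification job per new bucket.
    const AWS = require('aws-sdk');
    const macie2 = new AWS.Macie2();

    exports.handler = async (event) => {
      // Assumed shape of the CloudTrail CreateBucket event forwarded by EventBridge
      const bucketName = event.detail.requestParameters.bucketName;
      const accountId = event.account;

      const job = await macie2.createClassificationJob({
        clientToken: `${accountId}-${bucketName}`,   // idempotency token
        jobType: 'ONE_TIME',                         // or 'SCHEDULED' with a scheduleFrequency
        name: `classify-${bucketName}`,
        s3JobDefinition: {
          bucketDefinitions: [{ accountId, buckets: [bucketName] }],
        },
      }).promise();

      console.log(`Created Macie classification job ${job.jobId} for ${bucketName}`);
    };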

Evaluate the state of S3 buckets

To evaluate the state of S3 buckets across accounts, turn on Macie in the organization master account and delegate administration to a subordinate account used for Macie. This enables consolidation of security management into a centralized security account. It also helps further restrict access for those who may need access to the master billing account. Finally, enable Macie by default on all organization accounts.

Evaluate the state of S3 buckets

Conclusion

BBVA USA worked directly with the Macie product team by leveraging its relationship with the AWS account team and Enterprise Support. This allowed the company to deploy Macie quickly and at scale. Through Macie, the company is able to track any configuration changes that allow a bucket to be public, shared, or replicated with external accounts, and whether encryption policies are disabled. Using Macie, BBVA was able to identify buckets that contained sensitive information and put another control in place to bolster its AWS governance profile.

Building a Self-Service, Secure, & Continually Compliant Environment on AWS

Post Syndicated from Japjot Walia original https://aws.amazon.com/blogs/architecture/building-a-self-service-secure-continually-compliant-environment-on-aws/

Introduction

If you’re an enterprise organization, especially in a highly regulated sector, you understand the struggle to innovate and drive change while maintaining your security and compliance posture. In particular, your banking customers’ expectations and needs are changing, and there is a broad move away from traditional branch and ATM-based services towards digital engagement.

With this shift, customers now expect personalized product offerings and services tailored to their needs. To achieve this, a broad spectrum of analytics and machine learning (ML) capabilities are required. With security and compliance at the top of financial service customers’ agendas, being able to rapidly innovate and stay secure is essential. To achieve exactly that, AWS Professional Services engaged with a major Global systemically important bank (G-SIB) customer to help develop ML capabilities and implement a Defense in Depth (DiD) security strategy. This blog post provides an overview of this solution.

The machine learning solution

The following architecture diagram shows the ML solution we developed for a customer. This architecture is designed to achieve innovation, operational performance, and security performance in line with customer-defined control objectives, as well as meet the regulatory and compliance requirements of supervisory authorities.

Machine learning solution developed for customer

This solution is built and automated using AWS CloudFormation templates with pre-configured security guardrails and abstracted through the service catalog. AWS Service Catalog lets your users quickly deploy approved IT services while ensuring that governance, compliance, and security best practices are enforced during the provisioning of resources.

Further, it leverages Amazon SageMaker, Amazon Simple Storage Service (S3), and Amazon Relational Database Service (RDS) to facilitate the development of advanced ML models. As security is paramount for this workload, data in S3 is encrypted using client-side encryption, and sensitive columns in RDS use column-level encryption. Our customer also codified their security controls via AWS Config rules to achieve continual compliance.

Compute and network isolation

To enable our customer to rapidly explore new ML models while achieving the highest standards of security, separate VPCs were used to isolate infrastructure, with access controlled by security groups. Core to this solution is Amazon SageMaker, a fully managed service that provides the ability to rapidly build, train, and deploy ML models. Amazon SageMaker notebooks are managed Jupyter notebooks that:

  1. Prepare and process data
  2. Write code to train models
  3. Deploy models to SageMaker hosting
  4. Test or validate models

In our solution, notebooks run in an isolated VPC with no egress connectivity other than VPC endpoints, which enable private communication with AWS services. When used in conjunction with VPC endpoint policies, you can control which resources the notebooks can reach through those services. In our solution, this is used to allow the SageMaker notebook to communicate only with resources owned by AWS Organizations through the use of the aws:PrincipalOrgID condition key. AWS Organizations helps provide governance to meet strict compliance regulation, and you can use the aws:PrincipalOrgID condition key in your resource-based policies to easily restrict access to AWS Identity and Access Management (IAM) principals from accounts in your organization.
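To illustrate this control (the endpoint ID, organization ID, and policy scope below are placeholders, not values from the customer’s environment), a VPC endpoint policy restricted with aws:PrincipalOrgID can be applied programmatically like this:

    // Sketch: restrict a VPC interface endpoint to principals in your AWS organization.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2();

    const endpointPolicy = {
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Principal: '*',
        Action: '*',
        Resource: '*',
        Condition: { StringEquals: { 'aws:PrincipalOrgID': 'o-exampleorgid' } }, // placeholder org ID
      }],
    };

    ec2.modifyVpcEndpoint({
      VpcEndpointId: 'vpce-0123456789abcdef0',          // placeholder endpoint ID
      PolicyDocument: JSON.stringify(endpointPolicy),
    }).promise()
      .then(() => console.log('Endpoint policy updated'))
      .catch(console.error);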

Data protection

Amazon S3 is used to store training data, model artifacts, and other data sets. Our solution uses server-side encryption with customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS) to protect data at rest. SSE-KMS uses an envelope encryption strategy with CMKs: data is encrypted with a data key, and that data key is in turn encrypted with the CMK. CMKs are created in KMS and never leave KMS unencrypted. This approach allows fine-grained control over access to the CMK, and all access attempts to the key are logged in AWS CloudTrail. In our solution, the age of the CMK is tracked by AWS Config and the key is rotated regularly. AWS Config enables you to assess, audit, and evaluate the configurations of deployed AWS resources by continuously monitoring and recording AWS resource configurations. This allows you to automate the evaluation of recorded configurations against desired configurations.
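As a minimal sketch of what an SSE-KMS write looks like with the AWS SDK for JavaScript (the bucket, key, file, and CMK ARN are placeholders):

    // Sketch: upload an object encrypted at rest with SSE-KMS using a specific CMK.
    const fs = require('fs');
    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    s3.putObject({
      Bucket: 'example-training-data-bucket',                               // placeholder
      Key: 'datasets/train.csv',
      Body: fs.readFileSync('train.csv'),
      ServerSideEncryption: 'aws:kms',                                      // SSE-KMS
      SSEKMSKeyId: 'arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-KEY-ID', // your CMK
    }).promise()
      .then(() => console.log('Object stored with SSE-KMS'))
      .catch(console.error);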

Amazon S3 Block Public Access is also used at the account level to ensure that bucket policies or access control lists (ACLs) on existing and newly created resources don’t allow public access. Service control policies (SCPs) are used to prevent users from modifying this setting. AWS Config continually monitors S3 and remediates any attempt to make a bucket public.
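For reference, the account-level setting can be applied through the S3 Control API; a minimal sketch with the AWS SDK for JavaScript and a placeholder account ID:

    // Sketch: enforce account-level S3 Block Public Access.
    const AWS = require('aws-sdk');
    const s3control = new AWS.S3Control();

    s3control.putPublicAccessBlock({
      AccountId: '111122223333',                 // placeholder account ID
      PublicAccessBlockConfiguration: {
        BlockPublicAcls: true,
        IgnorePublicAcls: true,
        BlockPublicPolicy: true,
        RestrictPublicBuckets: true,
      },
    }).promise()
      .then(() => console.log('Account-level Block Public Access enabled'))
      .catch(console.error);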

Data in the solution is classified according to its sensitivity, corresponding to the customer’s data classification hierarchy. Classification in the solution is achieved through resource tagging, and tags are used in conjunction with AWS Config to ensure adherence to encryption, data retention, and archival requirements.

Continuous compliance

Our solution adopts a continuous compliance approach, whereby the compliance status of the architecture is continuously evaluated and auto-remediated if a configuration change attempts to violate the compliance posture. To achieve this, AWS Config and config rules are used to confirm that resources are configured in compliance with defined policies. AWS Lambda is used to implement a custom rule set that extends the rules included in AWS Config.
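A custom rule of this kind is a Lambda function that inspects the configuration item in the invoking event and reports a verdict back with PutEvaluations. The skeleton below is a minimal sketch for a change-triggered rule; the compliance check itself (requiring a data classification tag on S3 buckets) is an illustrative placeholder, not the customer’s actual rule set.

    // Minimal skeleton of a custom AWS Config rule (change-triggered).
    const AWS = require('aws-sdk');
    const configService = new AWS.ConfigService();

    exports.handler = async (event) => {
      const invokingEvent = JSON.parse(event.invokingEvent);
      const item = invokingEvent.configurationItem;

      // Placeholder check: S3 buckets must carry a DataClassification tag
      const compliant = item.resourceType !== 'AWS::S3::Bucket' ||
        Boolean(item.tags && item.tags.DataClassification);

      await configService.putEvaluations({
        ResultToken: event.resultToken,
        Evaluations: [{
          ComplianceResourceType: item.resourceType,
          ComplianceResourceId: item.resourceId,
          ComplianceType: compliant ? 'COMPLIANT' : 'NON_COMPLIANT',
          OrderingTimestamp: item.configurationItemCaptureTime,
        }],
      }).promise();
    };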

Data exfiltration prevention

In our solution, VPC Flow Logs are enabled on all accounts to record information about the IP traffic going to and from network interfaces in each VPC. This allows us to watch for abnormal and unexpected outbound connection requests, which could be an indication of attempts to exfiltrate data. Amazon GuardDuty analyzes VPC Flow Logs, AWS CloudTrail event logs, and DNS logs to identify unexpected and potentially malicious activity within the AWS environment. For example, GuardDuty can detect compromised Amazon Elastic Compute Cloud (EC2) instances communicating with known command-and-control servers.

Conclusion

Financial services customers are using AWS to develop machine learning and analytics solutions to solve key business challenges while ensuring security and compliance needs. This post outlined how Amazon SageMaker, along with multiple security services (AWS Config, GuardDuty, KMS), enables building a self-service, secure, and continually compliant data science environment on AWS for a financial service use case.

 

Liberty IT Adopts Serverless Best Practices Using AWS Cloud Development Kit

Post Syndicated from Andrew Robinson original https://aws.amazon.com/blogs/architecture/liberty-it-adopts-serverless-best-practices-using-aws-cdk/

This post was co-written with Matthew Coulter, Lead Technical Architect of Global Risk at Liberty Mutual

Liberty IT Solutions, part of Liberty Mutual Group, has been using AWS CloudFormation to deploy serverless applications on AWS for the last four years. These deployments typically involve defining, integrating, and monitoring services such as AWS Lambda, Amazon API Gateway, and Amazon DynamoDB.

In this post, we will explore how Liberty took the AWS Cloud Development Kit (CDK), the AWS Well-Architected Framework, and the Serverless Application Lens for the AWS Well-Architected Framework and built a set of CDK patterns to help developers launch serverless resources.

Since each team was responsible for building and maintaining these applications, the CloudFormation templates quickly became a challenge to manage as they grew larger. It was also becoming increasingly time consuming to extract the relevant parts of a template for reuse. Even when teams were building similar applications with similar requirements, everything was built from scratch for that application.

Before we dive in, let’s cover some of the terminology in this blog post and provide you with some additional reading:

  • AWS CDK was launched as a software development framework to help you define cloud infrastructure as code and provision it through AWS CloudFormation.
  • AWS Well-Architected was developed to help cloud architects build secure, high performing, resilient, and efficient infrastructures for their applications.
  • Serverless Application Lens focuses on how to design, deploy, and architect your serverless application workloads on the AWS Cloud.

Background

The model Liberty used to develop these applications was based on small components that led to repeatable patterns, which made them ideal for sharing architectures across teams.

For example, if different teams built a web application using API Gateway, AWS Lambda, and Amazon DynamoDB, they could build a reusable pattern that included best practices after they battle tested it. This known infrastructure pattern could then be easily shared across multiple teams, ultimately increasing developer productivity.

Due to compliance requirements and controls in place, Liberty built these applications using CloudFormation with specific configurations. Teams had to frequently reverse engineer their CloudFormation templates from previous proofs of concept (PoCs), as there were no high-level abstractions available. If similar examples existed elsewhere, teams had to engineer them back into CloudFormation with Liberty’s controls in place in order to use them in production.

That’s where CDK comes in

Matt Coulter at Liberty (and co-author of this blog post), said, “After researching CDK, we thought it would be a good fit for producing the patterns we wanted to, and providing an open source framework for others to contribute to the patterns, and to build their own.”

In an initial PoC, Liberty was able to take more than 2,000 lines of YAML CloudFormation down to just 14 lines of code, which came packaged with its own unit tests. This could then be published to NPM or GitHub to share with the rest of the team and with no loss of functionality over the original CloudFormation template.
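To give a sense of the kind of abstraction involved, the sketch below shows a small API Gateway, Lambda, and DynamoDB stack in CDK (JavaScript, CDK v2). It is an illustration only, not Liberty’s published pattern; construct names and the handler asset path are placeholders.

    // Illustrative CDK stack: API Gateway + Lambda + DynamoDB with least-privilege data access.
    const cdk = require('aws-cdk-lib');
    const { aws_apigateway: apigw, aws_dynamodb: dynamodb, aws_lambda: lambda } = cdk;

    class ApiLambdaDynamoStack extends cdk.Stack {
      constructor(scope, id, props) {
        super(scope, id, props);

        const table = new dynamodb.Table(this, 'Table', {
          partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
          billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
        });

        const handler = new lambda.Function(this, 'Handler', {
          runtime: lambda.Runtime.NODEJS_18_X,
          handler: 'index.handler',
          code: lambda.Code.fromAsset('lambda'),          // placeholder asset path
          environment: { TABLE_NAME: table.tableName },
        });

        table.grantReadWriteData(handler);                // scoped IAM permissions
        new apigw.LambdaRestApi(this, 'Api', { handler });
      }
    }

    const app = new cdk.App();
    new ApiLambdaDynamoStack(app, 'ApiLambdaDynamoStack');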

Discovering and implementing AWS Serverless best practices

Liberty wanted to start the patterns using industry-wide best practices before applying its own specific best practices for compliance requirements.

The ideal starting point for this was the AWS Well-Architected framework, specifically the Serverless Application Lens. The Serverless Lens is more focused and specific, and it provides guidance that speaks to developers more closely than the broader framework. Using Serverless Lens saved Liberty significant time investments on discovering and implementing those best practices across the patterns they had built, and it allowed for faster uptake.

Let’s take a look at one of the patterns in more detail.

The X-Ray tracer pattern

This pattern introduces the concept of distributed tracing: as workloads scale, it can become more challenging to find anomalies and figure out where those anomalies occur within your application. This is not defined by the components used, but by how the information is sent back to AWS X-Ray. The pattern includes some common use cases with different data flows through AWS Serverless services.

Common use cases with different data flows through AWS Serverless services

After this pattern has been deployed, a service map will be generated in X-Ray showing the interconnectivity between the different resources deployed. (Your map may look slightly different than this one.)

Service map in X-Ray showing interconnectivity between different resources deployed

This pattern will provide you with an API Gateway endpoint you can use to trigger the flow and see the results in near real time in the X-Ray service map. The pattern also includes an SSL Certificate error in a Lambda function that connects to an external HTTP endpoint. When this happens, you can see the error in the X-Ray service map. You can then go into the trace details, which shows the specific error:

Trace details showing the specific error

Want to know more?

Liberty has deployed more than 1,000 applications into non-production environments using CDK Patterns, and more than 100 applications into production. This saved its developers time in deploying best-of-breed serverless applications for the company and has encouraged sharing across the entire organization.

All of these patterns are now available at CDK Patterns under the MIT License. There are currently 17 serverless patterns available in both Typescript and Python, and you can find CDK patterns by any of the five Well-Architected Pillars.

You can also reach out to Liberty IT about this project on Twitter as well as contribute either directly or on GitHub.

 

Field Notes: Integrating HTTP APIs with AWS Cloud Map and Amazon ECS Services

Post Syndicated from Greg Share original https://aws.amazon.com/blogs/architecture/field-notes-integrating-http-apis-with-aws-cloud-map-and-amazon-ecs-services/

This post was co-written with Preeti Pragya Jha, a senior software developer at Tata Consultancy Services (TCS).

Companies are continually looking for ways to optimize cost. This is true of RS Components, a global trading brand of Electrocomponents plc, a global omni-channel provider of industrial and electronic products and solutions. RS Components set out to build an efficient container-based platform on AWS to underpin their RS Industrial IoT Application. This formed part of their Connected Factories program to develop a customer solution for industrial data.

The recent announcements of HTTP APIs for API Gateway, and of private integrations with AWS Elastic Load Balancing (ELB) and AWS Cloud Map soon after, demonstrated potential security governance and cost savings benefits for RS Components.

The team at RS had initially built their container solution on AWS Fargate for Amazon Elastic Container Service (ECS) with Amazon API Gateway. The HTTP endpoints fronted Application Load Balancers and target groups using the AWS Fargate launch type.

HTTP API private integrations with ELB and Cloud Map meant that the team at RS could replace the Application Load Balancers with an AWS Cloud Map private integration for Amazon API Gateway HTTP endpoints. This integration also meant that RS Components could leverage API Gateway as a single entry point for their APIs from an authentication perspective.

In this post, we walk you through the steps required to provision an Amazon API Gateway with AWS Cloud Map to integrate Amazon Elastic Container Service for HTTP API endpoints.

The Technology Components

Amazon API Gateway helps developers easily create, publish, and maintain secure APIs at any scale.  API Gateway handles all the heavy lifting of managing thousands of API calls.  There are no minimum fees and you only pay for the API calls you receive.

AWS Cloud Map provides cloud resource discovery services, keeping track of application components in your microservices architecture. Your applications simply query AWS Cloud Map using the AWS SDK, API or even DNS to discover the locations of dependencies.
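As a minimal illustration (the namespace, service, and attribute usage are placeholders), an application can resolve registered instances with the DiscoverInstances API:

    // Sketch: discover healthy instances registered in a Cloud Map service.
    const AWS = require('aws-sdk');
    const servicediscovery = new AWS.ServiceDiscovery();

    servicediscovery.discoverInstances({
      NamespaceName: 'example.local',   // placeholder namespace
      ServiceName: 'flask-app',         // placeholder service
      HealthStatus: 'HEALTHY',
    }).promise()
      .then((data) => {
        // Registered attributes typically include AWS_INSTANCE_IPV4 and AWS_INSTANCE_PORT
        data.Instances.forEach((instance) => console.log(instance.Attributes));
      })
      .catch(console.error);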

Amazon Elastic Container Service (Amazon ECS) provides orchestration for your Docker containers, and AWS Fargate provides a serverless option to consume container compute without the need to manage actual servers.

The Implementation

This post assumes the AWS London Region, but you can use any Region that includes the listed services, as per the AWS Region Table.

Pre-requisite Steps

We perform two tasks as pre-requisites for the Cloud Map implementation:

  1. Create an Amazon Elastic Container Registry repository with an example Docker image
  2. Create the base components upon which our Cloud Map integration will be built, using an AWS CloudFormation template

Step 1: Set up Amazon ECR and create an Image

a.         Download the following example Flask container configuration file and unzip: https://cloudmap-shared.s3.eu-central-1.amazonaws.com/ExampleFlask.zip

b.         Navigate to the unzipped Example Flask directory from your local console and proceed to step c.

c.         Create the Docker image locally and test locally running the following commands:

docker build -t flask-app-fargate-fargate .

docker run -p 80:80 flask-app-fargate-fargate

curl localhost:80

Serving flask app

If you navigate to http://0.0.0.0:80 you should see:

d.      Now we need to create an ECR repository and push our image to it by running the following AWS CLI commands. Remember to replace the variables in brackets:

aws ecr create-repository --repository-name flask-app-fargate-fargate --region <region>

$(aws ecr get-login --region <region> --no-include-email)

This should log you in to the Elastic Container Registry for the relevant region.

docker tag flask-app-fargate-fargate:latest <account number>.dkr.ecr.<region>.amazonaws.com/flask-app-fargate-fargate:latest

docker push <account number>.dkr.ecr.<region>.amazonaws.com/flask-app-fargate-fargate:latest

e.      In the AWS Management Console open up Amazon Elastic Container Registry (ECR)

  •  Check you are in the right region
  •  Open up the repository and make a note of the image URI. You will need this for the next step.

Step 2: Build from AWS CloudFormation Base Template

A.      Open up the AWS CloudFormation Console and create a stack using the following template: https://cloudmap-shared.s3.eu-central-1.amazonaws.com/MasterTemplate3.yaml

B.      You have the option to modify the parameters but one parameter you need to include is the Image parameter.  Paste the Image URI previously noted: <account number>.dkr.ecr.<region>.amazonaws.com/flask-app-fargate-fargate:latest

The following components will be created:

Three Amazon Virtual Private Cloud (VPC) interface endpoints configured against your VPC and subnets:

1.       Fargate cluster service access to Amazon CloudWatch for logging – com.amazonaws.<region>.logs

2.       Fargate cluster service access to the Amazon Elastic Container Registry (ECR) – two endpoints:

                      a.      com.amazonaws.<region>.ecr.dkr

                      b.      com.amazonaws.<region>.ecr.api

One Amazon VPC gateway endpoint configured against your VPC and the relevant route table, with your subnets associated, for:

                      Fargate cluster access to Amazon S3, where ECR container images are hosted – com.amazonaws.<region>.s3

Fargate service and Cloud Map namespace

The CloudFormation build should complete in 5-10 minutes. Next we will walk you through the creation of the ECS Fargate service to be referenced by the Amazon API Gateway HTTP Endpoint.

1. Open Amazon ECS in the Console

a. Access your cluster

b. Open the Services tab

c. Click the Create button.

2. Create a service referencing your ECS task definition and revision (your ECS cluster)

a. Provide a service name

b. Provide the number of tasks (in this instance 1)

c. Leave Deployments as default

d. Choose Next Step:

Launch type Fargate

3.       Choose the VPC, Subnets and Security Group previously created and set Auto-assign public IP to Disabled:

 

4.       Leave Load Balancer type as None and scroll down to Service Discovery:

a.       Check the enable service discovery integration box

b.       Create new private namespace

c.       Provide a Namespace name

d.       Provide a Service discovery name (should be auto populated).

e.       For DNS Records and Service Discovery – DNS Record Type choose SRV and port 80, choose Next Step:

DNS records for service discovery

5.       Click Next Step again, review the configuration and Create.

You have now created an Amazon ECS Fargate service in your Amazon ECS Fargate cluster referencing your previously created task definition. You have also created an AWS Cloud Map namespace as part of the service setup. Additional evidence of the namespace is the creation of an Amazon Route 53 private hosted zone for the namespace.

The process should complete in 5-10 minutes.  Review the Tasks tab for your cluster and you should see a task in a RUNNING state.

HTTP API and AWS Cloud Map Integrations

API Gateway VPC Links need to be created to enable AWS API Gateway to access the VPC in which the Amazon ECS environment is deployed.

1. Open the Amazon API Gateway console, click VPC Links in the left-hand menu, and choose Create

2. Choose VPC link for HTTP APIs

a. Give the VPC Link a Name

b. Choose the VPC, Subnets and Security Group previously defined

c. Choose Create.

Create a VPC link

Now that we have an access mechanism for API to interface with the ECS environment in the VPC, let’s create the API itself.

3.       Click APIs in the left pane, choose to build an HTTP API, give it a name, and choose Next:

Create an API

4. Choose Next on the Configure Routes step – we will configure a route in a subsequent step.

5. On the Define Stages step, leave the Stage Name as $default – in this post I have turned off Auto-deploy. This is not mandatory; if it is enabled, the API is ready to serve content immediately using the $default stage, and any changes you make to the API are reflected immediately.

6. Choose Next.

7. Review the configuration and choose Create.

We have now created a basic unconfigured HTTP API. Next, we need to configure a route. In this case we are creating a single ANY route; in reality you may have many different routes in your API.

8. From the landing page of the HTTP API you have just created, choose Routes from the left pane.

9. Choose Create, leave the route as ANY on the Create a Route page

10. Choose Create.

We now have a route configured. The next step is adding an integration with the AWS Cloud Map namespace and service created when we defined the Amazon ECS service previously.

11.        From the landing page of the HTTP API you have just created, click Integrations in the left pane.

12.        On the Integrations page, click on Manage Integrations and then click Create

13.        On the Create an Integration page, attach this integration to the ANY / route previously created.

a.          Integrate with a Private Resource

b.          Choose Cloud Map as the integration target

c.          Choose your Namespace and Service previously created

d.         Choose the VPCLink previously created

e.         Choose Create.

Attach this integration to a route
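If you prefer to script this step, the same private integration can be created with the AWS SDK. The sketch below uses placeholder values for the API ID, VPC link ID, and the Cloud Map service ARN created earlier.

    // Sketch: create the HTTP API private integration targeting the Cloud Map service.
    const AWS = require('aws-sdk');
    const apigatewayv2 = new AWS.ApiGatewayV2();

    apigatewayv2.createIntegration({
      ApiId: 'a1b2c3d4e5',                  // placeholder HTTP API ID
      IntegrationType: 'HTTP_PROXY',
      IntegrationMethod: 'ANY',
      ConnectionType: 'VPC_LINK',
      ConnectionId: 'vpclink-0123456789',   // placeholder VPC link ID
      IntegrationUri: 'arn:aws:servicediscovery:<region>:<account>:service/srv-example',
      PayloadFormatVersion: '1.0',
    }).promise()
      .then((integration) => console.log('Created integration', integration.IntegrationId))
      .catch(console.error);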

Lastly, we need to deploy the API we have just configured and test.

14.       Click Deploy on the upper right of the page.

15.       Choose the $default stage and choose Deploy to Stage.

16.       Select your HTTP API in the left-hand pane and test it by clicking the Invoke URL. If you used the container code snippet provided earlier, you should see the following:

ECS Fargate Cluster

Pricing

Initial estimates for RS Components’ configuration, based on a single-instance Application Load Balancer environment, came to approximately $25, while an AWS Cloud Map equivalent would be under $5 per month.

The full extent of the cost savings from this configuration will become apparent as customers and load ramp up on the platform over time, but comparisons can be made by referring to the AWS Cloud Map pricing and Elastic Load Balancing pricing pages.

RS Components preferred API Gateway to be a single point of entry from an authentication perspective – this configuration has enabled that and, based on initial analysis, could also potentially optimize cost.

If you use this guide to modify existing HTTP APIs to integrate with Cloud Map, you can leverage your existing API configuration, with minimal disruption.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

 

Author picture

Preeti Pragya Jha is currently working with RS Components Ltd, part of ElectroComponents plc to design, develop and deploy their Industrial IoT Applications. She also loves to spend time with her family (husband and her little Angel) and to write in her personal blog (https://themirroraddiction.com) in her spare time.

Low-latency computing with AWS Local Zones – Part 1

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/low-latency-computing-with-aws-local-zones-part-1/

This post was contributed by: Pranav Chachra, Rob Chen, Alan Goodman

AWS launched a Local Zone in Los Angeles (LA), California, in late 2019 at re:Invent. Since the launch, we have seen a lot of interest from you, and have worked to bring additional features and services based on your feedback.

In this blog series, we share best practices and recommendations for hosting your applications in Local Zones based on what has worked for customers. Part one focuses on sharing more about Local Zones and how customers are already using the LA Local Zone. In this blog, we also address key questions asked by customers since the launch of the LA Local Zone. In the upcoming parts of this blog series, we do a deep dive into specific use cases for which our customers are using the LA Local Zone. We also list best practices to host your applications in Local Zones and plan to provide related architectural considerations.

AWS Local Zones introduction

local zone base picture

Today, there are a significant number of applications that can run in the AWS Cloud. Most applications work well in our Regions. However, for workloads that require low-latency or local data processing, you need us to bring AWS infrastructure closer to you. Without local AWS presence, you have to procure, operate, and maintain IT infrastructure in your own data center or colocation facility for such workloads. And, you end up building and running such application components with a different set of APIs and tools than the other parts of your applications running in the AWS Cloud. This results in a lot of extra effort and expenses.

And that’s why, based on your feedback, we launched AWS Local Zones, a new type of AWS infrastructure deployment that places compute, storage, and other select services closer to large cities. We designed Local Zones as a powerful construct that extends our existing Regions into new locations. This gives you the ability to run applications on AWS that require single-digit millisecond latencies to your end-users or on-premises installations in the local area. Just like Regions, Local Zones are fully managed and supported by AWS, giving you the elasticity, scalability, and security benefits of running on the AWS. The first Local Zone is generally available in LA, and is an extension of the US West (Oregon) Region (the parent Region).

Use cases

Like Regions, customers leverage the LA Local Zone for many different use cases. The top customer use cases include:

  1. Media and Entertainment (M&E) content creation: M&E customers are migrating expensive on-premises workstations to the LA Local Zone. They do this to accelerate content creation by getting rid of capacity constraints while improving security and operational efficiency. For artist workstations, latency is the key to having a jitter-free experience on a remote instance. These customers typically require less than five milliseconds of latency from their offices to virtual instances. With Direct Connect, most of these customers are able to achieve as low as 1-2 millisecond latency from their animation hubs in LA to the Local Zone, and run latency-sensitive workloads, such as live production and video editing.
  2. Enterprise migration with hybrid architecture: Enterprises have workloads running in their existing on-premises data centers in the LA metro area. These customers use the Local Zone to migrate complex legacy on-premises applications to AWS without expensive revamp of their architecture. Customers have told us that it can be daunting to migrate a portfolio of interdependent applications to the cloud. Now with a Direct Connect to the LA Local Zone, customers can establish a hybrid environment that provides ultra-low latency communication between applications running in the LA Local Zone and on-premises installations. In turn, this enables customers to migrate applications incrementally, simplifying migrations drastically and enabling on-going hybrid deployments in the LA area.
  3. Real-time multiplayer gaming: This category includes gaming companies that deploy game servers for multiplayer sessions all over the world to be closer to gamers. A latency of 20 milliseconds or less is considered ideal for good gameplay experience. Until now, these customers were using on-premises installations in the LA area to supplement AWS presence. However, now, customers are deploying latency-sensitive game servers in the LA Local Zone to run real-time and interactive multiplayer game sessions, enabling them to provide end users in the Southern California area with a great experience.

In the next parts of the blog series, we further dive into these use cases, cover respective best practices, and review architectural guidance for you.

Services and features

AWS launched the LA Local Zone with support for seven Nitro based Amazon EC2 instance types (T3, C5, M5, R5, R5d, I3en, and G4), two EBS volume types (io1 and gp2), Amazon FSx for Windows File Server, Amazon FSx for Lustre, Application Load Balancer, Amazon VPC and Amazon Direct Connect. Since launch, AWS added support for Amazon RDS, Amazon EMR, AWS Shield, and Amazon EC2 Dedicated Hosts, and are adding more services based on your feedback.

Other parts of AWS, like AWS CloudFormation templates, CloudWatch, IAM resources, and Organizations, will continue to work as expected, providing you a consistent experience. You can also leverage the full suite of services like Amazon S3 in the parent Region, US West (Oregon), over AWS’s private network backbone with ~20–30 milliseconds latency.

Getting started and using the Local Zone

Now that we reviewed some common use cases and services of Local Zones, we want to get you started using the Local Zone. First, enable the LA Local Zone from the new “Zone settings” section of the EC2 console, as shown in the following image:

console Zone Settings

Once enabled, a Local Zone looks and behaves similarly to an Availability Zone (AZ). You can access Local Zones through the parent Region’s console and API endpoints. The following image shows that the LA Local Zone is visible as us-west-2-lax-1a along with other AZs in the EC2 console:

service health of zone

Once the LA Local Zone is enabled, you can extend your existing VPC from the parent Region to a Local Zone by creating a new VPC subnet assigned to the LA Local Zone:

creating subnet
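If you prefer to script this step, you can create the same subnet with the AWS SDK by passing the Local Zone name as the Availability Zone; the VPC ID and CIDR block below are placeholders.

    // Sketch: create a subnet in the LA Local Zone (us-west-2-lax-1a).
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-west-2' });

    ec2.createSubnet({
      VpcId: 'vpc-0123456789abcdef0',       // placeholder VPC in the parent Region
      CidrBlock: '10.0.128.0/20',           // placeholder CIDR block
      AvailabilityZone: 'us-west-2-lax-1a', // the LA Local Zone
    }).promise()
      .then((data) => console.log('Created subnet', data.Subnet.SubnetId))
      .catch(console.error);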

Once a VPC subnet is established for the LA Local Zone, simply select the subnet while creating local resources. For example, you can launch an EC2 instance in the LA Local Zone by selecting the local subnet as the following:

configuring instance details

Local resources are then ready within seconds. You can manage these resources in the LA Local Zone just like resources in AZs:

shows the local zone created

Just like Regions, you can set up an Internet Gateway to access your local resources in the LA Local Zone over the internet. Or you can use AWS Direct Connect to route your traffic over a private network connection from any Direct Connect location. To get the best latency performance, you should use one of the Direct Connect locations available in LA:

  • T5 at El Segundo, Los Angeles, CA (recommended for lowest latency to the LA Local Zone)
  • CoreSite LA1, Los Angeles, CA
  • Equinix LA3, El Segundo, CA

Pricing

From a pricing perspective, instances and other local AWS resources running in a Local Zone have their own prices that might differ from the parent Region. Billing reports include a prefix, “LAX1,” or location name, “US West (Los Angeles),” that is specific to a group of Local Zones in LA. EC2 instances in Local Zones are available in On-Demand and Spot form, and you can also purchase Savings Plans. For pricing information, visit the pricing section of the respective services and filter by choosing the Local Zone location “US West (Los Angeles)” in the dropdown.

Data transfer charges in AWS Local Zones are the same as in the AZs in the parent Region today. For example, data transferred between Amazon EC2 instances in the LA Local Zone and Amazon S3 in the parent Region, US West (Oregon), is free. Similarly, data transferred IN and OUT from Amazon EC2 in the Local Zone to the Amazon EC2 in the Parent Region is charged at $0.01/GB in each direction. You can learn more about data transfer prices “in” and “out” of Amazon EC2 here.

Thinking ahead

Later this year, AWS plans to open a second Local Zone in LA (us-west-2-lax-1b). The two Local Zones in LA will be interconnected with high-bandwidth, low-latency AWS networking allowing you to architect your low-latency applications for high availability and fault tolerance. Based on your feedback, we are also working on adding Local Zones in other locations along with the availability of the additional services, including Amazon ECS, Amazon Elastic Kubernetes Service, Amazon ElastiCache, Amazon ES, and Amazon Managed Streaming for Apache Kafka.

Conclusion

Now that we have provided the AWS Local Zones you requested, we are really looking forward to seeing what you can do with them. AWS would love to get your advice on locations, additional local services and features, or other interesting use cases, so feel free to leave us your comments!

Implementing geohashing at scale in serverless web applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/

Many web and mobile applications use geospatial data, often used with map overlays. This results in dataset queries based upon proximity, for questions such as “How far is the nearest business?” or “How many users are nearby?” Applications with significant traffic need an efficient way to handle geolocation queries. This blog post explores a simple geohashing solution for serverless applications, and how this can work at scale.

Geohashing is a popular public domain geocode system that converts geographic information into an alphanumeric hash. A geohash is used to identify a rectangular area around a fixed point. The length of the hash determines the precision of the area identified. This allows you to use a hierarchical search where the length of the geohash corresponds to the size of a search area.

This blog post references the Ask Around Me example application, which helps users ask and answer questions in their local geographic area. This post explores a solution suited for the expected volumes in this web application. See part 1 of the blog application series to learn more.

Choosing geohashing precision for your application

One of the most important considerations in using geohashing is understanding the expected distribution of the locations and the size of the search area. Selecting the correct geohash length is important for the efficiency of the search. There is a balance between the number of cells searched and the number of items within each cell.

A geohash corresponds to a grid cell but a typical search corresponds to multiple overlapping cells. In the following diagram, the search radius overlaps nine distinct geohash cells. The query returns all location pins within the cells but only the green pins are relevant to the radial search. The gray pins are outside of the overlapping geohash cells so are immediately discarded in the query:

Discarded pins from a query

If the geohash is too precise, meaning the cell is small, the search radius may include many more cells:

Search radius with too many cells

If the geohash is too coarse, meaning the cell is too large, the query may return too many results for a single cell for the search to be efficient. You may also find a large number of results in those cells are not within the search radius:

Too many results in a single cell

The geohashing algorithm creates a hash that makes it easy to find the neighboring eight cells of any target cell. For optimal performance, you should ideally select a hashing resolution where most queries are resolved by searching only the target cell and its immediate neighbors.

In the canonical implementation of geohash, there are some areas on the globe where physically neighboring cells are not logically close. This can cause errors in these edge cases where there are “seams”. The S2 geometry library solves this problem by using a spherical reference, meaning you can use this approach anywhere in the world. The library has been ported to TypeScript and is available as an npm package called nodes2ts.

Using Amazon DynamoDB for geohashing queries

Amazon DynamoDB is a serverless NoSQL database that offers single-digit millisecond performance at any scale. For an application with moderate load, you can set read and write capacity to allocate a dedicated amount of throughput, or you can set the provisioning to On-Demand. DynamoDB is well suited to key-based queries needing fast, consistent performance.

For web application developers using Node.js or JavaScript, there is an npm package called dynamodb-geo that ports the Java Geo Library for DynamoDB. Both packages are based on the S2 geometry library. This provides a simple interface to use DynamoDB for geospatial data. The library is a wrapper for DynamoDB and maintains an underlying table. After configuring the table and the library, you interact with the data using the library’s API.
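A minimal setup sketch is shown below, assuming a table named Questions and a geohash key length of 5; tune the key length to the search radius trade-offs discussed earlier.

    // Sketch: configure dynamodb-geo against an existing table.
    const AWS = require('aws-sdk');
    const ddbGeo = require('dynamodb-geo');

    const ddb = new AWS.DynamoDB();
    const config = new ddbGeo.GeoDataManagerConfiguration(ddb, 'Questions'); // placeholder table name
    config.hashKeyLength = 5; // controls the precision of the geohash partition key

    const myGeoTableManager = new ddbGeo.GeoDataManager(config);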

For example, to add a new geospatial item:

    // Add a new location to the database
const result = await myGeoTableManager.putPoint({
        RangeKeyValue: { S: 'location-id-1234' }, // unique ID
        GeoPoint: { 
            latitude: 40.6892534,
            longitude: -74.0466891
        },
        PutItemInput: { 
            Item: { 
                country: { S: 'USA' },
                state: { S: 'New York' },
                pointOfInterest: { S: 'Statue of Liberty' }
            }
        }
    }).promise()

The library also provides methods for updating and deleting data points, and supports DynamoDB batch operations. Querying the data via the API can only be done via geo-point requests. You can retrieve a dataset of items based around a rectangular or radial area:

    // Querying 10km (~6.2 miles) around Boston, Massachusetts.

    const result = await myGeoTableManager.queryRadius({
        RadiusInMeter: 10000,
        CenterPoint: {
            latitude: 42.3145186,
            longitude: -71.1103666
        }
    }).promise()

    // Outputs results as an array of DynamoDB.AttributeMaps
    console.log(result)

Moving locations and moving consumers

In Ask Around Me, questions have a fixed latitude and longitude – once a question is asked, the location never changes. The dynamodb-geo library uses the hashKey as a primary key in the underlying DynamoDB table. To update the primary key of a DynamoDB item, you must delete and recreate the entire item.

Many geolocation implementations use static data, such as a list of retail store locations. But if you need to track moving locations, such as the location of mobile users, this is not the best approach. As a result, this library works well for static data (or data where the location rarely changes) but is not suitable where locations change frequently.

In the Ask Around Me app, users receive alerts when new questions appear around their current location. Real-time messaging is implemented using AWS IoT Core, where publish-subscribe topics connect the frontend and backend.

The application retrieves the current latitude and longitude via the browser and uses the S2 geometry library to convert this to a geohash key. It then subscribes to this geohash topic. On the backend, when new questions are saved, these are published to a topic using the geohash key. As a result, users in the same geohash area receive notifications when new questions are asked nearby.

This broadcast pattern allows the application to fan out a single new question in the DynamoDB table to thousands of live users in the frontend application. As users move, the frontend application can detect if they have moved from one geohash cell to another. If so, it unsubscribes from the outdated geohash identifier and subscribes to the new one. This ensures that moving consumers are always listening to questions near their current location. For more information on the broadcast pattern, see the AWS whitepaper, Designing MQTT Topics for AWS IoT Core.
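The sketch below illustrates this subscribe/unsubscribe pattern with the aws-iot-device-sdk for Node.js. The endpoint, credential configuration, and topic naming are placeholders; the actual Ask Around Me frontend connects over MQTT with its own configuration.

    // Sketch: keep the MQTT subscription aligned with the user's current geohash cell.
    const awsIot = require('aws-iot-device-sdk');

    const device = awsIot.device({
      host: 'example-ats.iot.us-east-1.amazonaws.com', // placeholder IoT endpoint
      protocol: 'wss',
      // credential configuration omitted for brevity
    });

    let currentGeohash = null;

    // Call this whenever the browser reports a new position for the user
    function onLocationChange(newGeohash) {
      if (newGeohash === currentGeohash) return;
      if (currentGeohash) device.unsubscribe(`questions/${currentGeohash}`);
      device.subscribe(`questions/${newGeohash}`);
      currentGeohash = newGeohash;
    }

    device.on('message', (topic, payload) => {
      console.log('New question nearby:', JSON.parse(payload.toString()));
    });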

Sorting, aging, and expiring location data

For an application with many writes to the underlying geolocation table, the user may expect to see newer data first. Also, you may need a strategy for removing stale data from the table. Even with a highly performant database like DynamoDB, you must ensure the first page of results returns the most relevant items.

The dynamodb-geo library uses the geohash identifier as a primary key, but allows the developer to choose an identifying range key. You need this key to uniquely identify items that have the same geohash. However, you can also use this key for sorting and paging the data that’s returned by the library’s search APIs.

The Ask Around Me app uses a concatenated userId-timestamp pattern as a range key, for example jbeswick-1589202456. This helps in implementing two data access patterns:

  • Finding by user: using the begins_with operator, you can identify questions asked by a specific user. For example, begins_with(‘jbeswick’) returns all the questions for this user.
  • Sorting results by time added: as query results are always sorted by the sort key value, the Unix timestamp ensures that these are returned from oldest to newest. You can reverse this order to return newest first by setting ScanIndexForward to false in the DynamoDB query (see the query sketch after this list).
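Here is the query sketch referenced above, run directly against the underlying table. The attribute names (hashKey, rangeKey) are assumed to be dynamodb-geo’s defaults, and the values are placeholders.

    // Sketch: one user's questions within a geohash cell, newest first.
    const AWS = require('aws-sdk');
    const documentClient = new AWS.DynamoDB.DocumentClient();

    async function getQuestionsByUser(geohashCell, userId) {
      const result = await documentClient.query({
        TableName: 'Questions',
        KeyConditionExpression: 'hashKey = :h AND begins_with(rangeKey, :user)',
        ExpressionAttributeValues: {
          ':h': geohashCell,     // placeholder truncated geohash for the cell
          ':user': userId,       // e.g. 'jbeswick'
        },
        ScanIndexForward: false, // newest first, since the range key ends with a timestamp
      }).promise();
      return result.Items;
    }

    getQuestionsByUser(6093522, 'jbeswick').then(console.log).catch(console.error);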

In this application, you might decide that questions more than a year old should be expired from the table. You can have DynamoDB expire items automatically using the time to live (TTL) feature.

To use this, create a custom numeric attribute in the table that contains the expiration time of the item, set as a Unix timestamp. For example, an item expiring at midnight on January 1, 2023 (UTC) uses the timestamp 1672531200.

When you enable TTL on a DynamoDB table, you specify which attribute contains the expiration value. A background job checks the TTL values against the current time. For any items found where the TTL timestamp is older than the current time, it expires those items within a 48-hour time window.
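Enabling TTL is a one-time call per table. A minimal sketch follows, assuming an attribute named expireAt; items then carry a numeric Unix timestamp in that attribute, for example by adding expireAt to the PutItemInput passed to putPoint.

    // Sketch: tell DynamoDB which attribute holds each item's expiration timestamp.
    const AWS = require('aws-sdk');
    const dynamodb = new AWS.DynamoDB();

    dynamodb.updateTimeToLive({
      TableName: 'Questions', // placeholder table name
      TimeToLiveSpecification: { AttributeName: 'expireAt', Enabled: true },
    }).promise()
      .then(() => console.log('TTL enabled on expireAt'))
      .catch(console.error);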

Conclusion

This blog post explores how you can solve geolocation queries using geohashing. I discuss how you should decide on the resolution of a geohash for your specific workload.

I explain why DynamoDB is a good fit for many geohashing applications. I cover how you can use the dynamodb-geo library to easily implement location queries in your web applications. Using AWS IoT Core, you can also use MQTT topics to fan out updates to moving subscribers.

Finally, I show how to use the DynamoDB table’s range key to help with sorting data by age and supporting additional access patterns. For applications with many writes, you can also automatically expire items using DynamoDB’s TTL feature.

To learn more about how the Ask Around Me application implements geolocation queries, see the blog series.

Architecture Monthly Magazine: Media & Entertainment

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-media-entertainment/

AWS Architecture Monthly - June 2020 - Media & Entertainment

AWS makes it fast and easy for the Media and Entertainment (M&E) industry to produce, process, and deliver broadcast and over-the-top video. These pay-as-you-go services and appliance products offer the video infrastructure you need to deliver great viewing experiences to any screen.

For June’s issue of AWS Architecture Monthly, WW Tech Leader Konstantin Wilms talks about industry and architectural pattern trends in the cloud-based M&E space, and explains why this industry is unique in that almost any business problem can be solved using many AWS services. He also offers advice on what prospective customers should be thinking about and considering when moving to AWS.

In June’s Media & Entertainment issue

  • Ask an Expert: Konstantin Wilms, WW Tech Leader, AWS M&E
  • Blog: Deploying Your Favorite Post Production Applications on AWS Virtual Desktop Infrastructure
  • Case Study: L Benfica: Launching a Full-Featured VOD Platform in Just Four Weeks
  • Quick Start: Cloud Video Editing on AWS
  • Tutorial: Reimagine Your Studio
  • Whitepaper: Building Media & Entertainment Predictive Analytics Solutions on AWS

How to access the magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us star rating and comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

Field Notes: Using Agile/EDF to Work Remotely with Distributed Teams

Post Syndicated from Steve Shen original https://aws.amazon.com/blogs/architecture/field-notes-using-agile-edf-to-work-remotely/

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

In this case study, we share our experience delivering results for remote customer engagements during the COVID-19 outbreak in China in Q1 2020.

We reduced two months of expected COVID-19 delay to two weeks’ delay by deploying a distributed remote Agile/AWS Engagement Delivery Framework (AWS EDF). We share three major takeaways in this blog post:

  1. Invest in your distributed team. Remote delivery costs more and requires preparation and setup effort, especially during your first remote sprint.
  2. Respond to challenges and adapt your setup accordingly. Don’t just stick to a playbook.
  3. Be diligent with your sprint ceremonies. This helps eliminate any communication overhead that could derail your progress.

Situation

In August 2019, we engaged one of the largest joint venture commercial banks in the Greater China Region and began to modernize their Merchant Service System into a microservice-based system. We also began to enable the customer team to be self-sufficient on microservices, using the two-pizza Agile/EDF working model. Our consultants’ main job was to work out tech spike stories, share our findings, and teach customer development teams, while also coaching Agile/EDF at the sprint and program level to the whole project team.

The customer’s team members were distributed across three cities (HangZhou, ChengDu, ShenZhen) and four offices. They were organized around three microservice teams, including customer PgM/PO/BA, UI, frontend, backend, SysTest, UAT, and various other customer teams specializing in spring-boot-like framework, PaaS, or CI/CD pipeline, etc. The backend Developer and program-level Product Owner (PO) led the entire development progress and process.

When COVID-19 broke out in China, it had a two-month impact on the engagement and introduced several constraints. For example, all team members were required to work from home (WFH), we had no access to the customer’s Dev/IT environment, we had no remote collaboration tools, and one consultant’s hometown was locked down. We had to find a solution to work around these constraints. Otherwise, we would have been forced to bring the engagement to a halt. Additionally, the customer tightened the guidance that the engagement delivery date could not be changed, and that no one was to return to the office until the government lifted its COVID-19 regulations.

Action

We decided to change our delivery model from onsite to remote. This changed how we engaged with the customer, how we collaborated internally, and how we augmented our team to ensure that our remote work was effective.

Methodology adaptation: deploying remote and distributed agile and EDF

Before COVID-19, we were using Agile/EDF for delivery and enablement. After COVID-19, we enhanced and evolved our existing sprint ceremonies, and added more interactions for remote and distributed teams. Refer to Table 1 for a quick reference.

  1. Daily morning standup: For whole-project members to discuss key tasks, progress, and blockers. Not for the backend-specific tech tasks. Started at 40 minutes at the beginning of the WFH period, and evolved to 15-20 minutes.
  2. Added one daily late afternoon standup: Only for the backend team, focusing on more details about the backend specific tasks, progress, blockers, and AWS consultants’ tech support progress. Started at 30 minutes at the beginning of WFH, and evolved to <15 minutes.
  3. Sprint day-6 for Sprint+1’s replenishment/grooming: The Definition of Ready (DoR) work started here and took about 60 minutes, reinforced with a quick DoR check (about 15 minutes on day-10) of Sprint+1’s requirements’ readiness. DoR = Definition of Ready, referring to requirements’ readiness in Agile terminology.
  4. Sprint day-10 for review/retrospective: This included: (a) a review, in turn, by each microservice team of its sprint increment, and (b) a retrospective including every backend developer, each acting as a proxy for another functional role. Took around 1.5 hours.
  5. Sprint+1 day-1 for planning: This included: (a) Sprint+1 scope confirmation at the Acceptance Criteria level, with other supporting material; (b) breaking each story down into subtasks with deadlines, especially for key handover tasks, for example, backend API definition (day-2), frontend/backend completion (day-5), and system test completion (day-10); and (c) late on day-1, a check of the sprint planning results, with follow-up actions taken offline. Took around 1 hour.
  6. Regular remote AWS consultants’ internal sync-up: (a) Daily sync-up right after the morning standup < 15 minutes; (b) Biweekly AWS tech tasks replenish ~30 minutes; (c) Biweekly AWS tech task planning ~1 hour. All these were supplemented by ad hoc Instant Messaging group sync-ups.
  7. Remote weekly enablement meeting for the customer: 1-hour meeting on Fridays, one tech topic at a time, focusing on the most critical and short-term topics.
  8. AWS standby for unplanned asks: Used for code reviews and development-related consulting. After each session, one developer documented the key consulting points.

Engagement setup changes

  1. Remote Access: We used the customer’s own VPN tool – it took about one week to get it up and running. It was slightly less efficient than onsite direct access.
  2. Collaboration Setup: ZXXM running on AWS was chosen as the team’s collaboration platform. We also used Wiki/Jira as the Agile PM tool and to document key project information.
  3. Rotating a proxy Project Manager (PjM) role to assist the customer’s PjM: With rotating proxy PjMs, the customer PjM could focus on the chief BA responsibilities for the project and on dependencies on external teams, while the proxy PjMs load-balanced the coordination work.
  4. Rapid response for development blockers: No task should be self-studied for more than one hour. When a customer developer was blocked on a development task for about an hour, they escalated to the AWS consultants for faster support. We used instant messaging to raise requests, and the responses were recorded in Jira/Wiki.

AWS team changes

  1. Added +1 consultant: Because we had to put more effort into remote communication/access and the chief tech consultant’s home city was locked down, we decided to augment the team with one consultant (80% time) to increase AWS capacity.
  2. More focused consultant dedication: One consultant for each microservice team, with one chief tech consultant covering cross-team items.

Result

The first remote sprint velocity was half that of a normal sprint before COVID-19. However, by the end of the third sprint, we recovered completely and managed to reduce two months of expected COVID-19 impact to one sprint time (two weeks) when compared to our pre-COVID-19 schedule.

If you have questions or feedback, contact AWS Professional Services.

Table 1: Enhanced sprint ceremonies for a remote sprint

Newly added remote ceremonies are in bold and underlined.

Enhanced sprint ceremonies for a remote sprint

Building a location-based, scalable, serverless web app – part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-location-based-scalable-serverless-web-app-part-1/

Web applications represent a major category of serverless usage. When used with single-page application (SPA) frameworks for front-end development, you can create highly responsive apps. With a serverless backend, these apps can scale to hundreds of thousands of users without you managing a single server.

In this 3-part series, I demonstrate how to build an example serverless web application. The application includes authentication, real-time updates, and location-specific features. I explore the functionality, architecture, and design choices involved. I provide a complete code repository for both the front-end and backend. By the end of these posts, you can use these patterns and examples in your own web applications.

In this series:

  • Part 1: Deploy the frontend and backend applications, and learn about how SPA web applications interact with serverless backends.
  • Part 2: Review the backend architecture, Amazon API Gateway HTTP APIs, and the geohashing implementation.
  • Part 3: Understand the backend data processing and aggregation with Amazon DynamoDB, and the final deployment of the application to production.

The code uses the AWS Serverless Application Model (SAM), enabling you to deploy the application easily in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost for usage beyond development and testing.

To set up the example, visit the GitHub repo and follow the instructions in the README.md file.

Introducing “Ask Around Me” – The app for finding answers from local users

Ask Around Me is a web application that allows you to ask questions to a community of local users. It’s designed to be used on a smartphone browser.

 

Ask Around Me front end application

The front-end uses Auth0 for authentication. For simplicity, it supports social logins with other identity providers. Once a user is logged in, the app displays their local area:

No questions in your area

Users can then post questions to the neighborhood. Questions can be ratings-based (“How relaxing is the park?”) or geography-based (“Where is the best coffee?”).

Ask a new question

Posted questions are published to users within a 5-mile radius. Any user in this area sees new questions appear in the list automatically:

New questions in Ask Around Me

Other users answer questions by providing a star-rating or dropping a pin on a map. As the question owner, you see real-time average scores or a heat map, depending on the question type:

Ask Around Me Heatmap

The app is designed to be fun and easy to use. It uses authentication to ensure that votes are only counted once per user ID. It uses geohashing to ensure that users only see and answer questions within their local area. It also keeps the question list and answers up to date in real time to create a sense of immediacy.
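Geohashing encodes a latitude/longitude pair into a short string whose prefix length corresponds to the size of the surrounding cell, which makes proximity lookups practical in a key-value store. As a rough illustration only (part 2 covers the backend’s actual implementation), a library such as the ngeohash npm package can encode a location and list its neighboring cells:

// Illustrative only - part 2 covers the backend's real geohashing implementation
const geohash = require('ngeohash')

// Encode a location to a 5-character geohash (a cell of roughly 5 km x 5 km)
const hash = geohash.encode(40.7484, -73.9857, 5)
console.log(hash) // e.g. 'dr5ru'

// Include the neighboring cells to cover a radius around the user
const nearby = [hash, ...geohash.neighbors(hash)]
console.log(nearby)

Storing questions keyed by prefixes like these lets the backend fetch everything near a user with a small number of queries instead of scanning by raw coordinates.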

In terms of traffic, the app is expected to receive 1,000 questions and 10,000 answers posted per hour. The query that retrieves local questions is likely to receive 50,000 requests per hour. In the course of these posts, I explore the architecture and services chosen to handle this volume. All of this is built serverlessly with cost effectiveness in mind. The cost scales in line with usage, and I discuss how to make the best use of the app budget in this scenario.

SPA frameworks and serverless backends

While you can apply a serverless backend to almost any type of web or mobile framework, SPA frameworks can make development much easier. In modern web development, SPA frameworks like React.js, Vue.js, and Angular have grown in popularity for serverless development. They have become the standard way to build complex, rich front-ends.

These frameworks offer benefits to both front-end developers and users. For developers, you can create the application within an IDE and test locally with hot reloading, which renders new content in the same context in the browser. For users, it creates a web experience that’s similar to a traditional application, with reactive content and faster interactive capabilities.

When you build a SPA-based application, the build process creates HTML, JavaScript, and CSS files. You serve these static assets from an Amazon CloudFront distribution with an Amazon S3 bucket set as the origin. CloudFront serves these files from 216 global points of presence, resulting in low latency downloads regardless of where the user is located.

CloudFront/S3 app distribution

You can also use AWS Amplify Console, which can automate the build and deployment process. This is triggered by build events in your code repo, so once you commit code changes, they are automatically deployed to production.

A traditional web server often serves the application’s static assets together with dynamic content. In this model, you offload the serving of all of the application’s assets to a global CDN. The dynamic application is a serverless backend powered by Amazon API Gateway and AWS Lambda. By using a SPA framework with a serverless backend, you can create performant, highly scalable web applications that are also easy to develop.

Configuring Auth0

This application integrates Auth0 for user authentication. The front-end calls out to this service when users are not logged in, and Auth0 provides an open standard JWT token after the user is authenticated. Before you can install and use the application, you must sign up for an Auth0 account and configure the application:

  1. Navigate to https://auth0.com/ and choose Sign Up. Complete the account creation process.
  2. From the dashboard, choose Create Application. Enter AskAroundMe as the name and select Single Page Web Applications for the Application Type. Choose Create.
    Auth0 configuration
  3. In the next page, choose the Settings tab. Copy the Client ID and Domain values to a text editor – you need these for setting up the Vue.js application later.
    Auth0 configuration next step
  4. Further down on this same tab, enter the value http://localhost:8080 into the Allowed Logout URLs, Allowed Callback URLs and Allowed Web Origins fields. Choose Save Changes.
  5. On the Connections tab, in the Social section, add google-oauth2 and twitter and ensure that the toggles are selected. This enables social sign-in for your application.
    Auth0 Connections tab

This configuration allows the application to interact with the Auth0 service from your local machine. In production, you must enter the domain name of the application in these fields. For more information, see Auth0’s documentation for Application Settings.

Deploying the application

In the code repo, there are separate directories for the front-end and backend applications. You must install the backend first. To complete this step, follow the detailed instructions in the repo’s README.md.

There are several important environment variables to note from the backend installation process:

  • IoT endpoint address and Cognito Pool ID: these are used for real-time messaging between the backend and frontend applications.
  • API endpoint: the base URL path for the backend’s APIs.
  • Region: the AWS Region where you have deployed the application.

Next, you deploy the Vue.js application from the frontend directory:

  1. The application uses the Google Maps API – sign up for a developer account and make a note of your API key.
  2. Open the main.js file in the src directory. Lines 45 through 62 contain the configuration section where you must add the environment variables above (an illustrative sketch of this block follows the list).
    Ask Around Me Vue.js configuration
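The exact property names depend on the repo’s current code, so treat the following as an illustrative placeholder rather than the file’s real contents; every value shown is a stand-in for the outputs you noted earlier.

// Illustrative only - match these property names to the actual configuration
// section in the repo's src/main.js; all values below are placeholders.
const appConfig = {
  // Backend API base URL from the SAM deployment output
  APIurl: 'https://abc123.execute-api.us-east-1.amazonaws.com',
  // Real-time messaging settings from the backend deployment output
  iotEndpoint: 'xxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com',
  cognitoIdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000',
  region: 'us-east-1',
  // Google Maps API key from your developer account
  googleMapsApiKey: 'YOUR_GOOGLE_MAPS_KEY',
  // Auth0 values copied from the application's Settings tab
  auth0Domain: 'your-tenant.auth0.com',
  auth0ClientId: 'YOUR_AUTH0_CLIENT_ID'
}

module.exports = appConfig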

Ensure you complete the Auth0 configuration and remaining steps in the README.md file, then you are ready to test.

To launch the frontend application, run npm run serve to start the development server. The terminal shows the local URL where the application is now running:

Running the Vue.js app

Open a web browser and navigate to http://localhost:8080 to see the application.

How Vue.js applications work with a serverless backend

Unlike a traditional web application, SPA applications are loaded in the user’s browser and start executing JavaScript on the client-side. The app loads assets and initializes itself before communicating with the serverless backend. This lifecycle and behavior is comparable to a conventional desktop or mobile application.

Vue.js is a component-based framework. Each component optionally contains a user interface with related code and styling. Overall application state may be managed by a store – this example uses Vuex. You can use many of the patterns employed in this application in your own apps.

Auth0 provides a Vue.js component that automates storing and parsing the JWT token in the local browser. Each time the app starts, this component verifies the token and makes it available to your code. This app uses Vuex to manage the timing between the token becoming available and the app needing to request data.
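The following is a minimal sketch of that pattern, not the application’s actual store: an isAuthenticated flag gates data loading, and the shared question list lives in state.

// Minimal sketch of the pattern - not the application's actual store
import Vue from 'vue'
import Vuex from 'vuex'

Vue.use(Vuex)

export default new Vuex.Store({
  state: {
    isAuthenticated: false,   // set to true once Auth0 has verified the JWT
    allQuestions: []          // question list shared across components
  },
  mutations: {
    setAuthenticated (state, value) {
      state.isAuthenticated = value
    },
    setAllQuestions (state, questions) {
      state.allQuestions = questions
    }
  }
})

A component can then watch isAuthenticated and only call the backend API once the token is available.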

The application completes several initialization steps before querying the backend for a list of questions to display:

Initialization process for the app

Several components can request data from the serverless backend via API Gateway endpoints. In src/views/HomeView.vue, the component loads a list of questions when it determines the location of the user:

const token = await this.$auth.getTokenSilently()
const url = `${this.$APIurl}/questions?lat=${this.currentLat}&lng=${this.currentLng}`
console.log('URL: ', url)
// Make API request with JWT authorization
const { data } = await axios.get(url, {
  headers: {
    // send access token through the 'Authorization' header
    Authorization: `Bearer ${token}`   
  }
})

// Commit question list to global store
this.$store.commit('setAllQuestions', data)

This process uses the Axios library to manage the HTTP request and pass the authentication token in the Authorization header. The resulting dataset is saved in the Vuex store. Since SPA applications react to changes in data, any frontend component displaying data is automatically refreshed when it changes.

The src/components/IoT.vue component uses MQTT messaging via AWS IoT Core. This manages real-time updates published to the frontend. When a question receives a new answer, this component receives an update. The component updates the question status in the global store, and all other components watching this data automatically receive those updates:

// Handle incoming MQTT messages published by the backend via AWS IoT Core
mqttClient.on('message', function (topic, payload) {
  const msg = JSON.parse(payload.toString())

  if (topic === 'new-answer') {
    // An existing question received a new answer - update it in the store
    _store.commit('updateQuestion', msg)
  } else {
    // A new question was published nearby - add it to the store
    _store.commit('saveQuestion', msg)
  }
})

The application uses both API Gateway synchronous queries and MQTT WebSocket updates to communicate with the backend application. As a result, you have considerable flexibility for tracking overall application state and providing your users with a responsive application experience.

Conclusion

In this post, I introduce the Ask Around Me example web application. I discuss the benefits of using single-page application (SPA) frameworks for both developers and users. I cover how they can create highly scalable and performant web applications when powered with a serverless backend.

In this section, you configure Auth0 and deploy the frontend and backend from the application’s GitHub repo. I review the backend SAM template and the architecture it deploys.

In part 2, I will explain the backend architecture, the Amazon API Gateway configuration, and the geohashing implementation.

Measure Effectiveness of Virtual Training in Real Time with AWS AI Services

Post Syndicated from Rajeswari Malladi original https://aws.amazon.com/blogs/architecture/measure-effectiveness-of-virtual-training-in-real-time-with-aws-ai-services/

As per International Data Corporation (IDC), worldwide spending on digital transformation will reach $2.3 trillion in 2023. As organizations adopt digital transformation, training becomes an important aspect of this journey. Whether these are internal trainings to upskill an existing workforce or packaged content for commercial use, these trainings need to be efficient and cost effective. With the advent of education technology, it is common practice to deliver trainings via digital platforms. This makes them accessible to a larger population and is cost effective, but it is important that the trainings are interactive and effective. According to a recent article published by Forbes, immersive education and data-driven insights are among the top five Education Technology (EdTech) innovations. These are the key characteristics of creating an effective training experience.

An earlier blog series explored how to build a virtual trainer on AWS using Amazon Sumerian. That series illustrated how to easily build an immersive and highly engaging virtual training experience without needing additional devices or complex virtual reality platform management. These trainings are easy to maintain and are cost effective.

In this blog post, we will further extend the architecture to gather real-time feedback about the virtual trainings and create data-driven insights to measure its effectiveness with the help of Amazon artificial intelligence (AI) services.

Architecture and its benefits

Virtual training on AWS and AI Services - Architecture

Virtual training on AWS and AI Services – Architecture

Consider a scenario where you are a vendor in the health care sector. You’ve developed a cutting-edge device, such as patient vital monitoring hardware, that goes through frequent software upgrades and is about to be rolled out across different U.S. hospitals. The nursing staff needs to be well trained before it can begin using the device. Let’s take a look at an architecture to solve this problem. We will first explain the architecture for building the training, and then we will show how to measure its effectiveness.

At the core of the architecture is Amazon Sumerian. Sumerian is a managed service that lets you create and run 3D, Augmented Reality (AR), and Virtual Reality (VR) applications. Within Sumerian, real-life scenes from a hospital environment can be created by importing assets from the asset library. Scenes include one or more hosts: AI-driven animated characters with built-in animation, speech, and behavior. The hosts act as virtual trainers that interact with the nursing staff. The speech component assigns text to the virtual trainer for playback with Amazon Polly. Polly converts training content from Sumerian to lifelike speech in real time and ensures the nursing staff receives the latest content for the equipment on which it’s being trained.

The nursing staff accesses the training via web browsers on iOS or Android mobile devices or laptops, and authenticates using Amazon Cognito. Cognito is a service that lets you easily add user sign-up and authentication to your mobile and web apps. Sumerian then uses the Cognito identity pool to create temporary credentials to access AWS services.

The flow of the interactions within Sumerian is controlled using a visual state machine in the Sumerian editor. Within the editor, the dialogue component assigns an Amazon Lex chatbot to an entity, in this case the virtual trainer or host. Lex is a service for building conversational interfaces with voice and text. It provides you the ability to have interactive conversations with the nursing staff, understand its areas of interest, and deliver appropriate training material. This is an important aspect of the architecture where you can customize the training per users’ needs.

Lex has native interoperability with AWS Lambda, a serverless compute offering where you just write and run your code in Lambda functions. Lambda can be used to validate user inputs or apply any business logic, such as fetching the user selected training material from Amazon DynamoDB (or another database) in real time. This material is then delivered to Lex as a response to user queries.
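As a rough sketch of that integration, assuming the Lex (V1) event and response format and hypothetical table, slot, and attribute names, a Node.js fulfillment function could look like this:

// Illustrative Lex fulfillment handler - table, slot, and attribute names are hypothetical
const AWS = require('aws-sdk')
const dynamo = new AWS.DynamoDB.DocumentClient()

exports.handler = async (event) => {
  // The slot value identifies which training module the trainee asked about
  const topic = event.currentIntent.slots.TrainingTopic

  // Fetch the matching training content from DynamoDB
  const { Item } = await dynamo.get({
    TableName: 'TrainingContent',
    Key: { topic }
  }).promise()

  const content = Item ? Item.material : 'Sorry, I could not find that topic.'

  // Return the content to Lex; Sumerian and Polly then play it back to the trainee
  return {
    dialogAction: {
      type: 'Close',
      fulfillmentState: 'Fulfilled',
      message: { contentType: 'PlainText', content }
    }
  }
}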

You can extend the state machine within the Sumerian editor to introduce new interactive flows to collect user feedback. Amazon Lex collects user feedback, which is saved in Amazon Simple Storage Service (S3) and analyzed by Amazon Comprehend. Amazon Comprehend is a natural language processing service that uses AI to find meaning and insights/sentiments in text. Insights from user feedback are stored in S3, which is a highly scalable, durable, and highly available object storage.
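As an illustration of that analysis step, here is a minimal sketch assuming English-language feedback and placeholder bucket and key names:

// Minimal sentiment-analysis sketch - bucket and key names are placeholders
const AWS = require('aws-sdk')
const comprehend = new AWS.Comprehend()
const s3 = new AWS.S3()

async function analyzeFeedback (bucket, key) {
  // Read the raw feedback text that Lex stored in S3
  const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise()
  const text = obj.Body.toString('utf-8')

  // Ask Comprehend for the overall sentiment and confidence scores
  const result = await comprehend.detectSentiment({
    Text: text,
    LanguageCode: 'en'
  }).promise()

  // e.g. { Sentiment: 'POSITIVE', SentimentScore: { Positive: 0.97, ... } }
  return { Sentiment: result.Sentiment, SentimentScore: result.SentimentScore }
}

module.exports = { analyzeFeedback }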

You can analyze the insights from user feedback using Amazon Athena, an interactive query service which analyzes data in S3 using standard SQL. You can then easily build visualizations using Amazon QuickSight.

By using this architecture, you not only deliver the virtual training to your nursing staff in an immersive environment created by Amazon Sumerian, but you can also gather the feedback interactively. You can gain insights from this feedback and iterate over it to make the training experience more effective.

Conclusion and next steps

In this blog post we reviewed the architecture to build interactive trainings and measure their effectiveness. The serverless nature of this architecture makes it cost effective, agile, and easy to manage, and you can apply it to a number of use cases. For example, an educational institution can develop training content designed for multiple learning levels and the training level can be adjusted in real time based on live interactions with the students. In the manufacturing scenario, you can build a digital twin of your process and train your resources to handle different scenarios with full interactions. You can integrate AWS services just like Lego blocks, and you can further expand this architecture to integrate with Amazon Kendra to build interactive FAQ or integrate with Amazon Comprehend Medical to build trainings for the healthcare industry. Happy building!

BBVA: Helping Global Remote Working with Amazon AppStream 2.0

Post Syndicated from Joe Luis Prieto original https://aws.amazon.com/blogs/architecture/bbva-helping-global-remote-working-with-amazon-appstream-2-0/

This post was co-written with Javier Jose Pecete, Cloud Security Architect at BBVA, and Javier Sanz Enjuto, Head of Platform Protection – Security Architecture at BBVA.

Introduction

Speed and elasticity are key when you are faced with unexpected scenarios such as a massive employee workforce working from home or running more workloads on the public cloud if data centers face staffing reductions. AWS customers can instantly benefit from implementing a fully managed turnkey solution to help cope with these scenarios.

Companies not only need to use technology as the foundation to maintain business continuity and adjust their business model for the future, but they also must work to help their employees adapt to new situations.

About BBVA

BBVA is a customer-centric, global financial services group present in more than 30 countries throughout the world, with more than 126,000 employees and more than 78 million customers.

Founded in 1857, BBVA is a leader in the Spanish market as well as the largest financial institution in Mexico. It has leading franchises in South America and the Sun Belt region of the United States and is the leading shareholder in Turkey’s Garanti BBVA.

The challenge

BBVA implemented a global remote working plan that protects customers and employees alike, including a significant reduction of the number of employees working in its branch offices. It also ensured continued and uninterrupted operations for both consumer and business customers by strengthening digital access to its full suite of services.

Following the company’s policies and adhering to new rules announced by national authorities in recent weeks, more than 86,000 employees from across BBVA’s international network of offices and its central service functions now work remotely.

BBVA, subject to strict regulatory requirements, was looking for a global architecture to accommodate remote work. The solution needed to be fast to implement, adaptable to scale out gradually in the various countries in which it operates, and able to meet its operational, security, and regulatory requirements.

The architecture

BBVA selected Amazon AppStream 2.0 for particular use cases of applications that, due to their sensitivity, are not exposed to the internet (such as financial, employee, and security applications). Having had previous experience with the service, BBVA chose AppStream 2.0 to accommodate the remote work experience.

AppStream 2.0 is a fully managed application streaming service that provides users with instant access to their desktop applications from anywhere, regardless of what device they are using.

AppStream 2.0 works with IT environments, can be managed through the AWS SDK or console, automatically scales globally on demand, and is fully managed on AWS. This means there is no hardware or software to deploy, patch, or upgrade.

AppStream 2.0 can be managed through the AWS SDK (1)

  1. The streamed video and user inputs are sent over HTTPS and are SSL-encrypted between the AppStream 2.0 instance executing your applications, and your end users.
  2. Security groups are used to control network access to the customer VPC.
  3. AppStream 2.0 streaming instance access to the internet is through the customer VPC.

AppStream 2.0 fleets are created by use case to apply security restrictions depending on data sensitivity. By setting the clipboard, file transfer, and print-to-local-device options, the fleets control data movement to and from employees’ AppStream 2.0 streaming sessions.

BBVA relies on a proprietary service called Heimdal to authenticate employees through the corporate identity provider. Heimdal calls the AppStream 2.0 CreateStreamingURL API operation to create a temporary URL that starts a streaming session for the specified user, abstracting the service away from the user by setting the following parameters (a simplified sketch follows this list):

  • FleetName to connect the most appropriate fleet based on the user’s location (BBVA has fleets deployed in Europe and America to improve the user’s experience.)
  • ApplicationId to launch the proper application without having to use an intermediate portal
  • SessionContext in situations where, for instance, the authentication service generates a token and needs to be forwarded to a browser application and injected as a session cookie
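A minimal sketch of such a call with the AWS SDK for JavaScript follows; the stack, fleet, and application names are placeholders, and the real Heimdal service adds its own authentication, fleet-selection, and session logic around this.

// Illustrative only - names are placeholders, not BBVA's actual stacks, fleets, or apps
const AWS = require('aws-sdk')
const appstream = new AWS.AppStream({ region: 'eu-west-1' })

async function getStreamingUrl (userId, sessionToken) {
  const params = {
    StackName: 'corporate-apps-stack',       // stack associated with the fleet
    FleetName: 'emea-sensitive-apps-fleet',  // fleet chosen by the user's location
    UserId: userId,                          // identity from the corporate IdP
    ApplicationId: 'internal-finance-app',   // launch the app directly, no portal
    SessionContext: sessionToken,            // e.g. a token injected as a session cookie
    Validity: 300                            // URL validity in seconds
  }
  const { StreamingURL } = await appstream.createStreamingURL(params).promise()
  return StreamingURL
}

module.exports = { getStreamingUrl }

The returned URL is short-lived, so it can be handed directly to the user’s browser without exposing any long-term credentials.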

BBVA uses AWS Transit Gateway to build a hub-and-spoke network topology (2)

To simplify its overall network architecture, BBVA uses AWS Transit Gateway to build a hub-and-spoke network topology with full control over network routing and security.

There are situations where the application streamed in AppStream 2.0 needs to connect:

  1. On-premises, using AWS Direct Connect plus VPN providing an IPsec-encrypted private connection
  2. To the internet through an outbound VPC proxy with domain whitelisting and content filtering, to control the information employees can reach and to protect against threats while they browse

AppStream 2.0 activity is logged into a centralized repository supported by Amazon S3, both to detect unusual behavior patterns and to meet regulatory requirements.

Conclusion

BBVA built a global solution reducing implementation time by 90% compared to on-premises projects, and is meeting its operational and security requirements. As well, the solution is helping with the company’s top concern: protecting the health and safety of its employees.

Enhancing site security with new Lightsail firewall features

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/enhancing-site-security-with-new-lightsail-firewall-features/

This post is contributed by Mike Coleman, AWS Senior Developer Advocate – Lightsail

Amazon Lightsail provides an easy way to get started with AWS for many customers. The service balances ease of use, security, and flexibility. The Lightsail firewall now offers additional features to help customers secure their Lightsail instances. This update offers three new capabilities:

  • The ability to specify source IP addresses for firewall rules
  • Explicitly allowing or disallowing remote access to instances via Lightsail’s web-based console
  • Support for PING

This blog explores each of these new features in detail, starting with source IP addresses.

Before this update, any open ports in the Lightsail firewall were open to the internet. In many cases, this is a reasonable approach. For example, for new WordPress servers, you likely need broad public access.

However, in some cases you want to restrict access to an instance. If you are staging a new website and it’s not ready for publication, you may want to limit access. One way to ensure that only certain people can visit the site is to only allow certain IP addresses to connect.

Another common use case is limiting remote access to an instance. With the new changes to the Lightsail firewall, you can now limit SSH or RDP access by source IP address. Additionally, you can enable or disable remote access via Lightsail’s built-in web client.

Access can be restricted from one or more IP addresses (for example, the IP address for your home computer) or a continuous range of IP addresses (such as the address range for your corporate network).

Next, I review how you configure these options to restrict remote access via SSH to a single source IP address.

Finding your IP address

Most computers do not have an internet routable IP address assigned. Internet routable IP addresses are scarce and usually assigned to your internet gateway device. The devices on the network are assigned private IP addresses. To communicate between the private IP network and the internet, the network router typically uses network address translation (NAT).

This tutorial assumes you are using NAT. This means the IP address used to restrict SSH access is the internet routable address of your network gateway device (usually your wireless router). Consequently, this limits access to all devices on the network behind this IP address.

There are many ways to find your internet routable IP address. You can log into your network gateway device and find it there (consult your device’s user manual for more details). Alternatively, use one of several public services to determine your IP address – search online for “what is my IP” to list several options.

Restricting SSH access to a single IP address

  1. Start by creating a new Lightsail instance – you can select any blueprint.
  2. Once the instance state shows Running, choose the name of the instance to open the Instance details page.
    firewall test instance
  3. Choose Networking from the menu.
    networking tab
  4. Scroll down to find the current firewall settings. Under Allow connections from, it lists Any IP address for all of the applications. To change this, choose the edit icon for the SSH rule.
    IP address
  5. Check the box next to Restrict to IP address and enter your internet routable IP address under Source IP or range.
    restrict IP address
    Note: The next section shows how to restrict access from Lightsail’s browser-based SSH client. Currently, the Allow Lightsail browser SSH box is checked.
  6. Choose Save.

Now, SSH into your Lightsail instance from your local machine. You can learn more about how to connect to your Lightsail instance using SSH from our documentation.

You should be able to connect to your instance successfully. Next, you test the connection from a different IP address. You do this by restricting a different IP address, and attempting to connect again:

  1. Edit the SSH firewall rule again, following the instructions above. This time, under Source IP or range, enter 192.168.2.150.
  2. Choose Save.

Attempt to connect to your instance once more. The connection fails because your IP address does not match an IP address in the range.
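If you prefer to script this change instead of using the console, the following is a minimal sketch using the AWS SDK for JavaScript; the instance name and CIDR block are placeholders for your own values.

// Illustrative sketch - replace the instance name and CIDR with your own values
const AWS = require('aws-sdk')
const lightsail = new AWS.Lightsail({ region: 'us-east-1' })

async function restrictSsh () {
  // Sets the instance's public ports to a single SSH rule limited to one
  // source address (a /32 CIDR), closing any ports not listed here.
  await lightsail.putInstancePublicPorts({
    instanceName: 'firewall-test-instance',
    portInfos: [
      {
        fromPort: 22,
        toPort: 22,
        protocol: 'tcp',
        cidrs: ['203.0.113.10/32']
      }
    ]
  }).promise()
}

restrictSsh().catch(console.error)

Because PutInstancePublicPorts replaces the instance’s full set of open ports, include every rule you want to keep in the portInfos array.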

Restricting access from the Lightsail browser-based SSH client

The browser-based SSH client makes it easy to access instances without needing to manage SSH keys locally. However, there may be cases where you must disable browser-based access.

To do this:

  1. Navigate to the firewall rules for the instance you created earlier.
  2. Choose the edit icon for the SSH rule.
    edit IP address
  3. Uncheck the box next to Allow Lightsail browser SSH. Choose Save.
  4. From the menu, choose Connect, then choose Connect using SSH. The browser window opens, but you are not connected to the instance.

PING Support

There is now support for PING, a command line utility used to check if a computer is reachable over the network. PING sends a packet to a remote computer, which sends a simple response back. Before this release, you could not PING Lightsail instances.

To activate this feature, add a firewall rule:

  1. Navigate to the networking page for your instance.
    add rule to firewall
  2. Under the firewall section, choose +Add rule.
  3. From the application list, choose PING (ICMP). Choose Save.
  4. From a terminal window on your local machine, send a ping command to your Lightsail instance’s IP address. You can find the IP address on the Connect tab of the instance details page or on the instance card on the Lightsail home page.
ping -c 5 192.168.2.143

You see a response similar to:

ping -c 5 192.168.2.143
PING 192.168.2.143 (192.168.2.143): 56 data bytes
64 bytes from 192.168.2.143: icmp_seq=0 ttl=54 time=19.383 ms
64 bytes from 192.168.2.143: icmp_seq=1 ttl=54 time=16.821 ms
64 bytes from 192.168.2.143: icmp_seq=2 ttl=54 time=16.363 ms
64 bytes from 192.168.2.143: icmp_seq=3 ttl=54 time=27.335 ms
64 bytes from 192.168.2.143: icmp_seq=4 ttl=54 time=19.429 ms

--- 192.168.2.143 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.363/19.866/27.335/3.943 ms

Conclusion

In this blog I covered how you can increase the security of your Lightsail instances by taking advantage of three new features: source IP restrictions, limiting access to the Lightsail browser SSH and RDP clients, and the addition of PING (ICMP) as an application type. These new features provide you an extra level of flexibility and security when deploying applications on Lightsail.

To learn more about the Lightsail firewall, see the documentation. Additionally, there are Getting Started tutorials for Lightsail, including launching a LAMP stack application or .NET application.

Translating documents at enterprise scale with serverless

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/translating-documents-at-enterprise-scale-with-serverless/

For organizations operating in multiple countries, helping customers in different languages is an everyday reality. But in many IT systems, data remains static in a single language, making it difficult or impossible for international customers to use. In this blog post, I show how you can automate language translation at scale to solve a number of common enterprise problems.

Many types of data are good targets for translation. For example, product catalog information, for sharing with a geographically broad customer base. Or customer emails and interactions in multiple languages, translating back to a single language for analytics. Or even resource files for mobile applications, to automate string processing into different languages during the build process.

Building the machine learning models for language translation is extraordinarily complex, so fortunately you have a pay-as-you-go service available in Amazon Translate. This can accurately translate text between 54 languages, and automatically detects the source language.

Developing a scalable translation solution for thousands of documents can be challenging using traditional, server-based architecture. Using a serverless approach, this becomes much easier since you can use storage and compute services that scale for you – Amazon S3 and AWS Lambda:

Integrating S3 with Translate via Lambda.

In this post, I show two solutions that provide an event-based architecture for automated translation. In the first, S3 invokes a Lambda function when objects are stored. It immediately reads the file and requests a translation from Amazon Translate. The second solution explores a more advanced method for scaling to large numbers of documents, queuing the requests and tracking their state. The walkthrough creates resources covered in the AWS Free Tier but you may incur costs for usage.

To set up both example applications, visit the GitHub repo and follow the instructions in the README.md file. Both applications use the AWS Serverless Application Model (SAM) to make it easy to deploy in your AWS account.

Translating in near real time

In the first application, the workflow is straightforward. The source text is sent immediately to Translate for processing, and the result is saved back into the S3 bucket. It provides near real-time translation whenever an object is saved. This uses the following architecture:

Architecture for the first example application.

  1. The source text is saved in the Batching S3 bucket.
  2. The S3 put event invokes the Batching Lambda function. Since Translate has a limit of 5,000 characters per request, it slices the contents of the input into parts small enough for processing (see the sketch after this list).
  3. The resulting parts are saved in the Translation S3 bucket.
  4. The S3 put events invoke the Translation function, which scales up concurrently depending on the number of parts.
  5. Amazon Translate returns the translations back to the Lambda function, which saves the results in the Translation bucket.
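The following is a simplified sketch of the slicing and translation logic referenced in steps 2, 4, and 5; it is illustrative only, and the sample repo’s actual functions handle S3 events, errors, and multiple target languages.

// Illustrative sketch of the batching and translation steps - not the repo's exact code
const AWS = require('aws-sdk')
const translate = new AWS.Translate()

const MAX_CHARS = 5000

// Split source text into parts below Amazon Translate's 5,000-character limit.
// A production version would split on sentence boundaries rather than raw offsets.
function sliceIntoParts (text) {
  const parts = []
  for (let i = 0; i < text.length; i += MAX_CHARS) {
    parts.push(text.slice(i, i + MAX_CHARS))
  }
  return parts
}

// Translate one part into a single target language
async function translatePart (part, targetLanguage) {
  const { TranslatedText } = await translate.translateText({
    Text: part,
    SourceLanguageCode: 'auto',        // let the service detect the source language
    TargetLanguageCode: targetLanguage // e.g. 'fr', 'es', 'it'
  }).promise()
  return TranslatedText
}

module.exports = { sliceIntoParts, translatePart }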

The repo’s SAM template allows you to specify a list of target languages, as a space-delimited list of supported language codes. In this case, any text uploaded to S3 is translated into French, Spanish, and Italian:

Parameters:
  TargetLanguage:
    Type: String
    Default: 'fr es it'

Testing the application

  1. Deploy the first application by following the README.md in the GitHub repo. Note the application’s S3 Translation and Batching bucket names shown in the output:
    Output values after SAM deployment.
  2. The testdata directory contains several sample text files. Change into this directory, then upload coffee.txt to the S3 bucket, replacing your-bucket below with your Translation bucket name:
    cd ./testdata/
    aws s3 cp ./coffee.txt s3://your-bucket
  3. The application invokes the translation workflow, and within a couple of seconds you can list the output files in the translations folder:
    aws s3 ls s3://your-bucket/translations/

    Translations output.

  4. Create an output directory, then download the translations to your local machine to view the contents:
    mkdir output
    aws s3 cp s3://your-bucket/translations/ ./output/ --recursive
    more ./output/coffee-fr.txt
    more ./output/coffee-es.txt
    more ./output/coffee-it.txt
  5. For the next step, translate several text files containing test data. Copy these to the Translation bucket, replacing your-bucket below with your bucket name:
    aws s3 cp ./ s3://your-bucket --include "*.txt" --exclude "*/*" --recursive
  6. After a few seconds, list the files in the translations folder to see your translated files:
    aws s3 ls s3://your-bucket/translations/

    Listing the translated files.

  7. Finally, translate a larger file using the batching process. Copy this file to the Batching S3 bucket (replacing your-bucket with this bucket name):
    cd ../testdata-batching/
    aws s3 cp ./your-filename.txt s3://your-bucket
  8. Since this is a larger file, the batching Lambda function breaks it apart into smaller text files in the Translation bucket. List these files in the terminal, together with their translations:
    aws s3 ls s3://your-bucket
    aws s3 ls s3://your-bucket/translations/

    Listing the output files.

In this example, you can translate a reasonable number of text files for a trivial use-case. However, in an enterprise environment where there could be thousands of files in a single bucket, you need a more robust architecture. The second application introduces a more resilient approach.

Scaling up the translation solution

In an enterprise environment, the application must handle long documents and large quantities of documents. Amazon Translate has service limits in place per account – you can request an increase via an AWS Support Center ticket if needed. However, S3 can ingest a large number of objects quickly, so the application should decouple these two services.

The next example uses Amazon SQS for decoupling, and also introduces Amazon DynamoDB for tracking the status of each translation. The new architecture looks like this:

Decoupled translation architecture.

  1. A downstream process saves text objects in the Batching S3 bucket.
  2. The Batching function breaks these files into smaller parts, saving these in the Translation S3 bucket.
  3. When an object is saved in this bucket, this invokes the Add to Queue function. This writes a message to an SQS queue, and logs the item in a DynamoDB table.
  4. The Translation function receives messages from the SQS queue, and requests translations from the Amazon Translate service.
  5. The function updates the item as completed in the DynamoDB table, and stores the output translation in the Results S3 bucket.

Testing the application

This test uses a much larger text document – the text version of the novel War and Peace, which is over 3 million characters long. It’s recommended that you use a shorter piece of text for the walkthrough, between 20-50 kilobytes, to minimize cost on your AWS bill.

  1. Deploy the second application by following the README.md in the GitHub repo, and note the application’s S3 bucket name and DynamoDB table name.
    The output values from the SAM deployment.
  2. Download your text sample and then upload it to the Batching bucket. Replace your-bucket with your bucket name and your-text.txt with your text file name:
    aws s3 cp ./your-text.txt s3://your-bucket/
  3. The batching process creates smaller files in the Translation bucket. After a few seconds, list the files in the Translation bucket (replacing your-bucket with your bucket name):
    aws s3 ls s3://your-bucket/ --recursive --summarize

    Listing the translated files.

  4. To see the status of the translations, navigate to the DynamoDB console. Select Tables in the left-side menu and then choose the application’s DynamoDB table. Select the Items tab:
    Listing items in the DynamoDB table.

    This shows each translation file and a status of Queue or Translated.

  5. As translations complete, these appear in the Results bucket:
    aws s3 ls s3://patterns-results-v2/ --summarize

    Listing the output translations.

How this works

In the second application, the SQS queue acts as a buffer between the Batching process and the Translation process. The Translation Lambda function fetches messages from the SQS queue when they are available, and submits the source file to Amazon Translate. This throttles the overall speed of processing.
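A simplified handler for this step might look like the following sketch; the message format, table schema, and environment variable names are assumptions for illustration and not the repo’s exact code.

// Illustrative SQS-triggered translation handler - names and message shape are assumed
const AWS = require('aws-sdk')
const translate = new AWS.Translate()
const dynamo = new AWS.DynamoDB.DocumentClient()
const s3 = new AWS.S3()

exports.handler = async (event) => {
  // With BatchSize = 1 there is a single record per invocation
  for (const record of event.Records) {
    const { bucket, key, targetLanguage } = JSON.parse(record.body)

    // Load the source part from the Translation bucket
    const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise()

    // Request the translation from Amazon Translate
    const { TranslatedText } = await translate.translateText({
      Text: obj.Body.toString('utf-8'),
      SourceLanguageCode: 'auto',
      TargetLanguageCode: targetLanguage
    }).promise()

    // Save the result and mark the item as translated
    await s3.putObject({
      Bucket: process.env.RESULTS_BUCKET,
      Key: `${key}-${targetLanguage}.txt`,
      Body: TranslatedText
    }).promise()

    await dynamo.update({
      TableName: process.env.TABLE_NAME,
      Key: { id: key },
      UpdateExpression: 'SET #s = :translated',
      ExpressionAttributeNames: { '#s': 'status' },
      ExpressionAttributeValues: { ':translated': 'Translated' }
    }).promise()
  }
}

With BatchSize set to 1, a failed translation returns only that single message to the queue for retry.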

There are configuration settings you can change in the SAM template to vary the speed of throughput:

  • Translator function: this consumes messages from the SQS queue. The BatchSize configured in the SAM template is set to one message per invocation, which is equivalent to processing one source file at a time. You can set a BatchSize value from 1 to 10, so you could increase this from the application’s default.
  • Function concurrency: the SAM template sets the Loader function’s concurrency to 1, using the ReservedConcurrentExecutions attribute. In effect, this means Lambda can only run one instance of the function at a time. As a result, it keeps fetching the next batch from SQS as soon as processing finishes. The concurrency acts as a multiplier – as this value is increased, the translation throughput increases proportionately, if there are messages available in SQS.
  • Amazon Translate limits: the service limits in place are designed to protect you from higher-than-intended usage. If you need higher soft limits, open an AWS Support Center ticket.

Combining these settings, you have considerable control over the speed of processing. The defaults in the sample application are set at the lowest values possible so you can observe the queueing mechanism.

Conclusion

Automated translation using deep learning enables you to make documents available at scale to an international audience. For organizations operating globally, this can improve your user experience and increase customer access to your company’s products and services.

In this post, I show how you can create a serverless application to process large numbers of files stored in S3. The write operation in the S3 bucket triggers the process, and you use SQS to buffer the workload between S3 and the Amazon Translate service. This solution also uses DynamoDB to help track the state of the translated files.