Tag Archives: announcements

Introducing IAM and Lambda authorizers for Amazon API Gateway HTTP APIs

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-iam-and-lambda-authorizers-for-amazon-api-gateway-http-apis/

Amazon API Gateway HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than API Gateway REST APIs.

The API Gateway team is continuing work to improve and migrate popular REST API features to HTTP APIs. We are adding two of the most requested features, AWS Identity and Access Management (IAM) authorizers and AWS Lambda authorizers.

HTTP APIs already support JWT authorizers as a part of OpenID Connect (OIDC) and OAuth 2.0 frameworks. For more information, see “Simple HTTP API with JWT Authorizer.”

IAM authorization

AWS IAM roles and policies offer flexible, robust, and fully managed access controls, without writing any code. You can use IAM roles and policies to control who can create and manage your APIs, in addition to who can invoke them. IAM authorization for HTTP API routes is the best choice for internal or private APIs called by other AWS services like AWS Lambda.

IAM authorization for HTTP APIs is similar to that for REST APIs. IAM access is determined by identity policies, which are attached to IAM users, groups, or roles. These policies define which identities can access which HTTP API routes. See “AWS Services That Work with IAM.”
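For example, a minimal identity policy that allows a principal to invoke a single route might look like the following sketch (the Region, account ID, API ID, and route are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/$default/GET/list-users"
    }
  ]
}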

Lambda authorization

A Lambda authorizer is a Lambda function which API Gateway calls for an authorization check when a client makes a request to an HTTP API route. You can use Lambda authorizers to implement custom authorization schemes to comply with your security requirements.

New authorizer features

HTTP API Lambda authorizers have some new features compared to REST APIs. There is a new payload and response format, including a simple Boolean authorization option.

New payload versions and response format

Lambda authorizers for HTTP APIs introduce a new payload format, version 2.0. If you need to use the same Lambda authorizers for both REST and HTTP APIs, you can continue to use version 1.0.

The payload format version also determines the request format and response structure that you must send to and return from your Lambda authorizer function. The version 2.0 payload context now allows non-string values. With version 1.0, your Lambda authorizer must return an IAM policy that allows or denies access to your API route. This is the same existing functionality as REST APIs. You can use standard IAM policy syntax in the policy. For examples of IAM policies, see “Control access for invoking an API.”
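For reference, a version 1.0 authorizer response carries an IAM policy document and is shaped like the following sketch (the principal ID and resource ARN are placeholders):

{
  "principalId": "user123",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/$default/GET/list-users"
      }
    ]
  },
  "context": {
    "exampleKey": "exampleValue"
  }
}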

If you choose the new 2.0 format version when configuring the authorizer, you can now return either a Boolean value, or an IAM policy. The Boolean value enables simple responses from the authorizer without having to construct an IAM policy, and is in the format:

{
  "isAuthorized": true/false,
  "context": {
    "exampleKey": "exampleValue"
  }
}

The context object is optional. You can pass context properties on to Lambda integrations or access logs by using $context.authorizer.property. To learn more, see “Customizing HTTP API access logs.”

Caching authorizer responses

You can enable caching for a Lambda authorizer for up to one hour. To enable caching, your authorizer must have at least one identity source. API Gateway calls the Lambda authorizer function only when all of the specified identity sources are present. API Gateway uses the identity sources as the cache key. If a client specifies the same identity source parameters within the cache TTL, API Gateway uses the cached authorizer result. The Lambda authorizer function is not invoked.

Caching is enabled at the API Gateway level per authorizer. It is important to understand the effect of caching, particularly with simple responses and multiple routes. When using a simple response, the authorizer fully allows or denies all API requests that match the cached identity source values.

For example, suppose you have two different routes using the same Lambda authorizer with a simple response, and the routes have different access requirements. The first route allows access to GET /list-users when the Authorization header has the value SecretTokenUsers. The second route denies access to GET /list-admins for the same header value.

The Lambda authorizer has a single identity source, $request.header.Authorization, with the following code:

exports.handler = async (event, context) => {
    // Default to deny access
    let response = {
        "isAuthorized": false,
        "context": {
            "AuthInfo": "defaultdeny"
        }
    };
    // HTTP API payload format version 2.0 lowercases header names,
    // so the Authorization header arrives as event.headers.authorization
    if ((event.routeKey === "GET /list-users") && (event.headers.authorization === "SecretTokenUsers")) {
        response = {
            "isAuthorized": true,
            "context": {
                "AuthInfo": "true-users"
            }
        };
    }
    if ((event.routeKey === "GET /list-admins") && (event.headers.authorization === "SecretTokenUsers")) {
        response = {
            "isAuthorized": false,
            "context": {
                "AuthInfo": "false-admins"
            }
        };
    }
    return response;
};

As both routes share the same identity source parameter, a cache result from successfully accessing /list-users with the Authorization header could allow access to /list-admins which is not intended. To cache responses differently per route, add $context.routeKey as an additional identity source. This creates a cache key that is unique for each route.
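As a sketch with the AWS CLI (the API and authorizer IDs are placeholders), you can add the route key as a second identity source and set the cache TTL in one update-authorizer call:

$ aws apigatewayv2 update-authorizer \
    --api-id a1b2c3d4e5 \
    --authorizer-id abcdef \
    --identity-source '$request.header.Authorization' '$context.routeKey' \
    --authorizer-result-ttl-in-seconds 3600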

If more granular permissions are required, disable simple responses and return an IAM policy instead.

Testing Lambda authorizers

You have an existing Lambda function behind an HTTP API and want to add a Lambda authorizer using the new Boolean simple response. Create a new Lambda authorizer function with the following code.

exports.handler = async (event, context) => {
    // Default to deny access
    let response = {
        "isAuthorized": false,
        "context": {
            "AuthInfo": "defaultdeny"
        }
    };
    // Payload format version 2.0 lowercases header names
    if (event.headers.authorization === "secretToken") {
        response = {
            "isAuthorized": true,
            "context": {
                "AuthInfo": "Customer1"
            }
        };
    }
    return response;
};

The authorizer returns true if a header called Authorization has the value secretToken.

To create an authorizer, browse to the API Gateway console. Navigate to your HTTP API, choose Authorization under Develop, select the Attach authorizers to routes tab, and choose Create and attach an authorizer.

Create and attach HTTP API authorizer

Create the Lambda authorizer, pointing to your Lambda authorizer function. Select Payload format version 2.0 with a Simple response.

Create Lambda simple authorizer settings

Enable caching and add two identity sources, $request.header.Authorization and $context.routeKey, to ensure that your cache key is unique when adding multiple routes.

Add caching and identity sources to Lambda authorizer

Choose Create and attach. The route is now using a Lambda authorizer.

HTTP API route includes Lambda authorizer

The following examples use Postman to test the API authentication, but you can use any HTTP client.
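For example, with curl (the endpoint URL is a placeholder), the three requests below return 401, 403, and 200 respectively, as described next:

$ curl -i https://abc123.execute-api.us-east-1.amazonaws.com/
$ curl -i -H "Authorization: wrongValue" https://abc123.execute-api.us-east-1.amazonaws.com/
$ curl -i -H "Authorization: secretToken" https://abc123.execute-api.us-east-1.amazonaws.com/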

Send a GET request to the HTTP API's URL without specifying any authorization header.

Postman unauthorized GET request

API Gateway returns a 401 Unauthorized response, as expected. The required $request.header.Authorization identity source is not provided, so the Lambda authorizer is not called.

Enter a valid Authorization header key, but an invalid value.

Postman Forbidden GET request

API Gateway returns a 403 Forbidden response as the request is now passed to the Lambda authorizer, which has evaluated the value, and returned "isAuthorized": false.

Supply a valid Authorization header key and value.

Postman successful authorized GET request

API Gateway authorizes the request using the Lambda authorizer and sends the request to the Lambda function integration which returns a successful 200 response.

For more Lambda authorizer code examples, see “Custom Authorizer Blueprints for AWS Lambda.”

AWS CloudFormation support

Lambda authorizers for HTTP APIs are configured as AWS::ApiGatewayV2::Authorizer CloudFormation resources. Today, they are imported into AWS Serverless Application Model (AWS SAM) applications as native CloudFormation resources.

LambdaAuthorizer:
    Type: 'AWS::ApiGatewayV2::Authorizer'
    Properties:
        Name: LambdaAuthorizer
        ApiId: !Ref HttpApi
        AuthorizerType: REQUEST
        AuthorizerUri: arn:aws:apigateway:{region}:lambda:path/2015-03-31/functions/arn:aws:lambda:{region}:{account id}:function:{Function name}/invocations
        IdentitySource:
            - $request.header.Authorization
        AuthorizerPayloadFormatVersion: 2.0
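API Gateway also needs permission to invoke the authorizer function. A minimal sketch, assuming a function resource named LambdaAuthorizerFunction in the same template, is an AWS::Lambda::Permission resource:

LambdaAuthorizerPermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
        FunctionName: !Ref LambdaAuthorizerFunction
        Action: lambda:InvokeFunction
        Principal: apigateway.amazonaws.com
        SourceArn: !Sub 'arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${HttpApi}/authorizers/*'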

Conclusion

IAM and Lambda authorizers are two of the most requested features for Amazon API Gateway HTTP APIs. You can now use IAM authorization in a similar way to API Gateway REST APIs. Lambda authorizers for HTTP APIs offer the option of a simpler Boolean response with the new version 2.0 payload and response format. You configure identity sources to specify the location of data that’s required to authorize a request, which are also used as the cache key.

These authorizers are generally available in all AWS Regions where API Gateway is available. To learn more about options for protecting your APIs, you can read the documentation. For more information about Amazon API Gateway, visit the product page.

For the latest blogs, videos, and training for AWS Serverless, see https://serverlessland.com/.

Introducing queued purchases for Savings Plans

Post Syndicated from Roshni Pary original https://aws.amazon.com/blogs/compute/introducing-queued-purchases-for-savings-plans/

This blog post is contributed by Idan Maizlits, Sr. Product Manager, Savings Plans

AWS now provides the ability for you to queue purchases of Savings Plans by specifying a time, up to 3 years in the future, to carry out those purchases. This blog reviews how you can queue purchases of Savings Plans.

In November 2019, AWS launched Savings Plans. This is a new flexible pricing model that allows you to save up to 72% on Amazon EC2, AWS Fargate, and AWS Lambda in exchange for making a commitment to a consistent amount of compute usage measured in dollars per hour (for example, $10/hour) for a 1- or 3-year term. Savings Plans is the easiest way to save money on compute usage while providing you the flexibility to use the compute options that best fit your needs as they change.

Queueing Savings Plans allows you to plan ahead for future events. Say you want to purchase a Savings Plan three months in the future to cover a new workload. Now, with the ability to queue plans in advance, you can easily schedule the purchase to be carried out at the exact time you expect your workload to go live. This helps you plan in advance by eliminating the need to make “just-in-time” purchases, and lets you benefit from low prices on your future workloads from the get-go. With the ability to queue purchases, you can also enjoy uninterrupted Savings Plans coverage by scheduling renewals of your plans ahead of their expiry. This makes it even easier to save money on your overall AWS bill.

So how do queued purchases for Savings Plans work? Queued purchases are similar to regular purchases in all aspects but one: the start date. With a regular purchase, a plan goes active immediately, whereas with a queued purchase, you select a date in the future for the plan to start. Until that future date, the Savings Plan remains in a queued state; on that date, any upfront payments are charged and the plan goes active.

Now, let’s look at this in more detail with a few examples. I walk through three scenarios: a) queuing Savings Plans to cover future usage, b) renewing expiring Savings Plans, and c) deleting a queued Savings Plan.

How do I queue a Savings Plan?

If you are planning ahead and would like to queue a Savings Plan to support future needs such as new workloads or expiring Reserved Instances, head to the Purchase Savings Plans page on the AWS Cost Management Console. Then, select the type of Savings Plan you would like to queue, including the term length, purchase commitment, and payment option.

Select the type of Savings Plan

Now, indicate the start date and time for this plan (this is the date/time at which your Savings Plan becomes active). The time you indicate is in UTC, but is also shown in your browser’s local time zone. If you are looking to replace an existing Reserved Instance, you can provide the start date and time to align with the expiration of your existing Reserved Instances. You can find the expiration time of your Reserved Instances on the EC2 Reserved Instances Console (this is in your local time zone, convert it to UTC when you queue a Savings Plan).

After you have selected the start time and date for the Savings Plan, click “Add to cart”. When you are ready to complete the purchase, click “Submit Order,” which completes the purchase.

Once you have submitted the order, the Savings Plans Inventory page lists the queued Savings Plan with a “Queued” status and that purchase will be carried out on the date and time provided.
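The purchase can also be scripted. The following is a rough sketch with the AWS CLI, assuming the create-savings-plan command accepts a purchase-time parameter for queued purchases (an assumption worth verifying against the current CLI reference; the offering ID and commitment are placeholders):

$ aws savingsplans create-savings-plan \
    --savings-plan-offering-id abc12345-1234-5678-9012-123456789012 \
    --commitment 10.0 \
    --purchase-time 2021-01-01T00:00:00Z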

How can I replace an expiring plan?

If you have already purchased a Savings Plan, queuing purchases allow you to renew that Savings Plan upon expiry for continuous coverage. All you have to do is head to the AWS Cost Management Console, go to the Savings Plans Inventory page, and select the Savings Plan you would like to renew. Then, click on Actions and select “Renew Savings Plan” as seen in the following image.

This action automatically queues a Savings Plan in the cart with the same configuration (as your original plan) to replace the expiring one. The start time for the plan is automatically set to one second after the expiration of the old Savings Plan. All you have to do now is submit the order and you are good to go.

If you would like to renew multiple Savings Plans, select each one and click “Renew Savings Plan,” which adds them to the cart. When you are done adding new Savings Plans, your cart lists all of the Savings Plans that you added to the order. When you are ready to submit the order, click “Submit order.”

How can I delete a queued Savings Plan?

If you have queued Savings Plans that you no longer need to purchase, or need to modify, you can do so by visiting the console. Head to the AWS Cost Management Console, select the Savings Plans Inventory page, and then select the Savings Plan you would like to delete. By selecting the Savings Plan and clicking on Actions, as seen in the following image, you can delete the queued purchase if you need to make changes or if you no longer need the plan to be purchased. If you need the Savings Plan at a different commitment value, you can make a new queued purchase.
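Queued plans can also be removed programmatically. A minimal sketch with the AWS CLI (the Savings Plan ID is a placeholder):

$ aws savingsplans delete-queued-savings-plan \
    --savings-plan-id abc12345-1234-5678-9012-123456789012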

Conclusion

AWS Savings Plans allow you to save up to 72% compared to On-Demand prices by committing to a 1- or 3-year term. Starting today, with the ability to queue purchases of Savings Plans, you can easily plan for your future needs or renew expiring Savings Plans ahead of time, all with just a few clicks. In this blog, I walked through various scenarios. As you can see, it’s even easier to save money with AWS Savings Plans by queuing your purchases to meet your future needs and continue benefiting from uninterrupted coverage.

Click here to learn more about queuing purchases of Savings Plans and visit the AWS Cost Management Console to get started.

Amazon Transcribe Now Supports Automatic Language Identification

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-transcribe-now-supports-automatic-language-identification/

In 2017, we launched Amazon Transcribe, an automatic speech recognition service that makes it easy for developers to add a speech-to-text capability to their applications. Since then, we added support for more languages, enabling customers globally to transcribe audio recordings in 31 languages, including 6 in real-time.

A popular use case for Amazon Transcribe is transcribing customer calls. This allows companies to analyze the transcribed text using natural language processing techniques to detect sentiment or to identify the most common call causes. If you operate in a country with multiple official languages or across multiple regions, your audio files can contain different languages. Thus, files have to be tagged manually with the appropriate language before transcription can take place. This typically involves setting up teams of multi-lingual speakers, which creates additional costs and delays in processing audio files.

The media and entertainment industry often uses Amazon Transcribe to convert media content into accessible and searchable text files. Use cases include generating subtitles or transcripts, moderating content, and more. Amazon Transcribe is also used by operations teams for quality control, for example checking that audio and video are in sync thanks to the timestamps present in the extracted text. However, some problems couldn’t be easily solved, such as verifying that the main spoken language in your videos is correctly labeled to avoid streaming video in the wrong language.

Today, I’m extremely happy to announce that Amazon Transcribe can now automatically identify the dominant language in an audio recording. This feature will help customers build more efficient transcription workflows by getting rid of manual tagging. In addition to the examples mentioned above, you can now also easily use Amazon Transcribe to automatically recognize and transcribe voicemails, meetings, and any form of recorded communication.

Introducing Automatic Language Identification
With a minimum of 30 seconds of audio, Amazon Transcribe can efficiently generate transcripts in the spoken language without wasting time and resources on manual tagging. Automatic identification of the dominant language is available in batch transcription mode for all 31 languages. Thanks to sampling techniques, language identification happens much faster than the transcription itself, in a matter of seconds.

If you’re already using Amazon Transcribe for speech recognition, you just need to enable the feature in the StartTranscriptionJob API. Even before your transcription job is complete, the response of the GetTranscriptionJob API tells you the dominant language of the audio recording, and its confidence score between 0 and 1. The transcript lists the top five languages and their respective confidence scores.

Of course, if you want to use Amazon Transcribe exclusively for automatic language identification, you can simply process the API response and ignore the transcript. In this case, you should stick to short 30-45 second audio recordings to minimize costs.
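For instance, a quick sketch using the AWS CLI’s built-in JMESPath --query option extracts just the identified language and its score from the job response (the job name is a placeholder):

$ aws transcribe get-transcription-job --transcription-job-name my-job \
    --query 'TranscriptionJob.[LanguageCode,IdentifiedLanguageScore]'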

You can also restrict languages that Amazon Transcribe tries to identify, by passing a list of languages to the StartTranscriptionJob API. For example, if your company call center only receives calls in English, Spanish and French, then restricting identifiable languages to this list will increase language identification accuracy.

Now, I’d like to show you how easy it is to use this new feature!

Detecting the Dominant Language With Amazon Transcribe
First, let’s try a high quality sample. I’ll use the audio track from one of my breakout sessions at AWS Summit Paris 2019. I can easily download it using the youtube-dl tool.

$ youtube-dl -f bestaudio https://www.youtube.com/watch?v=AFN5jaTurfA
$ mv AWS\ \&\ EarthCube\ _\ Deep\ learning\ démarrer\ avec\ MXNet\ et\ Tensorflow\ en\ 10\ minutes-AFN5jaTurfA.m4a video.m4a

Using ffmpeg, I shorten the audio clip to 1 minute.

$ ffmpeg -i video.m4a -ss 00:00:00.00 -t 00:01:00.00 video-1mn.m4a

Then, I upload the clip to an Amazon Simple Storage Service (S3) bucket.

$ aws s3 cp video-1mn.m4a s3://jsimon-transcribe-uswest2/

Next, I use the AWS CLI to run a transcription job on this audio clip, with language identification enabled.

$ aws transcribe start-transcription-job --transcription-job-name video-test --identify-language --media MediaFileUri=s3://jsimon-transcribe-uswest2/video-1mn.m4a

Waiting only a few seconds, I check the status of the job. I could also use an Amazon CloudWatch event to be notified that language identification is complete.

$ aws transcribe get-transcription-job --transcription-job-name video-test
{
    "TranscriptionJob": {
        "TranscriptionJobName": "video-test",
        "TranscriptionJobStatus": "IN_PROGRESS",
        "LanguageCode": "fr-FR",
        "MediaSampleRateHertz": 44100,
        "MediaFormat": "mp4",
        "Media": {
            "MediaFileUri": "s3://jsimon-transcribe-uswest2/video-1mn.m4a"
        },
        "Transcript": {},
        "StartTime": 1593704323.312,
        "CreationTime": 1593704323.287,
        "Settings": {
            "ChannelIdentification": false,
            "ShowAlternatives": false
        },
        "IdentifyLanguage": true,
        "IdentifiedLanguageScore": 0.915885329246521
    }
}

As highlighted in the output, the dominant language has been correctly detected in seconds, with a high confidence score of 91.59%. A few more seconds later, the transcription job is complete. Running the same CLI call, I can retrieve a link to the transcription, which also includes the top 5 languages for the audio clip, sorted by decreasing score.

"language_identification":[{"score":"0.9159","code":"fr-FR"},{"score":"0.0839","code":"fr-CA"},{"score":"0.0001","code":"en-GB"},{"score":"0.0001","code":"pt-PT"},{"score":"0.0001","code":"de-CH"}]

Adding up French and Canadian French, we pretty much get a score of 100%, so there’s no doubt that this clip is in French. In some cases, you may not care for that level of detail, and you’ll see in the next example how to restrict the list of detected languages.

Restricting the List of Detected Languages
As customer call transcription is a popular use case for Amazon Transcribe, here is a 40-second audio clip (WAV, 8 kHz, 16-bit resolution), where I’m reading a paragraph from the French version of the Amazon Transcribe page. As you can hear, the quality is pretty awful, and I added background music (Bach-ground, actually) for good measure.

Again, I upload the clip to an S3 bucket, and I use the AWS CLI to transcribe it. This time, I restrict the list of languages to French, Spanish, German, US English, and British English.

$ aws s3 cp speech-8k.wav s3://jsimon-transcribe-uswest2/
$ aws transcribe start-transcription-job --transcription-job-name speech-8k-test --identify-language --media MediaFileUri=s3://jsimon-transcribe-uswest2/speech-8k.wav --language-options fr-FR es-ES de-DE en-US en-GB

A few seconds later, I check the status of the job.

$ aws transcribe get-transcription-job --transcription-job-name speech-8k-test
{
    "TranscriptionJob": {
        "TranscriptionJobName": "speech-8k-test",
        "TranscriptionJobStatus": "IN_PROGRESS",
        "LanguageCode": "fr-FR",
        "MediaSampleRateHertz": 8000,
        "MediaFormat": "wav",
        "Media": {
            "MediaFileUri": "s3://jsimon-transcribe-uswest2/speech-8k.wav"
        },
        "Transcript": {},
        "StartTime": 1593705151.446,
        "CreationTime": 1593705151.423,
        "Settings": {
            "ChannelIdentification": false,
            "ShowAlternatives": false
        },
        "IdentifyLanguage": true,
        "LanguageOptions": [
            "fr-FR", "es-ES", "de-DE", "en-US", "en-GB"
        ],
        "IdentifiedLanguageScore": 0.9995
    }
}

As highlighted in the output, the dominant language has been correctly detected with a very high confidence score in spite of the terrible audio quality. Restricting the list of languages certainly helps, and you should use it whenever possible.

Getting Started
Automatic Language Identification is available today in these regions:

  • US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), AWS GovCloud (US-West).
  • Canada (Central).
  • South America (São Paulo).
  • Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt).
  • Middle East (Bahrain).
  • Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney).

There is no additional charge on top of the existing pricing. Give it a try, and please send us feedback either through your usual AWS Support contacts, or on the AWS Forum for Amazon Transcribe.

– Julien

New EC2 T4g Instances – Burstable Performance Powered by AWS Graviton2 – Try Them for Free

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-t4g-instances-burstable-performance-powered-by-aws-graviton2/

Two years ago Amazon Elastic Compute Cloud (EC2) T3 instances were first made available, offering a very cost effective way to run general purpose workloads. While current T3 instances offer sufficient compute performance for many use cases, many customers have told us that they have additional workloads that would benefit from increased peak performance and lower cost.

Today, we are launching T4g instances, a new generation of low-cost burstable instances powered by AWS Graviton2, a processor custom built by AWS using 64-bit Arm Neoverse cores. Using T4g instances, you can enjoy a performance benefit of up to 40% at a 20% lower cost in comparison to T3 instances, providing the best price/performance for a broader spectrum of workloads.

T4g instances are designed for applications that don’t use CPU at full power most of the time, using the same credit model as T3 instances with unlimited mode enabled by default. Examples of production workloads that require high CPU performance only during times of heavy data processing are web/application servers, small/medium data stores, and many microservices. Compared to previous generations, the performance of T4g instances makes it possible to migrate additional workloads such as caching servers, search engine indexing, and e-commerce platforms.

T4g instances are available in 7 sizes, providing up to 5 Gbps of network bandwidth and up to 2.7 Gbps of Amazon Elastic Block Store (EBS) performance:

Name          vCPUs   Baseline Performance/vCPU   CPU Credits Earned/Hour   Memory
t4g.nano      2       5%                          6                         0.5 GiB
t4g.micro     2       10%                         12                        1 GiB
t4g.small     2       20%                         24                        2 GiB
t4g.medium    2       20%                         24                        4 GiB
t4g.large     2       30%                         36                        8 GiB
t4g.xlarge    4       40%                         96                        16 GiB
t4g.2xlarge   8       40%                         192                       32 GiB

Free Trial
To make it easier to develop, test, and run your applications on T4g instances, all AWS customers are automatically enrolled in a free trial of the t4g.micro size. From September 2020 until December 31, 2020, you can run a t4g.micro instance and automatically get 750 free hours per month deducted from your bill, including any CPU credits used during the free 750 hours of usage. The 750 hours are calculated in aggregate across all regions. For details on the terms and conditions of the free trial, please refer to the EC2 FAQs.

During the free trial, have a look at this getting started guide on using the Arm-based AWS Graviton processors. There, you can find suggestions on how to build and optimize your applications, using different programming languages and operating systems, and on managing container-based workloads. Some of the tips are specific for the Graviton processor, but most of the content works generally for anyone using Arm to run their code.

Using T4g Instances
You can start an EC2 instance in different ways, for example using the EC2 console, the AWS Command Line Interface (CLI), AWS SDKs, or AWS CloudFormation. For my first T4g instance, I use the AWS CLI:

$ aws ec2 run-instances \
  --instance-type t4g.micro \
  --image-id ami-09a67037138f86e67 \
  --security-groups MySecurityGroup \
  --key-name my-key-pair

The Amazon Machine Image (AMI) I am using is based on Amazon Linux 2. Other platforms are available, such as Ubuntu 18.04 or newer, Red Hat Enterprise Linux 8.0 and newer, and SUSE Enterprise Server 15 and newer. You can find additional AMIs in the AWS Marketplace, for example Fedora, Debian, NetBSD, CentOS, and NGINX Plus. For containerized applications, Amazon ECS and Amazon Elastic Kubernetes Service optimized AMIs are available as well.
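If you want to find the latest Amazon Linux 2 Arm64 AMI ID for your region programmatically, one option is the public SSM parameter, as in this sketch (the parameter path is an assumption worth checking against the documentation):

$ aws ssm get-parameters \
    --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-arm64-gp2 \
    --query 'Parameters[0].Value' --output text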

The security group I selected gives me SSH access to the instance. I connect to the instance and do a general update:

$ sudo yum update -y

Since the kernel has been updated, I reboot the instance.

I’d like to set up this instance as a development environment. I can use it to build new applications, or to recompile my existing apps to the 64-bit Arm architecture. To install most development tools, such as Git, GCC, and Make, I use this group of packages:

$ sudo yum groupinstall -y "Development Tools"

AWS is working with several open source communities to drive improvements to the performance of software stacks running on AWS Graviton2. For example, you can see our contributions to PHP for Arm64 in this post.

Using the latest versions helps you obtain maximum performance from your Graviton2-based instances. The amazon-linux-extras command enables new versions for some of my favorite programming environments:

$ sudo amazon-linux-extras enable golang1.11 corretto8 php7.4 python3.8 ruby2.6

The output of the amazon-linux-extras command tells me which packages to install with yum:

$ sudo yum clean metadata
$ sudo yum install -y golang java-1.8.0-amazon-corretto \
  php-cli php-pdo php-fpm php-json php-mysqlnd \
  python38 ruby ruby-irb rubygem-rake rubygem-json rubygems

Let’s check the versions of the tools that I just installed:

$ go version
go version go1.13.14 linux/arm64
$ java -version
openjdk version "1.8.0_265"
OpenJDK Runtime Environment Corretto-8.265.01.1 (build 1.8.0_265-b01)
OpenJDK 64-Bit Server VM Corretto-8.265.01.1 (build 25.265-b01, mixed mode)
$ php -v
PHP 7.4.9 (cli) (built: Aug 21 2020 21:45:13) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
$ python3.8 -V
Python 3.8.5
$ ruby -v
ruby 2.6.3p62 (2019-04-16 revision 67580) [aarch64-linux]

It looks like I am ready to go! Many more packages are available with yum, such as MariaDB and PostgreSQL. If you’re interested in databases, you might also want to try the preview of Amazon RDS powered by AWS Graviton2 processors.

Available Now
T4g instances are available today in US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo, Mumbai), Europe (Frankfurt, Ireland).

You now have a broad choice of Graviton2-based instances to better optimize your workloads for cost and performance: low cost burstable general-purpose (T4g), general purpose (M6g), compute optimized (C6g) and memory optimized (R6g) instances. Local NVMe-based SSD storage options are also available.

You can use the free trial to develop new applications, or migrate your existing workloads to the AWS Graviton2 processor. Let me know how that goes!

Danilo

AWS Architecture Center: Your One-Stop Destination for Guidance & Resources

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/redesigned-architecture-center/

Discovering relevant architecture-related content is now simpler with the newly updated and expanded AWS Architecture Center. Now you can browse, search, and even request reference architectures, architecture patterns, best practices, and prescriptive guidance all in one location.

The revamped Architecture Center is the only place where you can browse recommended guidance by the most relevant use cases for specific domains as well as view aggregated collections of content related to those domains in one location. We designed this aggregated experience to help you discover content you might not have known to look for in the past.

How can the Architecture Center help me?

Let’s say you’re looking for security guidance and content. Instead of having to visit several different AWS sites to find what you need (such as whitepapers, AWS Solutions, blog posts, etc.), it’s now all in one place. You’ll also see content most popular with other AWS customers.

From the Architecture Center homepage, in the Security, Identity, & Compliance column, click any of the areas you’re interested in to discover best practices and featured content.

Best Practices for Security, Identity, & Compliance

There are several other domains listed on the Architecture Center homepage—such as Analytics & Big Data, Machine Learning, and Databases—and we will be adding more domains and industries.

We want your input and ideas

If you can’t find what you’re looking for (or even if you’re not quite sure what you’re looking for), we’re giving you a direct line to us so that you can request content or let us know what’s missing. Look for the Didn’t find what you were looking for? Let us know link at the bottom of the Architecture Center homepage as well as under the Filter by: section on each domain page.

Didn't find what you were looking for? link

Your requests and ideas will help us decide what content we should add to that domain and even to create. So, don’t be shy—let us know if you’re just not finding what you need and we’ll do our best to help.

All the guidance and content you need in one place

The AWS Architecture Center can help you find accurate and up-to-date information, helping you make the right decisions from the very beginning of your projects. It’s your one-stop destination that provides recommended guidance from AWS Solutions Architects while giving you insights into the architecture content read most often by other AWS customers. We’re making it easier to design and operate reliable, secure, efficient, and cost-effective cloud applications right from the start.

Introducing the AWS Best Practices for Security, Identity, & Compliance Webpage and Customer Polling Feature

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/introducing-aws-best-practices-security-identity-compliance-webpage-and-customer-polling-feature/

The AWS Security team has made it easier for you to find information and guidance on best practices for your cloud architecture. We’re pleased to share the Best Practices for Security, Identity, & Compliance webpage of the new AWS Architecture Center. Here you’ll find top recommendations for security design principles, workshops, and educational materials, and you can browse our full catalog of self-service content including blogs, whitepapers, videos, trainings, reference implementations, and more.

We’re also running polls on the new AWS Architecture Center to gather your feedback. Want to learn more about how to protect account access? Or are you looking for recommendations on how to improve your incident response capabilities? Let us know by completing the poll. We will use your answers to help guide security topics for upcoming content.

Poll topics will change periodically, so bookmark the Security, Identity, & Compliance webpage for easy access to future questions, or to submit your topic ideas at any time. Our first poll, which asks what areas of the Well-Architected Security Pillar are most important for your use, is available now. We look forward to hearing from you.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

AWS Named as a Cloud Leader for the 10th Consecutive Year in Gartner’s Infrastructure & Platform Services Magic Quadrant

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-named-as-a-cloud-leader-for-the-10th-consecutive-year-in-gartners-infrastructure-platform-services-magic-quadrant/

At AWS, we strive to provide you a technology platform that allows for agile development, rapid deployment, and unlimited scale, so that you can free up your resources to focus on innovation for your customers. It’s greatly rewarding to see our efforts recognized not just by our customers, but also by leading analysts.

This year, Gartner announced a new Magic Quadrant for Cloud Infrastructure and Platform Services (CIPS). This is an evolution of their Magic Quadrant for Cloud Infrastructure as a Service (IaaS) for which AWS has been named as a Leader for nine consecutive years.

Customers are using the cloud in broad ways, beyond foundational compute, networking, and storage services. We believe that, for this reason, Gartner expanded the scope to include additional platform as a service (PaaS) capabilities and extended coverage to areas such as managed database services, serverless computing, and developer tools.

Today, I am happy to share that AWS has been named as a Leader in the Magic Quadrant for Cloud Infrastructure and Platform Services, and placed highest in Ability to Execute and furthest in Completeness of Vision.

More information on the features and factors that our customers examine when choosing a cloud provider is available in the full report.

Danilo

Gartner, Magic Quadrant for Cloud Infrastructure and Platform Services, Raj Bala, Bob Gill, Dennis Smith, David Wright, Kevin Ji, 1 September 2020 – Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Introducing larger state payloads for AWS Step Functions

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/introducing-larger-state-payloads-for-aws-step-functions/

AWS Step Functions allows you to create serverless workflows that orchestrate your business processes. Step Functions stores data from workflow invocations as application state. Today we are increasing the size limit of application state from 32,768 characters to 256 kilobytes of data per workflow invocation. The new limit matches payload limits for other commonly used serverless services such as Amazon SNS, Amazon SQS, and Amazon EventBridge. This means you no longer need to manage Step Functions payload limitations as a special case in your serverless applications.

Faster, cheaper, simpler state management

Previously, customers worked around limits on payload size by storing references to data, such as a primary key, in their application state. An AWS Lambda function then loaded the data via an SDK call at runtime when the data was needed. With larger payloads, you now can store complete objects directly in your workflow state. This removes the need to persist and load data from data stores such as Amazon DynamoDB and Amazon S3. You do not pay for payload size, so storing data directly in your workflow may reduce both cost and execution time of your workflows and Lambda functions. Storing data in your workflow state also reduces the amount of code you need to write and maintain.

AWS Management Console and workflow history improvements

Larger state payloads mean more data to visualize and search. To help you understand that data, we are also introducing changes to the AWS Management Console for Step Functions. We have improved load time for the Execution History page to help you get the information you need more quickly. We have also made backwards-compatible changes to the GetExecutionHistory API call. Now if you set includeExecutionData to false, GetExecutionHistory excludes payload data and returns only metadata. This allows you to debug your workflows more quickly.
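A quick sketch with the AWS CLI (the execution ARN is a placeholder) retrieves history metadata without payload data:

$ aws stepfunctions get-execution-history \
    --execution-arn arn:aws:states:us-east-1:123456789012:execution:OrderWorkflow:demo-run \
    --no-include-execution-data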

Doing more with dynamic parallelism

A larger payload also allows your workflows to process more information. Step Functions workflows can process an arbitrary number of tasks concurrently using dynamic parallelism via the Map State. Dynamic parallelism enables you to iterate over a collection of related items applying the same process to each item. This is an implementation of the map procedure in the MapReduce programming model.

When to choose dynamic parallelism

Choose dynamic parallelism when performing operations on a small collection of items generated in a preliminary step. You define an Iterator, which operates on these items individually. Optionally, you can reduce the results to an aggregate item. Unlike with parallel invocations, each item in the collection is related to the other items. This means that an error in processing one item typically impacts the outcome of the entire workflow.

Example use case

Ecommerce and line of business applications offer many examples where dynamic parallelism is the right approach. Consider an order fulfillment system that receives an order and attempts to authorize payment. Once payment is authorized, it attempts to lock each item in the order for shipment. The available items are processed and their total is taken from the payment authorization. The unavailable items are marked as pending for later processing.

The following Amazon States Language (ASL) defines a Map State with a simplified Iterator that implements the order fulfillment steps described previously.


    "Map": {
      "Type": "Map",
      "ItemsPath": "$.orderItems",
      "ResultPath": "$.packedItems",
      "MaxConcurrency": 40,
      "Next": "Print Label",
      "Iterator": {
        "StartAt": "Lock Item",
        "States": {
          "Lock Item": {
            "Type": "Pass",
            "Result": "Item locked!",
            "Next": "Pull Item"
          },
          "Pull Item": {
            "Type": "Pass",
            "Result": "Item pulled!",
            "Next": "Pack Item"
          },
          "Pack Item": {
            "Type": "Pass",
            "Result": "Item packed!",
            "End": true
          }
        }
      }
    }

The following image provides a visualization of this workflow. A preliminary state retrieves the collection of items from a data store and loads it into the state under the orderItems key. The triple dashed lines represent the Map State which attempts to lock, pull, and pack each item individually. The result of processing each individual item impacts the next state, Print Label. As more items are pulled and packed, the total weight increases. If an item is out of stock, the total weight will decrease.

A visualization of a portion of an AWS Step Functions workflow that implements dynamic parallelism

Dynamic parallelism or the “Map State”

Larger state payload improvements

Without larger state payloads, each item in the $.orderItems object in the workflow state would be a primary key to a specific item in a DynamoDB table. Each step in the “Lock, Pull, Pack” workflow would need to read data from DynamoDB for every item in the order to access detailed item properties.

With larger state payloads, each item in the $.orderItems object can be a complete object containing the required fields for the relevant items. Not only is this faster, resulting in a better user experience, but it also makes debugging workflows easier.
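As an illustration (the field names are hypothetical), the workflow state entering the Map State might now look like this, with complete item objects under the orderItems key:

{
    "orderId": "1234",
    "orderItems": [
        { "itemId": "A1", "name": "Widget", "weightKg": 1.2, "inStock": true },
        { "itemId": "B7", "name": "Gadget", "weightKg": 0.4, "inStock": false }
    ]
}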

Pricing and availability

Larger state payloads are available now in all commercial and AWS GovCloud (US) Regions where Step Functions is available. No changes to your workflows are required to use larger payloads, and your existing workflows will continue to run as before. The larger state is available however you invoke your Step Functions workflows, including the AWS CLI, the AWS SDKs, the AWS Step Functions Data Science SDK, and Step Functions Local.

Larger state payloads are included in existing Step Functions pricing for Standard Workflows. Because Express Workflows are priced by runtime and memory, you may see more cost on individual workflows with larger payloads. However, this increase may also be offset by the reduced cost of Lambda, DynamoDB, S3, or other AWS services.

Conclusion

Larger Step Functions payloads simplify and increase the efficiency of your workflows by eliminating function calls to persist and retrieve data. Larger payloads also allow your workflows to process more data concurrently using dynamic parallelism.

With larger payloads, you can minimize the amount of custom code you write and focus on the business logic of your workflows. Get started building serverless workflows today!

TISAX scope broadened

Post Syndicated from Kevin Quaid original https://aws.amazon.com/blogs/security/tisax-scope-broadened/

The Trusted Information Security Assessment Exchange (TISAX) provides automotive industry organizations the assurance needed to build secure applications and services on the cloud. In late June, AWS achieved the assessment objectives required for data with a very high need for protection according to TISAX criteria.

We’re happy to announce this broadened scope of our TISAX certification today, September 3, the same day that Ferdinand Porsche, credited with originating VW’s Beetle, pioneering hybrid electric-gasoline technology, and founding the Porsche car company, was born 145 years ago.

Automotive customers and their entire supply chain rely on AWS, including Volkswagen’s global supply chain comprising 122 manufacturing plants and 1,500 suppliers. This certification demonstrates that AWS information management systems meet industry standards.

“We rely on our partners and suppliers to achieve a unified level of information security established by TISAX. AWS recognizes the importance of this bar and demonstrates innovation by expanding program scope to include additional regions, control domains, and protection levels.”
    –Stefan Arnold, Director Technology & Acceleration, Porsche

AWS completed a scope extension assessment against TISAX very high protection level (AL 3) for five additional regions. The seven regions in scope include Frankfurt, Ireland, US West (Oregon), US East (Ohio), US East (North Virginia), Canada, and Seoul. Control domains in scope expanded to include data protection.

TISAX was established by the German Association of the Automotive Industry (VDA) and is governed by the European Network Exchange (ENX). The assessment was conducted and accredited by an audit provider, and the results are retrievable from the ENX Portal. The scope ID and assessment ID are SP208R and AYZ38F-1, respectively.

For more information, see Trusted Information Security Assessment Exchange.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kevin Quaid

Kevin leads expansion initiatives for security assurance, supporting customers using and migrating to AWS. He previously managed datacenter site selection and qualification for AWS infrastructure. He is passionate about leveraging his decade-plus risk management experience at Amazon to drive innovation and cloud adoption.

New third-party test compares Amazon GuardDuty to network intrusion detection systems

Post Syndicated from Tim Winston original https://aws.amazon.com/blogs/security/new-third-party-test-compares-amazon-guardduty-to-network-intrusion-detection-systems/

A new whitepaper is available that summarizes the results of tests by Foregenix comparing Amazon GuardDuty with network intrusion detection systems (IDS) on threat detection of network layer attacks. GuardDuty is a cloud-centric IDS service that uses Amazon Web Services (AWS) data sources to detect a broad range of threat behaviors. Security engineers need to understand how Amazon GuardDuty compares to traditional solutions for network threat detection. Assessors have also asked for clarity on the effectiveness of GuardDuty for meeting compliance requirements, like Payment Card Industry (PCI) Data Security Standard (DSS) requirement 11.4, which requires intrusion detection techniques to be implemented at critical points within a network.

A traditional IDS typically relies on monitoring network traffic at specific network traffic control points, like firewalls and host network interfaces. This allows the IDS to use a set of preconfigured rules to examine incoming data packet information and identify patterns that closely align with network attack types. Traditional IDS have several challenges in the cloud:

  • Networks are virtualized. Data traffic control points are decentralized and traffic flow management is a shared responsibility with the cloud provider. This makes it difficult or impossible to monitor all network traffic for analysis.
  • Cloud applications are dynamic. Features like auto-scaling and load balancing continuously change how a network environment is configured as demand fluctuates.

Most traditional IDS require experienced technicians to maintain their effective operation and to avoid the common issue of an overwhelming number of false positive findings. As a compliance assessor, I have often seen IDS intentionally de-tuned to reduce false positive findings when expert, continuous support isn’t available.

GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon Virtual Private Cloud (Amazon VPC) flow logs, and Amazon Route 53 DNS logs. This gives GuardDuty the ability to analyze event data, such as AWS API calls and AWS Identity and Access Management (IAM) login events, which is beyond the capabilities of traditional IDS solutions. Monitoring AWS API calls from CloudTrail also enables threat detection for AWS serverless services, which sets it apart from traditional IDS solutions. However, without inspection of packet contents, the question remained, “Is GuardDuty truly effective in detecting network level attacks that more traditional IDS solutions were specifically designed to detect?”
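Enabling GuardDuty takes a single API call per account and region. A minimal sketch with the AWS CLI:

$ aws guardduty create-detector --enable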

AWS asked Foregenix to conduct a test that would compare GuardDuty to market-leading IDS to help answer this question for us. AWS didn’t prescribe specific attacks or architecture to be implemented within the test. It was left up to the independent tester to determine both the threat space covered by market-leading IDS and how to construct a test for determining the effectiveness of threat detection capabilities of GuardDuty and traditional IDS solutions, which included open-source and commercial IDS.

Foregenix configured a lab environment to support tests that used extensive and complex attack playbooks. The lab environment simulated a real-world deployment composed of a web server, a bastion host, and an internal server used for centralized event logging. The environment was left running under normal operating conditions for more than 45 days. This allowed all tested solutions to build up a baseline of normal data traffic patterns prior to the anomaly detection testing exercises that followed this activity.

Foregenix determined that GuardDuty is at least as effective at detecting network level attacks as other market-leading IDS. They found GuardDuty to be simple to deploy and required no specialized skills to configure the service to function effectively. Also, with its inherent capability of analyzing DNS requests, VPC flow logs, and CloudTrail events, they concluded that GuardDuty was able to effectively identify threats that other IDS could not natively detect and required extensive manual customization to detect in the test environment. Foregenix recommended that adding a host-based IDS agent on Amazon Elastic Compute Cloud (Amazon EC2) instances would provide an enhanced level of threat defense when coupled with Amazon GuardDuty.

As a PCI Qualified Security Assessor (QSA) company, Foregenix states that they consider GuardDuty as a qualifying network intrusion technique for meeting PCI DSS requirement 11.4. This is important for AWS customers whose applications must maintain PCI DSS compliance. Customers should be aware that individual PCI QSAs might have different interpretations of the requirement, and should discuss this with their assessor before a PCI assessment.

Customer PCI QSAs can also speak with AWS Security Assurance Services, an AWS Professional Services team of PCI QSAs, to obtain more information on how customers can leverage AWS services to help them maintain PCI DSS Compliance. Customers can request Security Assurance Services support through their AWS Account Manager, Solutions Architect, or other AWS support.

We invite you to download the Foregenix Amazon GuardDuty Security Review whitepaper to see the details of the testing and the conclusions provided by Foregenix.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tim Winston

Tim is long-time security and compliance consultant and currently a PCI QSA with AWS Security Assurance Services.

Announcing the newest AWS Heroes – August 2020

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/announcing-the-newest-aws-heroes-august-2020/

The AWS Heroes program recognizes a select few individuals who go above and beyond to share AWS knowledge and teach others about AWS, all while helping make building AWS skills accessible to many. These leaders have an incredible impact within technical communities worldwide and their efforts are greatly appreciated.

Today we are excited to introduce to you the latest AWS Heroes, including the first Heroes from Greece and Portugal:

Angela Timofte – Copenhagen, Denmark

Serverless Hero Angela Timofte is a Data Platform Manager at Trustpilot. Passionate about knowledge sharing, coaching, and public speaking, she is committed to leading by example and empowering people to seek and develop the skills they need to achieve their goals. She is driven to build scalable solutions with the latest technologies while migrating from monolithic solutions using serverless applications and event-driven architecture. She is a co-organizer of the Copenhagen AWS User Group and frequent speaker about serverless technologies at AWS Summits, AWS Community Days, ServerlessDays, and more.

Avi Keinan – Tel Aviv, Israel

Community Hero Avi Keinan is a Senior Cloud Engineer at DoIT International. He specializes in AWS Infrastructure, serverless, and security solutions, enjoys solving complex problems, and is passionate about helping others. Avi is a member of many online AWS forums and groups where he assists both novice and experienced cloud users with solving complex and simple solutions. He also frequently posts AWS-related videos on his YouTube channel.

Hirokazu Hatano – Tokyo, Japan

Community Hero Hirokazu Hatano is the founder & CEO of Operation Lab, LLC. He created the Japan AWS User Group (JAWS-UG) CLI chapter in 2014 and has hosted 165 AWS CLI events over a six-year period, with 4,133 attendees. In 2020, despite the challenging circumstances of COVID-19, he has successfully transformed the AWS CLI events from in-person to virtual. In the first quarter of 2020, he held 12 online hands-on events with 1,170 attendees. In addition, he has organized six other JAWS-UG chapters, making him one of the most frequent JAWS-UG organizers.

Ian McKay – Sydney, Australia

Community Hero Ian McKay is the Cloud Lead at Kablamo. Between helping clients, he loves the chance to build open-source projects with a focus on AWS automation and tooling. Ian built and maintains both the “Former2” and “Console Recorder for AWS” projects which help developers author Infrastructure as Code templates (such as CloudFormation) from existing resources or while in the AWS Management Console. Ian also enjoys speaking at meetups, co-hosting podcasts, engaging with the community on Slack, and posting his latest experiences with AWS on his blog.

Jacopo Nardiello – Milan, Italy

Container Hero Jacopo Nardiello is CEO and Founder at SIGHUP. He regularly speaks at conferences and events, and closely supports the local Italian community where he currently runs the Kubernetes and Cloud Native Milano meetup and regularly collaborates with AWS and the AWS Milan User Group on all things containers. He is passionate about developing and delivering battle-tested solutions based on EKS, Fargate, and OSS software – both from the CNCF and AWS – like AWS Firecracker (the base for Lambda), AWS Elasticsearch Open Distro, and other projects.

Jérémie Rodon – Paris, France

Community Hero Jérémie Rodon is a 9x certified AWS Cloud Architect working for Devoteam Revolve. He has 3 years of consulting experience, designing solutions in various environments from small businesses to CAC40 companies, while also empowering clients on AWS by delivering AWS courses as an AWS Authorized Instructor Champion. His interests lie mainly in security, and especially cryptography: he has done presentations explaining how AWS KMS works under the hood and has written a blog post explaining why the quantum crypto-calypse will not happen.

Kittaya Niennattrakul – Bangkok, Thailand

Community Hero Kittaya Niennattrakul (Tak) is a Product Manager and Assistant Managing Director at Dailitech. Tak has helped run the AWS User Group Thailand since 2014, and is also one of the AWS Asian Women’s Association leaders. In 2019, she and the Thailand User Group team, with support from AWS, organized two big events: AWS Meetup – Career Day (400+ attendees), and AWS Community Day in Bangkok (200+ attendees). Currently, Tak helps write a blog for sharing useful information with the AWS community. In 2020, AWS User Group Thailand has more than 8,000 members and continues growing.

Konstantinos Siaterlis – Athens, Greece

Machine Learning Hero Konstantinos Siaterlis is Head of Data Engineering at Orfium, a music rights management company based in Malibu. In his day-to-day work, he drives adoption, integration, and training on several AWS services, including Amazon SageMaker. He is the co-organizer of AWS User Group Greece, and a blogger on TheLastDev, writing about data science and introductory tutorials on AWS Machine Learning services.

Kyle Bai – Taichung City, Taiwan

Container Hero Kyle Bai, also known as Kai-Ren Bai, is a Site Reliability Engineer at MaiCoin. He is a creator and contributor of a few open-source projects on GitHub about AWS, Containers, and Kubernetes. Kyle is a co-organizer of the Cloud Native Taiwan User Group. He is also a technical speaker, with experience delivering presentations at meetups and conferences including AWS re:Invent re:Cap Taipei, AWS Summit Taipei, AWS Developer Year-end Meetup Taipei, AWS Taiwan User Group, and so on.

Linghui Gong – Shanghai, China

Community Hero Linghui Gong is VP of Engineering at Strikingly, Inc. One of the early AWS users in China, Linghui has been leading his team to drive all critical architecture evolutions on AWS, including building a cross-region website cluster that serves millions of sites worldwide. Linghui shared the story of his team’s cross-region website cluster on AWS at re:Invent 2018. He has also presented AWS best practices twice at AWS Summit Beijing, as well as at many other AWS events.

Luca Bianchi – Milan, Italy

Serverless Hero Luca Bianchi is Chief Technology Officer at Neosperience. He writes on Medium about serverless and Machine Learning, where he focuses on technologies such as AWS CDK, AWS Lambda, and Amazon SageMaker. He is co-founder of Serverless Meetup Italy and co-organizer of ServerlessDays Milano and ServerlessDays Italy. As an active member of the serverless community, Luca is a regular speaker at user groups, helping developers and teams to adopt cloud technologies.

Matthieu Napoli – Lyon, France

Serverless Hero Matthieu Napoli is a software engineer passionate about helping developers to create. Fascinated by how serverless unlocks creativity, he works on making serverless accessible to everyone. Matthieu enjoys maintaining open-source projects including Bref, a framework for creating serverless PHP applications on AWS. Alongside Bref, he sends a monthly newsletter containing serverless news relevant to PHP developers. Matthieu recently created the Serverless Visually Explained course which is packed with use cases, visual explanations, and code samples.

Mike Chambers – Brisbane, Australia

Machine Learning Hero Mike Chambers is an independent trainer, specializing in AWS and machine learning. He was one of the first in the world to become AWS certified and has gone on to train well over a quarter of a million students in the art of AWS, machine learning, and cloud. An active advocate of AWS’s machine learning capabilities, Mike has traveled the world helping study groups, delivering talks at AWS Meetups, all while posting widely on social media. Mike’s passion is to help the community get as excited about technology as he is.

Noah Gift – Raleigh-Durham, USA

Machine Learning Hero Noah Gift is the founder of Pragmatic A.I. Labs. He lectures on cloud computing at top universities globally, including the Duke and Northwestern graduate data science programs. He designs graduate machine learning, MLOps, A.I., and Data Science courses, consults on Machine Learning and Cloud Architecture for AWS, and is a massive advocate of AWS Machine Learning and putting machine learning models into production. Noah has authored several books, including Pragmatic AI, Python for DevOps, and Cloud Computing for Data Analysis.

Olalekan Elesin – Berlin, Germany

Machine Learning Hero Olalekan Elesin is an engineer at heart, with a proven record of leading successful machine learning projects as a data science engineer and a product manager, including using Amazon SageMaker to deliver an AI-enabled platform for Scout24, which cut the time to productionize ML projects from at least 7 weeks to 3 weeks. He has given several talks across Germany on building machine learning products on AWS, including Serverless Product Recommendations with Amazon Rekognition. In his machine learning blog posts, he writes about automating machine learning workflows with Amazon SageMaker and AWS Step Functions.

Peter Hanssens – Sydney, Australia

Serverless Hero Peter Hanssens is a community leader: he has led the Sydney Serverless community for the past 3 years, and also built out Data Engineering communities in Melbourne, Sydney, and Brisbane. His passion is helping others to learn and grow their careers through shared experiences. He ran the first ever ServerlessDays in Australia in 2019 and in 2020 he has organized AWS Serverless Community Day ANZ, ServerlessDays ANZ, and DataEngBytes, a community-built data engineering conference.

Ruofei Ma – Beijing, China

Container Hero Ruofei Ma works as a principal software engineer for FreeWheel Inc., where he focuses on developing cloud-native applications with AWS. He enjoys sharing technology with others and is a lecturer at time.geekbang.org, the biggest IT knowledge-sharing platform in China. He also shares his experience at various meetups, such as the CNCF webinar and GIAC. He is a committee member of the largest service mesh community in China, servicemesher.com, and often posts blogs to share his best practices with the community.

Sheen Brisals – London, United Kingdom

Serverless Hero Sheen Brisals is a Senior Engineering Manager at The LEGO Group and is actively involved in the global Serverless community. Sheen loves being part of Serverless conferences and enjoys sharing knowledge with community members. He talks about Serverless at many events around the world, and his insights into Serverless technology and adoption strategies can be found on his Medium channel. You can also find him tweeting about Serverless success stories and technical thoughts.

Sridevi Murugayen – Chennai, India

Data Hero Sridevi Murugayen has 16+ years of IT experience and is a passionate developer and problem solver. She is an active co-organizer of the AWS User Group in Chennai, and helps host and deliver AWS Community Days, Meetups, and technical sessions to the developer community. She is a regular speaker in community events focusing on analytics solutions using AWS Analytics services, including Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon S3, and AWS Glue. She strongly believes in diversity and inclusion for a successful society and loves encouraging and enabling women technologists.

Stéphane Maarek – Lisbon, Portugal

Data Hero Stéphane Maarek is an online instructor at Udemy with many courses designed to teach users how to build on AWS (including the AWS Certified Data & Analytics Specialty certification) and deepen their knowledge of AWS services. Stéphane also has a keen interest in teaching about Apache Kafka and recently partnered with AWS to launch an in-depth course on Amazon MSK (Managed Streaming for Apache Kafka). Stéphane is passionate about technology and how it can help improve lives and processes.

Tom McLaughlin – Boston, USA

Serverless Hero Tom McLaughlin is a cloud infrastructure and operations engineer who has worked in companies ranging from startups to the enterprise. What drew Tom to serverless early on was the prospect of having no hosts or container management platform to build and manage, which yielded the question: what would he do if the servers he was responsible for went away? He’s found enjoyment in a community of people who are both pushing the future of technology and trying to understand its effects on the future of people and businesses.

If you’d like to learn more about the new Heroes, or connect with a Hero near you, please visit the AWS Hero website.

Ross;

AWS announces AWS Contact Center Intelligence solutions

Post Syndicated from Alejandra Quetzalli original https://aws.amazon.com/blogs/aws/aws-announces-aws-contact-center-intelligence-solutions/

What was announced?

We’re announcing the availability of AWS Contact Center Intelligence (CCI) solutions, a combination of services that empowers customers to easily integrate AI into contact centers, made available through AWS Partner Network (APN) partners.

AWS CCI has solutions for self-service, live-call analytics & agent assist, and post-call analytics, making it possible for customers to quickly deploy AI into their existing workflows or build completely new ones.

Pricing and regional availability correspond to the underlying services (Amazon Comprehend, Amazon Kendra, Amazon Lex, Amazon Transcribe, Amazon Translate, and Amazon Polly) used.

What is AWS Contact Center Intelligence?

As mentioned above, AWS CCI brings AI-powered solutions to contact centers for before, during, and after customer interactions.

My colleague Swami Sivasubramanian (VP, Amazon Machine Learning, AWS) said: “We want to make it easy for our customers with contact centers to benefit from machine learning capabilities even if they have no machine learning expertise. By partnering with APN technology and consulting partners to bring AWS Contact Center Intelligence solutions to market, we are making it easier for customers to realize the benefits of cloud-based machine learning services while removing the heavy lifting and the need to hire specialized developers to integrate the ML capabilities into their existing contact centers.”

But what does that mean? 🤔

AWS CCI solutions let you bring machine learning (ML) functionality, such as text-to-speech, translation, enterprise search, chatbots, business intelligence, and language comprehension, into current contact center environments. Customers can now implement contact center intelligence ML solutions to aid self-service, live-call analytics & agent assist, and post-call analytics. Currently, AWS CCI solutions are available through partners such as Genesys, Vonage, and UiPath for easy integration into existing enterprise contact center systems.

“We’re proud Genesys customers will be among the first to benefit from the off-the-shelf machine learning capabilities of AWS Contact Center Intelligence solutions. It’s now simpler and more cost-effective for organizations to combine AWS’s AI capabilities, including search, text-to-speech and natural language understanding, with the advanced contact center capabilities of Genesys Cloud to give customers outstanding self-service experiences.” ~ Olivier Jouve (Executive Vice President and General Manager of Genesys Cloud)

“More and more consumers are relying on automated methods to interact with brands, especially in today’s retail environment where online shopping is taking a front seat. The Genesys Cloud and Amazon Web Services (AWS) integration will make it easier to leverage conversational AI so we can provide more effective self-service experiences for our customers.” ~ Aarde Cosseboom (Senior Director of Global Member Services Technology, Analytics and Product at TechStyle Fashion Group)

How it works and who it’s for…

AWS Contact Center Intelligence solutions offer a variety of ways that organizations can quickly and cost-effectively add machine learning-based intelligence to their contact centers, via AWS pre-trained AI services. AWS CCI is currently available through participating APN partners, and it is focused on three stages of the contact center workflow: Self-Service, Live Call Analytics and Agent Assist, and Post-Call Analytics. Let’s break each one of these down.

The Self-Service solution helps with the creation of chatbots and ML-driven IVRs (interactive voice response) to address the most common queries a contact center workforce receives, freeing actual call center employees to focus on higher-value work. To implement this solution, you’ll want to work with Amazon Lex and/or Amazon Kendra. The novelty of this solution is that Lex + Kendra not only fulfills transactional queries (e.g., booking a hotel room or resetting a password), but also addresses the long tail of customer questions whose answers live in enterprise knowledge systems. Previously, these Q&As had to be hard-coded in Lex, making them harder to implement and maintain. Today, you can implement this solution directly from your existing contact center platform with AWS CCI partners, such as Genesys.
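
To get a feel for what the Kendra side of that integration does, here is a minimal boto3 sketch. It is only a sketch: the index ID is a placeholder, and the actual Lex-Kendra wiring is configured in the Lex console rather than in code like this.

import boto3

# Placeholder index ID -- use the Kendra index created for your deployment.
KENDRA_INDEX_ID = "12345678-1234-1234-1234-123456789012"

kendra = boto3.client("kendra", region_name="us-east-1")

# Ask the kind of long-tail question a caller might put to the chatbot.
response = kendra.query(
    IndexId=KENDRA_INDEX_ID,
    QueryText="How do I reset my password?",
)

for item in response["ResultItems"]:
    print(item["Type"], "-", item["DocumentExcerpt"]["Text"][:200])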

The Live Call Analytics & Agent Assist solution enables the creation of real-time ML capabilities to increase staff productivity and engagement. Here, Amazon Transcribe is used to perform real-time speech transcription, while Amazon Comprehend can analyze interactions, detect the sentiment of the caller, and identify key words and phrases in the conversation. Amazon Translate can even be added to translate the conversation into a preferred language! Now, you can implement this solution directly from several leading contact center platforms with AWS CCI partners, like SuccessKPI.
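
As a rough illustration of what happens under the hood, here is a hedged boto3 sketch that runs a single transcript fragment through Amazon Comprehend. The partner integrations do this continuously against live audio; the sample utterance here is invented.

import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# A transcript fragment as it might arrive from a real-time Amazon Transcribe stream.
utterance = "I have been waiting twenty minutes and my order still has not shipped."

sentiment = comprehend.detect_sentiment(Text=utterance, LanguageCode="en")
key_phrases = comprehend.detect_key_phrases(Text=utterance, LanguageCode="en")

print("Sentiment:", sentiment["Sentiment"])  # e.g. NEGATIVE
print("Key phrases:", [p["Text"] for p in key_phrases["KeyPhrases"]])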

The Post-Call Analytics solution provides automatic analysis of contact center conversations, which often yield actionable data for product and service feedback loops. Similar to live call analytics, this solution combines Amazon Transcribe, which performs speech recognition and creates a high-quality text transcription of each call, with Amazon Comprehend, which analyzes the interaction. Amazon Translate can be added to translate the conversation into your preferred language, and Amazon Kendra can be used for contextual natural language queries. Today, you can implement this solution directly from several leading contact center platforms with AWS CCI partners, such as Acqueon.

AWS helps partners integrate these solutions into their products. Some solutions also have a Quick Start, which includes CloudFormation templates and a deployment guide to automate the deployment. The good news is that our AWS Partner landing pages also provide additional implementation information specific to their products. 👌

Let’s see a demo…

For today’s post, we chose to focus on diving deeper into the Self-Service and Post-Call Analytics solutions, so let’s begin with Self-Service.

Self-Service
We have a public GitHub repository that has a complete Quick Start template plus a detailed deployment guide with architecture diagrams. (And the good news is that our APN partner landing pages will also reference this repo!)

This GitHub repo covers the Amazon Lex chatbot integration with Amazon Kendra. The main idea here is that customers can bring their own document repository through Amazon Kendra, which the Lex chatbot searches when customers interact with it.

The main thing to notice in this architecture is that customers can bring their existing documents and allow their chatbot to search those documents whenever someone interacts with it. The architecture below assumes our docs are in an S3 bucket, but it’s worth noting that Amazon Kendra can integrate with multiple kinds of data sources. If using an S3 bucket, customers must provide the name of the S3 bucket that holds their document repository. This is a prerequisite for deployment.

Let’s follow the instructions under the repo’s Deployment Steps, skipping ahead to Step #2, “Click Deploy to launch the CloudFormation template.”

Since this is a Quick Start template, you can see how everything is already filled out for us. We click Next and move on to Step 2, Specify stack details.

Notice how the S3 bucket section is blank. You can provide your own S3 bucket name if you want to test this out with your own docs. For today, I am going to use the S3 bucket name that was provided to us in the GitHub doc.

The next part to configure will be the Cross account role configuration section. For my demo, I will add my own AWS account ID under “Assuming Account ID.”

We click Next and move on to Step 3, Configure Stack options.

Nothing to configure here, so we can click Next again and move on to Step 4, Review. We click to accept these final acknowledgements and click Create Stack.

If we navigate over to our deployed AWS CloudFormation stacks, we can go to the Outputs of this stack and see our Kendra index name and Lex bot name.

Now if we head over to Amazon Lex, we should be able to easily find our chatbot.

We click into it and we can see that our chatbot is ready. At this point, we can start interacting with it!

We can type something like “Hi,” for example.

Eventually we would also get a response that details the reply source. What this means is that it will tell you if this came from Amazon Lex or from Amazon Kendra and the documents we saved in our S3 bucket.

Live Call Analytics & Agent Assist
We have two public GitHub repositories for this solution too, and both have detailed deployment guides with architecture diagrams as well.

This GitHub repo provides a code example and a fully functional AWS Lambda function to get you started with capturing and transcribing Amazon Chime Voice Connector phone calls using Amazon Kinesis Video Streams and Amazon Transcribe. This solution shows how to use AI and ML services alongside the customer’s existing environment to drive agent assistance or analytics. We can take a real-time voice feed, transcribe that information, and then use Amazon Comprehend to pull out the key actions and sentiment.

We now also provide the Chime SIP connector (a Chime component that lets you connect a voice-over-IP compatible environment with Amazon voice services) to stream voice into Amazon Transcribe from virtually any contact center. Our partner Vonage can do the same through WebSocket.

👉🏽 Check out the GitHub developer docs:

And as we mentioned above, for today’s post, we chose to focus on diving deeper into the Self-Service and Post-Call Analytics solutions. So let’s move on to show an example for Post-Call Analytics.

Post-Call Analytics

We have a public GitHub repository for this solution too, with another complete Quick Start template and detailed deployment guide with architecture diagrams. This solution is used after the call has ended, so that our customers can review the analytics of those calls.

This GitHub repo covers how to look for insights and information in calls that have already happened; we call this Quality Management. We can use Amazon Transcribe and Amazon Comprehend to pull out key words, information, and data in order to better understand what is happening in our contact center calls. We can then review these insights in Amazon QuickSight.

Let’s look at the architecture diagram for this solution too. Our call recording is stored in an S3 bucket, where it is picked up by a Lambda function that transcribes it using Amazon Transcribe. The transcription result goes into a different bucket, and the call’s metadata is stored in DynamoDB. Amazon Comprehend then conducts text analysis on the transcription output and stores the result in a text analysis output bucket. Finally, QuickSight provides dashboards showing the resulting call analytics.
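
For illustration, a stripped-down version of that first Lambda function might look like the sketch below. This is not the Quick Start’s actual code; the output bucket name, media format, and job-naming scheme are all assumptions.

import json
import urllib.parse
import boto3

transcribe = boto3.client("transcribe")

# Placeholder bucket -- the Quick Start wires up its own output buckets.
OUTPUT_BUCKET = "my-transcription-output-bucket"

def lambda_handler(event, context):
    # Triggered by an S3 PUT event when a new call recording lands in the bucket.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    transcribe.start_transcription_job(
        TranscriptionJobName=key.replace("/", "-"),      # job names must be unique
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="wav",                               # assumption: recordings are WAV
        LanguageCode="en-US",
        OutputBucketName=OUTPUT_BUCKET,
    )
    return {"statusCode": 200, "body": json.dumps("Transcription started")}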

Just like in the previous example, we move down to the Deployment steps section. Just like before, we have a pre-made CloudFormation template that is ready to be deployed.

Step 1, Specify template is good to go, so we click Next.

In Step 2, Specify stack details, something important to note is that the User Pool Domain Name must be globally unique.

We click Next and move on to Step 3, Configure Stack options. Nothing additional to configure here either, so we can click Next again and move on to Step 4, Review.

We click to accept these final acknowledgements and click Create Stack.

And if we navigate over to our deployed AWS CloudFormation stacks again, we can go to the Outputs of this stack and see the PortalEndpoint key. After the stack creation completes successfully, the portal website is available at the CloudFront distribution endpoint, and this key is what allows us to find the portal URL.

We need to have a user created in Amazon Cognito for the next steps to work. (If you have never created one, visit this how-to guide.)
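
If you prefer scripting over the console, creating that Cognito user can also be done with boto3. This is a sketch with placeholder values; take the actual user pool ID from the stack’s outputs.

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# Placeholder values -- substitute your own user pool ID and user details.
cognito.admin_create_user(
    UserPoolId="us-east-1_EXAMPLE",
    Username="analyst@example.com",
    TemporaryPassword="ChangeMe123!",
    UserAttributes=[{"Name": "email", "Value": "analyst@example.com"}],
)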

⚠ NOTE: Make sure to open the portal URL endpoint in a separate incognito window, as the portal attaches a QuickSight user role that can interfere with your actual role.

We go to the portal URL and log in with our created Cognito user. We’re prompted to change the temporary password and are eventually directed to the QuickSight home page.

Now we want to upload the audio files of our calls and we can do so with the Upload button.

After we successfully upload our audio files, the audio processing runs through transcription and text analysis. At this point, we can click the Call Analytics logo at the top left of the navigation bar to return to the home page.

Now we can drill down into a call to see Amazon Comprehend’s result of the call classifications and turn-by-turn sentiments.

🌎 Lastly…

Regional availability for AWS Contact Center Intelligence (CCI) solutions corresponds to the underlying services (Amazon Comprehend, Amazon Kendra, Amazon Lex, Amazon Transcribe, Amazon Translate) used.

We are announcing AWS CCI availability with 12 APN partners: Genesys, UiPath, Vonage, Acqueon, SuccessKPI, and Inference Solutions (Technology partners), and Slalom, Onica/Rackspace, TensorIoT, Quantiphi, Accenture, and HGS Digital (Consulting partners).

Ready to get started? Contact one of the AWS CCI launch partners listed on the AWS CCI web page.

You may also want to see…

👉🏽AWS Quick Start links from post:

¡Gracias por tu tiempo!
~Alejandra 💁🏻‍♀️🤖 y Canela 🐾

19 additional AWS services authorized at DoD Impact Level 5 for AWS GovCloud (US) Regions

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/19-additional-aws-services-authorized-dod-impact-level-5-aws-govcloud-us-regions/

I’m excited to share that the Defense Information Systems Agency (DISA) has authorized 19 additional AWS services at Impact Level (IL) 5 and four services at IL 4 in the AWS GovCloud (US) Regions.

With these additional 19 services, a total of 80 AWS services and features are authorized at IL 4 and IL 5 and available for DoD Mission Owners to process workloads under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). DISA’s authorization demonstrates that AWS effectively implemented over 421 security controls using applicable criteria from NIST SP 800-53 Rev. 4, the US General Services Administration’s FedRAMP High baseline, and the DoD CC SRG for Impact Level 5.

The authorization at DoD IL 4 and IL 5 allows DoD Mission Owners to process controlled unclassified information (CUI) and to include mission critical workloads for National Security Systems in AWS GovCloud (US) Regions. This authorization supplements the full range of U.S. Government data classifications supported on AWS. AWS remains the only cloud service provider accredited to address the full range, including Unclassified, Secret, and Top Secret.

The recently authorized AWS services and features at DoD Impact Level 5 include the following:

  1. Amazon AppStream 2.0 (also authorized at IL 4)
  2. Amazon Cloud Directory
  3. Amazon Comprehend
  4. Amazon Kinesis Data Firehose
  5. Amazon Route 53
  6. Amazon Transcribe
  7. Amazon Translate
  8. AWS CodeBuild
  9. AWS CodeCommit
  10. AWS DataSync
  11. AWS Elemental MediaConvert (also authorized at IL 4)
  12. AWS IoT Greengrass
  13. AWS License Manager
  14. AWS Organizations
  15. AWS Secrets Manager (also authorized at IL 4)
  16. AWS Serverless Application Repository (also authorized at IL 4)
  17. AWS Service Catalog
  18. AWS Trusted Advisor
  19. AWS Web Application Firewall (WAF)

The addition of the 19 new services will allow DoD Mission Owners and their developers from the Defense Industrial Base to use the newly authorized AWS services and features to solve critical mission challenges as shown below:

DevOps:

  • Leverage a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy using AWS CodeBuild.
  • Use a fully managed source control service to collaborate on code in a secure and highly scalable system with AWS CodeCommit.

Artificial Intelligence / Machine Learning and Big Data:

  • Uncover the insights and relationships in unstructured data and text using Amazon Comprehend.
  • Accurately transcribe and translate large volumes of text using Amazon Transcribe and Amazon Translate.
  • Easily move large amounts of data online between on-premises storage and storage services (i.e., Amazon S3 and Amazon Elastic File System) using AWS DataSync, and reliably load streaming data into data lakes, data stores, and analytics services using Amazon Kinesis Data Firehose.

Administration and Security:

  • Create and manage licenses and catalogs of IT services that are approved for use on AWS (i.e., AWS License Manager and AWS Service Catalog).
  • Provide scalable workload management with AWS Organizations.
  • Get real-time guidance to optimize workload provisioning with AWS Trusted Advisor.
  • Rotate, manage, and retrieve credentials with AWS Secrets Manager.
  • Protect web applications using AWS Web Application Firewall (WAF).

IAM, IoT, Networking, Serverless, Tactical Edge:

  • Organize hierarchies of data along multiple dimensions using Amazon Cloud Directory.
  • Store, share, and deploy applications through a serverless architecture using AWS Serverless Application Repository.
  • Build out and connect Internet of Things (IoT) environments with AWS IoT Greengrass.
  • Efficiently route traffic to Internet applications with Amazon Route 53.
  • Enable file-based video transcoding with AWS Elemental MediaConvert.
  • Centrally manage and securely deliver desktop applications to any computer with Amazon AppStream 2.0.

Figure 1 below highlights the new services now available to DoD Mission Owners.

Figure 1: The new AWS services now available, broken out into categories.

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Follow the AWS Security Blog for future updates on our Services in Scope by Compliance Program page. If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.

AWS announces WorldForge in AWS RoboMaker

Post Syndicated from Alejandra Quetzalli original https://aws.amazon.com/blogs/aws/aws-announces-worldforge-in-aws-robomaker/

What was announced?

Introducing AWS RoboMaker WorldForge 🤖, a new capability that makes robotics simulation easier! 🎉

AWS RoboMaker WorldForge makes it faster, simpler, and less expensive to create a multitude of 3D virtual worlds to simulate your robots in. Now robotics application developers and QA engineers can automatically create hundreds of user-defined, randomized 3D virtual environments that mimic real-world conditions without extra engineering investment or infrastructure management.

How it works and who it’s for…

Robotics companies are building increasingly sophisticated robots with autonomous capabilities and artificial intelligence. To develop these capabilities, developer teams need simulation to test and train their robots. Scaling simulation unlocks the ability to make testing more robust, reinforcement learning faster, and synthetic data generation (information that’s artificially manufactured rather than generated by real-world events) more affordable.

Until today, it was difficult for robotics teams to do simulation because building even just one 3D world was costly, time consuming, and required specialized skills in 3D modeling and knowledge of simulation engines. Given the investment needed to create a single simulation world, it was almost impossible to build enough worlds to effectively scale simulation. With AWS RoboMaker WorldForge, robotics developers can easily increase the scale and variance of simulation, improving the quality of production code and accelerating time to market.

Let’s see a demo…

Who wants to see a demo? I know I do. 😁

Let’s head over to the AWS Console and search for AWS RoboMaker. We click through to the AWS RoboMaker console and upon opening the side navigation menu, we notice a new section for AWS RoboMaker WorldForge capabilities.

Under Simulation WorldForge, we go to World templates and click the “Create template” button.

We have two options: start a world from scratch, or use one of the sample templates that RoboMaker WorldForge provides out of the box. Although we could start from scratch, for today’s blog post we’re going to use a sample template.

After we create this new template, we will be able to edit and customize it to create one or more random worlds.

AWS RoboMaker WorldForge provides four sample templates: a bedroom, a living room/dining room, a one-bedroom apartment, and a small house.

For this demo, we will choose the living room/dining room template to start with.

Do you notice the green cylinder in the world? If you’re thinking that looked a little out of place, you’re right! That’s not a typical piece of furniture you would have in your home! It’s actually an indicator of the starting position when you put your robot into the world for simulation.

This creates a new world template in your account and opens an interactive console that allows us to edit and customize our worlds and rooms.

But first, let’s go ahead and generate some worlds with as few as 4 clicks.

I’ll start by generating just one world.

Now RoboMaker WorldForge starts a job that is going to create this world.

We can click through our new world and view it in detail.

As you can see, this world is now a resource in your account. We can use this for simulation.

Note how this world is similar but not identical to the world in the sample template. It still has two rooms and furniture, but the furniture is in a different location, the materials for the floor and walls are different, and the furniture setup varies. We could also place our robot in these worlds to test our robot application’s performance in each world.

Ok. Let’s go into the World template and take a detailed look at it. We’re going to edit our sample template that we started with, and we’re going to add new rooms.

We can also name our templates.

Now let’s edit our template. We’re starting with our floor plan dimensions. I am going to use a 1:3 ratio and make my ceiling taller too.

Perhaps we want to edit one of the rooms. I can edit the desired aspect ratio and area.

We proceed to make this room have a 1:3 ratio as well.

But we can also add even more rooms. In fact, we have 7 types of rooms to choose from.

Now we want to customize where these are in the layout and that is what Connections is for. Connections helps you specify a preference for which rooms are adjacent to each other.

By default, AWS RoboMaker WorldForge fully connects your floor plans. There are two types of connections: openings and doorways.

We can also customize the Interiors. Interiors are things such as walls and flooring materials, and you must select which specific rooms you wish to update.

We can also override the custom furniture that comes in each world template.

Ok. Let’s take a step back and recap what we’ve done so far.

We’ve decorated an interior, created a 6-room floor plan, customized the room connections, and saved this template. Now, QA and Robotics engineers can just as easily generate worlds for regression testing with these capabilities!

But now, we want to create more worlds. 🙌🏽

I click on the “Generate Worlds” button in the top right corner.

We can configure the number of worlds by specifying the number of floor plans and interior variations per floor plan.

You can request up to a total of 50 worlds! For today’s demo, we’re going to choose 10 floor plans and 3 variations, for a total of 30 worlds.
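
The console handles this in a few clicks, but the same request can be scripted. Here is a hedged boto3 sketch of the equivalent API call; the template ARN is a placeholder for the one created above.

import boto3

robomaker = boto3.client("robomaker", region_name="us-east-1")

# Placeholder ARN -- use the ARN of the world template created earlier.
response = robomaker.create_world_generation_job(
    template="arn:aws:robomaker:us-east-1:123456789012:world-template/my-template-id",
    worldCount={
        "floorplanCount": 10,            # 10 distinct floor plans
        "interiorCountPerFloorplan": 3,  # 3 interior variations each = 30 worlds
    },
)
print(response["arn"], response["status"])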

There we go! We just requested 30 worlds for our 6-room floor plan. This is the generation job page.

Our worlds will stream in here as they are completed. Once completed, we’ll be able to see each of these worlds and their variations. You can then test your robot’s navigation, moving it through each of these new complex worlds and then compare the results.

This is just one example of how you can create a multitude of 3D virtual worlds to simulate your robots in with AWS RoboMaker WorldForge.

🌎 Lastly…

AWS RoboMaker WorldForge is now Generally Available in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore).

To start building with AWS RoboMaker WorldForge, visit the product landing page and the Developer Guide.

 

¡Gracias por tu tiempo!
~Alejandra 💁🏻‍♀️🤖 y Canela 🐾

Amazon ECS Now Supports EC2 Inf1 Instances

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-ecs-now-supports-ec2-inf1-instances/

As machine learning and deep learning models become more sophisticated, hardware acceleration is increasingly required to deliver fast predictions at high throughput. Today, we’re very happy to announce that AWS customers can now use the Amazon EC2 Inf1 instances on Amazon ECS, for high performance and the lowest prediction cost in the cloud. For a few weeks now, these instances have also been available on Amazon Elastic Kubernetes Service.

A primer on EC2 Inf1 instances
Inf1 instances were launched at AWS re:Invent 2019. They are powered by AWS Inferentia, a custom chip built from the ground up by AWS to accelerate machine learning inference workloads.

Inf1 instances are available in multiple sizes, with 1, 4, or 16 AWS Inferentia chips, with up to 100 Gbps network bandwidth and up to 19 Gbps EBS bandwidth. An AWS Inferentia chip contains four NeuronCores. Each one implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, saving I/O time in the process. When several AWS Inferentia chips are available on an Inf1 instance, you can partition a model across them and store it entirely in cache memory. Alternatively, to serve multi-model predictions from a single Inf1 instance, you can partition the NeuronCores of an AWS Inferentia chip across several models.

Compiling Models for EC2 Inf1 Instances
To run machine learning models on Inf1 instances, you need to compile them to a hardware-optimized representation using the AWS Neuron SDK. All tools are readily available on the AWS Deep Learning AMI, and you can also install them on your own instances. You’ll find instructions in the Deep Learning AMI documentation, as well as tutorials for TensorFlow, PyTorch, and Apache MXNet in the AWS Neuron SDK repository.
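
To give a rough idea of what that compilation step looks like, here is a minimal sketch using the TensorFlow flavor of the Neuron SDK. It assumes you have already exported your model as a TensorFlow SavedModel; the directory names are placeholders.

import tensorflow.neuron as tfn

# Placeholder paths -- point these at your exported SavedModel.
MODEL_DIR = "bert_savedmodel"
COMPILED_MODEL_DIR = "bert_savedmodel_neuron"

# Compile the SavedModel into a Neuron-optimized SavedModel that
# tensorflow-model-server-neuron can load on an Inf1 instance.
tfn.saved_model.compile(MODEL_DIR, COMPILED_MODEL_DIR)

The compiled model directory is what gets copied into the serving container later in this post.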

In the demo below, I will show you how to deploy a Neuron-optimized model on an ECS cluster of Inf1 instances, and how to serve predictions with TensorFlow Serving. The model in question is BERT, a state-of-the-art model for natural language processing tasks. This is a huge model with hundreds of millions of parameters, making it a great candidate for hardware acceleration.

Creating an Amazon ECS Cluster
Creating a cluster is the simplest thing: all it takes is a call to the CreateCluster API.

$ aws ecs create-cluster --cluster-name ecs-inf1-demo

Immediately, I see the new cluster in the console.

New cluster

Several prerequisites must be in place before we can add instances to this cluster:

  • An AWS Identity and Access Management (IAM) role for ECS instances: if you don’t have one already, you can find instructions in the documentation. Here, my role is named ecsInstanceRole.
  • An Amazon Machine Image (AMI) containing the ECS agent and supporting Inf1 instances. You could build your own, or use the ECS-optimized AMI for Inferentia. In the us-east-1 Region, its ID is ami-04450f16e0cd20356.
  • A Security Group, opening network ports for TensorFlow Serving (8500 for gRPC, 8501 for HTTP). The identifier for mine is sg-0994f5c7ebbb48270.
  • If you’d like to have ssh access, your Security Group should also open port 22, and you should pass the name of an SSH key pair. Mine is called admin.

We also need to create a small user data file in order to let instances join our cluster. This is achieved by storing the name of the cluster in an environment variable, itself written to the configuration file of the ECS agent.

#!/bin/bash
echo ECS_CLUSTER=ecs-inf1-demo >> /etc/ecs/ecs.config

We’re all set. Let’s add a couple of Inf1 instances with the RunInstances API. To minimize cost, we’ll request Spot Instances.

$ aws ec2 run-instances \
--image-id ami-04450f16e0cd20356 \
--count 2 \
--instance-type inf1.xlarge \
--instance-market-options '{"MarketType":"spot"}' \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ecs-inf1-demo}]' \
--key-name admin \
--security-group-ids sg-0994f5c7ebbb48270 \
--iam-instance-profile Name=ecsInstanceRole \
--user-data file://user-data.txt

Both instances appear right away in the EC2 console.

Inf1 instances

A couple of minutes later, they’re ready to run tasks on the cluster.

Inf1 instances

Our infrastructure is ready. Now, let’s build a container storing our BERT model.

Building a Container for Inf1 Instances
The Dockerfile is pretty straightforward:

  • Starting from an Amazon Linux 2 image, we open ports 8500 and 8501 for TensorFlow Serving.
  • Then, we add the Neuron SDK repository to the list of repositories, and we install a version of TensorFlow Serving that supports AWS Inferentia.
  • Finally, we copy our BERT model inside the container, and we load it at startup.

Here is the complete file.

FROM amazonlinux:2
EXPOSE 8500 8501
RUN echo $'[neuron] \n\
name=Neuron YUM Repository \n\
baseurl=https://yum.repos.neuron.amazonaws.com \n\
enabled=1' > /etc/yum.repos.d/neuron.repo
RUN rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
RUN yum install -y tensorflow-model-server-neuron
COPY bert /bert
CMD ["/bin/sh", "-c", "/usr/local/bin/tensorflow_model_server_neuron --port=8500 --rest_api_port=8501 --model_name=bert --model_base_path=/bert/"]

Then, I build and push the container to a repository hosted in Amazon Elastic Container Registry. Business as usual.

$ docker build -t neuron-tensorflow-inference .

$ aws ecr create-repository --repository-name ecs-inf1-demo

$ aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

$ docker tag neuron-tensorflow-inference 123456789012.dkr.ecr.us-east-1.amazonaws.com/ecs-inf1-demo:latest

$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/ecs-inf1-demo:latest

Now, we need to create a task definition in order to run this container on our cluster.

Creating a Task Definition for Inf1 Instances
If you don’t have one already, you should first create an execution role, i.e., a role that allows the ECS agent to perform API calls on your behalf. You can find more information in the documentation. Mine is called ecsTaskExecutionRole.

The full task definition is visible below. As you can see, it holds two containers:

  • The BERT container that I built,
  • A sidecar container called neuron-rtd, that allows the BERT container to access NeuronCores present on the Inf1 instance. The AWS_NEURON_VISIBLE_DEVICES environment variable lets you control which ones may be used by the container. You could use it to pin a container on one or several specific NeuronCores.
{
  "family": "ecs-neuron",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "entryPoint": [
        "sh",
        "-c"
      ],
      "portMappings": [
        {
          "hostPort": 8500,
          "protocol": "tcp",
          "containerPort": 8500
        },
        {
          "hostPort": 8501,
          "protocol": "tcp",
          "containerPort": 8501
        },
        {
          "hostPort": 0,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "command": [
        "tensorflow_model_server_neuron --port=8500 --rest_api_port=8501 --model_name=bert --model_base_path=/bert"
      ],
      "cpu": 0,
      "environment": [
        {
          "name": "NEURON_RTD_ADDRESS",
          "value": "unix:/sock/neuron-rtd.sock"
        }
      ],
      "mountPoints": [
        {
          "containerPath": "/sock",
          "sourceVolume": "sock"
        }
      ],
      "memoryReservation": 1000,
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ecs-inf1-demo:latest",
      "essential": true,
      "name": "bert"
    },
    {
      "entryPoint": [
        "sh",
        "-c"
      ],
      "portMappings": [],
      "command": [
        "neuron-rtd -g unix:/sock/neuron-rtd.sock"
      ],
      "cpu": 0,
      "environment": [
        {
          "name": "AWS_NEURON_VISIBLE_DEVICES",
          "value": "ALL"
        }
      ],
      "mountPoints": [
        {
          "containerPath": "/sock",
          "sourceVolume": "sock"
        }
      ],
      "memoryReservation": 1000,
      "image": "790709498068.dkr.ecr.us-east-1.amazonaws.com/neuron-rtd:latest",
      "essential": true,
      "linuxParameters": { "capabilities": { "add": ["SYS_ADMIN", "IPC_LOCK"] } },
      "name": "neuron-rtd"
    }
  ],
  "volumes": [
    {
      "name": "sock",
      "host": {
        "sourcePath": "/tmp/sock"
      }
    }
  ]
}

Finally, I call the RegisterTaskDefinition API to let the ECS backend know about it.

$ aws ecs register-task-definition --cli-input-json file://inf1-task-definition.json

We’re now ready to run our container, and predict with it.

Running a Container on Inf1 Instances
As this is a prediction service, I want to make sure that it’s always available on the cluster. Instead of simply running a task, I create an ECS Service that will make sure the required number of container copies is running, relaunching them should any failure happen.

$ aws ecs create-service --cluster ecs-inf1-demo \
--service-name bert-inf1 \
--task-definition ecs-neuron:1 \
--desired-count 1

A minute later, I see that both task containers are running on the cluster.

Running containers

Predicting with BERT on ECS and Inf1
The inner workings of BERT are beyond the scope of this post. This particular model expects a sequence of 128 tokens, encoding the words of two sentences we’d like to compare for semantic equivalence.

Here, I’m only interested in measuring prediction latency, so dummy data is fine. I build 100 prediction requests, each storing a sequence of 128 zeros. Using the IP address of the BERT container, I send them to the TensorFlow Serving endpoint via gRPC, and I compute the average prediction time.

Here is the full code.

import numpy as np
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
import time

if __name__ == '__main__':
    channel = grpc.insecure_channel('18.234.61.31:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'bert'
    i = np.zeros([1, 128], dtype=np.int32)
    request.inputs['input_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['input_mask'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['segment_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))

    latencies = []
    for i in range(100):
        start = time.time()
        result = stub.Predict(request)
        latencies.append(time.time() - start)
        print("Inference successful: {}".format(i))
    print ("Ran {} inferences successfully. Latency average = {}".format(len(latencies), np.average(latencies)))

For convenience, I’m running this code on an EC2 instance based on the Deep Learning AMI. It comes pre-installed with a Conda environment for TensorFlow and TensorFlow Serving, saving me from installing any dependencies.

$ source activate tensorflow_p36
$ python predict.py

On average, prediction took 56.5ms. As far as BERT goes, this is pretty good!

Ran 100 inferences successfully. Latency average = 0.05647835493087769

Getting Started
You can now deploy Amazon Elastic Compute Cloud (EC2) Inf1 instances on Amazon ECS today in the US East (N. Virginia) and US West (Oregon) regions. As Inf1 deployment progresses, you’ll be able to use them with Amazon ECS in more regions.

Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for Amazon ECS, or on the container roadmap on GitHub.

– Julien

Deploying your first 5G enabled application with AWS Wavelength

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/deploying-your-first-5g-enabled-application-with-aws-wavelength/

This post was written by Mike Coleman, Senior Developer Advocate, Twitter handle: @mikegcoleman

Today, AWS released AWS Wavelength. Wavelength allows you to deploy applications and services at the edge of a mobile carrier’s 5G network. By combining the benefits of 5G, such as high bandwidth and low latency, with the ability to use AWS tools and services you’re already familiar with, you’re able to build next generation edge applications quickly and easily.

Rather than go into more depth about Wavelength in this blog, I’d recommend reading Jeff Barr’s blog post. His post goes into detail about why we built Wavelength, and how you can get started with deploying AWS resources in a Wavelength Zone.

In this blog, I walk you through deploying one of the most common Wavelength use cases: machine learning inference.

Why inference at the edge?

One of the tradeoffs with machine learning applications is system responsiveness. If your application must be highly responsive, you may need to deploy your inference processing application close to the end user. In the case of mobile devices, this could mean that the inference processing takes place on the device itself. This type of additional processing demand on the device often results in reduced device battery life among other tradeoffs. Additionally, if you need to update your machine learning model, you must push out an update to all the devices running your application.

As I mentioned earlier, one of the key benefits of 5G and Wavelength is significantly lower latencies compared to previous generation mobile networks. For edge applications, this implies you can actually perform inference processing in a Wavelength zone with near real-time responsiveness to the mobile device. By moving the inference processing to the Wavelength zone, you reduce power consumption and battery drain on the mobile device. Additionally, you can simplify application updates.  If you need to make a change to your training model, you simply update your servers in the Wavelength Zone instead of having to ship a new version to all the devices running your code.

Solution Overview

architecture of the wavelength zone

The following tutorial guides you through deploying an object detection application that is composed of four components:

  • A Wavelength-hosted API endpoint (using Flask)
  • A Wavelength-hosted Inference server (running Torchserve)
  • A React web app, accessed via the browser of a mobile device running on the carrier’s 5G network.
  • A server that acts as a bastion host, allowing you to SSH into your other instances, and as a web server for the React web application.

The API server is built using Python and Flask, and runs on a t3.medium instance based on a standard Ubuntu 18.04 image. It accepts an image from the client application running on a device connected to the carrier’s 5G mobile network, and forwards it to the inference server. The inference server returns the detected object along with coordinates for that object (or an error if it can’t detect any objects). The API server adds a text label and bounding boxes to the image and returns it to the mobile client.
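
To make the flow concrete, here is a heavily simplified sketch of what such an API server could look like. This is not the demo’s actual code: the route, model name, and inference server address are assumptions, and the real application also draws the labels and bounding boxes on the image before responding.

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder endpoint -- Torchserve's inference API listens on port 8080;
# the model name depends on how the model archive was registered.
TORCHSERVE_URL = "http://10.0.0.10:8080/predictions/fasterrcnn"

@app.route("/detect", methods=["POST"])
def detect():
    # Forward the raw image bytes from the mobile client to the inference server.
    image_bytes = request.files["image"].read()
    resp = requests.post(TORCHSERVE_URL, data=image_bytes)
    if resp.status_code != 200:
        return jsonify({"error": "no objects detected"}), resp.status_code
    return jsonify({"detections": resp.json()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)   # port 5000 matches the API security group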

The inference server runs Torchserve, an open source project that provides a flexible and easy way to serve PyTorch models. Object detection is done using a Faster R-CNN model, deployed on a g4dn.2xlarge instance running the AWS Deep Learning Amazon Machine Image (AMI).

You will use the web browser on your mobile device to access the web server, which hosts the client application written in React.

Wavelength is designed to provide access to services and applications that require low latency. It’s important to note that you don’t need to deploy your entire application in a Wavelength Zone. You only need to deploy parts of your application that benefit from being deployed in the Wavelength Zone – such as application components requiring low latency.

In the case of the demo application, the API and inference servers are located in the Wavelength Zone because one of the design goals of the application is low-latency processing of the inference requests.

On the other hand, because the web server is only serving a small single page React web app, it does not have the same latency requirements as the inference processing. For that reason, it’s hosted in the Region instead of the Wavelength Zone.

Prerequisites

To complete the walkthrough below, you need:

  • To be familiar working from the command line, including editing text files.
  • The AWS CLI installed on your local machine. Ensure it’s the latest version so it supports Wavelength.
  • An administrative account with sufficient permissions to create VPC resources (instances, subnets, etc).
  • In order to access resources in a Wavelength Zone, you need a mobile device on a carrier’s 5G mobile network in a city that has access to the Zone. The following tutorial is written to be deployed in the Boston Wavelength Zone, but you can adjust the environment variables for the Zone and Region to deploy it in other areas.
  • An SSH key pair in the us-east-1 Region.
  • The commands below work on Mac and Linux machines. If you are on a Windows machine the easiest way to run through the tutorial is to spin up a Linux-based EC2 instance, install and configure the AWS CLI, and run the commands from the EC2 instance’s command line.

Create the VPC and associated resources

The first step in this tutorial is deploying to the VPC, internet gateway, and carrier gateway.

Start by configuring some environment variables, and then deploying the resources.

  1. In order to get started, you need to first set some environment variables.
    Note: replace the value for KEY_NAME with the name of the key pair you wish to use.
    Note: these values are specific to the us-east-1 Region. If you wish to deploy into another region, you’ll need to modify them as appropriate. Check the documentation for more info.

    export REGION="us-east-1"
    export WL_ZONE="us-east-1-wl1-bos-wlz-1"
    export NBG="us-east-1-wl1-bos-wlz-1"
    export INFERENCE_IMAGE_ID="ami-029510cec6d69f121"
    export API_IMAGE_ID="ami-0ac80df6eff0e70b5"
    export BASTION_IMAGE_ID="ami-027b7646dafdbe9fa"
    export KEY_NAME=<your key name>
  2. Use the AWS CLI to create the VPC.
    export VPC_ID=$(aws ec2 --region $REGION \
    --output text \
    create-vpc \
    --cidr-block 10.0.0.0/16 \
    --query 'Vpc.VpcId') \
    && echo '\nVPC_ID='$VPC_ID
  3. Create an internet gateway and attach it to the VPC.
    export IGW_ID=$(aws ec2 --region $REGION \
    --output text \
    create-internet-gateway \
    --query 'InternetGateway.InternetGatewayId') \
    && echo '\nIGW_ID='$IGW_ID
    aws ec2 --region $REGION \
    attach-internet-gateway \
    --vpc-id $VPC_ID \
    --internet-gateway-id $IGW_ID
  4. Add the carrier gateway.
    export CAGW_ID=$(aws ec2 --region $REGION \
    --output text \
    create-carrier-gateway \
    --vpc-id $VPC_ID \
    --query 'CarrierGateway.CarrierGatewayId') \
    && echo '\nCAGW_ID='$CAGW_ID

Deploy the security groups

In this section, you add three security groups:

  • Bastion SG allows SSH traffic from your local machine to the bastion host as well as HTTP web traffic from the Internet
  • API SG allows SSH traffic from the Bastion SG and opens up port 5000 to accept incoming API requests
  • Inference SG allows SSH traffic from the bastion host and communications on ports 8080 and 8081 (the ports used by the inference server) from the API SG.


  1. Create the bastion security group and add the ingress SSH rule. Note: SSH access is only allowed from your current IP address. You can adjust this if needed by changing the --cidr parameter in the second command.
    export BASTION_SG_ID=$(aws ec2 --region $REGION \
    --output text \
    create-security-group \
    --group-name bastion-sg \
    --description "Security group for bastion host" \
    --vpc-id $VPC_ID \
    --query 'GroupId') \
    && echo '\nBASTION_SG_ID='$BASTION_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $BASTION_SG_ID \
    --protocol tcp \
    --port 22 \
    --cidr $(curl https://checkip.amazonaws.com)/32
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $BASTION_SG_ID \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
    
  2. Create the API security group along with two ingress rules: one for SSH from the bastion security group and one opening up the port the API server communicates on (5000).
    export API_SG_ID=$(aws ec2 --region $REGION \
    --output text \
    create-security-group \
    --group-name api-sg \
    --description "Security group for API host" \
    --vpc-id $VPC_ID \
    --query 'GroupId') \
    && echo '\nAPI_SG_ID='$API_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $API_SG_ID \
    --protocol tcp \
    --port 22 \
    --source-group $BASTION_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $API_SG_ID \
    --protocol tcp \
    --port 5000 \
    --cidr 0.0.0.0/0
  3. Create the security group for the inference server along with three ingress rules: one for SSH from the bastion security group, and opening the ports the inference server communicates on (8080 and 8081) to the API security group.
    export INFERENCE_SG_ID=$(aws ec2 --region $REGION \
    --output text \
    create-security-group \
    --group-name inference-sg \
    --description "Security group for inference host" \
    --vpc-id $VPC_ID \
    --query 'GroupId') \
    && echo '\nINFERENCE_SG_ID='$INFERENCE_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $INFERENCE_SG_ID \
    --protocol tcp \
    --port 22 \
    --source-group $BASTION_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $INFERENCE_SG_ID \
    --protocol tcp \
    --port 8080 \
    --source-group $API_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $INFERENCE_SG_ID \
    --protocol tcp \
    --port 8081 \
    --source-group $API_SG_ID
    
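Before moving on, a quick sanity check (an optional addition to the walkthrough) lists the three security groups you just created in the VPC:

    aws ec2 --region $REGION \
    --output table \
    describe-security-groups \
    --filters Name=vpc-id,Values=$VPC_ID \
    --query 'SecurityGroups[].[GroupName,GroupId]'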

Add the subnets and routing tables

In the following steps you’ll create two subnets along with their associated routing tables and routes.

  1. Create the subnet for the Wavelength Zone
    export WL_SUBNET_ID=$(aws ec2 --region $REGION \
    --output text \
    create-subnet \
    --cidr-block 10.0.0.0/24 \
    --availability-zone $WL_ZONE \
    --vpc-id $VPC_ID \
    --query 'Subnet.SubnetId') \
    && echo '\nWL_SUBNET_ID='$WL_SUBNET_ID
    
  2. Create the route table for the Wavelength subnet
    export WL_RT_ID=$(aws ec2 --region $REGION \
    --output text \
    create-route-table \
    --vpc-id $VPC_ID \
    --query 'RouteTable.RouteTableId') \
    && echo '\nWL_RT_ID='$WL_RT_ID
    
  3. Associate the route table with the Wavelength subnet, and add a route that sends traffic to the carrier gateway, which in turn routes it to the carrier's mobile network.
    aws ec2 --region $REGION \
    associate-route-table \
    --route-table-id $WL_RT_ID \
    --subnet-id $WL_SUBNET_ID
    
    aws ec2 --region $REGION create-route \
    --route-table-id $WL_RT_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --carrier-gateway-id $CAGW_ID
    

Next, repeat the same process to create the subnet and routing for the bastion subnet.

  1. Create the bastion subnet
    export BASTION_SUBNET_ID=$(aws ec2 --region $REGION \
    --output text \
    create-subnet \
    --cidr-block 10.0.1.0/24 \
    --vpc-id $VPC_ID \
    --query 'Subnet.SubnetId') \
    && echo '\nBASTION_SUBNET_ID='$BASTION_SUBNET_ID
    
  2. Deploy the bastion subnet route table and a route to direct traffic to the internet gateway
    export BASTION_RT_ID=$(aws ec2 --region $REGION \
    --output text \
    create-route-table \
    --vpc-id $VPC_ID \
    --query 'RouteTable.RouteTableId') \
    && echo '\nBASTION_RT_ID='$BASTION_RT_ID
    
    aws ec2 --region $REGION \
    create-route \
    --route-table-id $BASTION_RT_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id $IGW_ID
    
    aws ec2 --region $REGION \
    associate-route-table \
    --subnet-id $BASTION_SUBNET_ID \
    --route-table-id $BASTION_RT_ID
    
  3. Modify the bastion’s subnet to assign public IPs by default
    aws ec2 --region $REGION \
    modify-subnet-attribute \
    --subnet-id $BASTION_SUBNET_ID \
    --map-public-ip-on-launch
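
If you want to verify the routing before moving on, the following read-only check (an optional addition) shows the default routes in both route tables. The Wavelength table should point at the carrier gateway and the bastion table at the internet gateway:

    aws ec2 --region $REGION \
    describe-route-tables \
    --route-table-ids $WL_RT_ID $BASTION_RT_ID \
    --query 'RouteTables[].Routes[?DestinationCidrBlock==`0.0.0.0/0`]'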

Create the Elastic IPs and network interfaces

The final step before deploying the actual instances is to create two carrier IPs: IP addresses associated with the carrier network. These addresses will be assigned to two elastic network interfaces (ENIs), and the ENIs will be attached to the API and inference servers (the bastion host has its public IP assigned at launch by the bastion subnet).

  1. Create two carrier IPs, one for the API server and one for the inference server
    export INFERENCE_CIP_ALLOC_ID=$(aws ec2 --region $REGION \
    --output text \
    allocate-address \
    --domain vpc \
    --network-border-group $NBG \
    --query 'AllocationId') \
    && echo '\nINFERENCE_CIP_ALLOC_ID='$INFERENCE_CIP_ALLOC_ID
    
    export API_CIP_ALLOC_ID=$(aws ec2 --region $REGION \
    --output text \
    allocate-address \
    --domain vpc \
    --network-border-group $NBG \
    --query 'AllocationId') \
    && echo '\nAPI_CIP_ALLOC_ID='$API_CIP_ALLOC_ID
    
  2. Create two elastic network interfaces (ENIs)
    export INFERENCE_ENI_ID=$(aws ec2 --region $REGION \
    --output text \
    create-network-interface \
    --subnet-id $WL_SUBNET_ID \
    --groups $INFERENCE_SG_ID \
    --query 'NetworkInterface.NetworkInterfaceId') \
    && echo '\nINFERENCE_ENI_ID='$INFERENCE_ENI_ID
    
    export API_ENI_ID=$(aws ec2 --region $REGION \
    --output text \
    create-network-interface \
    --subnet-id $WL_SUBNET_ID \
    --groups $API_SG_ID \
    --query 'NetworkInterface.NetworkInterfaceId') \
    && echo '\nAPI_ENI_ID='$API_ENI_ID
    
  3. Associate the carrier IPs with the ENIs
    aws ec2 --region $REGION associate-address \
    --allocation-id $INFERENCE_CIP_ALLOC_ID \
    --network-interface-id $INFERENCE_ENI_ID   
    
    aws ec2 --region $REGION associate-address \
    --allocation-id $API_CIP_ALLOC_ID \
    --network-interface-id $API_ENI_ID
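
To see the carrier IP addresses that were allocated (useful later, since the client application calls the API server's carrier IP), you can optionally run:

    aws ec2 --region $REGION \
    --output table \
    describe-addresses \
    --allocation-ids $INFERENCE_CIP_ALLOC_ID $API_CIP_ALLOC_ID \
    --query 'Addresses[].[AllocationId,CarrierIp]'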

Deploy the API and inference instances

With the VPC and underlying networking and security deployed, you can now move on to deploying your API and inference instances. The API server is a t3.medium based on a standard Ubuntu 18.04 AMI. The inference server is a g4dn.2xlarge running the AWS Deep Learning AMI. You install and configure the software components in subsequent steps.


  1. Deploy the API instance
    aws ec2 --region $REGION \
    run-instances \
    --instance-type t3.medium \
    --network-interface '[{"DeviceIndex":0,"NetworkInterfaceId":"'$API_ENI_ID'"}]' \
    --image-id $API_IMAGE_ID \
    --key-name $KEY_NAME
    
  2. Deploy the inference instance. Note: $INFERENCE_IMAGE_ID is assumed to hold the Deep Learning AMI ID captured earlier, following the naming pattern of the other variables.
    aws ec2 --region $REGION \
    run-instances \
    --instance-type g4dn.2xlarge \
    --network-interface '[{"DeviceIndex":0,"NetworkInterfaceId":"'$INFERENCE_ENI_ID'"}]' \
    --image-id $INFERENCE_IMAGE_ID \
    --key-name $KEY_NAME

Deploy the bastion / web server

You must deploy a bastion server in order to SSH into your application instances. Remember that the carrier gateway in a Wavelength Zone only allows ingress from the carrier’s 5G network. This means that in order to SSH into the API and inference servers you need to first SSH into the bastion host, and then from there SSH into your Wavelength instances.

You are also going to install the client front-end application onto the bastion host. You can use the web server to test the application if you don't want to install the React Native version onto a mobile device. Remember that even though you're not using the native application, the website must still be accessed from a device on the carrier's 5G network.

  1. Issue the command below to create your bastion host
    aws ec2 --region $REGION run-instances \
    --instance-type t3.medium \
    --associate-public-ip-address \
    --subnet-id $BASTION_SUBNET_ID \
    --image-id $BASTION_IMAGE_ID \
    --security-group-ids $BASTION_SG_ID \
    --key-name $KEY_NAME
    

Note: It takes a few minutes for your instances to be ready. Even when the status check in the EC2 console reads 2/2 checks passed, it may still be a few minutes before the instance finishes installing additional software packages and configuring itself. If you receive a lock error while running apt-get, wait several minutes and try again.
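
Rather than watching the console, you can optionally block until the status checks pass using the EC2 wait helpers. The instance IDs below are placeholders; use the IDs printed in the output of the run-instances commands above:

    aws ec2 --region $REGION \
    wait instance-status-ok \
    --instance-ids <api instance id> <inference instance id> <bastion instance id>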


Configure the bastion host / web server

The last server you deployed serves two purposes. It acts as the bastion host allowing you to SSH into your other two servers, and it serves the client web app. In this section you’ll install that web app.

  1. SSH into the bastion host (the user name is bitnami). Note: To be able to easily SSH from the bastion host to the inference server, use the -A (agent forwarding) parameter when starting your SSH session, e.g.:
    ssh -i /path/to/key.pem -A bitnami@<bastion ip address>
  2. Clone the GitHub repo with the React code
    git clone https://github.com/mikegcoleman/react-wavelength-inference-demo.git
  3. Install the dependencies
    cd react-wavelength-inference-demo && npm install
  4. Build the webpage
    npm run build
  5. Copy the page into the web server's root directory
    cp -r ./build/* /home/bitnami/htdocs
  6. Test that the web app is running correctly by navigating to the public IP address of your bastion instance; a quick check with curl is sketched below
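
A minimal connectivity check from your local machine (an optional addition; port 80 is open to the internet in the bastion security group created earlier):

    curl -I http://<bastion public ip>/

An HTTP 200 response indicates the web server is serving the built app.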


Configure the inference server

In this section you deploy a Torchserve server running on EC2. Torchserve is configured with the fasterrcnn model. It receives the image from the API server, runs the inference, and returns the labels and bounding boxes for the items found in the image.

I’m not going to spend time going into the inner workings of Torchserve in this post. However, if you’re interested in learning more, check out my colleague Shashank’s blog.

  1. SSH into the bastion host and then SSH into the inference server instance. Note: To be able to easily SSH from the bastion host to the inference server, use the -A (agent forwarding) parameter when starting your SSH session with the bastion host, e.g.:
    ssh -i /path/to/key.pem -A bitnami@<bastion public ip>

    To SSH from the bastion host to the inference server you do not need the -i or -A parameters, e.g.:

    ssh ubuntu@<inference server private ip>
  2. Update the packages on the server and install the necessary prerequisite packages.
    sudo apt-get update -y \
    && sudo apt-get install -y virtualenv openjdk-11-jdk gcc python3-dev
  3. Create a virtual environment.
    mkdir inference && cd inference
    virtualenv --python=python3 inference
    source inference/bin/activate
  4. Install Torchserve and its related components
    pip3 install \
    torch torchtext torchvision sentencepiece psutil \
    future wheel requests torchserve torch-model-archiver
  5. Install the inference model that the application will use.
    mkdir torchserve-examples && cd torchserve-examples
    
    git clone https://github.com/pytorch/serve.git
    
    mkdir model_store
    
    wget https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
    
    torch-model-archiver --model-name fasterrcnn --version 1.0 \
    --model-file serve/examples/object_detector/fast-rcnn/model.py \
    --serialized-file fasterrcnn_resnet50_fpn_coco-258fb6c6.pth \
    --handler object_detector \
    --extra-files serve/examples/object_detector/index_to_name.json
    
    mv fasterrcnn.mar model_store/
  6. Create a configuration file for Torchserve (config.properties) and configure Torchserve to listen on your instance's private IP. Be sure to substitute the private IP of your instance below; you can find it in the EC2 console. The contents of config.properties should look as follows:
    inference_address=http://<your instance private IP>:8080
    management_address=http://<your instance private IP>:8081

    For example:

    inference_address=http://10.0.0.253:8080
    management_address=http://10.0.0.253:8081
  7. Start the Torchserve server.
    torchserve --start --model-store model_store \
    --models fasterrcnn=fasterrcnn.mar --ts-config config.properties

    It takes a few seconds for the server to start up; when it's ready, you should see a line that ends with:

    State change WORKER_STARTED -> WORKER_MODEL_LOADED

Leave this SSH session running so you can watch the inference server’s logs to see when it receives requests from the API server.
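
As an optional health check (an addition to the original steps), Torchserve exposes a ping endpoint on the inference port. Assuming the config.properties above, from the inference server run:

    curl http://<your instance private IP>:8080/ping

A healthy server responds with {"status": "Healthy"}.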


Configure the API server

In this section, you deploy the Flask-based API server.

  1. SSH into the bastion host and then SSH into the API server instance. Note: To be able to easily SSH from the bastion host to the API server, use the -A (agent forwarding) parameter when starting your SSH session with the bastion host, e.g.:
    ssh -i /path/to/key.pem -A bitnami@<bastion public ip>

    To SSH from the bastion host to the API server you do not need the -i or -A parameters, e.g.:

    ssh ubuntu@<api server private ip>
  2. Test your inference server (being sure to substitute the INTERNAL IP of the inference instance in the second command below):
    curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
    
    curl -X POST \
    http://<your_inf_server_internal_IP>:8080/predictions/fasterrcnn \
    -T kitten.jpg

    You should see something similar to:

    [
      {
        "cat": "[(228.7825, 82.63463), (583.77545, 677.3058)]"
      },
      {
        "car": "[(124.427414, 69.34327), (270.15457, 205.53458)]"
      }
    ]
    

    The inference server returns the labels of the objects it detected, and the corner coordinates of boxes that surround those objects.

    Now that you have verified the API server can connect to the inference server, you can configure the API server.

  3. Run the following command to update system package information and install the necessary prerequisites.
    sudo apt-get update -y \
    && sudo apt-get install -y \
    libsm6 libxrender1 libfontconfig1 virtualenv
  4. Clone the Python code into the application directory
    mkdir apiserver && cd apiserver
    git clone https://github.com/mikegcoleman/flask_wavelength_api .
  5. Create and activate a virtual environment.
    virtualenv --python=python3 apiserver
    source apiserver/bin/activate
  6. Install necessary Python packages.
    pip3 install opencv-python flask pillow requests flask-cors
  7. Create a configuration file (config_values.txt) with the following line (substituting the INTERNAL IP of your inference server).
    http://<your_inf_server_internal_IP>:8080/predictions/fasterrcnn
  8. Start the application.
    python api.py

    You should see output similar to the following:

    * Serving Flask app "api" (lazy loading)
    * Environment: production
    WARNING: This is a development server. Do not use it in a production
    deployment.
    Use a production WSGI server instead
    * Debug mode: on
    * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
    * Restarting with stat
    * Debugger is active!
    * Debugger PIN: 311-750-351
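
Before moving on, you can optionally confirm from the bastion host that the API server is listening on port 5000. The exact routes depend on the Flask app, so this only verifies connectivity, not functionality:

    curl -s -o /dev/null -w '%{http_code}\n' http://<api server private ip>:5000/

Any HTTP status code in the output confirms the server is reachable.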


Test the client application

To test the application, you need a device on the carrier's 5G network. From your device's web browser, navigate to the bastion / web server's public IP address. In the text box at the top of the app, enter the public IP of your API server.

Next, choose an existing photo from your camera roll, or take a photo with the camera, and press the Process Object button underneath the preview photo (you may need to scroll down).

The client sends the image to the API server, which forwards it to the inference server for detection. The API server then receives the prediction back from the inference server, adds labels and bounding boxes, and returns the marked-up image to the client, where it is displayed.

[Image: example screenshot of the client app displaying a processed image]

If the inference server cannot detect any objects in the image, you will receive a message indicating the prediction failed.

Conclusion and next steps

In this blog post, I covered some of the architectural considerations for deploying applications into Wavelength Zones. You then deployed a sample application designed to give you an idea of how you might architect an inference-at-the-edge solution. I hope this has inspired you to go off and build something new that takes advantage of the exciting capabilities that Wavelength and 5G enable. Visit https://aws.amazon.com/wavelength/ to request access and check out documentation and other resources.



New – Using Amazon GuardDuty to Protect Your S3 Buckets

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-using-amazon-guardduty-to-protect-your-s3-buckets/

As we anticipated in this post, the anomaly and threat detection for Amazon Simple Storage Service (S3) activities that was previously available in Amazon Macie has now been enhanced and reduced in cost by over 80% as part of Amazon GuardDuty. This expands GuardDuty threat detection coverage beyond workloads and AWS accounts to also help you protect your data stored in S3.

This new capability enables GuardDuty to continuously monitor and profile S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities, such as requests coming from an unusual geolocation, disabling of preventative controls such as S3 Block Public Access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. To detect possibly malicious behavior, GuardDuty uses a combination of anomaly detection, machine learning, and continuously updated threat intelligence. For your reference, here’s the full list of GuardDuty S3 threat detections.

When threats are detected, GuardDuty produces detailed security findings to the console and to Amazon EventBridge, making alerts actionable and easy to integrate into existing event management and workflow systems, or trigger automated remediation actions using AWS Lambda. You can optionally deliver findings to an S3 bucket to aggregate findings from multiple regions, and to integrate with third party security analysis tools.

If you are not using GuardDuty yet, S3 protection will be on by default when you enable the service. If you are already using GuardDuty, you can enable this new capability with one click in the GuardDuty console or through the API. For simplicity, and to optimize your costs, GuardDuty has now been integrated directly with S3. In this way, you don’t need to manually enable or configure S3 data event logging in AWS CloudTrail to take advantage of this new capability. GuardDuty also intelligently processes only the data events that can be used to generate threat detections, significantly reducing the number of events processed and lowering your costs.
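
If you prefer the CLI over the console, a minimal sketch of enabling S3 protection on an existing detector might look like the following. The detector lookup assumes a single detector in the region, and you should verify the data-sources syntax against the current GuardDuty CLI reference:

# look up the GuardDuty detector for this region (assumes one detector)
export DETECTOR_ID=$(aws guardduty list-detectors \
--query 'DetectorIds[0]' --output text)

# turn on S3 protection for that detector
aws guardduty update-detector \
--detector-id $DETECTOR_ID \
--data-sources '{"S3Logs":{"Enable":true}}'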

If you are part of a centralized security team that manages GuardDuty across your entire organization, you can manage all accounts from a single account using the integration with AWS Organizations.

Enabling S3 Protection for an AWS Account
I already have GuardDuty enabled for my AWS account in this region. Now, I want to add threat detection for my S3 buckets. In the GuardDuty console, I select S3 Protection and then Enable. That’s it. To be more protected, I repeat this process for all regions enabled in my account.

After a few minutes, I start seeing new findings related to my S3 buckets. I can select each finding to get more information on the possible threat, including details on the source actor and the target action.

After a few days, I select the Usage section of the console to monitor the estimated monthly costs of GuardDuty in my account, including the new S3 protection. I can also see which S3 buckets are contributing the most to those costs. Well, it turns out I didn’t have much traffic on my buckets recently.

Enabling S3 Protection for an AWS Organization
To simplify management of multiple accounts, GuardDuty uses its integration with AWS Organizations to allow you to delegate an account to be the administrator for GuardDuty for the whole organization.

Now, the delegated administrator can enable GuardDuty for all accounts in the organization in a region with one click. You can also set Auto-enable to ON to automatically include new accounts in the organization. If you prefer, you can add accounts by invitation. You can then go to the S3 Protection page under Settings to enable S3 protection for the entire organization.

When selecting Auto-enable, the delegated administrator can also choose to enable S3 protection automatically for new member accounts.
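
A hedged CLI equivalent for the delegated administrator account might look like this; the flags mirror the UpdateOrganizationConfiguration API, but check the current reference before relying on them:

# auto-enable GuardDuty and S3 protection for new member accounts
aws guardduty update-organization-configuration \
--detector-id $DETECTOR_ID \
--auto-enable \
--data-sources '{"S3Logs":{"AutoEnable":true}}'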

Available Now
As always, with Amazon GuardDuty, you only pay for the quantity of logs and events processed to detect threats. This includes API control plane events captured in CloudTrail, network flow captured in VPC Flow Logs, DNS request and response logs, and with S3 protection enabled, S3 data plane events. These sources are ingested by GuardDuty through internal integrations when you enable the service, so you don’t need to configure any of these sources directly. The service continually optimizes logs and events processed to reduce your cost, and displays your usage split by source in the console. If configured in multi-account, usage is also split by account.

There is a 30-day free trial for the new S3 threat detection capabilities. This also applies to accounts that already have GuardDuty enabled and add the new S3 protection capability. During the trial, the estimated cost based on your S3 data event volume is shown in the GuardDuty console Usage tab. In this way, you can evaluate these new capabilities at no cost while you come to understand what your monthly spend would be.

GuardDuty for S3 protection is available in all regions where GuardDuty is offered. For regional availability, please see the AWS Region Table. To learn more, please see the documentation.

Danilo

AWS achieves FedRAMP JAB High and Moderate provisional authorization across nine additional services in AWS US Regions

Post Syndicated from Amendaze Thomas original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-high-moderate-provisional-authorization-nine-additional-services-aws-us-regions/

We are pleased to announce that Amazon Web Services (AWS) has achieved FedRAMP JAB authorization on an additional nine AWS services. These services provide capabilities that enable your organization to:

  • Assemble and deploy serverless architectures in powerful new ways using AWS Serverless Application Repository
  • Simplify application delivery and complete workload migration to the cloud using Amazon AppStream 2.0
  • Protect access to applications, services, and IT resources using AWS Secrets Manager
  • Send messages to your customers through multiple channels all around the world using Amazon Pinpoint
  • Provide developers and data scientists with the ability to build, train, and deploy machine learning models quickly using Amazon SageMaker
  • Easily create video-on-demand content for broadcast and multiscreen delivery at scale using AWS Elemental MediaConvert
  • Meet, chat, and place business calls inside and outside your organization, all using Amazon Chime
  • Provide seamless experience across voice and chat for your customers and agents at a lower cost with Amazon Connect
  • Send mail from within any application including transactional, marketing, or mass email communications with Amazon Simple Email Service

The following services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

AWS US East and US West Regions (FedRAMP Moderate authorization)

  1. Amazon AppStream 2.0
  2. Amazon Chime
  3. Amazon Connect
  4. Amazon Pinpoint
  5. Amazon SageMaker
  6. Amazon Simple Email Service
  7. AWS Secrets Manager
  8. AWS Serverless Application Repository
  9. AWS Elemental MediaConvert

AWS GovCloud (US) Region (FedRAMP High authorization)

AWS is continually expanding the scope of our compliance programs to help enable your organization to use our services for sensitive and regulated workloads. Today, AWS offers 86 services authorized in the AWS US East and US West Regions under FedRAMP Moderate, and 75 services authorized in the AWS GovCloud (US) Region under FedRAMP High.

To learn what other public sector customers are achieving with AWS, visit our Customer Success page and choose Public Sector. Stay tuned for future updates at AWS Services in Scope by Compliance Program. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

author photo

Amendaze Thomas

Amendaze is the manager of the AWS Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the federal government, and over 13 years of experience supporting CISO teams with risk management framework (RMF) activities.

Announcing the New AWS Community Builders Program!

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/announcing-the-new-aws-community-builders-program/

We continue to be amazed by the enthusiasm for AWS knowledge sharing in technical communities. Many experienced AWS advocates are passionate about helping others build on AWS by sharing their challenges, success stories, and code. Others who are newer to AWS are showing a similar enthusiasm for community building and are asking how they can get more involved in community activities. These builders are seeking better ways to connect with one another, share best practices, and receive resources & mentorship to help improve community knowledge sharing.

To help address these points, we are excited to announce the new AWS Community Builders Program which offers technical resources, mentorship, and networking opportunities to AWS enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. As of today, this program is open for anyone to apply to join!

Members of the program will receive:

  • Access to AWS product teams and information about new services and features
  • Mentorship from AWS subject matter experts on a variety of topics, including content creation, community building, and securing speaking engagements
  • AWS Promotional Credits and other helpful resources to support content creation and community-based work

Any individual who is passionate about building on AWS can apply to join the AWS Community Builders program. The application process is open to AWS builders worldwide, and the program seeks applicants from all regions, demographics, and underrepresented communities.

While there is no single specific criterion for being accepted into the program, applications will generally be reviewed for evidence and accuracy of technical content, such as blog posts, open source contributions, presentations, and online knowledge sharing, as well as community organization efforts, such as hosting AWS Community Days, AWS User Groups, or other community-based events. Equally important, the program seeks individuals from diverse backgrounds who are enthusiastic about getting more involved in these types of activities! The program will accept a limited number of applicants per year.

Please apply to be an AWS Community Builder today. To learn more, you can get connected via a variety of community resources.

Channy and Jason;

Over 150 AWS services now have a security chapter

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/over-150-aws-services-now-have-security-chapter/

We’re happy to share an update on the service documentation initiative that we first told you about on the AWS Security Blog in June, 2019. We’re excited to announce that over 150 services now have dedicated security chapters available in the AWS security documentation.

In case you aren’t familiar with the security chapters, they were developed to provide easy-to-find, easy-to-consume security content in existing service documentation, so you don’t have to refer to multiple sources when reviewing the security capabilities of an AWS service. The chapters align with the Security Epics of the AWS Cloud Adoption Framework (CAF), including information about the security ‘of’ the cloud and security ‘in’ the cloud, as outlined in the AWS Shared Responsibility Model. The chapters cover the following security topics from the CAF, as applicable for each AWS service:

  • Data protection
  • Identity and access management
  • Logging and monitoring
  • Compliance validation
  • Resilience
  • Infrastructure security
  • Configuration and vulnerability analysis
  • Security best practices

These topics also align with the control domains of many industry-recognized standards that customers use to meet their compliance needs when using cloud services. This enables customers to evaluate the services against the frameworks they are already using.

We thought it might be helpful to share some of the ways that we’ve seen our customers and partners use the security chapters as a resource to both assess services and configure them securely. We’ve seen customers develop formal service-by-service assessment processes that include key considerations, such as achieving compliance, data protection, isolation of compute environments, automating audits with APIs, and operational access and security, when determining how cloud services can help them address their regulatory obligations.

To support their cloud journey and digital transformation, Fidelity Investments established a Cloud Center of Excellence (CCOE) to assist and enable Fidelity business units to safely and securely adopt cloud services at scale. The CCOE security team created a collaborative approach, inviting business units to partner with them to identify use cases and perform service testing in a safe environment. This ongoing process enables Fidelity business units to gain service proficiency while working directly with the security team so that risks are properly assessed, minimized, and evidenced well before use in a production environment.

Steve MacIntyre, Cloud Security Lead at Fidelity Investments, explains how the availability of the chapters assists them in this process: “As a diversified financial services organization, it is critical to have a deep understanding of the security, data protection, and compliance features for each AWS offering. The AWS security “chapters” allow us to make informed decisions about the safety of our data and the proper configuration of services within the AWS environment.”

Information found in the security chapters has also been used by customers as a key input in refining their cloud governance, helping them balance agility and innovation while remaining secure as they adopt new services. By outlining customer responsibilities under the AWS Shared Responsibility Model, the chapters have influenced the refinement of service assessment processes at a number of AWS customers, enabling customization to meet specific control objectives based on known use cases.

For example, when AWS Partner Network (APN) Partner Deloitte works on cloud strategies with organizations, they advise on topics that range from enterprise-wide cloud adoption to controls needed for specific AWS services.

Devendra Awasthi, Cloud Risk & Compliance Leader at Deloitte & Touche LLP, explained that, “When working with companies to help develop a secure cloud adoption framework, we don’t want them to make assumptions about shared responsibility that lead to a false sense of security. We advise clients to use the AWS service security chapters to identify their responsibilities under the AWS Shared Responsibility Model; the chapters can be key to informing their decision-making process for specific service use.”

Partners and customers, including Deloitte and Fidelity, have been helpful by providing feedback on both the content and structure of the security chapters. Service teams will continue to update the security chapters as new features are released, and in the meantime, we would appreciate your input to help us continue to expand the content. You can give us your feedback by selecting the Feedback button in the lower right corner of any documentation page. We look forward to learning how you use the security chapters within your organization.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Author

Kristen Haught

Kristen is a Security and Compliance Business Development Manager focused on strategic initiatives that enable financial services customers to adopt Amazon Web Services for regulated workloads. She cares about sharing strategies that help customers adopt a culture of innovation, while also strengthening their security posture and minimizing risk in the cloud.