All posts by Brandon West

Host Your Apps with AWS Amplify Console from the AWS Amplify CLI


Have you tried out AWS Amplify and AWS Amplify Console yet? In my opinion, they provide one of the fastest ways to get a new web application from idea to prototype on AWS. So what are they? AWS Amplify is an opinionated framework for building modern applications, with a toolchain for easily adding services like authentication (via Amazon Cognito) or storage (via Amazon Simple Storage Service (S3)) or GraphQL APIs, all via a command-line interface. AWS Amplify Console makes continuous deployment and hosting for your modern web apps easy. It supports hosting the frontend and backend assets for single page app (SPA) frameworks including React, Angular, Vue.js, Ionic, and Ember. It also supports static site generators like Gatsby, Eleventy, Hugo, VuePress, and Jekyll.

With today’s launch, hosting options available from the AWS Amplify CLI now include Amplify Console in addition to S3 and Amazon CloudFront. By using Amplify Console, you can take advantage of features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains.

Initializing an Amplify App

Let’s take a look at a quick example. We’ll be deploying a static site demo of Amazon Transcribe. I’ve already got the AWS Command Line Interface (CLI) installed, as well as the AWS Amplify CLI. I’ve forked and then cloned the sample code to my local machine. In the following gif, you can see the initialization process for an AWS Amplify app. (I sped things up a little for the gif. It might take a few seconds for your app to be created.)

Terminal session showing the "amplify init" workflow
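
For reference, the setup boils down to a few commands. Here's a rough sketch, with placeholders standing in for the URL and directory of your fork:

$ npm install -g @aws-amplify/cli
$ git clone https://github.com/[your-username]/[your-fork].git
$ cd [your-fork]
$ amplify init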

Now that I’ve got my app initialized, I can add additional services. Let’s add some hosting via AWS Amplify Console. After choosing Amplify Console for hosting, I can pick manual deployment or continuous deployment using a git-based workflow.
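
From the CLI, that starts with amplify add hosting. A sketch of the prompts (the exact wording may differ between CLI versions):

$ amplify add hosting
? Select the plugin module to execute: Hosting with Amplify Console
? Choose a type: (Manual deployment | Continuous deployment)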

Continuous Deployment

First, I’m going to set up continuous deployment so that changes to our git repo will trigger a build and deploy.

A screenshot of a terminal session adding Amplify Console to an Amplify project

The workflow for configuring continuous deployment requires a quick browser session. First, I select our git provider. The forked repo is on GitHub, so I need to authorize Amplify Console to use my GitHub account.

Screenshot of git provider selection

Once a provider is authorized, I choose the repo and branch to watch for changes.

Screenshot of repo and branch selection

AWS Amplify Console auto-detected the correct build settings, based on the contents of package.json.

Screenshot of build settings

Once I’ve confirmed the settings, the initial build and deploy will start. Then any changes to the selected git branch will result in additional builds and deploys. Now I need to finish the workflow in the CLI, and I need the ARN of the new Amplify Console app for that. In the browser, under App Settings and then General, I copy the ARN, paste it into my terminal, and check the status.

A screenshot of a terminal window where the app ARN is being set

A quick check of the URL in my browser confirms that the app has been successfully deployed.

A screenshot of the sample app we deployed in this post

Manual Deploys

Manual deploys with Amplify Console also provide a bunch of useful features. The CLI can now manage front-end environments, making it easy to add a test or dev environment. It’s also easy to add URL redirects and rewrites, or add a username/password via HTTP Basic Auth.

Configuring manual deploys is straightforward. Just set your environment name. When it’s time to deploy, run amplify publish, and the build scripts defined during the initialization of the project will run. The generated artifact will then be uploaded automatically.

A screenshot of a terminal window where manual deploys are configured

With manual deployments, you can set up multiple frontend environments (e.g. dev and prod) directly from the CLI. To create a new dev environment, run amplify env add (name it dev) and amplify publish. This will create a second frontend environment in Amplify Console. To view all your frontend and backend environments, run amplify console from the CLI to open your Amplify Console app.
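
In the terminal, that flow looks roughly like this (the environment name is just an example):

$ amplify env add        # name the new environment "dev" when prompted
$ amplify publish        # build and deploy the dev frontend environment
$ amplify console        # open the Amplify Console app in the browser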

Ever since using AWS Amplify Console for the first time a few weeks ago, it has become my go-to way to deploy applications, especially static sites. I’m excited to see the simplicity of hosting with AWS Amplify Console extended to the Amplify CLI, and I hope you are too. Happy building!

— Brandon

Easily Manage Shared Data Sets with Amazon S3 Access Points


Storage that is secure, scalable, durable, and highly available is a fundamental component of cloud computing. That’s why Amazon Simple Storage Service (S3) was the first service launched by AWS, back in 2006. It has been a building block of many of the more than 175 services that AWS now offers. As we approach the beginning of a new decade, capabilities like Amazon Redshift, Amazon Athena, Amazon EMR and AWS Lake Formation have made S3 not just a way to store objects but an engine for turning that data into insights. These capabilities mean that access patterns and requirements for the data stored in buckets have evolved.

Today we’re launching a new way to manage data access at scale for shared data sets in S3: Amazon S3 Access Points. S3 Access Points are unique hostnames with dedicated access policies that describe how data can be accessed using that endpoint. Before S3 Access Points, shared access to data meant managing a single policy document on a bucket. These policies could represent hundreds of applications with many differing permissions, making audits and updates a potential bottleneck affecting many systems.

With S3 Access Points, you can add access points as you add additional applications or teams, keeping your policies specific and easier to manage. A bucket can have multiple access points, and each access point has its own AWS Identity and Access Management (IAM) policy. Access point policies are similar to bucket policies, but associated with the access point. S3 Access Points can also be restricted to only allow access from within an Amazon Virtual Private Cloud (VPC). And because each access point has a unique DNS name, you can now address your buckets with any name that is unique within your AWS account and region.

Creating S3 Access Points

Let’s add an access point to a bucket using the S3 Console. You can also create and manage your S3 Access Points using the AWS Command Line Interface (CLI), AWS SDKs, or via the API. I’ve selected a bucket that contains artifacts generated by an AWS Lambda function, and clicked on the access points tab.
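
The CLI equivalent uses the s3control commands. A minimal sketch with placeholder values:

$ aws s3control create-access-point \
    --account-id [my-account-id] \
    --bucket [my-bucket-name] \
    --name [my-access-point-name]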

Access points tab in S3 Console

Let’s create a new access point. I want to give an IAM user Alice permission to GET and PUT objects with the prefix Alice. I’m going to name this access point alices-access-point. There are options for restricting access to a Virtual Private Cloud, which just requires a Virtual Private Cloud ID. In this case, I want to allow access from outside the VPC as well, so after I took this screenshot, I selected Internet and moved on to the next step.

Creating an Access Point

S3 Access Points makes it easy to block public access. I’m going to block all public access to this access point.

Public access settings

And now I can attach my policy. In this policy, our Principal is our user Alice, and the resource is our access point combined with every object with the prefix /Alice. For more examples of the kinds of policies you might want to attach to your S3 Access Points, take a look at the docs.

Creating access point policy
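
Written out, a policy along those lines looks roughly like the following sketch (the account ID and region are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::[my-account-id]:user/Alice" },
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:us-east-1:[my-account-id]:accesspoint/alices-access-point/object/Alice/*"
    }]
}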

After I create the access point, I can access it by hostname using the format https://[access_point_name]-[accountID].s3-accesspoint.[region].amazonaws.com. Via the SDKs and CLI, I can use it the same way I would use a bucket once I’ve updated to the latest version. For example, assuming I were authenticated as Alice, I could do the following:

$ aws s3api get-object --key Alice/[object-key] --bucket arn:aws:s3:us-east-1:[my-account-id]:accesspoint/alices-access-point [output-file]

Access points that are not restricted to VPCs can also be used via the S3 Console.

Things to Know

When it comes to software design, keeping scopes small and focused on a specific task is almost always a good decision. With S3 Access Points, you can customize hostnames and permissions for any user or application that needs access to your shared data set. Let us know how you like this new capability, and happy building!

— Brandon

Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer


Today I get to share my favorite kind of announcement. It’s the sort of thing that will improve security for just about everyone that builds on AWS, it can be turned on with almost no configuration, and it costs nothing to use. We’re launching a new, first-of-its-kind capability called AWS Identity and Access Management (IAM) Access Analyzer. IAM Access Analyzer mathematically analyzes access control policies attached to resources and determines which resources can be accessed publicly or from other accounts. It continuously monitors all policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues. With IAM Access Analyzer, you have visibility into the aggregate impact of your access controls, so you can be confident your resources are protected from unintended access from outside of your account.

Let’s look at a couple of examples. An IAM Access Analyzer finding might indicate that an S3 bucket named my-bucket-1 is accessible to an AWS account with the ID 123456789012 when the request originates from a particular source IP address. Or IAM Access Analyzer may detect a KMS key policy that allows users from another account to delete the key, identifying a data loss risk you can fix by adjusting the policy. If the findings show intentional access paths, they can be archived.

So how does it work? Using the kind of math that shows up on unexpected final exams in my nightmares, IAM Access Analyzer evaluates your policies to determine how a given resource can be accessed. Critically, this analysis is not based on historical events, pattern matching, or brute force tests. Instead, IAM Access Analyzer understands your policies semantically. All possible access paths are verified by mathematical proofs, and thousands of policies can be analyzed in a few seconds. This is done using a field of computer science called automated reasoning. IAM Access Analyzer is the first service powered by automated reasoning available to builders everywhere, offering functionality unique to AWS. To start learning about automated reasoning, I highly recommend this short video explainer. If you are interested in diving a bit deeper, check out this re:Invent talk on automated reasoning from Byron Cook, Director of the AWS Automated Reasoning Group. And if you’re really interested in understanding the methodology, make yourself a nice cup of chamomile tea, grab a blanket, and get cozy with a copy of Semantic-based Automated Reasoning for AWS Access Policies using SMT.

Turning on IAM Access Analyzer is way less stressful than an unexpected nightmare final exam. There’s just one step. From the IAM Console, select Access analyzer from the menu on the left, then click Create analyzer.

Creating an Access Analyzer

Analyzers generate findings in the account from which they are created. Analyzers also work within the region defined when they are created, so create one in each region for which you’d like to see findings.
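
You can also create an analyzer from the AWS Command Line Interface (CLI). A minimal sketch for an account-level analyzer:

$ aws accessanalyzer create-analyzer --analyzer-name my-analyzer --type ACCOUNT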

Once our analyzer is created, findings that show accessible resources appear in the Console. My account has a few findings that are worth looking into, such as KMS keys and IAM roles that are accessible by other accounts and federated users.

Viewing Access Analyzer Findings

I’m going to click on the first finding and take a look at the access policy for this KMS key.

An Access Analyzer Finding

From here we can see the open access paths and details about the resources and principals involved. I went over to the KMS console and confirmed that this is intended access, so I archived this particular finding.
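
The same review-and-archive workflow is available programmatically. A sketch using the CLI, with placeholders for the analyzer ARN and finding ID:

$ aws accessanalyzer list-findings \
    --analyzer-arn arn:aws:access-analyzer:us-east-1:[my-account-id]:analyzer/my-analyzer

$ aws accessanalyzer update-findings \
    --analyzer-arn arn:aws:access-analyzer:us-east-1:[my-account-id]:analyzer/my-analyzer \
    --ids [finding-id] \
    --status ARCHIVED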

All IAM Access Analyzer findings are visible in the IAM Console, and can also be accessed using the IAM Access Analyzer API. Findings related to S3 buckets can be viewed directly in the S3 Console. Bucket policies can then be updated right in the S3 Console, closing the open access pathway.

An Access Analyzer finding in S3

You can also see high-priority findings generated by IAM Access Analyzer in AWS Security Hub, ensuring a comprehensive, single source of truth for your compliance and security-focused team members. IAM Access Analyzer also integrates with CloudWatch Events, making it easy to automatically respond to or send alerts regarding findings through the use of custom rules.

Now that you’ve seen how IAM Access Analyzer provides a comprehensive overview of cloud resource access, you should probably head over to IAM and turn it on. One of the great advantages of building in the cloud is that the infrastructure and tools continue to get stronger over time and IAM Access Analyzer is a great example. Did I mention that it’s free? Fire it up, then send me a tweet sharing some of the interesting things you find. As always, happy building!

— Brandon

Announcing AWS Managed Rules for AWS WAF


Building and deploying secure applications is critical work, and the threat landscape is always shifting. We’re constantly working to reduce the pain of maintaining a strong cloud security posture. Today we’re launching a new capability called AWS Managed Rules for AWS WAF that helps you protect your applications without needing to create or manage the rules directly. We’ve also made multiple improvements to AWS WAF with the launch of a new, improved console and API that makes it easier than ever to keep your applications safe.

AWS WAF is a web application firewall. It lets you define rules that give you control over which traffic to allow or deny to your application. You can use AWS WAF to help block common threats like SQL injections or cross-site scripting attacks. You can use AWS WAF with Amazon API Gateway, Amazon CloudFront, and Application Load Balancer. Today it’s getting a number of exciting improvements. Creating rules is more straightforward with the introduction of the OR operator, allowing evaluations that would previously require multiple rules. The API experience has been greatly improved, and complex rules can now be created and updated with a single API call. We’ve removed the limit of ten rules per web access control list (ACL) with the introduction of the WAF Capacity Unit (WCU). The switch to WCUs allows the creation of hundreds of rules. Each rule added to a web access control list (ACL) consumes capacity based on the type of rule being deployed, and each web ACL has a defined WCU limit.

Using the New AWS WAF

Let’s take a look at some of the changes and turn on AWS Managed Rules for AWS WAF. First, I’ll go to AWS WAF and switch over to the new version.

Next I’ll create a new web ACL and add it to an existing API Gateway resource on my account.

Now I can start adding some rules to our web ACL. With the new AWS WAF, the rules engine has been improved. Statements can be combined with AND, OR, and NOT operators, allowing for more complex rule logic.

Screenshot of the WAF v2 Boolean operators

I’m going to create a simple rule that blocks any request that uses the HTTP method POST. Another cool feature is support for multiple text transformations, so for example, you could have all your requests transformed to decode HTML entities, and then made lowercase.

JSON objects now define web ACL rules (and web ACLs themselves), making them versionable assets you can match with your application code. You can also use these JSON documents to create or update rules with a single API call.
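
For example, the simple POST-blocking rule described above might look something like this in JSON form (a sketch, not the exact console output):

{
    "Name": "BlockPostRequests",
    "Priority": 0,
    "Statement": {
        "ByteMatchStatement": {
            "FieldToMatch": { "Method": {} },
            "SearchString": "POST",
            "PositionalConstraint": "EXACTLY",
            "TextTransformations": [{ "Priority": 0, "Type": "NONE" }]
        }
    },
    "Action": { "Block": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "BlockPostRequests"
    }
}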

Using AWS Managed Rules for AWS WAF

Now let’s play around with something totally new: AWS Managed Rules. AWS Managed Rules give you instant protection. The AWS Threat Research Team maintains the rules, with new ones being added as additional threats are identified. Additional rule sets are available on the AWS Marketplace. Choose a managed rule group, add it to your web ACL, and AWS WAF immediately helps protect against common threats.

I’ve selected a rule group that protects against SQL attacks, and also enabled the core rule set. The core rule set covers some of the common threats and security risks described in the OWASP Top 10 publication. As soon as I create the web ACL and the changes are propagated, my app will be protected from a whole range of attacks such as SQL injections. Now let’s look at both rules that I’ve added to our web ACL and see how things are shaping up.
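
Under the hood, a managed rule group is referenced in the web ACL’s JSON by vendor and group name, along these lines (a sketch):

{
    "Name": "AWS-AWSManagedRulesSQLiRuleSet",
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet"
        }
    },
    "OverrideAction": { "None": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "AWS-AWSManagedRulesSQLiRuleSet"
    }
}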

Since my demo rule was quite simple, it doesn’t require much capacity. The managed rules use a bit more, but we’ve got plenty of room to add many more rules to this web ACL.

Things to Know

That’s a quick tour of the benefits of the new and improved AWS WAF. Before you head to the console to turn it on, there are a few things to keep in mind.

  • The new AWS WAF supports AWS CloudFormation, allowing you to create and update your web ACL and rules using CloudFormation templates.
  • There is no additional charge for using AWS Managed Rules. If you subscribe to managed rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller.
  • Pricing for AWS WAF has not changed.

As always, happy (and secure) building, and I’ll see you at re:Invent or on the re:Invent livestreams soon!

— Brandon

Announcing CloudTrail Insights: Identify and Respond to Unusual API Activity


Building software in the cloud makes it easy to instrument systems for logging from the very beginning. With tools like AWS CloudTrail, tracking every action taken on AWS accounts and services is straightforward, providing a way to find the event that caused a given change. But not all log entries are useful. When things are running smoothly, those log entries are like the steady, reassuring hum of machinery on a factory floor. When things start going wrong, that hum can make it harder to hear which piece of equipment has gone a bit wobbly. The same is true with large scale software systems: the volume of log data can be overwhelming. Sifting through those records to find actionable information is tedious. It usually requires a lot of custom software or custom integrations, and can result in false positives and alert fatigue when new services are added.

That’s where software automation and machine learning can help. Today, we’re launching AWS CloudTrail Insights in all commercial AWS regions. CloudTrail Insights automatically analyzes write management events from CloudTrail trails and alerts you to unusual activity. For example, if there is an increase in TerminateInstance events that differs from established baselines, you’ll see it as an Insight event. These events make finding and responding to unusual API activity easier than ever.

Enabling AWS CloudTrail Insights

CloudTrail tracks user activity and API usage. It provides an event history of AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. With the launch of AWS CloudTrail Insights, you can enable machine learning models that detect unusual activity in these logs with just a few clicks. AWS CloudTrail Insights will analyze historical API calls, identifying usage patterns and generating Insight Events for unusual activity.

Screenshot showing how to enable CloudTrail Insights

You can also enable Insights on a trail from the AWS Command Line Interface (CLI) by using the put-insight-selectors command:

$ aws cloudtrail put-insight-selectors --trail-name trail_name --insight-selectors '[{"InsightType": "ApiCallRateInsight"}]'
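
You can confirm the configuration later with the matching get command:

$ aws cloudtrail get-insight-selectors --trail-name trail_name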

Once enabled, CloudTrail Insights sends events to the S3 bucket specified on the trail details page. Events are also sent to CloudWatch Events, and optionally to a CloudWatch Logs log group, just like other CloudTrail events. This gives you options when it comes to alerting, from sophisticated rules that respond to CloudWatch Events to custom AWS Lambda functions. After enabling Insights, historical events for the trail will be analyzed. Anomalous usage patterns found will appear in the CloudTrail Console within 30 minutes.

Using CloudTrail Insights

In this post we’ll take a look at some AWS CloudTrail Insights Events from the AWS Console. If you’d like to view Insight events from the AWS CLI, you use the CloudTrail LookupEvents call with the event-category parameter.

$ aws cloudtrail lookup-events --event-category insight [--max-items] [--lookup-attributes]

Quickly scanning the list of CloudTrail Insights, the RunInstances event jumps out to me. Spinning up more EC2 instances can be expensive, and I’ve definitely misconfigured things before such that I created more instances than needed, so I want to take a closer look. Let’s filter the list down to just these events and see what we can learn from AWS CloudTrail Insights.

Let’s dig in to the latest event.

Here we see that over the course of one minute, there was a spike in RunInstances API call volume. From the Insights graph, we can see the raw event as JSON.

    "Records": [
            "eventVersion": "1.07",
            "eventTime": "2019-11-07T13:25:00Z",
            "awsRegion": "us-east-1",
            "eventID": "a9edc959-9488-4790-be0f-05d60e56b547",
            "eventType": "AwsCloudTrailInsight",
            "recipientAccountId": "-REDACTED-",
            "sharedEventID": "c2806063-d85d-42c3-9027-d2c56a477314",
            "insightDetails": {
                "state": "Start",
                "eventSource": "",
                "eventName": "RunInstances",
                "insightType": "ApiCallRateInsight",
                "insightContext": {
                    "statistics": {
                        "baseline": {
                            "average": 0.0020833333},
                        "insight": {
                            "average": 6}
            "eventCategory": "Insight"},
            "eventVersion": "1.07",
            "eventTime": "2019-11-07T13:26:00Z",
            "awsRegion": "us-east-1",
            "eventID": "33a52182-6ff8-49c8-baaa-9caac16a96ce",
            "eventType": "AwsCloudTrailInsight",
            "recipientAccountId": "-REDACTED-",
            "sharedEventID": "c2806063-d85d-42c3-9027-d2c56a477314",
            "insightDetails": {
                "state": "End",
                "eventSource": "",
                "eventName": "RunInstances",
                "insightType": "ApiCallRateInsight",
                "insightContext": {
                    "statistics": {
                        "baseline": {
                            "average": 0.0020833333},
                        "insight": {
                            "average": 6},
                        "insightDuration": 1}
            "eventCategory": "Insight"}

Here we can see that the baseline API call volume is about 0.002 calls per minute. That means there’s usually roughly one call to RunInstances every 500 minutes, so the activity we see in the graph is definitely not normal. By clicking over to the CloudTrail Events tab we can see the individual events that are grouped into this Insight event. It looks like this was probably normal EC2 autoscaling activity, but I still want to dig in and confirm.

By expanding an event in this tab and clicking “View Event,” I can head directly to the event in CloudTrail for more information. After reviewing the event metadata and associated EC2 and IAM resources, I’ve confirmed that while this behavior was unusual, it’s not a cause for concern. It looks like autoscaling did what it was supposed to and that the correct type of instance was created.

Things to Know

Before you get started, here are some important things to know:

  • CloudTrail Insights costs $0.35 for every 100,000 write management events analyzed for each Insight type. At launch, API call volume insights are the only type available.
  • Activity baselines are scoped to the region and account in which the CloudTrail trail is operating.
  • After an account enables Insights events for the first time, if unusual activity is detected, you can expect to receive the first Insights events within 36 hours of enabling Insights.
  • New unusual activity is logged as it is discovered, sending Insight Events to your destination S3 buckets and the AWS console within 30 minutes in most cases.

Let me know if you have any questions or feature requests, and happy building!

— Brandon


Amazon Transcribe Streaming Now Supports WebSockets


I love services like Amazon Transcribe. They are the kind of just-futuristic-enough technology that excites my imagination the same way that magic does. It’s incredible that we have accurate, automatic speech recognition for a variety of languages and accents, in real-time. There are so many use cases, and nearly all of them are intriguing. Until now, the Amazon Transcribe Streaming API has been available via HTTP/2 streaming. Today, we’re adding WebSockets as another integration option for bringing real-time voice capabilities to the things you build.

In this post, we are going to transcribe speech in real-time using only client-side JavaScript in a browser. But before we can build, we need a foundation. We’ll review just enough information about Amazon Transcribe, WebSockets, and the Amazon Transcribe Streaming API to broadly explain the demo. For more detailed information, check out the Amazon Transcribe docs.

If you are itching to see things in action, you can head directly to the demo, but I recommend taking a quick read through this post first.

What is Amazon Transcribe?

Amazon Transcribe applies machine learning models to convert speech in audio to text transcriptions. One of the most powerful features of Amazon Transcribe is the ability to perform real-time transcription of audio. Until now, this functionality has been available via HTTP/2 streams. Today, we’re announcing the ability to connect to Amazon Transcribe using WebSockets as well.

For real-time transcription, Amazon Transcribe currently supports British English (en-GB), US English (en-US), French (fr-FR), Canadian French (fr-CA), and US Spanish (es-US).

What are WebSockets?

WebSockets are a protocol built on top of TCP, like HTTP. While HTTP is great for short-lived requests, it hasn’t historically been good at handling situations that require persistent real-time communications. While an HTTP connection is normally closed at the end of the message, a WebSocket connection remains open. This means that messages can be sent bi-directionally with no bandwidth or latency added by handshaking and negotiating a connection. WebSocket connections are full-duplex, meaning that the server and client can both transmit data at the same time. They were also designed for cross-domain usage, so there’s no messing around with cross-origin resource sharing (CORS) as there is with HTTP.

HTTP/2 streams solve a lot of the issues that HTTP had with real-time communications, and the first version of the Amazon Transcribe Streaming API uses HTTP/2. WebSocket support opens Amazon Transcribe Streaming up to a wider audience, and makes integrations easier for customers that might have existing WebSocket-based integrations or knowledge.

How the Amazon Transcribe Streaming API Works


The first thing we need to do is authorize an IAM user to use Amazon Transcribe Streaming WebSockets. In the AWS Management Console, attach the following policy to your user:

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "transcribestreaming",
            "Effect": "Allow",
            "Action": "transcribe:StartStreamTranscriptionWebSocket",
            "Resource": "*"


Transcribe uses AWS Signature Version 4 to authenticate requests. For WebSocket connections, we use a pre-signed URL: all of the necessary signing information is passed as query parameters in the URL. This gives us an authenticated endpoint that we can use to establish our WebSocket connection.

Required Parameters

All of the required parameters are included in our pre-signed URL as part of the query string. These are:

  • language-code: The language code. One of en-US, en-GB, fr-FR, fr-CA, es-US.
  • sample-rate: The sample rate of the audio, in Hz. Max of 16000 for en-US and es-US, and 8000 for the other languages.
  • media-encoding: Currently only pcm is valid.
  • vocabulary-name: Amazon Transcribe allows you to define custom vocabularies for uncommon or unique words that you expect to see in your data. To use a custom vocabulary, reference it here.
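
Putting those together, the pre-signed URL we open the WebSocket against looks roughly like the following sketch (the Signature Version 4 query parameters added during signing are abbreviated):

wss://transcribestreaming.us-east-1.amazonaws.com:8443/stream-transcription-websocket
    ?language-code=en-US
    &media-encoding=pcm
    &sample-rate=16000
    &X-Amz-Algorithm=AWS4-HMAC-SHA256
    &X-Amz-Credential=...
    &X-Amz-Date=...
    &X-Amz-Expires=300
    &X-Amz-SignedHeaders=host
    &X-Amz-Signature=...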

Audio Data Requirements

There are a few things that we need to know before we start sending data. First, Transcribe expects audio to be encoded as PCM data. The sample rate of a digital audio file relates to the quality of the captured audio. It is the number of times per second (Hz) that the analog signal is checked in order to generate the digital signal. For high-quality data, a sample rate of 16,000 Hz or higher is recommended. For lower-quality audio, such as a phone conversation, use a sample rate of 8,000 Hz. Currently, US English (en-US) and US Spanish (es-US) support sample rates up to 48,000 Hz. Other languages support rates up to 16,000 Hz.

In our demo, the file lib/audioUtils.js contains a downsampleBuffer() function for reducing the sample rate of the incoming audio bytes from the browser, and a pcmEncode() function that takes the raw audio bytes and converts them to PCM.

Request Format

Once we’ve got our audio encoded as PCM data with the right sample rate, we need to wrap it in an envelope before we send it across the WebSocket connection. Each message consists of three headers, followed by the PCM-encoded audio bytes in the message body. The entire message is then encoded as a binary event stream message and sent. If you’ve used the HTTP/2 API before, there’s one difference that I think makes using WebSockets a bit more straightforward, which is that you don’t need to cryptographically sign each chunk of audio data you send.

Response Format

The messages we receive follow the same general format: they are binary-encoded event stream messages, with three headers and a body. But instead of audio bytes, the message body contains a Transcript object. Partial responses are returned until a natural stopping point in the audio is determined. For more details on how this response is formatted, check out the docs and have a look at the handleEventStreamMessage() function in main.js.

Let’s See the Demo!

Now that we’ve got some context, let’s try out a demo. I’ve deployed it using AWS Amplify Console – take a look, or push the button to deploy your own copy. Enter the access key ID and secret access key for the IAM user you authorized earlier, hit the Start Transcription button, and start speaking into your microphone.

Deploy to Amplify Console

The complete project is available on GitHub. The most important file is lib/main.js. This file defines all our required dependencies, wires up the buttons and form fields in index.html, accesses the microphone stream, and pushes the data to Transcribe over the WebSocket. The code has been thoroughly commented and will hopefully be easy to understand, but if you have questions, feel free to open issues on the GitHub repo and I’ll be happy to help. I’d like to extend a special thanks to Karan Grover, Software Development Engineer on the Transcribe team, for providing the code that formed the basis of this demo.

AWS Security Hub Now Generally Available


I’m a developer, or at least that’s what I tell myself while coming to terms with being a manager. I’m definitely not an infosec expert. I’ve been paged more than once in my career because something I wrote or configured caused a security concern. When systems enable frequent deploys and remove gatekeepers for experimentation, sometimes a non-compliant resource is going to sneak by. That’s why I love tools like AWS Security Hub, a service that enables automated compliance checks and aggregated insights from a variety of services. With guardrails like these in place to make sure things stay on track, I can experiment more confidently. And with a single place to view compliance findings from multiple systems, infosec feels better about letting me self-serve.

With cloud computing, we have a shared responsibility model when it comes to compliance and security. AWS handles the security of the cloud: everything from the security of our data centers up to the virtualization layer and host operating system. Customers handle security in the cloud: the guest operating system, configuration of systems, and secure software development practices.

Today, AWS Security Hub is out of preview and available for general use to help you understand the state of your security in the cloud. It works across AWS accounts and integrates with many AWS services and third-party products. You can also use the Security Hub API to create your own integrations.

Getting Started

When you enable AWS Security Hub, permissions are automatically created via IAM service-linked roles. Automated, continuous compliance checks begin right away. Compliance standards determine these compliance checks and rules. The first compliance standard available is the Center for Internet Security (CIS) AWS Foundations Benchmark. We’ll add more standards this year.
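
If you prefer the CLI, Security Hub can be enabled with a single command in each region where you want it running:

$ aws securityhub enable-security-hub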

The results of these compliance checks are called findings. Each finding tells you the severity of the issue, which system reported it, which resources it affects, and a lot of other useful metadata. For example, you might see a finding that lets you know that multi-factor authentication should be enabled for a root account, or that there are credentials that haven’t been used for 90 days that should be revoked.

Findings can be grouped into insights using aggregation statements and filters.


In addition to the compliance standards findings, AWS Security Hub also aggregates and normalizes data from a variety of services. It is a central resource for findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and from 30 AWS partner security solutions.

AWS Security Hub also supports importing findings from custom or proprietary systems. Findings must be formatted as AWS Security Finding Format JSON objects. Here’s an example of an object I created that meets the minimum requirements for the format. To make it work for your account, switch out the AwsAccountId and the ProductArn. To get your ProductArn for custom findings, replace REGION and ACCOUNT_ID in the following string: arn:aws:securityhub:REGION:ACCOUNT_ID:product/ACCOUNT_ID/default.

    "Findings": [{
        "AwsAccountId": "12345678912",
        "CreatedAt": "2019-06-13T22:22:58Z",
        "Description": "This is a custom finding from the API",
        "GeneratorId": "api-test",
        "Id": "us-east-1/12345678912/98aebb2207407c87f51e89943f12b1ef",
        "ProductArn": "arn:aws:securityhub:us-east-1:12345678912:product/12345678912/default",
        "Resources": [{
            "Type": "Other",
            "Id": "i-decafbad"
        "SchemaVersion": "2018-10-08",
        "Severity": {
            "Product": 2.5,
            "Normalized": 11
        "Title": "Security Finding from Custom Software",
        "Types": [
            "Software and Configuration Checks/Vulnerabilities/CVE"
        "UpdatedAt": "2019-06-13T22:22:58Z"

Then I wrote a quick node.js script that I named importFindings.js to read this JSON file and send it off to AWS Security Hub via the AWS JavaScript SDK.

const fs    = require('fs');        // For file system interactions
const util  = require('util');      // To wrap fs API with promises
const AWS   = require('aws-sdk');   // Load the AWS SDK

AWS.config.update({region: 'us-east-1'});

// Create our Security Hub client
const sh = new AWS.SecurityHub();

// Wrap readFile so it returns a promise and can be awaited
const readFile = util.promisify(fs.readFile);

async function getFindings(path) {
    try {
        // wait for the file to be read...
        let fileData = await readFile(path);

        // ...then parse it as JSON and return it
        return JSON.parse(fileData);
    } catch (error) {
        console.error(error);
    }
}

async function importFindings() {
    // load the findings from our file
    const findings = await getFindings('./findings.json');

    try {
        // call the AWS Security Hub BatchImportFindings endpoint
        const response = await sh.batchImportFindings(findings).promise();
        console.log(response);
    } catch (error) {
        console.error(error);
    }
}

// Engage!
importFindings();

A quick run of node importFindings.js results in { FailedCount: 0, SuccessCount: 1, FailedFindings: [] }. And now I can see my custom finding in the Security Hub console:
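
If you prefer the CLI, you can also pull the finding back out by filtering on the generator ID. A sketch:

$ aws securityhub get-findings \
    --filters '{"GeneratorId": [{"Value": "api-test", "Comparison": "EQUALS"}]}'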

Custom Actions

AWS Security Hub can integrate with response and remediation workflows through the use of custom actions. With custom actions, a batch of selected findings is used to generate CloudWatch events. With CloudWatch Rules, these events can trigger other actions such as sending notifications via a chat system or paging tool, or sending events to a visualization service.

First, we open Settings from the AWS Security Hub console, and select Custom Actions. Add a custom action and note the ARN.
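
Custom actions can also be created from the CLI. A sketch, using a hypothetical action name:

$ aws securityhub create-action-target \
    --name "Send to remediation" \
    --description "Send selected findings to our remediation workflow" \
    --id "SendToRemediation"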

Then we create a CloudWatch Rule using the custom action we created as a resource in the event pattern, like this:

  "source": [
  "detail-type": [
    "Security Hub Findings - Custom Action"
  "resources": [

Our CloudWatch Rule can have many different kinds of targets, such as Amazon Simple Notification Service (SNS) Topics, Amazon Simple Queue Service (SQS) Queues, and AWS Lambda functions. Once our action and rule are in place, we can select findings, and then choose our action from the Actions dropdown list. This will send the selected findings to Amazon CloudWatch Events. Those events will match our rule, and the event targets will be invoked.

Important Notes

  • AWS Config must be enabled for Security Hub compliance checks to run.
  • AWS Security Hub is available in 15 regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai).
  • AWS Security Hub does not transfer data outside of the regions where it was generated. Data is not consolidated across multiple regions.

AWS Security Hub is already the type of service that I’ll enable on the majority of the AWS accounts I operate. As more compliance standards become available this year, I expect it will become a standard tool in many toolboxes. A 30-day free trial is available so you can try it out and get an estimate of what your costs would be. As always, we want to hear your feedback and understand how you’re using AWS Security Hub. Stay in touch, and happy building!

— Brandon