All posts by Jeff Barr

Now Available – EC2 Instances (G4) with NVIDIA T4 Tensor Core GPUs

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-ec2-instances-g4-with-nvidia-t4-tensor-core-gpus/

The NVIDIA-powered G4 instances that I promised you earlier this year are available now and you can start using them today in eight AWS regions, in six sizes! You can use them for machine learning training & inferencing, video transcoding, game streaming, and remote graphics workstation applications.

The instances are equipped with up to four NVIDIA T4 Tensor Core GPUs, each with 320 Turing Tensor cores, 2,560 CUDA cores, and 16 GB of memory. The T4 GPUs are ideal for machine learning inferencing, computer vision, video processing, and real-time speech & natural language processing. The T4 GPUs also offer RT cores for efficient, hardware-powered ray tracing. The NVIDIA Quadro Virtual Workstation (Quadro vWS) is available in AWS Marketplace. It supports real-time ray-traced rendering and can speed creative workflows often found in media & entertainment, architecture, and oil & gas applications.

G4 instances are powered by AWS-custom Second Generation Intel® Xeon® Scalable (Cascade Lake) processors with up to 64 vCPUs, and are built on the AWS Nitro system. Nitro’s local NVMe storage building block provides direct access to up to 1.8 TB of fast, local NVMe storage. Nitro’s network building block delivers high-speed ENA networking. The Intel AVX-512 Deep Learning Boost feature extends AVX-512 with a new set of Vector Neural Network Instructions (VNNI for short). These instructions accelerate the low-precision multiply & add operations that reside in the inner loop of many inferencing algorithms.

Here are the instance sizes:

| Instance Name | NVIDIA T4 Tensor Core GPUs | vCPUs | RAM | Local Storage | EBS Bandwidth | Network Bandwidth |
|---------------|----------------------------|-------|---------|------------|----------------|---------------|
| g4dn.xlarge   | 1 | 4  | 16 GiB  | 1 x 125 GB | Up to 3.5 Gbps | Up to 25 Gbps |
| g4dn.2xlarge  | 1 | 8  | 32 GiB  | 1 x 225 GB | Up to 3.5 Gbps | Up to 25 Gbps |
| g4dn.4xlarge  | 1 | 16 | 64 GiB  | 1 x 225 GB | Up to 3.5 Gbps | Up to 25 Gbps |
| g4dn.8xlarge  | 1 | 32 | 128 GiB | 1 x 900 GB | 7 Gbps         | 50 Gbps       |
| g4dn.12xlarge | 4 | 48 | 192 GiB | 1 x 900 GB | 7 Gbps         | 50 Gbps       |
| g4dn.16xlarge | 1 | 64 | 256 GiB | 1 x 900 GB | 7 Gbps         | 50 Gbps       |

We are also working on a bare metal instance that will be available in the coming months:

| Instance Name | NVIDIA T4 Tensor Core GPUs | vCPUs | RAM | Local Storage | EBS Bandwidth | Network Bandwidth |
|---------------|----------------------------|-------|---------|------------|---------|----------|
| g4dn.metal    | 8 | 96 | 384 GiB | 2 x 900 GB | 14 Gbps | 100 Gbps |
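
If you prefer to script your launches, here’s a minimal boto3 sketch (the AMI ID and key pair name are placeholders, not values from this post):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single g4dn.xlarge; substitute a current Deep Learning AMI or
# NVIDIA Marketplace AMI for your region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="g4dn.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
)
print(response["Instances"][0]["InstanceId"])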

If you want to run graphics workloads on G4 instances, be sure to use the latest version of the NVIDIA AMIs (available in AWS Marketplace) so that you have access to the requisite GRID and Graphics drivers, along with an NVIDIA Quadro Workstation image that contains the latest optimizations and patches. Here’s where you can find them:

  • NVIDIA Gaming – Windows Server 2016
  • NVIDIA Gaming – Windows Server 2019
  • NVIDIA Gaming – Ubuntu 18.04

The newest AWS Deep Learning AMIs include support for G4 instances. The team that produces the AMIs benchmarked a g3.16xlarge instance against a g4dn.12xlarge instance and shared the results with me. Here are some highlights:

  • MXNet Inference (resnet50v2, forward pass without MMS) – 2.03 times faster.
  • MXNet Inference (with MMS) – 1.45 times faster.
  • MXNet Training (resnet50_v1b, 1 GPU) – 2.19 times faster.
  • TensorFlow Inference (resnet50v1.5, forward pass) – 2.00 times faster.
  • TensorFlow Inference with TensorFlow Serving (resnet50v2) – 1.72 times faster.
  • TensorFlow Training (resnet50_v1.5) – 2.00 times faster.

The benchmarks used FP32 numeric precision; you can expect an even larger boost if you use mixed precision (FP16) or low precision (INT8).

You can launch G4 instances today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Seoul), and Asia Pacific (Tokyo) Regions. We are also working to make them accessible in Amazon SageMaker and in Amazon EKS clusters.

Jeff;

Now Available – Amazon Quantum Ledger Database (QLDB)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-quantum-ledger-database-qldb/

Given the wide range of data types, query models, indexing options, scaling expectations, and performance requirements, databases are definitely not one-size-fits-all products. That’s why there are many different AWS database offerings, each one purpose-built to meet the needs of a different type of application.

Introducing QLDB
Today I would like to tell you about Amazon QLDB, the newest member of the AWS database family. First announced at AWS re:Invent 2018 and made available in preview form, it is now available in production form in five AWS regions.

As a ledger database, QLDB is designed to provide an authoritative data source (often known as a system of record) for stored data. It maintains a complete, immutable history of all committed changes to the data; that history cannot be updated, altered, or deleted. QLDB supports SQL-compatible PartiQL queries against the historical data, and also provides an API that allows you to cryptographically verify that the history is accurate and legitimate. These features make QLDB a great fit for banking & finance, ecommerce, transportation & logistics, HR & payroll, manufacturing, and government applications, as well as many other use cases that need to maintain the integrity and history of stored data.

Important QLDB Concepts
Let’s review the most important QLDB concepts before diving in:

Ledger – A QLDB ledger consists of a set of QLDB tables and a journal that maintains the complete, immutable history of changes to the tables. Ledgers are named and can be tagged.

Journal – A journal consists of a sequence of blocks, each cryptographically chained to the previous block so that changes can be verified. Blocks, in turn, contain the actual changes that were made to the tables, indexed for efficient retrieval. This append-only model ensures that previous data cannot be edited or deleted, and makes the ledgers immutable. QLDB allows you to export all or part of a journal to S3.

Table – Tables exist within a ledger, and contain a collection of document revisions. Tables support optional indexes on document fields; the indexes can improve performance for queries that make use of the equality (=) predicate.

Documents – Documents exist within tables, and must be in Amazon Ion form. Ion is a superset of JSON that adds additional data types, type annotations, and comments. QLDB supports documents that contain nested JSON elements, and gives you the ability to write queries that reference and include these elements. Documents need not conform to any particular schema, giving you the flexibility to build applications that can easily adapt to changes.

PartiQL – PartiQL is a new open standard query language that supports SQL-compatible access to relational, semi-structured, and nested data while remaining independent of any particular data source. To learn more, read Announcing PartiQL: One Query Language for All Your Data.

Serverless – You don’t have to worry about provisioning capacity or configuring read & write throughput. You create a ledger, define your tables, and QLDB will automatically scale to meet the needs of your application.

Using QLDB
You can create QLDB ledgers and tables from the AWS Management Console, AWS Command Line Interface (CLI), a CloudFormation template, or by making calls to the QLDB API. I’ll use the QLDB Console and I will follow the steps in Getting Started with Amazon QLDB. I open the console and click Start tutorial to get started:

The Getting Started page outlines the first three steps; I click Create ledger to proceed (this opens in a fresh browser tab):

I enter a name for my ledger (vehicle-registration), tag it, and (again) click Create ledger to proceed:

My ledger starts out in Creating status, and transitions to Active within a minute or two:

I return to the Getting Started page, refresh the list of ledgers, choose my new ledger, and click Load sample data:

This takes a second or so, and creates four tables & six indexes:

I could also use PartiQL statements such as CREATE TABLE, CREATE INDEX, and INSERT INTO to accomplish the same task.
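
If I wanted to script the ledger-creation step instead, a hedged boto3 sketch might look like this (the tag is illustrative; ALLOW_ALL matches the launch-time permissions model):

import boto3

qldb = boto3.client("qldb")

# Create the ledger used in this walkthrough.
ledger = qldb.create_ledger(
    Name="vehicle-registration",
    PermissionsMode="ALLOW_ALL",
    Tags={"Project": "qldb-tutorial"},  # illustrative tag
)
print(ledger["State"])  # CREATING at first, ACTIVE within a minute or two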

With my tables, indexes, and sample data loaded, I click on Editor and run my first query (a single-table SELECT):

This returns a single row, and also benefits from the index on the VIN field. I can also run a more complex query that joins two tables:

I can obtain the ID of a document (using a query from here), and then update the document:

I can query the modification history of a table or a specific document in a table, with the ability to find modifications within a certain range and on a particular document (read Querying Revision History to learn more). Here’s a simple query that returns the history of modifications to all of the documents in the VehicleRegistration table that were made on the day that I wrote this post:

As you can see, each row is a structured JSON object. I can select any desired rows and click View JSON for further inspection:

Earlier, I mentioned that PartiQL can deal with nested data. The VehicleRegistration table contains ownership information that looks like this:

{
   "Owners": {
      "PrimaryOwner": {
         "PersonId": "6bs0SQs1QFx7qN1gL2SE5G"
      },
      "SecondaryOwners": []
   }
}

PartiQL lets me reference the nested data using “.” notation:

I can also verify the integrity of a document that is stored within my ledger’s journal. This is fully described in Verify a Document in a Ledger, and is a great example of the power (and value) of cryptographic verification. Each QLDB ledger has an associated digest. The digest is a 256-bit hash value that uniquely represents the ledger’s entire history of document revisions as of a point in time. To access the digest, I select a ledger and click Get digest:

When I click Save, the console provides me with a short file that contains all of the information needed to verify the ledger. I save this file in a safe place, for use when I want to verify a document in the ledger. When that time comes, I get the file, click on Verification in the left-navigation, and enter the values needed to perform the verification. This includes the block address of a document revision, and the ID of the document. I also choose the digest that I saved earlier, and click Verify:

QLDB recomputes the hashes to ensure that the document has not been surreptitiously changed, and displays the verification:

In a production environment, you would use the QLDB APIs to periodically download digests and to verify the integrity of your documents.
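
As a sketch of that automation, the digest retrieval might look like this (ledger name from the walkthrough):

import boto3

qldb = boto3.client("qldb")

digest = qldb.get_digest(Name="vehicle-registration")
# The digest is the 256-bit hash described above; the tip address anchors
# it to a specific position in the journal.
print(digest["Digest"])
print(digest["DigestTipAddress"]["IonText"])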

Building Applications with QLDB
You can use the Amazon QLDB Driver for Java to write code that accesses and manipulates your ledger database. This is a Java driver that allows you to create sessions, execute PartiQL commands within the scope of a transaction, and retrieve results. Drivers for other languages are in the works; stay tuned for more information.

Available Now
Amazon QLDB is available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions. Pricing is based on the following factors, and is detailed on the Amazon QLDB Pricing page, including some real-world examples:

  • Write operations
  • Read operations
  • Journal storage
  • Indexed storage
  • Data transfer

Jeff;

New – Client IP Address Preservation for AWS Global Accelerator

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-client-ip-address-preservation-for-aws-global-accelerator/

AWS Global Accelerator is a network service that routes incoming network traffic to multiple AWS regions in order to improve performance and availability for your global applications. It makes use of our collection of edge locations and our congestion-free global network to direct traffic based on application health, network health, and the geographic locations of your users, and provides a set of static Anycast IP addresses that are announced from multiple AWS locations (read New – AWS Global Accelerator for Availability and Performance to learn a lot more). The incoming TCP or UDP traffic can be routed to an Application Load Balancer, Network Load Balancer, or to an Elastic IP Address.

Client IP Address Preservation
Today we are announcing an important new feature for AWS Global Accelerator. If you are routing traffic to an Application Load Balancer, the IP address of the user’s client is now available to code running on the endpoint. This allows you to apply logic that is specific to a particular IP address. For example, you can use security groups that filter based on IP address, and you can serve custom content to users based on their IP address or geographic location. You can also use the IP addresses to collect more accurate statistics on the geographical distribution of your user base.

Using Client IP Address Preservation
If you are already using AWS Global Accelerator, we recommend that you phase in your use of Client IP Address Preservation by using weights on the endpoints. This will allow you to verify that any rules or systems that make use of IP addresses continue to function as expected.

In order to test this new feature, I launched some EC2 instances, set up an Application Load Balancer, put the instances into a target group, and created an accelerator in front of my ALB:

I checked the IP address of my browser:

I installed a simple Python program (courtesy of the Global Accelerator team), sent an HTTP request to one of the Global Accelerator’s IP addresses, and captured the output:

The Source (99.82.172.36) is an internal address used by my accelerator. With my baseline established and everything working as expected, I am now ready to enable Client IP Address Preservation!
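
I don’t have the team’s program to share, so here’s a minimal stand-in that prints the same kind of information (run it with sufficient privileges to bind port 80, or pick a higher port):

from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The TCP source address, plus the X-Forwarded-For header that an
        # Application Load Balancer adds to forwarded requests.
        body = "Source: {}\nX-Forwarded-For: {}\n".format(
            self.client_address[0],
            self.headers.get("X-Forwarded-For", "(not present)"),
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

HTTPServer(("", 80), EchoHandler).serve_forever()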

I open the AWS Global Accelerator Console, locate my accelerator, and review the current configuration, as shown above. I click the listener for port 80, and click the existing endpoint group:

From there I click Add endpoint, add a new endpoint to the group, use a Weight of 255, and select Preserve client IP address:

My endpoint group now has two endpoints (one with client IP preserved, and one without), both of which point to the same ALB:

In a production environment I would start with a low weight and test to make sure that any security groups or other logic that was dependent on IP addresses continue to work as expected (I can also use the weights to manage traffic during blue/green deployments and software updates). Since I’m simply testing, I can throw caution to the wind and delete the old (non-IP-preserving) endpoint. Either way, the endpoint change becomes effective within a couple of minutes, and I can refresh my test window:

Now I can see that my code has access to the IP address of the browser (via the X-Forwarded-For header) and I can use it as desired. I can also use this IP address in security group rules.

To learn more about best practices for switching over, read Transitioning Your ALB Endpoints to Use Client IP Address Preservation.
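
If you would rather script the transition, here’s a hedged boto3 sketch (the ARNs are placeholders, and note that the Global Accelerator API is served from the US West (Oregon) Region):

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/placeholder"

# update_endpoint_group replaces the endpoint list, so include the
# existing endpoint plus the new IP-preserving one at a low weight.
ga.update_endpoint_group(
    EndpointGroupArn="arn:aws:globalaccelerator::111122223333:accelerator/placeholder",
    EndpointConfigurations=[
        {"EndpointId": ALB_ARN, "Weight": 255, "ClientIPPreservationEnabled": False},
        {"EndpointId": ALB_ARN, "Weight": 32, "ClientIPPreservationEnabled": True},
    ],
)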

Things to Know
Here are a couple of important things to know about client IP preservation:

Elastic Network Interface (ENI) Usage – The Global Accelerator creates one ENI for each subnet that contains IP-preserving endpoints, and will delete them when they are no longer required. Don’t edit or delete them.

Security Groups – The Global Accelerator creates and manages a security group named GlobalAccelerator. Again, you should not edit or delete it.

Available Now
You can enable this new feature for Application Load Balancers in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Mumbai), and Asia Pacific (Sydney) Regions.

Jeff;

Amazon Prime Day 2019 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2019-powered-by-aws/

What did you buy for Prime Day? I bought a 34″ Alienware Gaming Monitor and used it to replace a pair of 25″ monitors that had served me well for the past six years:


As I have done in years past, I would like to share a few of the many ways that AWS helped to make Prime Day a reality for our customers. You can read How AWS Powered Amazon’s Biggest Day Ever and Prime Day 2017 – Powered by AWS to learn more about how we evaluate the results of each Prime Day and use what we learn to drive improvements to our systems and processes.

This year I would like to focus on three ways that AWS helped to support record-breaking amounts of traffic and sales on Prime Day: Amazon Prime Video Infrastructure, AWS Database Infrastructure, and Amazon Compute Infrastructure. Let’s take a closer look at each one…

Amazon Prime Video Infrastructure
Amazon Prime members were able to enjoy the second Prime Day Concert (presented by Amazon Music) on July 10, 2019. Headlined by 10-time Grammy winner Taylor Swift, this live-streamed event also included performances from Dua Lipa, SZA, and Becky G.

Live-streaming an event of this magnitude and complexity to an audience in over 200 countries required a considerable amount of planning and infrastructure. Our colleagues at Amazon Prime Video used multiple AWS Media Services including AWS Elemental MediaPackage and AWS Elemental live encoders to encode and package the video stream.

The streaming setup made use of two AWS Regions, with a redundant pair of processing pipelines in each region. The pipelines delivered 1080p video at 30 fps to multiple content distribution networks (including Amazon CloudFront), and worked smoothly.

AWS Database Infrastructure
A combination of NoSQL and relational databases were used to deliver high availability and consistent performance at extreme scale during Prime Day:

Amazon DynamoDB supports multiple high-traffic sites and systems including Alexa, the Amazon.com sites, and all 442 Amazon fulfillment centers. Across the 48 hours of Prime Day, these sources made 7.11 trillion calls to the DynamoDB API, peaking at 45.4 million requests per second.

Amazon Aurora also supports the network of Amazon fulfillment centers. On Prime Day, 1,900 database instances processed 148 billion transactions, stored 609 terabytes of data, and transferred 306 terabytes of data.

Amazon Compute Infrastructure
Prime Day 2019 also relied on a massive, diverse collection of EC2 instances. The internal scaling metric for these instances is known as a server equivalent; Prime Day started off with 372K server equivalents and scaled up to 426K at peak.

Those EC2 instances made great use of a massive fleet of Elastic Block Store (EBS) volumes. The team added an additional 63 petabytes of storage ahead of Prime Day; the resulting fleet handled 2.1 trillion requests per day and transferred 185 petabytes of data per day.

And That’s a Wrap
These are some impressive numbers, and they show you the kind of scale that you can achieve with AWS. As you can see, scaling up for one-time (or periodic) events and then scaling back down afterward is easy and straightforward, even at world scale!

If you want to run your own world-scale event, I’d advise you to check out the blog posts that I linked above, and also be sure to read about AWS Infrastructure Event Management. My colleagues are ready (and eager) to help you to plan for your large-scale product or application launch, infrastructure migration, or marketing event. Here’s an overview of their process:


Jeff;

AWS CloudFormation Update – Public Coverage Roadmap & CDK Goodies

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudformation-update-public-coverage-roadmap-cdk-goodies/

I launched AWS CloudFormation in early 2011 with a pair of posts: AWS CloudFormation – Create Your AWS Stack From a Recipe and AWS CloudFormation in the AWS Management Console. Since that launch, we have added support for many AWS resource types, launched many new features, and worked behind the scenes to ensure that CloudFormation is efficient, scalable, and highly available.

Public Coverage Roadmap
CloudFormation use is growing even faster than AWS itself, and the team has prioritized scalability over complete resource coverage. While our goal of providing 100% coverage remains, the reality is that it will take us some time to get there. In order to be more transparent about our priorities and to give you an opportunity to manage them, I am pleased to announce the much-anticipated CloudFormation Coverage Roadmap:

Styled after the popular AWS Containers Roadmap, the CloudFormation Coverage Roadmap contains four columns:

Shipped – Available for use in production form in all public AWS regions.

Coming Soon – Generally a few months out.

We’re Working On It – Work in progress, but further out.

Researching – We’re thinking about the right way to implement the coverage.

Please feel free to create your own issues, and to give a thumbs-up to those that you need to have in order to make better use of CloudFormation:

Before I close out, I would like to address one common comment – that AWS is part of a big company, and that we should simply throw more resources at it. While the team is growing, implementing robust, secure coverage is still resource-intensive. Please consider the following quote, courtesy of the must-read Mythical Man-Month:

Good cooking takes time. If you are made to wait, it is to serve you better, and to please you.

Cloud Development Kit Goodies
The Cloud Development Kit (CDK) lets you model and provision your AWS resources using a programming language that you are already familiar with. You use a set of CDK Constructs (VPCs, subnets, and so forth) to define your application, and then use the CDK CLI to synthesize a CloudFormation template, deploy it to AWS, and create a stack.
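
To make that concrete, here’s a minimal CDK app in Python (using the v1 aws_cdk packages; the stack contents are illustrative):

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

class NetworkStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # This one construct synthesizes into a VPC, subnets, route
        # tables, gateways, and more in the CloudFormation template.
        ec2.Vpc(self, "AppVpc", max_azs=2)

app = core.App()
NetworkStack(app, "network-stack")
app.synth()

Running cdk synth prints the generated template, and cdk deploy creates the stack.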

Here are some resources to help you to get started with the CDK:

Stay Tuned
The CloudFormation Coverage Roadmap is an important waypoint on a journey toward open source that started out with cfn-lint, with some more stops in the works. Stay tuned and I’ll tell you more just as soon as I can!

Jeff;

AWS DeepLens – Now Orderable in Seven Additional Countries

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-deeplens-now-orderable-in-seven-additional-countries/

The new (2019) edition of the AWS DeepLens can now be purchased in seven countries (US, UK, Germany, France, Spain, Italy, and Canada), and preordered in Japan. The 2019 edition is easier to set up, and (thanks to Amazon SageMaker Neo) runs machine learning models up to twice as fast as the earlier edition.

New Tutorials
We are also launching a pair of new tutorials to help you to get started:

aws-deeplens-coffee-leaderboard – This tutorial focuses on a demo that uses face detection to track the number of people that drink coffee. It watches a scene, and triggers a Lambda function when a face is detected. Amazon Rekognition is used to detect the presence of a coffee mug, and the face is added to a DynamoDB table that is maintained by (and private to) the demo. The demo also includes a leaderboard that tracks the number of coffees over time. Here’s the architecture:

And here’s the leaderboard:

To learn more, read Track the number of coffees consumed using AWS DeepLens.
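
The repo linked above contains the real implementation; as a rough sketch of the Rekognition step, a Lambda handler might look like this (the event fields and label names are illustrative assumptions, not the demo’s actual code):

import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    # Assumption: the captured frame has been uploaded to S3 and the
    # bucket/key arrive in the event.
    labels = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": event["bucket"], "Name": event["key"]}},
        MinConfidence=70,
    )
    names = {label["Name"] for label in labels["Labels"]}
    return {"coffee_detected": bool(names & {"Coffee Cup", "Cup", "Mug"})}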

aws-deeplens-worker-safety-project – This tutorial focuses on a demo that identifies workers that are not wearing safety helmets. The DeepLens detects faces, and uploads the images to S3 for further processing. The results are analyzed using AWS IoT and Amazon CloudWatch, and are displayed on a web dashboard. Here’s the architecture:

To learn more, register for and then take the free 30-minute course: Worker Safety Project with AWS DeepLens.

Detecting Cats, and Cats with Rats
Finally, I would like to share a really cool video featuring my colleague Ben Hamm. After growing tired of cleaning up the remains of rats and other creatures that his cat Metric had killed, Ben decided to put his DeepLens to work. Using a hand-labeled training set, Ben created a model that could tell when Metric was carrying an unsavory item in his mouth, and then lock him out. Ben presented his project at Ignite Seattle and the video has been very popular. Take a look for yourself:

Order Your DeepLens Today
If you are in one of the countries that I listed above, you can order your DeepLens today and get started with Machine Learning in no time flat! Visit the DeepLens home page to learn more.

Jeff;

AWS Named as a Leader in Gartner’s Infrastructure as a Service (IaaS) Magic Quadrant for the 9th Consecutive Year

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-gartners-infrastructure-as-a-service-iaas-magic-quadrant-for-the-9th-consecutiveyear/

My colleagues on the AWS service teams work to deliver what customers want today, and also do their best to anticipate what they will need tomorrow. This Customer Obsession, along with our commitment to Hire and Develop the Best (two of the fourteen Amazon Leadership Principles), helps us to figure out, and then to deliver on, our vision. It is always good to see that our hard work continues to delight customers, and to be recognized by Gartner and other leading analysts.

For the ninth consecutive year, AWS has secured the top-right corner of the Leaders quadrant in Gartner’s Magic Quadrant for Cloud Infrastructure as a Service (IaaS), earning the highest placement for Ability to Execute and the furthest position for Completeness of Vision:

The full report contains a lot of detail and is a great summary of the features and factors that our customers examine when choosing a cloud provider.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Meet the Newest AWS News Bloggers!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/meet-the-newest-aws-news-bloggers/

I wrote my first post for this blog way back in 2004! Over the course of the first decade, the amount of time that I devoted to the blog grew from a small fraction of my day to a full day. In the early days my email inbox was my primary source of information about upcoming launches, and also my primary tool for managing my work backlog. When that proved to be unscalable, Ana came onboard and immediately built a ticketing system and set up a process for teams to request blog posts. Today, a very capable team (Greg, Devin, and Robin) takes care of tickets, platforms, comments, metrics, and so forth so that I can focus on what I like to do best: using new services and writing about them!

Over the years we have experimented with a couple of different strategies to scale the actual writing process. If you are a long-time reader you may have seen posts from Mike, Jinesh, Randall, Tara, Shaun, and a revolving slate of guest bloggers.

News Bloggers
I would like to introduce you to our current lineup of AWS News Bloggers. Like me, the bloggers have a technical background and are prepared to go hands-on with every new service and feature. Here’s our roster:

Steve Roberts (@bellevuesteve) – Steve focuses on .NET tools and technologies.

Julien Simon (@julsimon) – Julien likes to help developers and enterprises to bring their ideas to life.

Brandon West (@bwest) – Brandon leads our developer relations team in the Americas, and has written a book on the topic.

Martin Beeby (@thebeebs) – Martin focuses on .NET applications, and has worked as a C# and VB developer since 2001.

Danilo Poccia (@danilop) – Danilo works with companies of any size to support innovation. He is the author of AWS Lambda in Action.

Sébastien Stormacq (@sebesto) – Sébastien works with builders to unlock the value of the AWS cloud, using his secret blend of passion, enthusiasm, customer advocacy, curiosity, and creativity.

We are already gearing up for re:Invent 2019, and can’t wait to bring you a rich set of blog posts. Stay tuned!

Jeff;

AWS New York Summit 2019 – Summary of Launches & Announcements

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-new-york-summit-2019-summary-of-launches-announcements/

The AWS New York Summit just wrapped up! Here’s a quick summary of what we launched and announced:

Amazon EventBridge – This new service builds on the event-processing model that forms the basis for Amazon CloudWatch Events, and makes it easy for you to integrate your AWS applications with SaaS applications such as Zendesk, Datadog, SugarCRM, and Onelogin. Read my blog post, Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications, to learn more.

Werner announces EventBridge – Photo by Serena

Cloud Development Kit – CDK is now generally available, with support for TypeScript and Python. Read Danilo‘s post, AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available, to learn more.

Fluent Bit Plugins for AWS – Fluent Bit is a multi-platform, open source log processor and forwarder that is compatible with Docker and Kubernetes environments. You can now build a container image that includes new Fluent Bit plugins for Amazon CloudWatch and Amazon Kinesis Data Firehose. The plugins route logs to CloudWatch, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service. Read Centralized Container Logging with Fluent Bit to learn more.


Nicki, Randall, Robert, and Steve – Photo by Deepak

AWS Toolkit for VS Code – This toolkit lets you develop and test locally (including step-through debugging) in a Lambda-like environment, and then deploy to the AWS Region of your choice. You can invoke Lambda functions locally or remotely, with full control of the function configuration, including the event payload and environment variables. To learn more, read Announcing AWS Toolkit for Visual Studio Code.

Amazon CloudWatch Container Insights (preview) – You can now create CloudWatch Dashboards that monitor the performance and health of your Amazon ECS and AWS Fargate clusters, tasks, containers, and services. Read Using Container Insights to learn more.

CloudWatch Anomaly Detection (preview) – This cool addition to CloudWatch uses machine learning to continuously analyze system and application metrics, determine a nominal baseline, and surface anomalies, all without user intervention. It adapts to trends, and helps to identify unexpected changes in performance or behavior. Read the CloudWatch Anomaly Detection documentation to learn more.

Amazon SageMaker Managed Spot Training (coming soon) – You will soon be able to use Amazon EC2 Spot to lower the cost of training your machine learning models. This upcoming enhancement to SageMaker will lower your training costs by up to 70%, and can be used in conjunction with Automatic Model Tuning.

Jeff;


Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-eventbridge-event-driven-aws-integration-for-your-saas-applications/

Many AWS customers also make great use of SaaS (Software as a Service) applications. For example, they use Zendesk to manage customer service & support tickets, PagerDuty to handle incident response, and SignalFx for real-time monitoring. While these applications are quite powerful on their own, they are even more so when integrated into a customer’s own systems, databases, and workflows.

New Amazon EventBridge
In order to support this increasingly common use case, we are launching Amazon EventBridge today. Building on the powerful event processing model that forms the basis for CloudWatch Events, EventBridge makes it easy for our customers to integrate their own AWS applications with SaaS applications. The SaaS applications can be hosted anywhere, and simply publish events to an event bus that is specific to each AWS customer. The asynchronous, event-based model is fast, clean, and easy to use. The publisher (SaaS application) and the consumer (code running on AWS) are completely decoupled, and are not dependent on a shared communication protocol, runtime environment, or programming language. You can use simple Lambda functions to handle events that come from a SaaS application, and you can also route events to a wide variety of other AWS targets. You can store incident or ticket data in Amazon Redshift, train a machine learning model on customer support queries, and much more.

Everything that you already know (and hopefully love) about CloudWatch Events continues to apply, with one important change. In addition to the existing default event bus that accepts events from AWS services, calls to PutEvents, and from other authorized accounts, each partner application that you subscribe to will also create an event source that you can then associate with an event bus in your AWS account. You can select any of your event buses, create EventBridge Rules, and select Targets to invoke when an incoming event matches a rule.

As part of today’s launch we are also opening up a partner program. The integration process is simple and straightforward, and generally requires less than one week of developer time.

All About Amazon EventBridge
Here are some terms that you need to know in order to understand how to use Amazon EventBridge:

Partner – An organization that has integrated their SaaS application with EventBridge.

Customer – An organization that uses AWS, and that has subscribed to a partner’s SaaS application.

Partner Name – A unique name that identifies an Amazon EventBridge partner.

Partner Event Bus – An Event Bus that is used to deliver events from a partner to AWS.

EventBridge can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), or via the AWS SDKs. There are distinct commands and APIs for partners and for customers. Here are some of the most important ones:

Partners – CreatePartnerEventSource, ListPartnerEventSourceAccounts, ListPartnerEventSources, PutPartnerEvents.

Customers – ListEventSources, ActivateEventSource, CreateEventBus, ListEventBuses, PutRule, PutTargets.

Amazon EventBridge for Partners & Customers
As I noted earlier, the integration process is simple and straightforward. You need to allow your customers to enter an AWS account number and to select an AWS region. With that information in hand, you call CreatePartnerEventSource in the desired region, inform the customer of the event source name and tell them that they can accept the invitation to connect, and wait for the status of the event source to change to ACTIVE. Then, each time an event of interest to the customer occurs, you call PutPartnerEvents and reference the event source.
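
Here’s a hedged sketch of that partner-side flow in Python (the event source name and account ID are placeholders):

import boto3

events = boto3.client("events")

source_name = "aws.partner/examplecorp.com/12345/tickets"  # placeholder

# Create an event source scoped to the customer's AWS account.
events.create_partner_event_source(Name=source_name, Account="111122223333")

# Once the source is ACTIVE, publish events to it as they occur.
events.put_partner_events(
    Entries=[{
        "Source": source_name,
        "DetailType": "TicketCreated",
        "Detail": '{"ticket_id": "1234", "priority": "high"}',
    }]
)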

The process is just as simple on the customer side. You accept the invitation to connect by calling CreateEventBus to create an event bus associated with the event source. You add rules and targets to the event bus, and prepare your Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events. You can use DeactivateEventSource and ActivateEventSource to control the flow.
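
And a matching sketch of the customer side (same placeholder names):

import boto3
import json

events = boto3.client("events")

source_name = "aws.partner/examplecorp.com/12345/tickets"  # placeholder

# Accepting the invitation: creating an event bus with the event source's
# name associates the two and activates the source.
events.create_event_bus(Name=source_name, EventSourceName=source_name)

events.put_rule(
    Name="ticket-created",
    EventBusName=source_name,
    EventPattern=json.dumps({"source": [source_name]}),
)
events.put_targets(
    Rule="ticket-created",
    EventBusName=source_name,
    Targets=[{
        "Id": "1",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:HandleTicket",  # placeholder
    }],
)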

Here’s the overall flow (diagram created using SequenceDiagram):

Each partner has the freedom to choose the events that are relevant to their application, and to define the data elements that are included with each event.

Using EventBridge
Starting from the EventBridge Console, I click Partner event sources, find the partner of interest, and click it to learn more:

Each partner page contains additional information about the integration. I read the info, and click Set up to proceed:

The page provides me with a simple, three-step procedure to set up my event source:


After the partner creates the event source, I return to Partner event sources and I can see that the Zendesk event source is Pending:

I click the pending event source, review the details, and then click Associate with event bus:

I have the option to allow other AWS accounts, my Organization, or another Organization to access events on the event bus that I am about to create. After I have confirmed that I trust the origin and have added any additional permissions, I click Associate:

My new event bus is now available, and is listed as a Custom event bus:

I click Rules, select the event bus, and see the rules (none so far) associated with it. Then I click Create rule to make my first rule:

I enter a name and a description for my first rule:

Then I define a pattern, choosing Zendesk as the Service name:

Next, I select a Lambda function as my target:

I can also choose from many other targets:

After I create my rule, it will be activated in response to activities that occur within my Zendesk account. The initial set of events includes TicketCreated, CommentCreated, TagsChanged, AgentAssignmentChanged, GroupAssignmentChanged, FollowersChanged, EmailCCsChanged, CustomFieldChanged, and StatusChanged. Each event includes a rich set of properties; you’ll need to consult the documentation to learn more.

Partner Event Sources
We are launching with ten partner event sources, with more to come:

  • Datadog
  • Zendesk
  • PagerDuty
  • Whispir
  • Saviynt
  • Segment
  • SignalFx
  • SugarCRM
  • OneLogin
  • Symantec

If you have a SaaS application and you are ready to integrate, read more about EventBridge Partner Integration.

Now Available
Amazon EventBridge is available now and you can start using it today in all public AWS regions in the aws partition. Support for the AWS regions in China, and for the Asia Pacific (Osaka) Local Region, is in the works.

Pricing is based on the number of events published to the event buses in your account, billed at $1 for every million events. There is no charge for events published by AWS services.

Jeff;

PS – As you can see from this post, we are paying even more attention to the overall AWS event model, and have a lot of interesting goodies on the drawing board. With this launch, CloudWatch Events has effectively earned a promotion to a top-level service, and I’ll have a lot more to say about that in the future!

AWS Project Resilience – Up to $2K in AWS Credits to Support DR Preparation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-project-resilience-up-to-2k-in-aws-credits-to-support-dr-preparation/

We want to help state and local governments, community organizations, and educational institutions to better prepare for natural and man-made disasters that could affect their ability to run their mission-critical IT systems.

Today we are launching AWS Project Resilience. This new element of our existing Disaster Response program offers up to $2,000 in AWS credits to organizations of the types that I listed above. The program is open to new and existing customers, with distinct benefits for each:

New Customers – Eligible new customers can submit a request for up to $2,000 in AWS Project Resilience credits that can be used to offset costs incurred by storing critical datasets in Amazon Simple Storage Service (S3).

Existing Customers – Eligible existing customers can submit a request for up to $2,000 in AWS Project Resilience credits to offset the costs incurred by engaging CloudEndure and AWS Disaster Response experts to do a deep dive on an existing business continuity architecture.

Earlier this month I sat down with my colleague Ana Visneski to learn more about disaster preparedness, disaster recovery, and AWS Project Resilience. Here’s our video:

To learn more and to apply to the program, visit the AWS Project Resilience page!

Jeff;


EC2 Instance Update – Two More Sizes of M5 & R5 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-two-more-sizes-of-m5-r5-instances/

When I introduced the Nitro system last year I said:

The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. We will deliver new instance types more quickly than ever in the months to come, with the goal of helping you to build, migrate, and run even more types of workloads.

Today I am happy to make good on that promise, with the introduction of two additional sizes of the Intel and AMD-powered M5 and R5 instances, including optional NVMe storage. These additional sizes will make it easier for you to find an instance size that is a perfect match for your workload.

M5 Instances
These instances are designed for general-purpose workloads such as web servers, app servers, dev/test environments, gaming, logging, and media processing. Here are the specs:

| Instance Name | vCPUs | RAM | Storage | EBS-Optimized Bandwidth | Network Bandwidth |
|---------------|-------|---------|---------------------|----------|---------------|
| m5.8xlarge    | 32 | 128 GiB | EBS Only            | 5 Gbps   | 10 Gbps       |
| m5.16xlarge   | 64 | 256 GiB | EBS Only            | 10 Gbps  | 20 Gbps       |
| m5a.8xlarge   | 32 | 128 GiB | EBS Only            | 3.5 Gbps | Up to 10 Gbps |
| m5a.16xlarge  | 64 | 256 GiB | EBS Only            | 7 Gbps   | 12 Gbps       |
| m5d.8xlarge   | 32 | 128 GiB | 2 x 600 GB NVMe SSD | 5 Gbps   | 10 Gbps       |
| m5d.16xlarge  | 64 | 256 GiB | 4 x 600 GB NVMe SSD | 10 Gbps  | 20 Gbps       |

If you are currently using m4.10xlarge or m4.16xlarge instances, you now have several upgrade paths.

To learn more, read M5 – The Next Generation of General-Purpose EC2 Instances, New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances, and EC2 Instance Update – M5 Instances with Local NVMe Storage.

R5 Instances
These instances are designed for data mining, in-memory analytics, caching, simulations, and other memory-intensive workloads. Here are the specs:

| Instance Name | vCPUs | RAM | Storage | EBS-Optimized Bandwidth | Network Bandwidth |
|---------------|-------|---------|---------------------|----------|---------------|
| r5.8xlarge    | 32 | 256 GiB | EBS Only            | 5 Gbps   | 10 Gbps       |
| r5.16xlarge   | 64 | 512 GiB | EBS Only            | 10 Gbps  | 20 Gbps       |
| r5a.8xlarge   | 32 | 256 GiB | EBS Only            | 3.5 Gbps | Up to 10 Gbps |
| r5a.16xlarge  | 64 | 512 GiB | EBS Only            | 7 Gbps   | 12 Gbps       |
| r5d.8xlarge   | 32 | 256 GiB | 2 x 600 GB NVMe SSD | 5 Gbps   | 10 Gbps       |
| r5d.16xlarge  | 64 | 512 GiB | 4 x 600 GB NVMe SSD | 10 Gbps  | 20 Gbps       |

If you are currently using r4.8xlarge or r4.16xlarge instances, you now have several easy and powerful upgrade paths.
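
For an EBS-backed instance, one such path is a stop/modify/start sequence; here’s a minimal boto3 sketch (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# The instance type of an EBS-backed instance can only be changed while
# it is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "r5.8xlarge"},
)
ec2.start_instances(InstanceIds=[instance_id])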

To learn more, read Amazon EC2 Instance Update – Faster Processors and More Memory.

Things to Know
Here are a couple of things to keep in mind when you use these new instances:

Processor Choice – You can choose between Intel and AMD EPYC processors (instance names include an “a”). Read my post, New Lower-Cost AMD-Powered M5a and R5a EC2 Instances, to learn more.

AMIs – You can use the same AMIs that you use with your existing M5 and R5 instances.

Regions – The new sizes are available in all AWS Regions where the existing sizes are already available.

Local NVMe Storage – On “d” instances with local NVMe storage, the devices are encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated. The local devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
The new sizes are available in On-Demand, Spot, and Reserved Instance form and you can start using them today!

Jeff;


New – VPC Traffic Mirroring – Capture & Inspect Network Traffic

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-vpc-traffic-mirroring/

Running a complex network is not an easy job. In addition to simply keeping it up and running, you need to keep an ever-watchful eye out for unusual traffic patterns or content that could signify a network intrusion, a compromised instance, or some other anomaly.

VPC Traffic Mirroring
Today we are launching VPC Traffic Mirroring. This is a new feature that you can use with your existing Virtual Private Clouds (VPCs) to capture and inspect network traffic at scale. This will allow you to:

Detect Network & Security Anomalies – You can extract traffic of interest from any workload in a VPC and route it to the detection tools of your choice. You can detect and respond to attacks more quickly than is possible with traditional log-based tools.

Gain Operational Insights – You can use VPC Traffic Mirroring to get the network visibility and control that will let you make security decisions that are better informed.

Implement Compliance & Security Controls – You can meet regulatory & compliance requirements that mandate monitoring, logging, and so forth.

Troubleshoot Issues – You can mirror application traffic internally for testing and troubleshooting. You can analyze traffic patterns and proactively locate choke points that will impair the performance of your applications.

You can think of VPC Traffic Mirroring as a “virtual fiber tap” that gives you direct access to the network packets flowing through your VPC. As you will soon see, you can choose to capture all traffic or you can use filters to capture the packets that are of particular interest to you, with an option to limit the number of bytes captured per packet. You can use VPC Traffic Mirroring in a multi-account AWS environment, capturing traffic from VPCs spread across many AWS accounts and then routing it to a central VPC for inspection.

You can mirror traffic from any EC2 instance that is powered by the AWS Nitro system (A1, C5, C5d, M5, M5a, M5d, R5, R5a, R5d, T3, and z1d as I write this).

Getting Started with VPC Traffic Mirroring
Let’s review the key elements of VPC Traffic Mirroring and then set it up:

Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.

Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.

Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.

Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.

You can set this up using the VPC Console, EC2 CLI, or the EC2 API, with CloudFormation support in the works. I’ll use the Console.

I already have a pair of ENIs that I will use as my mirror source and destination (in a real-world use case I would probably use an NLB destination):

The MirrorTestENI_Source and MirrorTestENI_Destination ENIs are already attached to suitable EC2 instances. I open the VPC Console and scroll down to the Traffic Mirroring items, then click Mirror Targets:

I click Create traffic mirror target:

I enter a name and description, choose the Network Interface target type, and select my ENI from the menu. I add a Blog tag to my target, as is my practice, and click Create:

My target is created and ready to use:

Now I click Mirror Filters and Create traffic mirror filter. I create a simple filter that captures inbound traffic on three ports (22, 80, and 443), and click Create:

Again, it is created and ready to use in seconds:

Next, I click Mirror Sessions and Create traffic mirror session. I create a session that uses MirrorTestENI_Source, MainTarget, and MyFilter, allow AWS to choose the VXLAN network identifier, and indicate that I want the entire packet mirrored:

And I am all set. Traffic from my mirror source that matches my filter is encapsulated as specified in RFC 7348 and delivered to my mirror target. I can then use tools like Suricata to capture, analyze, and visualize it.
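
Here’s a hedged boto3 sketch of the same setup (the ENI IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-target-placeholder",
    Description="MainTarget",
)
target_id = target["TrafficMirrorTarget"]["TrafficMirrorTargetId"]

filt = ec2.create_traffic_mirror_filter(Description="MyFilter")
filter_id = filt["TrafficMirrorFilter"]["TrafficMirrorFilterId"]

# One inbound accept rule per port of interest (22, 80, and 443).
for rule_number, port in enumerate([22, 80, 443], start=1):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filter_id,
        TrafficDirection="ingress",
        RuleNumber=rule_number,
        RuleAction="accept",
        Protocol=6,  # TCP
        DestinationPortRange={"FromPort": port, "ToPort": port},
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# Tie source, target, and filter together; omitting PacketLength mirrors
# the entire packet.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-source-placeholder",
    TrafficMirrorTargetId=target_id,
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)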

Things to Know
Here are a couple of things to keep in mind:

Sessions Per ENI – You can have up to three active sessions on each ENI.

Cross-VPC – The source and target ENIs can be in distinct VPCs as long as they are peered to each other or connected through Transit Gateway.

Scaling & HA – In most cases you should plan to mirror traffic to a Network Load Balancer and then run your capture & analysis tools on an Auto Scaled fleet of EC2 instances behind it.

Bandwidth – The replicated traffic generated by each instance will count against the overall bandwidth available to the instance. If traffic congestion occurs, mirrored traffic will be dropped first.

Now Available
VPC Traffic Mirroring is available now and you can start using it today in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia). Support for those regions will be added soon. You pay an hourly fee (starting at $0.015 per hour) for each mirror source; see the VPC Pricing page for more info.

Jeff;


AWS Control Tower – Set up & Govern a Multi-Account AWS Environment

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-control-tower-set-up-govern-a-multi-account-aws-environment/

Earlier this month I met with an enterprise-scale AWS customer. They told me that they are planning to go all-in on AWS, and want to benefit from all that we have learned about setting up and running AWS at scale. In addition to setting up a Cloud Center of Excellence, they want to set up a secure environment for teams to provision development and production accounts in alignment with our recommendations and best practices.

AWS Control Tower
Today we are announcing general availability of AWS Control Tower. This service automates the process of setting up a new baseline multi-account AWS environment that is secure, well-architected, and ready to use. Control Tower incorporates the knowledge that AWS Professional Services has gained over the course of thousands of successful customer engagements, and also draws from the recommendations found in our whitepapers, documentation, the Well-Architected Framework, and training. The guidance offered by Control Tower is opinionated and prescriptive, and is designed to accelerate your cloud journey!

AWS Control Tower builds on multiple AWS services including AWS Organizations, AWS Identity and Access Management (IAM) (including Service Control Policies), AWS Config, AWS CloudTrail, and AWS Service Catalog. You get a unified experience built around a collection of workflows, dashboards, and setup steps. AWS Control Tower automates a landing zone to set up a baseline environment that includes:

  • A multi-account environment using AWS Organizations.
  • Identity management using AWS Single Sign-On (SSO).
  • Federated access to accounts using AWS SSO.
  • Centralized logging from AWS CloudTrail and AWS Config, stored in Amazon S3.
  • Cross-account security audits using AWS IAM and AWS SSO.

Before diving in, let’s review a couple of key Control Tower terms:

Landing Zone – The overall multi-account environment that Control Tower sets up for you, starting from a fresh AWS account.

Guardrails – Automated implementations of policy controls, with a focus on security, compliance, and cost management. Guardrails can be preventive (blocking actions that are deemed as risky), or detective (raising an alert on non-conformant actions).

Blueprints – Well-architected design patterns that are used to set up the Landing Zone.

Environment – An AWS account and the resources within it, configured to run an application. Users make requests (via Service Catalog) for new environments and Control Tower uses automated workflows to provision them.

Using Control Tower
Starting from a brand new AWS account that is both Master Payer and Organization Master, I open the Control Tower Console and click Set up landing zone to get started:

AWS Control Tower will create AWS accounts for log archiving and for auditing, and requires email addresses that are not already associated with an AWS account. I enter two addresses, review the information within Service permissions, give Control Tower permission to administer AWS resources and services, and click Set up landing zone:

The setup process runs for about an hour, and provides status updates along the way:

Early in the process, Control Tower sends a handful of email requests to verify ownership of the account, invite the account to participate in AWS SSO, and to subscribe to some SNS topics. The requests contain links that I must click in order for the setup process to proceed. The second email also requests that I create an AWS SSO password for the account. After the setup is complete, AWS Control Tower displays a status report:

The console offers some recommended actions:

At this point, the mandatory guardrails have been applied and the optional guardrails can be enabled:

I can see the Organizational Units (OUs) and accounts, and the compliance status of each one (with respect to the guardrails):


Using the Account Factory
The navigation on the left lets me access all of the AWS resources created and managed by Control Tower. Now that my baseline environment is set up, I can click Account factory to provision AWS accounts for my teams, applications, and so forth.

The Account factory displays my network configuration (I’ll show you how to edit it later), and gives me the option to Edit the account factory network configuration or to Provision new account:

I can control the VPC configuration that is used for new accounts, including the regions where VPCs are created when an account is provisioned:

The account factory is published to AWS Service Catalog automatically. I can provision managed accounts as needed, as can the developers in my organization. I click AWS Control Tower Account Factory to proceed:

I review the details and click LAUNCH PRODUCT to provision a new account:

Working with Guardrails
As I mentioned earlier, Control Tower’s guardrails provide guidance that is either Mandatory or Strongly Recommended:

Guardrails are implemented via an IAM Service Control Policy (SCP) or an AWS Config rule, and can be enabled on an OU-by-OU basis:

Now Available
AWS Control Tower is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions, with more to follow. There is no charge for the Control Tower service; you pay only for the AWS resources that it creates on your behalf.

In addition to adding support for more AWS regions, we are working to allow you to set up a parallel landing zone next to an existing AWS account, and to give you the ability to build and use custom guardrails.

Jeff;


New – UDP Load Balancing for Network Load Balancer

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-udp-load-balancing-for-network-load-balancer/

The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part (read my post, New Network Load Balancer – Effortless Scaling to Millions of Requests per Second to learn more).

In response to customer requests, we have added several new features since the late-2017 launch, including cross-zone load balancing, support for resource-based and tag-based permissions, support for use across an AWS managed VPN tunnel, the ability to create a Network Load Balancer using the AWS Elastic Beanstalk Console, support for Inter-Region VPC Peering, and TLS Termination.

UDP Load Balancing
Today we are adding support for another frequent customer request, the ability to load balance UDP traffic. You can now use Network Load Balancers to deploy connectionless services for online gaming, IoT, streaming, media transfer, and native UDP applications. If you are hosting DNS, SIP, SNMP, Syslog, RADIUS, and other UDP services in your own data center, you can now move the services to AWS. You can also deploy services to handle Authentication, Authorization, and Accounting, often known as AAA.

You no longer need to maintain a fleet of proxy servers to ingest UDP traffic, and you can now use the same load balancer for both TCP and UDP traffic. You can simplify your architecture, reduce your costs, and increase your scalability.

Creating a UDP Network Load Balancer
I can create a Network Load Balancer with UDP support using the Console, CLI (create-load-balancer), API (CreateLoadBalancer), or a CloudFormation template (AWS::ElasticLoadBalancingV2::LoadBalancer), as usual. The console lets me choose the desired load balancer type; I click the Create button underneath Network Load Balancer:

I name my load balancer, choose UDP from the protocol menu, and select a port (514, the standard Syslog port):

I already have suitable EC2 instances in us-east-1b and us-east-1c so I’ll use those AZs:

Then I set up a target group for the UDP protocol on port 514:

I choose my instances and click Add to registered:

I review my settings on the next page, and my new UDP Load Balancer is ready to accept traffic within a minute or so (the state starts out as provisioning and transitions to active when it is ready):
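As I mentioned earlier, the same setup can be done from the CLI or API. Here's a minimal boto3 sketch; the subnet, VPC, and instance IDs are placeholders:

import boto3

elbv2 = boto3.client('elbv2')

# Create a Network Load Balancer (the subnet IDs are placeholders).
lb = elbv2.create_load_balancer(
    Name='my-syslog-nlb',
    Type='network',
    Subnets=['subnet-0example1', 'subnet-0example2']
)
lb_arn = lb['LoadBalancers'][0]['LoadBalancerArn']

# Create a UDP target group for Syslog; UDP health checks are not
# supported, so the health check uses TCP against port 80.
tg = elbv2.create_target_group(
    Name='syslog-targets',
    Protocol='UDP',
    Port=514,
    VpcId='vpc-0example',
    TargetType='instance',
    HealthCheckProtocol='TCP',
    HealthCheckPort='80'
)
tg_arn = tg['TargetGroups'][0]['TargetGroupArn']

# Register the Syslog server instances (placeholder IDs).
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{'Id': 'i-0example1'}, {'Id': 'i-0example2'}]
)

# Add a UDP listener on port 514 that forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol='UDP',
    Port=514,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tg_arn}]
)

For DNS-style workloads that need TCP and UDP on the same port, the same calls work with Protocol='TCP_UDP' on both the target group and the listener.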

I’ll test this out by configuring my EC2 instances as centralized Syslogd servers. I simply edit the configuration file (/etc/rsyslog.conf) on the instances to make them listen on port 514, and restart the service:

Then I launch another EC2 instance and configure it to use my NLB endpoint:

And I can see log entries in my servers (ip-172-31-29-40 is my test instance):

I did have to make one small configuration change in order to get this to work! Using UDP to check on the health of a service does not really make sense, so I clicked override and specified a health check on port 80 instead:

In a real-world scenario you would want to build a TCP-style health check into your service, of course. And, needless to say, I would run a custom implementation of Syslog that stores the log messages centrally and in a highly durable form.

Things to Know
Here are a couple of things to know about this important new NLB feature:

Supported Targets – UDP on Network Load Balancers is supported for Instance target types (IP target types and PrivateLink are not currently supported).

Health Checks – As I mentioned above, health checks must be done using TCP, HTTP, or HTTPS.

Multiple Protocols – A single Network Load Balancer can handle both TCP and UDP traffic. You can add another listener to an existing load balancer to gain UDP support, as long as you use distinct ports. In situations such as DNS where you need support for both TCP and UDP on the same port, you can set up a multi-protocol target group and a multi-protocol listener (use TCP_UDP as the protocol for both the listener and the target group).

New CloudWatch Metrics – The existing CloudWatch metrics (ProcessedBytes, ActiveFlowCount, and NewFlowCount) now represent the aggregate traffic processed by the TCP, UDP, and TLS listeners on a given Network Load Balancer.

Available Now
This feature is available now and you can start using it today in all commercial AWS Regions. For pricing, see the Elastic Load Balancing Pricing page.

Jeff;

 

Amazon S3 Update – SigV2 Deprecation Period Extended & Modified

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-update-sigv2-deprecation-period-extended-modified/

Every request that you make to the Amazon S3 API must be signed to ensure that it is authentic. In the early days of AWS we used a signing model that is known as Signature Version 2, or SigV2 for short. Back in 2012, we announced SigV4, a more flexible signing method, and made it the sole signing method for all regions launched after 2013. At that time, we recommended that you use it for all new S3 applications.

Last year we announced that we would be ending support for SigV2 later this month. While many customers have updated their applications (often with nothing more than a simple SDK update) to use SigV4, we have also received many requests for us to extend support.

New Date, New Plan
In response to the feedback on our original plan, we are making an important change. Here’s the summary:

Original Plan – Support for SigV2 ends on June 24, 2019.

Revised Plan – Any new buckets created after June 24, 2020 will not support SigV2 signed requests, although existing buckets will continue to support SigV2 while we work with customers to move off this older request signing method.

Even though you can continue to use SigV2 on existing buckets, and in the subset of AWS regions that support SigV2, I encourage you to migrate to SigV4, gaining some important security and efficiency benefits in the process. The newer signing method uses a separate, specialized signing key that is derived from the long-term AWS access key. The key is specific to the service, region, and date. This provides additional isolation between services and regions, and provides better protection against key reuse. Internally, our SigV4 implementation is able to securely cache the results of authentication checks; this reduces latency and adds to the overall resiliency of your application. To learn more, read Changes in Signature Version 4.

Identifying Use of SigV2
S3 has been around since 2006, and some of the code that you or your predecessors wrote way back then might still be around, dutifully making requests that are signed with SigV2. You can use CloudTrail Data Events or S3 Server Access Logs to find the old-school requests and target the applications for updates (a small log-scanning sketch follows the list):

CloudTrail Data Events – Look for the SignatureVersion element within the additionalDataElement of each CloudTrail event entry (read Using AWS CloudTrail to Identify Amazon S3 Signature Version 2 Requests to learn more).

S3 Server Access Logs – Look for the SignatureVersion element in the logs (read Using Amazon S3 Access Logs to Identify Signature Version 2 Requests to learn more).
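As one example of what that hunt can look like, here's a small Python sketch that scans S3 server access log files that have already been downloaded locally. It assumes each log record includes a SignatureVersion field (SigV2 or SigV4) near the end of the line:

import glob

# Scan locally downloaded S3 server access logs for SigV2 requests.
# Assumes each record carries a SignatureVersion field (SigV2/SigV4).
for path in glob.glob('access-logs/*'):
    with open(path) as log_file:
        for line in log_file:
            if 'SigV2' in line:
                print(path, line.rstrip())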

Updating to SigV4



“Do we need to change our code?”

The Europe (Frankfurt), US East (Ohio), Canada (Central), Europe (London), Asia Pacific (Seoul), Asia Pacific (Mumbai), Europe (Paris), China (Ningxia), Europe (Stockholm), Asia Pacific (Osaka Local), AWS GovCloud (US-East), and Asia Pacific (Hong Kong) Regions were launched after 2013 and support SigV4 but not SigV2. If you have code that accesses S3 buckets in those regions, it is already making exclusive use of SigV4.

If you are using the latest version of the AWS SDKs, you are either ready or just about ready for the SigV4 requirement on new buckets beginning June 24, 2020. If you are using an older SDK, please check out the detailed version list at Moving from Signature Version 2 to Signature Version 4 for more information.

There are a few situations where you will need to make some changes to your code. For example, if you are using pre-signed URLs with the AWS Java, JavaScript (Node.js), or Python SDK, you need to set the correct region and signature version in the client configuration. Also, be aware that SigV4 pre-signed URLs are valid for a maximum of 7 days, while SigV2 pre-signed URLs can be created with expiry times that are weeks or years in the future. Using SigV4 will improve your security profile, but might also require a change in the way that you create, store, and use pre-signed URLs; long-lived pre-signed URLs were easy and convenient for developers, but time-limited URLs are a much better security practice.
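For example, here's how the Python SDK can be pinned to SigV4 when generating a pre-signed URL (the bucket and key names are placeholders):

import boto3
from botocore.client import Config

# Pin the client to SigV4 and the bucket's region.
s3 = boto3.client(
    's3',
    region_name='us-east-1',
    config=Config(signature_version='s3v4')
)

# SigV4 pre-signed URLs are valid for at most 7 days (604,800 seconds).
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-example-bucket', 'Key': 'report.csv'},
    ExpiresIn=3600
)
print(url)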

If you are using Amazon EMR, you should upgrade your clusters to version 5.22.0 or later so that all requests to S3 are made using SigV4 (see Amazon EMR 5.x Release Versions for more info).

If your S3 objects are fronted by Amazon CloudFront and you are signing your own requests, be sure to update your code to use SigV4. If you are using Origin Access Identities to restrict access to S3, be sure to include the x-amz-content-sha256 header and the proper regional S3 domain endpoint.

We’re Here to Help
The AWS team wants to help make your transition to SigV4 as smooth and painless as possible. If you run into problems, I strongly encourage you to make use of AWS Support, as described in Getting Started with AWS Support.

You can also Discuss this Post on Reddit!

Jeff;

 

Now Available – AWS IoT Things Graph

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-aws-iot-things-graph/

We announced AWS IoT Things Graph last November and described it as a tool to let you build IoT applications visually. Today I am happy to let you know that the service is now available and ready for you to use!

As you will see in a moment, you can represent your business logic in a flow composed of devices and services. Each web service and each type of device (sensor, camera, display, and so forth) is represented in Things Graph as a model. The models hide the implementation details that are peculiar to a particular brand or model of device, and allow you to build flows that can evolve along with your hardware. Each model has a set of actions (inputs), events (outputs), and states (attributes). Things Graph includes a set of predefined models, and also allows you to define your own. You can also use mappings as part of your flow to convert the output from one device into the form expected by other devices. After you build your flow, you can deploy it to the AWS Cloud or an AWS IoT Greengrass-enabled device for local execution. The flow, once deployed, orchestrates interactions between locally connected devices and web services.

Using AWS IoT Things Graph
Let’s take a quick walk through the AWS IoT Things Graph Console!

The first step is to make sure that I have models which represent the devices and web services that I plan to use in my flow. I click Models in the console navigation to get started:

The console outlines the three steps that I must follow to create a model, and also lists my existing models:

The presence of aws/examples in the URN for each of the devices listed above indicates that they are predefined, and part of the public AWS IoT Things Graph namespace. I click on Camera to learn more about this model; I can see the Properties, Actions, and Events:

The model is defined using GraphQL; I can view it, edit it, or upload a file that contains a model definition. Here’s the definition of the Camera:

This model defines an abstract Camera device. The model, in turn, can reference definitions for one or more actual devices, as listed in the Devices section:

Each of the devices is also defined using GraphQL. Of particular interest is the use of MQTT topics & messages to define actions:

Earlier, I mentioned that models can also represent web services. When a flow that references a model of this type is deployed, activating an action on the model invokes a Greengrass Lambda function. Here’s how a web service is defined:

Now I can create a flow. I click Flows in the navigation, and click Create flow:

I give my flow a name and enter a description:

I start with an empty canvas, and then drag nodes (Devices, Services, or Logic) to it:

For this demo (which is fully explained in the AWS IoT Things Graph User Guide), I’ll use a MotionSensor, a Camera, and a Screen:

I connect the devices to define the flow:

Then I configure and customize it. There are lots of choices and settings, so I’ll show you a few highlights, and refer you to the User Guide for more info. I set up the MotionSensor so that a change of state initiates this flow:

I also (not shown) configure the Camera to perform the Capture action, and the Screen to display it. I could also make use of the predefined Services:

I can also add Logic to my flow:

Like the models, my flow is ultimately defined in GraphQL (I can view and edit it directly if desired):

At this point I have defined my flow, and I click Publish to make it available for deployment:

The next steps are:

Associate – This step assigns an actual AWS IoT Thing to a device model. I select a Thing, and then choose a device model, and repeat this step for each device model in my flow:

Deploy – I create a Flow Configuration, target it at the Cloud or Greengrass, and use it to deploy my flow (read Creating Flow Configurations to learn more).

Things to Know
I’ve barely scratched the surface here; AWS IoT Things Graph provides you with a lot of power and flexibility and I’ll leave you to discover more on your own!

Here are a couple of things to keep in mind:

Pricing – Pricing is based on the number of steps executed (for cloud deployments) or deployments (for edge deployments), and is detailed on the AWS IoT Things Graph Pricing page.

API Access – In addition to console access, you can use the AWS IoT Things Graph API to build your models and flows (see the sketch after this list).

Regions – AWS IoT Things Graph is available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.
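Here's a hedged boto3 sketch of uploading a flow definition programmatically. The GraphQL document itself is elided; substitute the definition shown in the console's text view for your own flow:

import boto3

iotthingsgraph = boto3.client('iotthingsgraph')

# The GraphQL flow definition is elided here; paste in the definition
# from the console's text view for your own flow.
flow_definition = '...'

response = iotthingsgraph.create_flow_template(
    definition={
        'language': 'GRAPHQL',
        'text': flow_definition
    }
)
print(response['summary']['id'])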

Jeff;

New – Data API for Amazon Aurora Serverless

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-data-api-for-amazon-aurora-serverless/

If you have ever written code that accesses a relational database, you know the drill. You open a connection, use it to process one or more SQL queries or other statements, and then close the connection. You probably used a client library that was specific to your operating system, programming language, and your database. At some point you realized that creating connections took a lot of clock time and consumed memory on the database engine, and soon after found out that you could (or had to) deal with connection pooling and other tricks. Sound familiar?

The connection-oriented model that I described above is adequate for traditional, long-running programs where the setup time can be amortized over hours or even days. It is not, however, a great fit for serverless functions that are frequently invoked and that run for time intervals that range from milliseconds to minutes. Because there is no long-running server, there’s no place to store a connection identifier for reuse.

Aurora Serverless Data API
In order to resolve this mismatch between serverless applications and relational databases, we are launching a Data API for the MySQL-compatible version of Amazon Aurora Serverless. This API frees you from the complexity and overhead that come along with traditional connection management, and gives you the power to quickly and easily execute SQL statements that access and modify your Amazon Aurora Serverless Database instances.

The Data API is designed to meet the needs of both traditional and serverless apps. It takes care of managing and scaling long-term connections to the database and returns data in JSON form for easy parsing. All traffic runs over secure HTTPS connections. It includes the following functions:

ExecuteStatement – Run a single SQL statement, optionally within a transaction.

BatchExecuteStatement – Run a single SQL statement across an array of data, optionally within a transaction.

BeginTransaction – Begin a transaction, and return a transaction identifier. Transactions are expected to be short (generally 2 to 5 minutes).

CommitTransaction – End a transaction and commit the operations that took place within it.

RollbackTransaction – End a transaction without committing the operations that took place within it.

Each function must run to completion within 1 minute, and can return up to 1 megabyte of data.

Using the Data API
I can use the Data API from the Amazon RDS Console, the command line, or by writing code that calls the functions that I described above. I’ll show you all three in this post.

The Data API is really easy to use! The first step is to enable it for the desired Amazon Aurora Serverless database. I open the Amazon RDS Console, find & select the cluster, and click Modify:

Then I scroll down to the Network & Security section, click Data API, and Continue:

On the next page I choose to apply the settings immediately, and click Modify cluster:
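The same change can be scripted; the Data API corresponds to the cluster's HTTP endpoint setting. Here's a minimal boto3 sketch (the cluster identifier is a placeholder):

import boto3

rds = boto3.client('rds')

# Enable the Data API (HTTP endpoint) on an Aurora Serverless cluster;
# the cluster identifier is a placeholder.
rds.modify_db_cluster(
    DBClusterIdentifier='aurora-sl-1',
    EnableHttpEndpoint=True,
    ApplyImmediately=True
)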

Now I need to create a secret to store the credentials that are needed to access my database. I open the Secrets Manager Console and click Store a new secret. I leave Credentials for RDS selected, enter a valid database user name and password, optionally choose a non-default encryption key, and then select my serverless database. Then I click Next:

I name my secret and tag it, and click Next to configure it:

I use the default values on the next page, click Next again, and now I have a brand new secret:

Now I need two ARNs, one for the database and one for the secret. I fetch both from the console, first for the database:

And then for the secret:

The pair of ARNs (database and secret) provides me with access to my database, and I will protect them accordingly!
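Both of these steps can also be scripted. Here's a sketch that creates the secret and looks up both ARNs; the user name, password, and cluster identifier are placeholders, and note that the console-created secret includes additional connection metadata beyond the two fields shown here:

import boto3, json

secretsmanager = boto3.client('secretsmanager')
rds = boto3.client('rds')

# Store the database credentials (placeholder values) in Secrets Manager.
secret = secretsmanager.create_secret(
    Name='aurora-serverless-data-api-credentials',
    SecretString=json.dumps({'username': 'admin', 'password': 'REPLACE_ME'})
)
secret_arn = secret['ARN']

# Look up the cluster ARN (the cluster identifier is a placeholder).
clusters = rds.describe_db_clusters(DBClusterIdentifier='aurora-sl-1')
cluster_arn = clusters['DBClusters'][0]['DBClusterArn']

print(secret_arn)
print(cluster_arn)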

Using the Data API from the Amazon RDS Console
I can use the Query Editor in the Amazon RDS Console to run queries that call the Data API. I open the console and click Query Editor, and create a connection to the database. I select the cluster, enter my credentials, and pre-select the table of interest. Then I click Connect to database to proceed:

I enter a query and click Run, and view the results within the editor:

Using the Data API from the Command Line
I can exercise the Data API from the command line:

$ aws rds-data execute-statement \
  --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL" \
  --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1" \
  --database users \
  --sql "show tables" \
  --output json

I can use jq to pick out the part of the result that is of interest to me:

... | jq .records
[
  {
    "values": [
      {
        "stringValue": "users"
      }
    ]
  }
]

I can query the table and get the results (the SQL statement is "select * from users where userid='jeffbarr'"):

... | jq .records
[
  {
    "values": [
      {
        "stringValue": "jeffbarr"
      },
      {
        "stringValue": "Jeff"
      },
      {
        "stringValue": "Barr"
      }
    ]
  }
]

If I specify --include-result-metadata, the query also returns data that describes the columns of the result (I’ll show only the first one in the interest of frugality):

... | jq .columnMetadata[0]
{
  "type": 12,
  "name": "userid",
  "label": "userid",
  "nullable": 1,
  "isSigned": false,
  "arrayBaseColumnType": 0,
  "scale": 0,
  "schemaName": "",
  "tableName": "users",
  "isCaseSensitive": false,
  "isCurrency": false,
  "isAutoIncrement": false,
  "precision": 15,
  "typeName": "VARCHAR"
}

The Data API also allows me to wrap a series of statements in a transaction, and then either commit or roll back. Here's how I do that (I'm omitting --secret-arn and --resource-arn for clarity):

$ ID=`aws rds-data begin-transaction --database users --output json | jq .transactionId`
$ echo $ID
"ATP6Gz88GYNHdwNKaCt/vGhhKxZs2QWjynHCzGSdRi9yiQRbnrvfwF/oa+iTQnSXdGUoNoC9MxLBwyp2XbO4jBEtczBZ1aVWERTym9v1WVO/ZQvyhWwrThLveCdeXCufy/nauKFJdl79aZ8aDD4pF4nOewB1aLbpsQ=="

$ aws rds-data execute-statement --transaction-id $ID --database users --sql "..."
$ ...
$ aws rds-data execute-statement --transaction-id $ID --database users --sql "..."
$ aws rds-data commit-transaction --transaction-id $ID

If I decide not to commit, I invoke rollback-transaction instead.

Using the Data API with Python and Boto
Since this is an API, programmatic access is easy. Here’s some very simple Python / Boto code:

import boto3

client = boto3.client('rds-data')

response = client.execute_statement(
    secretArn   = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL',
    database    = 'users',
    resourceArn = 'arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1',
    sql         = 'select * from users'
)

for user in response['records']:
  userid     = user[0]['stringValue']
  first_name = user[1]['stringValue']
  last_name  = user[2]['stringValue']
  print(userid + ' ' + first_name + ' ' + last_name)

And the output:

$ python data_api.py
jeffbarr Jeff Barr
carmenbarr Carmen Barr

Genuine, production-quality code would reference the table columns symbolically using the metadata that is returned as part of the response.
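The transaction functions are available from Boto as well. Here's a sketch that wraps a parameterized insert in a transaction; the ARNs are the same placeholders as before, and the column names are assumed for illustration:

import boto3

client = boto3.client('rds-data')

SECRET_ARN   = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL'
RESOURCE_ARN = 'arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1'

# Begin a transaction and capture its identifier.
tx = client.begin_transaction(
    secretArn=SECRET_ARN,
    resourceArn=RESOURCE_ARN,
    database='users'
)

try:
    # Run a parameterized statement inside the transaction
    # (the column names are assumptions).
    client.execute_statement(
        secretArn=SECRET_ARN,
        resourceArn=RESOURCE_ARN,
        database='users',
        transactionId=tx['transactionId'],
        sql='insert into users (userid, first_name, last_name) values (:userid, :first, :last)',
        parameters=[
            {'name': 'userid', 'value': {'stringValue': 'carmenbarr'}},
            {'name': 'first',  'value': {'stringValue': 'Carmen'}},
            {'name': 'last',   'value': {'stringValue': 'Barr'}},
        ]
    )
    client.commit_transaction(
        secretArn=SECRET_ARN,
        resourceArn=RESOURCE_ARN,
        transactionId=tx['transactionId']
    )
except Exception:
    # Undo the work if anything fails before the commit.
    client.rollback_transaction(
        secretArn=SECRET_ARN,
        resourceArn=RESOURCE_ARN,
        transactionId=tx['transactionId']
    )
    raise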

By the way, my Amazon Aurora Serverless cluster was configured to scale capacity all the way down to zero when not active. Here’s what the scaling activity looked like while I was writing this post and running the queries:

Now Available
You can make use of the Data API today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. There is no charge for the API, but you will pay the usual price for data transfer out of AWS.

Jeff;

New – AWS IoT Events: Detect and Respond to Events at Scale

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-iot-events-detect-and-respond-to-events-at-scale/

As you may have been able to tell from many of the announcements that we have made over the last four or five years, we are working to build a wide-ranging set of Internet of Things (IoT) services and capabilities. Here’s a quick recap:

October 2015 – AWS IoT Core – A fundamental set of Cloud Services for Connected Devices.

June 2017 – AWS Greengrass – The ability to Run AWS Lambda Functions on Connected Devices.

November 2017 – AWS IoT Device Management – Onboarding, Organization, Monitoring, and Remote Management of Connected Devices.

November 2017 – AWS IoT Analytics – Advanced Data Analysis for IoT Devices.

November 2017 – Amazon FreeRTOS – An IoT Operating System for Microcontrollers.

April 2018 – Greengrass ML Inference – The power to do Machine Learning Inference at the Edge.

August 2018 – AWS IoT Device Defender – A service that helps to Keep Your Connected Devices Safe.

Last November we also announced our plans to launch four new IoT Services:

You can use these services individually or together to build all sorts of powerful, connected applications!

AWS IoT Events Now Available
Today we are making AWS IoT Events available in production form in four AWS Regions. You can use this service to monitor and respond to events (patterns of data that identify changes in equipment or facilities) at scale. You can detect a misaligned robot arm, a motion sensor that triggers outside of business hours, an unsealed freezer door, or a motor that is running outside of tolerance, all with the goal of driving faster and better-informed decisions.

As you will see in a moment, you can easily create detector models that represent your devices, their states, and the transitions (driven by sensors and events, both known as inputs) between the states. The models can trigger actions when critical events are detected, allowing you to build robust, highly automated systems. Actions can, for example, send a text message to a service technician or invoke an AWS Lambda function.

You can access AWS IoT Events from the AWS IoT Events Console or by writing code that calls the AWS IoT Events API functions. I'll use the Console, and I will start by creating a detector model. I click Create detector model to begin:

I have three options; I’ll go with the demo by clicking Launch demo with inputs:

This shortcut creates an input and a model, and also enables some “demo” functionality that sends data to the model. The model looks like this:

Before examining the model, let’s take a look at the input. I click on Inputs in the left navigation to see them:

I can see all of my inputs at a glance; I click on the newly created input to learn more:

This input represents the battery voltage measured from a device that is connected to a particular powerwallId:

Ok, let’s return to (and dissect) the detector model! I return to the navigation, click Detector models, find my model, and click it:

There are three Send options at the top; each one sends data (an input) to the detector model. I click on Send data for Charging to get started. This generates a message that looks like this; I click Send data to do just that:

Then I click Send data for Charged to indicate that the battery is fully charged. The console shows me the state of the detector:

Each time an input is received, the detector processes it. Let’s take a closer look at the detector. It has three states (Charging, Charged, and Discharging):

The detector starts out in the Charging state, and transitions to Charged when the Full_charge event is triggered. Here’s the definition of the event, including the trigger logic:

The trigger logic is evaluated each time an input is received (your IoT app must call BatchPutMessage to inform AWS IoT Events). If the trigger logic evaluates to a true condition, the model transitions to the new (destination) state, and it can also initiate an event action. This transition has no actions; I can add one (or more) by clicking Add action. My choices are:

  • Send MQTT Message – Send a message to an MQTT topic.
  • Send SNS Message – Send a message to an SNS target, identified by an ARN.
  • Set Timer – Set, reset, or destroy a timer. Timer durations can be expressed in seconds, minutes, hours, days, or months.
  • Set Variable – Set, increment, or decrement a variable.

Returning (once again) to the detector, I can modify the states as desired. For example, I could fine-tune the Discharging aspect of the detector by adding a LowBattery state:

After I create my inputs and my detector, I Publish the model so that my IoT devices can use and benefit from it. I click Publish and fill in a few details:

The Detector generation method has two options. I can Create a detector for each unique key value (if I have a bunch of devices), or I can Create a single detector (if I have one device). If I choose the first option, I need to choose the key that separates one device from another.

Once my detector has been published, I can send data to it using AWS IoT Analytics, IoT Core, or from a Lambda function.
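For example, a sketch of feeding the demo detector from code might look like this. The input name and payload fields are assumptions modeled on the demo's battery-voltage input:

import boto3, json, uuid

# Messages go to the iotevents-data endpoint, not the iotevents
# (model-management) endpoint.
iotevents_data = boto3.client('iotevents-data')

# The input name and payload fields are assumptions based on the demo.
iotevents_data.batch_put_message(
    messages=[{
        'messageId': str(uuid.uuid4()),
        'inputName': 'PowerwallInput',
        'payload': json.dumps({
            'powerwallId': 'pw-001',
            'batteryVoltage': 44
        }).encode('utf-8')
    }]
)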

Get Started Today
We are launching AWS IoT Events in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions and you can start using it today!

Jeff;

New – Opt-in to Default Encryption for New EBS Volumes

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/

My colleagues on the AWS team are always looking for ways to make it easier and simpler for you to protect your data from unauthorized access. This work is visible in many different ways, and includes the AWS Cloud Security page, the AWS Security Blog, a rich collection of AWS security white papers, an equally rich set of AWS security, identity, and compliance services, and a wide range of security features within individual services. As you might recall from reading this blog, many AWS services support encryption at rest & in transit, logging, IAM roles & policies, and so forth.

Default Encryption
Today I would like to tell you about a new feature that makes the use of encrypted Amazon EBS (Elastic Block Store) volumes even easier. This launch builds on some earlier EBS security launches including:

You can now specify that you want all newly created EBS volumes to be created in encrypted form, with the option to use the default key provided by AWS, or a key that you create. Because keys and EC2 settings are specific to individual AWS regions, you must opt-in on a region-by-region basis.

This new feature helps you reach your protection and compliance goals by making it simpler to ensure that newly created volumes are created in encrypted form. It will not affect existing unencrypted volumes.

If you use IAM policies that require the use of encrypted volumes, you can use this feature to avoid launch failures that would occur if unencrypted volumes were inadvertently referenced when an instance is launched. Your security team can enable encryption by default without having to coordinate with your development team, and with no other code or operational changes.

Encrypted EBS volumes deliver the specified instance throughput, volume performance, and latency, at no extra charge. I open the EC2 Console, make sure that I am in the region of interest, and click Settings to get started:

Then I select Always encrypt new EBS volumes:

I can click Change the default key and choose one of my keys as the default:

Either way, I click Update to proceed. One thing to note here: This setting applies to a single AWS region; I will need to repeat the steps above for each region of interest, checking the option and choosing the key.

Going forward, all EBS volumes that I create in this region will be encrypted, with no additional effort on my part. When I create a volume, I can use the key that I selected in the EC2 Settings, or I can select a different one:

Any snapshots that I create are encrypted with the key that was used to encrypt the volume:

If I use the volume to create a snapshot, I can use the original key or I can choose another one:

Things to Know
Here are some important things that you should know about this important new AWS feature:

Older Instance Types – After you enable this feature, you will not be able to launch any more C1, M1, M2, or T1 instances or attach newly encrypted EBS volumes to existing instances of these types. We recommend that you migrate to newer instance types.

AMI Sharing – As I noted above, we recently gave you the ability to share encrypted AMIs with other AWS accounts. However, you cannot share them publicly, and you should use a separate account to create community AMIs, Marketplace AMIs, and public snapshots. To learn more, read How to Share Encrypted AMIs Across Accounts to Launch Encrypted EC2 Instances.

Other AWS Services – AWS services such as Amazon Relational Database Service (RDS) and Amazon WorkSpaces that use EBS for storage perform their own encryption and key management and are not affected by this launch. Services such as Amazon EMR that create volumes within your account will automatically respect the encryption setting, and will use encrypted volumes if the always-encrypt feature is enabled.

API / CLI Access – You can also access this feature from the EC2 CLI and the API (see the sketch after this list).

No Charge – There is no charge to enable or use encryption. If you are using encrypted AMIs and create a separate one for each AWS account, you can now share the AMI with other accounts, leading to a reduction in storage utilization and charges.

Per-Region – As noted above, you can opt-in to default encryption on a region-by-region basis.
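To illustrate the API access mentioned above, here's a boto3 sketch that opts a region in and sets a custom default key; the key ARN is a placeholder, and remember that the setting must be applied per region:

import boto3

# Settings are per-region, so create a client for each region of interest.
ec2 = boto3.client('ec2', region_name='us-east-1')

# Opt in to encryption by default for newly created EBS volumes.
ec2.enable_ebs_encryption_by_default()

# Optionally choose a customer managed CMK (placeholder key ARN).
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId='arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555'
)

# Confirm the setting.
print(ec2.get_ebs_encryption_by_default()['EbsEncryptionByDefault'])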

Available Now
This feature is available now and you can start using it today in all public AWS regions and in GovCloud. It is not available in the AWS regions in China.

Jeff;