Tag Archives: launch

AWS Named as a Leader in Gartner’s Infrastructure as a Service (IaaS) Magic Quadrant for the 9th Consecutive Year

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-gartners-infrastructure-as-a-service-iaas-magic-quadrant-for-the-9th-consecutiveyear/

My colleagues on the AWS service teams work to deliver what customers want today, and also do their best to anticipate what they will need tomorrow. This Customer Obsession, along with our commitment to Hire and Develop the Best (two of the fourteen Amazon Leadership Principles), helps us to figure out, and then to deliver on, our vision. It is always good to see that our hard work continues to delight customers, and to be recognized by Gartner and other leading analysts.

For the ninth consecutive year, AWS has secured the top-right corner of the Leaders quadrant in Gartner’s Magic Quadrant for Cloud Infrastructure as a Service (IaaS), earning the highest placement for Ability to Execute and the furthest position for Completeness of Vision:

The full report contains a lot of detail and is a great summary of the features and factors that our customers examine when choosing a cloud provider.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Meet the Newest AWS News Bloggers!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/meet-the-newest-aws-news-bloggers/

I wrote my first post for this blog way back in 2004! Over the course of the first decade, the amount of time that I devoted to the blog grew from a small fraction of my day to a full day. In the early days my email inbox was my primary source of information about upcoming launches, and also my primary tool for managing my work backlog. When that proved to be unscalable, Ana came onboard and immediately built a ticketing system and set up a process for teams to request blog posts. Today, a very capable team (Greg, Devin, and Robin) takes care of tickets, platforms, comments, metrics, and so forth so that I can focus on what I like to do best: using new services and writing about them!

Over the years we have experimented with a couple of different strategies to scale the actual writing process. If you are a long-time reader you may have seen posts from Mike, Jinesh, Randall, Tara, Shaun, and a revolving slate of guest bloggers.

News Bloggers
I would like to introduce you to our current lineup of AWS News Bloggers. Like me, the bloggers have a technical background and are prepared to go hands-on with every new service and feature. Here’s our roster:

Steve Roberts (@bellevuesteve) – Steve focuses on .NET tools and technologies.

Julien Simon (@julsimon) – Julien likes to help developers and enterprises to bring their ideas to life.

Brandon West (@bwest) – Brandon leads our developer relations team in the Americas, and has written a book on the topic.

Martin Beeby (@thebeebs) – Martin focuses on .NET applications, and has worked as a C# and VB developer since 2001.

Danilo Poccia (@danilop) – Danilo works with companies of any size to support innovation. He is the author of AWS Lambda in Action.

Sébastien Stormacq (@sebesto) – Sébastien works with builders to unlock the value of the AWS cloud, using his secret blend of passion, enthusiasm, customer advocacy, curiosity, and creativity.

We are already gearing up for re:Invent 2019, and can’t wait to bring you a rich set of blog posts. Stay tuned!

Jeff;

AWS New York Summit 2019 – Summary of Launches & Announcements

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-new-york-summit-2019-summary-of-launches-announcements/

The AWS New York Summit just wrapped up! Here’s a quick summary of what we launched and announced:

Amazon EventBridge – This new service builds on the event-processing model that forms the basis for Amazon CloudWatch Events, and makes it easy for you to integrate your AWS applications with SaaS applications such as Zendesk, Datadog, SugarCRM, and Onelogin. Read my blog post, Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications, to learn more.

Werner announces EventBridge – Photo by Serena

Cloud Development Kit – CDK is now generally available, with support for TypeScript and Python. Read Danilo's post, AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available, to learn more.

Fluent Bit Plugins for AWS – Fluent Bit is a multi-platform, open source log processor and forwarder that is compatible with Docker and Kubernetes environments. You can now build a container image that includes new Fluent Bit plugins for Amazon CloudWatch and Amazon Kinesis Data Firehose. The plugins route logs to CloudWatch, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service. Read Centralized Container Logging with Fluent Bit to learn more.


Nicki, Randall, Robert, and Steve – Photo by Deepak

AWS Toolkit for VS Code – This toolkit lets you develop and test locally (including step-through debugging) in a Lambda-like environment, and then deploy to the AWS Region of your choice. You can invoke Lambda functions locally or remotely, with full control of the function configuration, including the event payload and environment variables. To learn more, read Announcing AWS Toolkit for Visual Studio Code.

Amazon CloudWatch Container Insights (preview) – You can now create CloudWatch Dashboards that monitor the performance and health of your Amazon ECS and AWS Fargate clusters, tasks, containers, and services. Read Using Container Insights to learn more.

CloudWatch Anomaly Detection (preview) – This cool addition to CloudWatch uses machine learning to continuously analyze system and application metrics, determine a nominal baseline, and surface anomalies, all without user intervention. It adapts to trends, and helps to identify unexpected changes in performance or behavior. Read the CloudWatch Anomaly Detection documentation to learn more.
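To give a rough idea of the API surface, here is a minimal sketch using the AWS SDK for Python (boto3) that trains an anomaly detection model for a single metric; the namespace, metric, and function name are placeholders, not values from this post:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Train an anomaly detection model on an existing metric
# (the function name below is a placeholder)
cloudwatch.put_anomaly_detector(
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    Stat="Average",
)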

Amazon SageMaker Managed Spot Training (coming soon) – You will soon be able to use Amazon EC2 Spot Instances to lower the cost of training your machine learning models. This upcoming enhancement to SageMaker will lower your training costs by up to 70%, and can be used in conjunction with Automatic Model Tuning.

Jeff;

 

AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-cloud-development-kit-cdk-typescript-and-python-are-now-generally-available/

Managing your Infrastructure as Code provides great benefits and is often a stepping stone for a successful application of DevOps practices. Instead of relying on manually performed steps, both administrators and developers can use configuration files to automate provisioning of the compute, storage, network, and application services required by their applications.

For example, defining your Infrastructure as Code makes it possible to:

  • Keep infrastructure and application code in the same repository
  • Make infrastructure changes repeatable and predictable across different environments, AWS accounts, and AWS regions
  • Replicate production in a staging environment to enable continuous testing
  • Replicate production in a performance test environment that you use just for the time required to run a stress test
  • Release infrastructure changes using the same tools as code changes, so that deployments include infrastructure updates
  • Apply software development best practices to infrastructure management, such as code reviews, or deploying small changes frequently

Configuration files used to manage your infrastructure are traditionally implemented as YAML or JSON text files, but that approach forgoes most of the advantages of modern programming languages. With YAML in particular, it can be very difficult to detect a file that was truncated while being transferred to another system, or a line that went missing when copying and pasting from one template to another.

Wouldn’t it be better if you could use the expressive power of your favorite programming language to define your cloud infrastructure? For this reason, last year we introduced the AWS Cloud Development Kit (CDK) in developer preview, an extensible open-source software development framework to model and provision your cloud infrastructure using familiar programming languages.

I am super excited to share that the AWS CDK for TypeScript and Python is generally available today!

With the AWS CDK you can design, compose, and share your own custom components that incorporate your unique requirements. For example, you can create a component setting up your own standard VPC, with its associated routing and security configurations. Or a standard CI/CD pipeline for your microservices using tools like AWS CodeBuild and CodePipeline.

Personally I really like that by using the AWS CDK, you can build your application, including the infrastructure, in your IDE, using the same programming language and with the support of autocompletion and parameter suggestion that modern IDEs have built in, without having to do a mental switch between one tool, or technology, and another. The AWS CDK makes it really fun to quickly code up your AWS infrastructure, configure it, and tie it together with your application code!

How the AWS CDK works
Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an S3 bucket or an SNS topic, a static website, or even a complex, multi-stack application that spans multiple AWS accounts and regions. To foster reusability, constructs can include other constructs. You compose constructs into stacks, which you can deploy into an AWS environment, and apps, collections of one or more stacks.

How to use the AWS CDK
We continuously add new features based on the feedback of our customers. That means that when creating an AWS resource, you often have to specify many options and dependencies. For example, if you create a VPC you have to think about how many Availability Zones (AZs) to use and how to configure subnets to give private and public access to the resources that will be deployed in the VPC.

To make it easy to define the state of AWS resources, the AWS Construct Library exposes the full richness of many AWS services with sensible defaults that you can customize as needed. In the case above, the VPC construct creates, by default, public and private subnets in each Availability Zone, using three AZs if you don’t specify otherwise.
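As a minimal sketch of how you might override one of those defaults (written in Python, mirroring the Python example later in this post; the stack name and two-AZ choice are just illustrations):

from aws_cdk import core, aws_ec2 as ec2

class NetworkStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        # Defaults give you public and private subnets in up to three AZs;
        # here the VPC is explicitly limited to two AZs
        ec2.Vpc(self, "MyVPC", max_azs=2)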

For creating and managing CDK apps, you can use the AWS CDK Command Line Interface (CLI), a command-line tool that requires Node.js and can be installed quickly with:

npm install -g aws-cdk

After that, you can use the CDK CLI with different commands:

  • cdk init to initialize a new CDK project in the current directory, in one of the supported programming languages
  • cdk synth to print the CloudFormation template for this app
  • cdk deploy to deploy the app in your AWS Account
  • cdk diff to compare what is in the project files with what has been deployed

Just run cdk to see more of the available commands and options.

You can easily include the CDK CLI in your deployment automation workflow, for example using Jenkins or AWS CodeBuild.

Let’s use the AWS CDK to build two sample projects, using different programming languages.

An example in TypeScript
For the first project I am using TypeScript to define the infrastructure:

cdk init app --language=typescript

Here’s a simplified view of what I want to build, without going into the details of the public and private subnets in the VPC. There is an online frontend that writes messages to a queue, and an asynchronous backend that consumes messages from the queue:

Inside the stack, the following TypeScript code defines the resources I need, and their relations:

  • First I define the VPC and an Amazon ECS cluster in that VPC. By using the defaults provided by the AWS Construct Library, I don’t need to specify any parameter here.
  • Then I use an ECS pattern that in a few lines of code sets up an Amazon SQS queue and an ECS service running on AWS Fargate to consume the messages in that queue.
  • The ECS pattern library provides higher-level ECS constructs which follow common architectural patterns, such as load balanced services, queue processing, and scheduled tasks.
  • A Lambda function receives the name of the queue created by the ECS pattern as an environment variable, and is granted permission to send messages to the queue.
  • The code of the Lambda function and the Docker image are passed as assets. Assets allow you to bundle files or directories from your project and use them with Lambda or ECS.
  • Finally, an Amazon API Gateway endpoint provides an HTTPS REST interface to the function.
const myVpc = new ec2.Vpc(this, "MyVPC");

const myCluster = new ecs.Cluster(this, "MyCluster", {
  vpc: myVpc
});

const myQueueProcessingService = new ecs_patterns.QueueProcessingFargateService(
  this, "MyQueueProcessingService", {
    cluster: myCluster,
    memoryLimitMiB: 512,
    image: ecs.ContainerImage.fromAsset("my-queue-consumer")
  });

const myFunction = new lambda.Function(
  this, "MyFrontendFunction", {
    runtime: lambda.Runtime.NODEJS_10_X,
    timeout: Duration.seconds(3),
    handler: "index.handler",
    code: lambda.Code.asset("my-front-end"),
    environment: {
      QUEUE_NAME: myQueueProcessingService.sqsQueue.queueName
    }
  });

myQueueProcessingService.sqsQueue.grantSendMessages(myFunction);

const myApi = new apigateway.LambdaRestApi(
  this, "MyFrontendApi", {
    handler: myFunction
  });

I find this code very readable and easier to maintain than the corresponding JSON or YAML. By the way, cdk synth in this case outputs more than 800 lines of plain CloudFormation YAML.

An example in Python
For the second project I am using Python:

cdk init app --language=python

I want to build a Lambda function that is executed every 10 minutes:

When you initialize a CDK project in Python, a virtualenv is set up for you. You can activate the virtualenv and install your project requirements with:

source .env/bin/activate

pip install -r requirements.txt

Note that Python autocompletion may not work with some editors, like Visual Studio Code, if you don’t start the editor from an active virtualenv.

Inside the stack, here’s the Python code defining the Lambda function and the CloudWatch Events rule that periodically invokes the function as its target:

myFunction = aws_lambda.Function(
    self, "MyPeriodicFunction",
    code=aws_lambda.Code.asset("src"),
    handler="index.main",
    timeout=core.Duration.seconds(30),
    runtime=aws_lambda.Runtime.PYTHON_3_7,
)

myRule = aws_events.Rule(
    self, "MyRule",
    schedule=aws_events.Schedule.rate(core.Duration.minutes(10)),
)
myRule.add_target(aws_events_targets.LambdaFunction(myFunction))

Again, this is easy to understand even if you don’t know the details of AWS CDK. For example, durations include the time unit and you don’t have to wonder if they are expressed in seconds, milliseconds, or days. The output of cdk synth in this case is more than 90 lines of plain CloudFormation YAML.

Available Now
There is no charge for using the AWS CDK; you pay only for the AWS resources that it deploys.

To quickly get hands-on with the CDK, start with this awesome step-by-step online tutorial!

More examples of CDK projects, using different programming languages, are available in this repository:

https://github.com/aws-samples/aws-cdk-examples

You can find more information on writing your own constructs here.

The AWS CDK is open source and we welcome your contribution to make it an even better tool:

https://github.com/awslabs/aws-cdk

Check out our source code on GitHub, start building your infrastructure today using TypeScript or Python, or try different languages in developer preview, such as C# and Java, and give us your feedback!

Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-eventbridge-event-driven-aws-integration-for-your-saas-applications/

Many AWS customers also make great use of SaaS (Software as a Service) applications. For example, they use Zendesk to manage customer service & support tickets, PagerDuty to handle incident response, and SignalFx for real-time monitoring. While these applications are quite powerful on their own, they are even more so when integrated into a customer’s own systems, databases, and workflows.

New Amazon EventBridge
In order to support this increasingly common use case, we are launching Amazon EventBridge today. Building on the powerful event processing model that forms the basis for CloudWatch Events, EventBridge makes it easy for our customers to integrate their own AWS applications with SaaS applications. The SaaS applications can be hosted anywhere, and simply publish events to an event bus that is specific to each AWS customer. The asynchronous, event-based model is fast, clean, and easy to use. The publisher (SaaS application) and the consumer (code running on AWS) are completely decoupled, and are not dependent on a shared communication protocol, runtime environment, or programming language. You can use simple Lambda functions to handle events that come from a SaaS application, and you can also route events to a wide variety of other AWS targets. You can store incident or ticket data in Amazon Redshift, train a machine learning model on customer support queries, and much more.

Everything that you already know (and hopefully love) about CloudWatch Events continues to apply, with one important change. In addition to the existing default event bus that accepts events from AWS services, calls to PutEvents, and from other authorized accounts, each partner application that you subscribe to will also create an event source that you can then associate with an event bus in your AWS account. You can select any of your event buses, create EventBridge Rules, and select Targets to invoke when an incoming event matches a rule.

As part of today’s launch we are also opening up a partner program. The integration process is simple and straightforward, and generally requires less than one week of developer time.

All About Amazon EventBridge
Here are some terms that you need to know in order to understand how to use Amazon EventBridge:

Partner – An organization that has integrated their SaaS application with EventBridge.

Customer – An organization that uses AWS, and that has subscribed to a partner’s SaaS application.

Partner Name – A unique name that identifies an Amazon EventBridge partner.

Partner Event Bus – An Event Bus that is used to deliver events from a partner to AWS.

EventBridge can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), or via the AWS SDKs. There are distinct commands and APIs for partners and for customers. Here are some of the most important ones:

Partners – CreatePartnerEventSource, ListPartnerEventSourceAccounts, ListPartnerEventSources, PutPartnerEvents.

Customers – ListEventSources, ActivateEventSource, CreateEventBus, ListEventBuses, PutRule, PutTargets.

Amazon EventBridge for Partners & Customers
As I noted earlier, the integration process is simple and straightforward. You need to allow your customers to enter an AWS account number and to select an AWS region. With that information in hand, you call CreatePartnerEventSource in the desired region, inform the customer of the event source name and tell them that they can accept the invitation to connect, and wait for the status of the event source to change to ACTIVE. Then, each time an event of interest to the customer occurs, you call PutPartnerEvents and reference the event source.
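To make the partner-side flow concrete, here is a rough sketch using the AWS SDK for Python (boto3); the partner event source name, account ID, and event fields are placeholders:

import json
from datetime import datetime

import boto3

events = boto3.client("events")

# Create an event source scoped to one customer account (placeholder values)
source_name = "aws.partner/example.com/12345/tickets"
events.create_partner_event_source(Name=source_name, Account="123456789012")

# Later, publish events to that source whenever something interesting happens
events.put_partner_events(
    Entries=[{
        "Time": datetime.utcnow(),
        "Source": source_name,
        "DetailType": "ticket.created",
        "Detail": json.dumps({"ticket_id": "T-1001", "priority": "high"}),
    }]
)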

The process is just as simple on the customer side. You accept the invitation to connect by calling CreateEventBus to create an event bus associated with the event source. You add rules and targets to the event bus, and prepare your Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events. You can use DeactivateEventSource and ActivateEventSource to control the flow.
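Here is a matching sketch of the customer side, again with boto3 and placeholder names; the event source name is whatever appears in your account after the partner creates it, and the Lambda ARN is an assumption:

import json

import boto3

events = boto3.client("events")

source_name = "aws.partner/example.com/12345/tickets"  # placeholder

# Accepting the invitation: create an event bus associated with the source
events.create_event_bus(Name=source_name, EventSourceName=source_name)

# Route matching events to a Lambda function (the ARN is a placeholder)
events.put_rule(
    Name="high-priority-tickets",
    EventBusName=source_name,
    EventPattern=json.dumps({"source": [source_name]}),
)
events.put_targets(
    Rule="high-priority-tickets",
    EventBusName=source_name,
    Targets=[{
        "Id": "handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handle-ticket",
    }],
)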

Here’s the overall flow (diagram created using SequenceDiagram):

Each partner has the freedom to choose the events that are relevant to their application, and to define the data elements that are included with each event.

Using EventBridge
Starting from the EventBridge Console, I click Partner event sources, find the partner of interest, and click it to learn more:

Each partner page contains additional information about the integration. I read the info, and click Set up to proceed:

The page provides me with a simple, three-step procedure to set up my event source:


After the partner creates the event source, I return to Partner event sources and I can see that the Zendesk event source is Pending:

I click the pending event source, review the details, and then click Associate with event bus:

I have the option to allow other AWS accounts, my Organization, or another Organization to access events on the event bus that I am about to create. After I have confirmed that I trust the origin and have added any additional permissions, I click Associate:

My new event bus is now available, and is listed as a Custom event bus:

I click Rules, select the event bus, and see the rules (none so far) associated with it. Then I click Create rule to make my first rule:

I enter a name and a description for my first rule:

Then I define a pattern, choosing Zendesk as the Service name:

Next, I select a Lambda function as my target:

I can also choose from many other targets:

After I create my rule, it will be activated in response to activities that occur within my Zendesk account. The initial set of events includes TicketCreated, CommentCreated, TagsChanged, AgentAssignmentChanged, GroupAssignmentChanged, FollowersChanged, EmailCCsChanged, CustomFieldChanged, and StatusChanged. Each event includes a rich set of properties; you’ll need to consult the documentation to learn more.

Partner Event Sources
We are launching with ten partner event sources, with more to come:

  • Datadog
  • Zendesk
  • PagerDuty
  • Whispir
  • Saviynt
  • Segment
  • SignalFx
  • SugarCRM
  • OneLogin
  • Symantec

If you have a SaaS application and you are ready to integrate, read more about EventBridge Partner Integration.

Now Available
Amazon EventBridge is available now and you can start using it today in all public AWS regions in the aws partition. Support for the AWS regions in China, and for the Asia Pacific (Osaka) Local Region, is in the works.

Pricing is based on the number of events published to the event buses in your account, billed at $1 for every million events. There is no charge for events published by AWS services.
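For example, publishing 5 million partner events to your event buses in a month would cost $5.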

Jeff;

PS – As you can see from this post, we are paying even more attention to the overall AWS event model, and have a lot of interesting goodies on the drawing board. With this launch, CloudWatch Events has effectively earned a promotion to a top-level service, and I’ll have a lot more to say about that in the future!

Amazon Aurora PostgreSQL Serverless – Now Generally Available

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amazon-aurora-postgresql-serverless-now-generally-available/

The database is usually the most critical part of a software architecture and managing databases, especially relational ones, has never been easy. For this reason, we created Amazon Aurora Serverless, an auto-scaling version of Amazon Aurora that automatically starts up, shuts down and scales up or down based on your application workload.

The MySQL-compatible edition of Aurora Serverless has been available for some time now. I am pleased to announce that the PostgreSQL-compatible edition of Aurora Serverless is generally available today.

Before moving on to the details, let me take the opportunity to congratulate the Amazon Aurora development team, which has just won the 2019 Association for Computing Machinery (ACM) Special Interest Group on Management of Data (SIGMOD) Systems Award!

When you create a database with Aurora Serverless, you set the minimum and maximum capacity. Your client applications transparently connect to a proxy fleet that routes the workload to a pool of resources that are automatically scaled. Scaling is very fast because resources are “warm” and ready to be added to serve your requests.

 

Aurora Serverless does not change how Aurora manages storage. The storage layer is independent of the compute resources used by the database, and there is no need to provision storage in advance. The minimum storage is 10 GB and, based on database usage, Aurora storage automatically grows, up to 64 TB, in 10 GB increments with no impact on database performance.

Creating an Aurora Serverless PostgreSQL Database
Let’s start an Aurora Serverless PostgreSQL database and see the automatic scalability at work. From the Amazon RDS console, I choose to create a database using Amazon Aurora as the engine. Currently, Aurora Serverless is compatible with PostgreSQL version 10.5. When I select that version, the serverless option becomes available.

I give the new DB cluster an identifier, choose my master username, and let Amazon RDS generate a password for me. I will be able to retrieve my credentials during database creation.

I can now select the minimum and maximum capacity for my database, in terms of Aurora Capacity Units (ACUs), and in the additional scaling configuration I choose to pause compute capacity after 5 minutes of inactivity. Based on my settings, Aurora Serverless automatically creates scaling rules for thresholds for CPU utilization, connections, and available memory.
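The same configuration can be scripted; here is a minimal sketch with the AWS SDK for Python (boto3), where the cluster identifier, credentials, and capacity values are placeholders you would adapt to your own workload (you may also need to specify a compatible EngineVersion):

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-postgres",    # placeholder
    Engine="aurora-postgresql",
    EngineMode="serverless",
    MasterUsername="masteruser",                     # placeholder
    MasterUserPassword="choose-a-strong-password",   # placeholder
    ScalingConfiguration={
        "MinCapacity": 2,               # minimum ACUs
        "MaxCapacity": 16,              # maximum ACUs
        "AutoPause": True,              # pause compute when idle
        "SecondsUntilAutoPause": 300,   # 5 minutes of inactivity
    },
)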

Testing Some Load on the Database
To generate some load on the database I am using sysbench on an EC2 instance. There are a couple of Lua scripts bundled with sysbench that can help generate an online transaction processing (OLTP) workload:

  • The first script, parallel_prepare.lua, generates 100,000 rows per table for 24 tables.
  • The second script, oltp.lua, generates a workload against that data using 64 worker threads.

By using those scripts, I start generating load on my database cluster. As you can see from this graph, taken from the RDS console monitoring tab, the serverless database capacity grows and shrinks to follow my requirements. The metric shown on this graph is the number of ACUs used by the database cluster. First it scales up to accommodate the sysbench workload. When I stop the load generator, it scales down and then pauses.

Available Now
Aurora Serverless PostgreSQL is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). With Aurora Serverless, you pay on a per-second basis for the database capacity you use when the database is active, plus the usual Aurora storage costs.

For more information on Amazon Aurora, I recommend this great post explaining why and how it was created:

Amazon Aurora ascendant: How we designed a cloud-native relational database

It’s never been so easy to use a relational database in production. I am so excited to see what you are going to use it for!

AWS Project Resilience – Up to $2K in AWS Credits to Support DR Preparation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-project-resilience-up-to-2k-in-aws-credits-to-support-dr-preparation/

We want to help state and local governments, community organizations, and educational institutions to better prepare for natural and man-made disasters that could affect their ability to run their mission-critical IT systems.

Today we are launching AWS Project Resilience. This new element of our existing Disaster Response program offers up to $2,000 in AWS credits to organizations of the types that I listed above. The program is open to new and existing customers, with distinct benefits for each:

New Customers – Eligible new customers can submit a request for up to $2,000 in AWS Project Resilience credits that can be used to offset costs incurred by storing critical datasets in Amazon Simple Storage Service (S3).

Existing Customers – Eligible existing customers can submit a request for up to $2,000 in AWS Project Resilience credits to offset the costs incurred by engaging CloudEndure and AWS Disaster Response experts to do a deep dive on an existing business continuity architecture.

Earlier this month I sat down with my colleague Ana Visneski to learn more about disaster preparedness, disaster recovery, and AWS Project Resilience. Here’s our video:

To learn more and to apply to the program, visit the AWS Project Resilience page!

Jeff;

 

OpsCenter – A New Feature to Streamline IT Operations

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/opscenter-a-new-feature-to-streamline-it-operations/

The AWS teams are always listening to customers and trying to understand how they can improve services to make customers more productive. A new feature in AWS Systems Manager called OpsCenter exemplifies this approach by enabling customers to aggregate issues, events, and alerts across services, so they can go to one place to view, investigate, and remediate issues, reducing the need to navigate across multiple AWS services.

Issues, events and alerts appear as operations items (OpsItems) in this new console and provide contextual information, historical guidance, and quick solution steps. The feature aims to improve the mean time to resolution, making engineers more productive by ensuring key investigation data is available in one place.

Engineers working on an OpsItem get access to information such as:

  • Event, resource and account details
  • Past OpsItems with similar characteristics
  • Related AWS Config changes and relationships
  • AWS CloudTrail logs
  • Amazon CloudWatch alarms
  • AWS CloudFormation Stack information
  • Other quick-links to access logs and metrics
  • List of runbooks and recommended runbooks
  • Additional information passed to OpsCenter through AWS services

This information helps engineers to investigate and remediate operational issues faster. Engineers can use OpsCenter to view and address problems using the Systems Manager console or via the Systems Manager OpsCenter APIs.

I’ll spend the rest of this blog exploring the capabilities of this new feature. To get started, I open the Systems Manager Console, make sure that I am in the region of interest, and click OpsCenter inside the Operations Management menu which is on the far left of the screen.

After arriving at the OpsCenter screen for the first time and clicking Getting Started, I am presented with a configure sources screen. This screen sets up the system with some example CloudWatch rules that create OpsItems when they trigger. For example, one of the CloudWatch rules raises an alert if an Auto Scaling EC2 instance is stopped or terminated. On this screen, you need to configure and add the ARN of an IAM role that has permission to create OpsItems; the CloudWatch rules use this role to create the OpsItems. You can, of course, also create your own OpsItems through the API or with custom CloudWatch rules.
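For example, here is a rough sketch of creating an OpsItem through the API with the AWS SDK for Python (boto3); the title, source name, and operational data are placeholders, not values generated by the console setup:

import boto3

ssm = boto3.client("ssm")

ssm.create_ops_item(
    Title="Auto Scaling EC2 instance launch failed",            # placeholder
    Description="New instances are failing to launch in BookingsAppASG",
    Source="my-monitoring-tool",                                # placeholder source
    Priority=2,
    OperationalData={
        "AutoScalingGroupName": {"Value": "BookingsAppASG", "Type": "String"}
    },
)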

Now that the system has set up some CloudWatch rules for me, I will test it out by triggering an alert. In the EC2 console, I intentionally deregister (delete) the Amazon Machine Image (AMI) that is associated with my Auto Scaling group, and then increase the Desired Capacity of the group from 2 to 4. The Auto Scaling group will later try to create new instances; however, it will fail because I have deleted the AMI.

As I expected, this triggered the CloudWatch rule to create an OpsItem in the OpsCenter console. There is now one item open in the OpsItem status summary dashboard. I click on it to get more detail on the open OpsItems.

This gives me a list of all the open OpsItems, and I can see that I have one with the title “Auto Scaling EC2 instance launch failed”, which was created by the CloudWatch rules because I deleted the AMI associated with the Auto Scaling group. Clicking on that OpsItem takes me to its details.

From this overview screen I can start to explore the item. Looking around, I can find out more information about this OpsItem and see that it is collecting data from numerous services and presenting it in one place.

Further down the screen I can see other Similar OpsItems and can explore them. In a real situation, this might give me contextual information as to how similar problems were solved in the past, ensuring that operations teams learn from their previous collective experience. I can also manually add a relationship between OpsItems if they are connected. Importantly the Operational data section gives me information about the cause. The status message is particularly useful since it’s calling out the issue: that the AMI does not exist.

On the related resources details screen, I can find out more information about this OpsItem. For example, I can see tag information about the resources alongside relevant CloudWatch alarms. I can explore details from AWS config as well as drill into CloudTrail logs. I can even see if the resources are associated with any CloudFormation stacks.

Earlier on, I created a CloudWatch alarm that alerts when the number of instances in my Auto Scaling group falls below the desired instance threshold (4 instances). As you can see, I don’t need to go into the CloudWatch console to view this; I can see right from this screen that I have an Alarm state for Booking App Instance Count Low.

The Runbooks section is fascinating; what it is offering me is automated ways in which I can resolve this issue. There are several built-in Runbooks; however, I have a custom one which, luckily enough, automates the fix for this exact problem. It will create a new AMI based upon one of the healthy instances in my AutoScaling Group and then update the config to use that new AMI when it creates instances. To run this automation, I select the runbook and press execute.

It asks me to provide some parameters for the automation job. I paste the AutoScaling Group Name (BookingsAppASG) as the only required parameter and press Execute.
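The same runbook execution can be started programmatically; here is a hedged sketch with boto3, where the document name refers to my hypothetical custom runbook rather than a built-in one:

import boto3

ssm = boto3.client("ssm")

response = ssm.start_automation_execution(
    DocumentName="RebuildAMIForAutoScalingGroup",    # hypothetical custom runbook
    Parameters={"AutoScalingGroupName": ["BookingsAppASG"]},
)
print(response["AutomationExecutionId"])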

After a minute or so a green success signifier appears in the Latest Status column of the runbook and I am now able to view the logs and even save the output to operation data on the OpsItem so that other engineers can clearly see what I have done.

 

Back on the OpsCenter OpsItem related resource details screen, I can now see that my CloudWatch alarm is green and in an OK state, signifying that my Auto Scaling group currently has four instances running and I am safe to resolve the OpsItem.

OpsCenter is available now, and you can start using it today in all public AWS Regions, so why not open the console and start exploring all the ways it can save you and your team valuable time?

EC2 Instance Update – Two More Sizes of M5 & R5 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-two-more-sizes-of-m5-r5-instances/

When I introduced the Nitro system last year I said:

The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. We will deliver new instance types more quickly than ever in the months to come, with the goal of helping you to build, migrate, and run even more types of workloads.

Today I am happy to make good on that promise, with the introduction of two additional sizes of the Intel and AMD-powered M5 and R5 instances, including optional NVMe storage. These additional sizes will make it easier for you to find an instance size that is a perfect match for your workload.

M5 Instances
These instances are designed for general-purpose workloads such as web servers, app servers, dev/test environments, gaming, logging, and media processing. Here are the specs:

Instance Name   vCPUs   RAM       Storage               EBS-Optimized Bandwidth   Network Bandwidth
m5.8xlarge      32      128 GiB   EBS Only              5 Gbps                    10 Gbps
m5.16xlarge     64      256 GiB   EBS Only              10 Gbps                   20 Gbps
m5a.8xlarge     32      128 GiB   EBS Only              3.5 Gbps                  Up to 10 Gbps
m5a.16xlarge    64      256 GiB   EBS Only              7 Gbps                    12 Gbps
m5d.8xlarge     32      128 GiB   2 x 600 GB NVMe SSD   5 Gbps                    10 Gbps
m5d.16xlarge    64      256 GiB   4 x 600 GB NVMe SSD   10 Gbps                   20 Gbps

If you are currently using m4.10xlarge or m4.16xlarge instances, you now have several upgrade paths.

To learn more, read M5 – The Next Generation of General-Purpose EC2 Instances, New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances, and EC2 Instance Update – M5 Instances with Local NVMe Storage.

R5 Instances
These instances are designed for data mining, in-memory analytics, caching, simulations, and other memory-intensive workloads. Here are the specs:

Instance Name   vCPUs   RAM       Storage               EBS-Optimized Bandwidth   Network Bandwidth
r5.8xlarge      32      256 GiB   EBS Only              5 Gbps                    10 Gbps
r5.16xlarge     64      512 GiB   EBS Only              10 Gbps                   20 Gbps
r5a.8xlarge     32      256 GiB   EBS Only              3.5 Gbps                  Up to 10 Gbps
r5a.16xlarge    64      512 GiB   EBS Only              7 Gbps                    12 Gbps
r5d.8xlarge     32      256 GiB   2 x 600 GB NVMe SSD   5 Gbps                    10 Gbps
r5d.16xlarge    64      512 GiB   4 x 600 GB NVMe SSD   10 Gbps                   20 Gbps

If you are currently using r4.8xlarge or r4.16xlarge instances, you now have several easy and powerful upgrade paths.

To learn more, read Amazon EC2 Instance Update – Faster Processors and More Memory.

Things to Know
Here are a couple of things to keep in mind when you use these new instances:

Processor Choice – You can choose between Intel and AMD EPYC processors (AMD-powered instance names include an “a”). Read my post, New Lower-Cost AMD-Powered M5a and R5a EC2 Instances, to learn more.

AMIs – You can use the same AMIs that you use with your existing M5 and R5 instances.

Regions – The new sizes are available in all AWS Regions where the existing sizes are already available.

Local NVMe Storage – On “d” instances with local NVMe storage, the devices are encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated. The local devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
The new sizes are available in On-Demand, Spot, and Reserved Instance form and you can start using them today!

Jeff;

 

New – VPC Traffic Mirroring – Capture & Inspect Network Traffic

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-vpc-traffic-mirroring/

Running a complex network is not an easy job. In addition to simply keeping it up and running, you need to keep an ever-watchful eye out for unusual traffic patterns or content that could signify a network intrusion, a compromised instance, or some other anomaly.

VPC Traffic Mirroring
Today we are launching VPC Traffic Mirroring. This is a new feature that you can use with your existing Virtual Private Clouds (VPCs) to capture and inspect network traffic at scale. This will allow you to:

Detect Network & Security Anomalies – You can extract traffic of interest from any workload in a VPC and route it to the detection tools of your choice. You can detect and respond to attacks more quickly than is possible with traditional log-based tools.

Gain Operational Insights – You can use VPC Traffic Mirroring to get the network visibility and control that will let you make security decisions that are better informed.

Implement Compliance & Security Controls – You can meet regulatory & compliance requirements that mandate monitoring, logging, and so forth.

Troubleshoot Issues – You can mirror application traffic internally for testing and troubleshooting. You can analyze traffic patterns and proactively locate choke points that will impair the performance of your applications.

You can think of VPC Traffic Mirroring as a “virtual fiber tap” that gives you direct access to the network packets flowing through your VPC. As you will soon see, you can choose to capture all traffic or you can use filters to capture the packets that are of particular interest to you, with an option to limit the number of bytes captured per packet. You can use VPC Traffic Mirroring in a multi-account AWS environment, capturing traffic from VPCs spread across many AWS accounts and then routing it to a central VPC for inspection.

You can mirror traffic from any EC2 instance that is powered by the AWS Nitro system (A1, C5, C5d, M5, M5a, M5d, R5, R5a, R5d, T3, and z1d as I write this).

Getting Started with VPC Traffic Mirroring
Let’s review the key elements of VPC Traffic Mirroring and then set it up:

Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.

Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.

Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.

Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.

You can set this up using the VPC Console, EC2 CLI, or the EC2 API, with CloudFormation support in the works. I’ll use the Console.

I already have the two ENIs that I will use as my mirror source and mirror destination (in a real-world use case I would probably use an NLB as the destination):

The MirrorTestENI_Source and MirrorTestENI_Destination ENIs are already attached to suitable EC2 instances. I open the VPC Console and scroll down to the Traffic Mirroring items, then click Mirror Targets:

I click Create traffic mirror target:

I enter a name and description, choose the Network Interface target type, and select my ENI from the menu. I add a Blog tag to my target, as is my practice, and click Create:

My target is created and ready to use:

Now I click Mirror Filters and Create traffic mirror filter. I create a simple filter that captures inbound traffic on three ports (22, 80, and 443), and click Create:

Again, it is created and ready to use in seconds:

Next, I click Mirror Sessions and Create traffic mirror session. I create a session that uses MirrorTestENI_Source, MainTarget, and MyFilter, allow AWS to choose the VXLAN network identifier, and indicate that I want the entire packet mirrored:

And I am all set. Traffic from my mirror source that matches my filter is encapsulated as specified in RFC 7348 and delivered to my mirror target. I can then use tools like Suricata to capture, analyze, and visualize it.
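If you prefer to script the setup, here is a rough equivalent using the EC2 API via boto3; the ENI IDs are placeholders, and the filter only captures inbound HTTPS to keep the sketch short:

import boto3

ec2 = boto3.client("ec2")

# Target: where mirrored packets are delivered (placeholder destination ENI)
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0destination0000000",
    Description="Mirror target",
)["TrafficMirrorTarget"]

# Filter with a single inbound rule for TCP port 443
filt = ec2.create_traffic_mirror_filter(Description="Inbound HTTPS only")["TrafficMirrorFilter"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,                                       # TCP
    DestinationPortRange={"FromPort": 443, "ToPort": 443},
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Session: connects the source ENI to the target through the filter
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0source00000000000",      # placeholder source ENI
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
    SessionNumber=1,
    Description="Mirror session",
)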

Things to Know
Here are a couple of things to keep in mind:

Sessions Per ENI – You can have up to three active sessions on each ENI.

Cross-VPC – The source and target ENIs can be in distinct VPCs as long as they are peered to each other or connected through Transit Gateway.

Scaling & HA – In most cases you should plan to mirror traffic to a Network Load Balancer and then run your capture & analysis tools on an Auto Scaled fleet of EC2 instances behind it.

Bandwidth – The replicated traffic generated by each instance will count against the overall bandwidth available to the instance. If traffic congestion occurs, mirrored traffic will be dropped first.

Now Available
VPC Traffic Mirroring is available now and you can start using it today in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia). Support for those regions will be added soon. You pay an hourly fee (starting at $0.015 per hour) for each mirror source; see the VPC Pricing page for more info.

Jeff;

 

AWS Security Hub Now Generally Available

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/aws-security-hub-now-generally-available/

I’m a developer, or at least that’s what I tell myself while coming to terms with being a manager. I’m definitely not an infosec expert. I’ve been paged more than once in my career because something I wrote or configured caused a security concern. When systems enable frequent deploys and remove gatekeepers for experimentation, sometimes a non-compliant resource is going to sneak by. That’s why I love tools like AWS Security Hub, a service that enables automated compliance checks and aggregated insights from a variety of services. With guardrails like these in place to make sure things stay on track, I can experiment more confidently. And with a single place to view compliance findings from multiple systems, infosec feels better about letting me self-serve.

With cloud computing, we have a shared responsibility model when it comes to compliance and security. AWS handles the security of the cloud: everything from the security of our data centers up to the virtualization layer and host operating system. Customers handle security in the cloud: the guest operating system, configuration of systems, and secure software development practices.

Today, AWS Security Hub is out of preview and available for general use to help you understand the state of your security in the cloud. It works across AWS accounts and integrates with many AWS services and third-party products. You can also use the Security Hub API to create your own integrations.

Getting Started

When you enable AWS Security Hub, permissions are automatically created via IAM service-linked roles. Automated, continuous compliance checks begin right away. Compliance standards determine these compliance checks and rules. The first compliance standard available is the Center for Internet Security (CIS) AWS Foundations Benchmark. We’ll add more standards this year.

The results of these compliance checks are called findings. Each finding tells you the severity of the issue, which system reported it, which resources it affects, and a lot of other useful metadata. For example, you might see a finding that lets you know that multi-factor authentication should be enabled for a root account, or that there are credentials that haven’t been used for 90 days and should be revoked.

Findings can be grouped into insights using aggregation statements and filters.

Integrations

In addition to the compliance standards findings, AWS Security Hub also aggregates and normalizes data from a variety of services. It is a central resource for findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and from 30 AWS partner security solutions.

AWS Security Hub also supports importing findings from custom or proprietary systems. Findings must be formatted as AWS Security Finding Format JSON objects. Here’s an example of an object I created that meets the minimum requirements for the format. To make it work for your account, switch out the AwsAccountId and the ProductArn. To get your ProductArn for custom findings, replace REGION and ACCOUNT_ID in the following string: arn:aws:securityhub:REGION:ACCOUNT_ID:product/ACCOUNT_ID/default.

{
    "Findings": [{
        "AwsAccountId": "12345678912",
        "CreatedAt": "2019-06-13T22:22:58Z",
        "Description": "This is a custom finding from the API",
        "GeneratorId": "api-test",
        "Id": "us-east-1/12345678912/98aebb2207407c87f51e89943f12b1ef",
        "ProductArn": "arn:aws:securityhub:us-east-1:12345678912:product/12345678912/default",
        "Resources": [{
            "Type": "Other",
            "Id": "i-decafbad"
        }],
        "SchemaVersion": "2018-10-08",
        "Severity": {
            "Product": 2.5,
            "Normalized": 11
        },
        "Title": "Security Finding from Custom Software",
        "Types": [
            "Software and Configuration Checks/Vulnerabilities/CVE"
        ],
        "UpdatedAt": "2019-06-13T22:22:58Z"
    }]
}

Then I wrote a quick node.js script that I named importFindings.js to read this JSON file and send it off to AWS Security Hub via the AWS JavaScript SDK.

const fs    = require('fs');        // For file system interactions
const util  = require('util');      // To wrap fs API with promises
const AWS   = require('aws-sdk');   // Load the AWS SDK

AWS.config.update({region: 'us-east-1'});

// Create our Security Hub client
const sh = new AWS.SecurityHub();

// Wrap readFile so it returns a promise and can be awaited 
const readFile = util.promisify(fs.readFile);

async function getFindings(path) {
    try {
        // wait for the file to be read...
        let fileData = await readFile(path);

        // ...then parse it as JSON and return it
        return JSON.parse(fileData);
    }
    catch (error) {
        console.error(error);
    }
}

async function importFindings() {
    // load the findings from our file
    const findings = await getFindings('./findings.json');

    try {
        // call the AWS Security Hub BatchImportFindings endpoint
        const response = await sh.batchImportFindings(findings).promise();
        console.log(response);
    }
    catch (error) {
        console.error(error);
    }
}

// Engage!
importFindings();

A quick run of node importFindings.js results in { FailedCount: 0, SuccessCount: 1, FailedFindings: [] }. And now I can see my custom finding in the Security Hub console:

Custom Actions

AWS Security Hub can integrate with response and remediation workflows through the use of custom actions. With custom actions, a batch of selected findings is used to generate CloudWatch events. With CloudWatch Rules, these events can trigger other actions such as sending notifications via a chat system or paging tool, or sending events to a visualization service.

First, we open Settings in the AWS Security Hub console and select Custom Actions. We add a custom action and note its ARN.

Then we create a CloudWatch Rule using the custom action we created as a resource in the event pattern, like this:

{
  "source": [
    "aws.securityhub"
  ],
  "detail-type": [
    "Security Hub Findings - Custom Action"
  ],
  "resources": [
    "arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing"
  ]
}

Our CloudWatch Rule can have many different kinds of targets, such as Amazon Simple Notification Service (SNS) Topics, Amazon Simple Queue Service (SQS) Queues, and AWS Lambda functions. Once our action and rule are in place, we can select findings, and then choose our action from the Actions dropdown list. This will send the selected findings to Amazon CloudWatch Events. Those events will match our rule, and the event targets will be invoked.
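The same rule and target can also be created programmatically; here is a hedged sketch with boto3 that wires the custom action’s events to an SNS topic (the custom action ARN matches the pattern above, and the topic ARN is a placeholder):

import json

import boto3

events = boto3.client("events")

custom_action_arn = "arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing"

# Rule that matches events emitted when findings are sent to the custom action
events.put_rule(
    Name="SecurityHubCustomAction",
    EventPattern=json.dumps({
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Custom Action"],
        "resources": [custom_action_arn],
    }),
)

# Send matching events to an SNS topic (placeholder ARN; the topic policy
# must also allow CloudWatch Events to publish to it)
events.put_targets(
    Rule="SecurityHubCustomAction",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-west-2:123456789012:security-alerts"}],
)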

Important Notes

  • AWS Config must be enabled for Security Hub compliance checks to run.
  • AWS Security Hub is available in 15 regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai).
  • AWS Security Hub does not transfer data outside of the regions where it was generated. Data is not consolidated across multiple regions.

AWS Security Hub is already the type of service that I’ll enable on the majority of the AWS accounts I operate. As more compliance standards become available this year, I expect it will become a standard tool in many toolboxes. A 30-day free trial is available so you can try it out and get an estimate of what your costs would be. As always, we want to hear your feedback and understand how you’re using AWS Security Hub. Stay in touch, and happy building!

— Brandon

AWS Control Tower – Set up & Govern a Multi-Account AWS Environment

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-control-tower-set-up-govern-a-multi-account-aws-environment/

Earlier this month I met with an enterprise-scale AWS customer. They told me that they are planning to go all-in on AWS, and want to benefit from all that we have learned about setting up and running AWS at scale. In addition to setting up a Cloud Center of Excellence, they want to set up a secure environment for teams to provision development and production accounts in alignment with our recommendations and best practices.

AWS Control Tower
Today we are announcing general availability of AWS Control Tower. This service automates the process of setting up a new baseline multi-account AWS environment that is secure, well-architected, and ready to use. Control Tower incorporates the knowledge that AWS Professional Services has gained over the course of thousands of successful customer engagements, and also draws from the recommendations found in our whitepapers, documentation, the Well-Architected Framework, and training. The guidance offered by Control Tower is opinionated and prescriptive, and is designed to accelerate your cloud journey!

AWS Control Tower builds on multiple AWS services including AWS Organizations, AWS Identity and Access Management (IAM) (including Service Control Policies), AWS Config, AWS CloudTrail, and AWS Service Catalog. You get a unified experience built around a collection of workflows, dashboards, and setup steps. AWS Control Tower automates a landing zone to set up a baseline environment that includes:

  • A multi-account environment using AWS Organizations.
  • Identity management using AWS Single Sign-On (SSO).
  • Federated access to accounts using AWS SSO.
  • Centralized logging from AWS CloudTrail and AWS Config, stored in Amazon S3.
  • Cross-account security audits using AWS IAM and AWS SSO.

Before diving in, let’s review a couple of key Control Tower terms:

Landing Zone – The overall multi-account environment that Control Tower sets up for you, starting from a fresh AWS account.

Guardrails – Automated implementations of policy controls, with a focus on security, compliance, and cost management. Guardrails can be preventive (blocking actions that are deemed as risky), or detective (raising an alert on non-conformant actions).

Blueprints – Well-architected design patterns that are used to set up the Landing Zone.

Environment – An AWS account and the resources within it, configured to run an application. Users make requests (via Service Catalog) for new environments and Control Tower uses automated workflows to provision them.

Using Control Tower
Starting from a brand new AWS account that is both Master Payer and Organization Master, I open the Control Tower Console and click Set up landing zone to get started:

AWS Control Tower will create AWS accounts for log archiving and for auditing, and requires email addresses that are not already associated with an AWS account. I enter two addresses, review the information within Service permissions, give Control Tower permission to administer AWS resources and services, and click Set up landing zone:

The setup process runs for about an hour, and provides status updates along the way:

Early in the process, Control Tower sends a handful of email requests to verify ownership of the account, invite the account to participate in AWS SSO, and to subscribe to some SNS topics. The requests contain links that I must click in order for the setup process to proceed. The second email also requests that I create an AWS SSO password for the account. After the setup is complete, AWS Control Tower displays a status report:

The console offers some recommended actions:

At this point, the mandatory guardrails have been applied and the optional guardrails can be enabled:

I can see the Organizational Units (OUs) and accounts, and the compliance status of each one (with respect to the guardrails):

 

Using the Account Factory
The navigation on the left lets me access all of the AWS resources created and managed by Control Tower. Now that my baseline environment is set up, I can click Account factory to provision AWS accounts for my teams, applications, and so forth.

The Account factory displays my network configuration (I’ll show you how to edit it later), and gives me the option to Edit the account factory network configuration or to Provision new account:

I can control the VPC configuration that is used for new accounts, including the regions where VPCs are created when an account is provisioned:

The account factory is published to AWS Service Catalog automatically. I can provision managed accounts as needed, as can the developers in my organization. I click AWS Control Tower Account Factory to proceed:

I review the details and click LAUNCH PRODUCT to provision a new account:
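Provisioning can also be scripted through the Service Catalog CLI. Here's a rough sketch; the product and provisioning artifact IDs are placeholders, and the parameter names are assumptions based on the Account Factory product, so check the product's actual parameters in your account before relying on them:

aws servicecatalog provision-product \
    --product-id prod-EXAMPLE111111 \
    --provisioning-artifact-id pa-EXAMPLE222222 \
    --provisioned-product-name dev-team-account \
    --provisioning-parameters \
        Key=AccountName,Value=dev-team-account \
        Key=AccountEmail,Value=dev-team@example.com \
        Key=SSOUserEmail,Value=dev-lead@example.com \
        Key=SSOUserFirstName,Value=Dev \
        Key=SSOUserLastName,Value=Lead \
        Key=ManagedOrganizationalUnit,Value=Custom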

Working with Guardrails
As I mentioned earlier, Control Tower’s guardrails provide guidance that is either Mandatory or Strongly Recommended:

Guardrails are implemented via a Service Control Policy (SCP, part of AWS Organizations) or an AWS Config rule, and can be enabled on an OU-by-OU basis:

Now Available
AWS Control Tower is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions, with more to follow. There is no charge for the Control Tower service; you pay only for the AWS resources that it creates on your behalf.

In addition to adding support for more AWS regions, we are working to allow you to set up a parallel landing zone next to an existing AWS account, and to give you the ability to build and use custom guardrails.

Jeff;

 

New – UDP Load Balancing for Network Load Balancer

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-udp-load-balancing-for-network-load-balancer/

The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part (read my post, New Network Load Balancer – Effortless Scaling to Millions of Requests per Second to learn more).

In response to customer requests, we have added several new features since the late-2017 launch, including cross-zone load balancing, support for resource-based and tag-based permissions, support for use across an AWS managed VPN tunnel, the ability to create a Network Load Balancer using the AWS Elastic Beanstalk Console, support for Inter-Region VPC Peering, and TLS Termination.

UDP Load Balancing
Today we are adding support for another frequent customer request, the ability to load balance UDP traffic. You can now use Network Load Balancers to deploy connectionless services for online gaming, IoT, streaming, media transfer, and native UDP applications. If you are hosting DNS, SIP, SNMP, Syslog, RADIUS, and other UDP services in your own data center, you can now move the services to AWS. You can also deploy services to handle Authentication, Authorization, and Accounting, often known as AAA.

You no longer need to maintain a fleet of proxy servers to ingest UDP traffic, and you can now use the same load balancer for both TCP and UDP traffic. You can simplify your architecture, reduce your costs, and increase your scalability.

Creating a UDP Network Load Balancer
I can create a Network Load Balancer with UDP support using the Console, CLI (create-load-balancer), API (CreateLoadBalancer), or a CloudFormation template (AWS::ElasticLoadBalancingV2::LoadBalancer), as usual. The console lets me choose the desired load balancer; I click the Create button underneath Network Load Balancer:

I name my load balancer, choose UDP from the protocol menu, and select a port (514 is for Syslog):

I already have suitable EC2 instances in us-east-1b and us-east-1c so I’ll use those AZs:

Then I set up a target group for the UDP protocol on port 514:

I choose my instances and click Add to registered:

I review my settings on the next page, and my new UDP Load Balancer is ready to accept traffic within a minute or so (the state starts out as provisioning and transitions to active when it is ready):
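If you prefer to script the setup, here's a rough equivalent using the elbv2 CLI; the subnet, VPC, and instance IDs and the ARNs are placeholders, and I've folded in the TCP health check override that I describe below:

aws elbv2 create-load-balancer --name my-udp-nlb --type network \
    --subnets subnet-0aaa1111 subnet-0bbb2222

aws elbv2 create-target-group --name syslog-udp --protocol UDP --port 514 \
    --vpc-id vpc-0ccc3333 --health-check-protocol TCP --health-check-port 80

aws elbv2 register-targets --target-group-arn <TARGET_GROUP_ARN> \
    --targets Id=i-0ddd4444 Id=i-0eee5555

aws elbv2 create-listener --load-balancer-arn <LOAD_BALANCER_ARN> \
    --protocol UDP --port 514 \
    --default-actions Type=forward,TargetGroupArn=<TARGET_GROUP_ARN>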

I’ll test this out by configuring my EC2 instances as centralized Syslogd servers. I simply edit the configuration file (/etc/rsyslog.conf) on the instances to make them listen on port 514, and restart the service:
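For reference, the change boils down to enabling the UDP input in rsyslog and restarting the service. Here's a sketch using the legacy directive syntax (your rsyslog version and service manager may differ):

# /etc/rsyslog.conf (excerpt) - listen for syslog messages on UDP port 514
$ModLoad imudp
$UDPServerRun 514

sudo systemctl restart rsyslog    # or: sudo service rsyslog restart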

Then I launch another EC2 instance and configure it to use my NLB endpoint:
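On the client side, the forwarding rule is a one-liner in /etc/rsyslog.conf; a single @ means UDP, the NLB DNS name below is a placeholder, and the logger command is a convenient way to generate a test message:

# /etc/rsyslog.conf (excerpt) - forward everything to the NLB over UDP
*.* @my-udp-nlb-1234567890.elb.us-east-1.amazonaws.com:514

sudo systemctl restart rsyslog
logger "hello from the client"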

And I can see log entries in my servers (ip-172-31-29-40 is my test instance):

I did have to make one small configuration change in order to get this to work! Using UDP to check on the health of a service does not really make sense, so I clicked override and specified a health check on port 80 instead:

In a real-world scenario you would want to build a TCP-style health check into your service, of course. And, needless to say, I would run a custom implementation of Syslog that stores the log messages centrally and in a highly durable form.

Things to Know
Here are a few things to know about this important new NLB feature:

Supported Targets – UDP on Network Load Balancers is supported for Instance target types (IP target types and PrivateLink are not currently supported).

Health Checks – As I mentioned above, health checks must be done using TCP, HTTP, or HTTPS.

Multiple Protocols – A single Network Load Balancer can handle both TCP and UDP traffic. You can add another listener to an existing load balancer to gain UDP support, as long as you use distinct ports. In situations such as DNS where you need support for both TCP and UDP on the same port, you can set up a multi-protocol target group and a multi-protocol listener (use TCP_UDP for the listener type and the TargetGroup).

New CloudWatch Metrics – The existing CloudWatch metrics (ProcessedBytes, ActiveFlowCount, and NewFlowCount) now represent the aggregate traffic processed by the TCP, UDP, and TLS listeners on a given Network Load Balancer.

Available Now
This feature is available now and you can start using it today in all commercial AWS Regions. For pricing, see the Elastic Load Balancing Pricing page.

Jeff;

 

Now Available: New C5 instance sizes and bare metal instances

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/now-available-new-c5-instance-sizes-and-bare-metal-instances/

Amazon EC2 C5 instances are very popular for running compute-heavy workloads like batch processing, distributed analytics, high-performance computing, machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.

Today, we are happy to expand the Amazon EC2 C5 family with:

  • New larger virtualized instance sizes: 12xlarge and 24xlarge,
  • A bare metal option.

The new C5 instance sizes run on Intel’s Second Generation Xeon Scalable processors (code-named Cascade Lake) with sustained all-core turbo frequency of 3.6GHz and maximum single core turbo frequency of 3.9GHz.

The new processors also enable a new feature called Intel Deep Learning Boost, a capability based on the AVX-512 instruction set. Thanks to the new Vector Neural Network Instructions (AVX-512 VNNI), deep learning frameworks will speed up typical machine learning operations like convolution, and automatically improve inference performance over a wide range of workloads.
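If you want to confirm that an instance exposes these new instructions, one quick check is to look at the CPU flags from the guest OS (the flag name below assumes a reasonably recent Linux kernel):

# Check for AVX-512 VNNI support on the instance
grep -m1 -o 'avx512_vnni' /proc/cpuinfo || echo "VNNI flag not reported"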

These instances are also based on the AWS Nitro System, with dedicated hardware accelerators for EBS processing (including crypto operations), the software-defined network inside of each Virtual Private Cloud (VPC), and ENA networking.

New C5 instance sizes: 12xlarge and 24xlarge

Previously, the largest C5 instance available was the c5.18xlarge, with 72 logical processors and 144 GiB of memory. As you can see below, the new 24xlarge size increases available resources by 33%, letting you scale up and reduce the time required to run compute-intensive tasks.

Instance Name    Logical Processors    Memory     EBS-Optimized Bandwidth    Network Bandwidth
c5.12xlarge      48                    96 GiB     7 Gbps                     12 Gbps
c5.24xlarge      96                    192 GiB    14 Gbps                    25 Gbps

Bare metal C5

As with the existing bare metal instances (M5, M5d, R5, R5d, z1d, and so forth), your operating system runs directly on the underlying hardware, with direct access to the processor.

As described in a previous blog post, you can leverage bare metal instances for applications that:

  • do not want to take the performance hit of nested virtualization,
  • need access to physical resources and low-level hardware features, such as performance counters and Intel VT that are not always available or fully supported in virtualized environments,
  • are intended to run directly on the hardware, or licensed and supported for use in non-virtualized environments.

Bare metal instances can also take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, and other AWS services.

Instance Name    Logical Processors    Memory     EBS-Optimized Bandwidth    Network Bandwidth
c5.metal         96                    192 GiB    14 Gbps                    25 Gbps
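The new sizes and the bare metal option launch like any other EC2 instance. Here's a minimal CLI sketch; the AMI, key pair, and security group IDs are placeholders:

aws ec2 run-instances \
    --instance-type c5.24xlarge \
    --image-id ami-0123456789abcdef0 \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1

Substitute c5.metal (or c5.12xlarge) as the instance type to launch the other new variants.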

Now Available!

You can start using these new instances today in the following regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Stockholm), Europe (Paris), Asia Pacific (Singapore), Asia Pacific (Sydney), and AWS GovCloud (US-West).

Please send us feedback and help us build the next generation of compute-optimized instances.

Julien;

Amazon S3 Update – SigV2 Deprecation Period Extended & Modified

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-update-sigv2-deprecation-period-extended-modified/

Every request that you make to the Amazon S3 API must be signed to ensure that it is authentic. In the early days of AWS we used a signing model that is known as Signature Version 2, or SigV2 for short. Back in 2012, we announced SigV4, a more flexible signing method, and made it the sole signing method for all regions launched after 2013. At that time, we recommended that you use it for all new S3 applications.

Last year we announced that we would be ending support for SigV2 later this month. While many customers have updated their applications (often with nothing more than a simple SDK update) to use SigV4, we have also received many requests for us to extend support.

New Date, New Plan
In response to the feedback on our original plan, we are making an important change. Here’s the summary:

Original Plan – Support for SigV2 ends on June 24, 2019.

Revised Plan – Any new buckets created after June 24, 2020 will not support SigV2 signed requests, although existing buckets will continue to support SigV2 while we work with customers to move off this older request signing method.

Even though you can continue to use SigV2 on existing buckets, and in the subset of AWS regions that support SigV2, I encourage you to migrate to SigV4, gaining some important security and efficiency benefits in the process. The newer signing method uses a separate, specialized signing key that is derived from the long-term AWS access key. The key is specific to the service, region, and date. This provides additional isolation between services and regions, and provides better protection against key reuse. Internally, our SigV4 implementation is able to securely cache the results of authentication checks; this reduces latency and adds to the overall resiliency of your application. To learn more, read Changes in Signature Version 4.

Identifying Use of SigV2
S3 has been around since 2006 and some of the code that you or your predecessors wrote way back then might still be around, dutifully making requests that are signed with SigV2. You can use CloudTrail Data Events or S3 Server Access Logs to find the old-school requests and target the applications for updates:

CloudTrail Data Events – Look for the SignatureVersion element within the additionalEventData element of each CloudTrail event entry (read Using AWS CloudTrail to Identify Amazon S3 Signature Version 2 Requests to learn more).

S3 Server Access Logs – Look for the SignatureVersion element in the logs (read Using Amazon S3 Access Logs to Identify Signature Version 2 Requests to learn more).

Updating to SigV4
"Do we need to change our code?"

The Europe (Frankfurt), US East (Ohio), Canada (Central), Europe (London), Asia Pacific (Seoul), Asia Pacific (Mumbai), Europe (Paris), China (Ningxia), Europe (Stockholm), Asia Pacific (Osaka Local), AWS GovCloud (US-East), and Asia Pacific (Hong Kong) Regions were launched after 2013, and support SigV4 but not SigV2. If you have code that accesses S3 buckets in those Regions, it is already making exclusive use of SigV4.

If you are using the latest version of the AWS SDKs, you are either ready or just about ready for the SigV4 requirement on new buckets beginning June 24, 2020. If you are using an older SDK, please check out the detailed version list at Moving from Signature Version 2 to Signature Version 4 for more information.

There are a few situations where you will need to make some changes to your code. For example, if you are using pre-signed URLs with the AWS Java, JavaScript (node.js), or Python SDK, you need to set the correct region and signature version in the client configuration. Also, be aware that SigV4 pre-signed URLs are valid for a maximum of 7 days, while SigV2 pre-signed URLs can be created with a maximum expiry time that could be many weeks or years in the future. Using SigV4 will improve your security profile, but might also mandate a change in the way that you create, store, and use pre-signed URLs. While long-lived pre-signed URLs were easy and convenient for developers, using SigV4 URLs with a finite expiration is a much better security practice.
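Here's a minimal Python sketch of that client configuration using boto3; the bucket and object names are placeholders:

import boto3
from botocore.client import Config

# Request SigV4 explicitly, along with the bucket's region
s3 = boto3.client('s3',
                  region_name='us-east-1',
                  config=Config(signature_version='s3v4'))

# SigV4 pre-signed URLs are valid for at most 7 days (604800 seconds)
url = s3.generate_presigned_url('get_object',
                                Params={'Bucket': 'my-bucket', 'Key': 'my-object'},
                                ExpiresIn=3600)
print(url)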

If you are using Amazon EMR, you should upgrade your clusters to version 5.22.0 or later so that all requests to S3 are made using SigV4 (see Amazon EMR 5.x Release Versions for more info).

If your S3 objects are fronted by Amazon CloudFront and you are signing your own requests, be sure to update your code to use SigV4. If you are using Origin Access Identities to restrict access to S3, be sure to include the x-amz-content-sha256 header and the proper regional S3 domain endpoint.

We’re Here to Help
The AWS team wants to help make your transition to SigV4 as smooth and painless as possible. If you run into problems, I strongly encourage you to make use of AWS Support, as described in Getting Started with AWS Support.

You can also Discuss this Post on Reddit!

Jeff;

 

Amazon Personalize is Now Generally Available

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-personalize-is-now-generally-available/

Today, we’re happy to announce that Amazon Personalize is available to all AWS customers. Announced in preview at AWS re:Invent 2018, Amazon Personalize is a fully-managed service that allows you to create private, customized recommendation models for your applications, with little to no machine learning experience required.

Whether it is a timely video recommendation inside an application or a personalized notification email delivered just at the right moment, personalized experiences based on your data are more relevant to your customers and often deliver much higher business returns.

The task of developing an efficient recommender system is quite challenging: building, optimizing, and deploying real-time personalization requires specialized expertise in analytics, applied machine learning, software engineering, and systems operations. Few organizations have the knowledge, skills, and experience to overcome these challenges, and simple rule-based systems become brittle and costly to maintain as new products and promotions are introduced, or customer behavior changes.

For over 20 years, Amazon.com has perfected machine learning models that provide personalized buying experiences from product discovery to checkout. With Amazon Personalize, we are bringing developers that same capability to build custom models without having to deal with the complexity of infrastructure and machine learning that typically accompanies these types of solutions.

With Amazon Personalize, you provide the unique signals in your activity data (page views, signups, purchases, and so forth) along with optional customer demographic information (age, location, etc.). You then provide the inventory of the items you want to recommend, such as articles, products, videos, or music. Then, entirely under the covers, Amazon Personalize will process and examine the data, identify what is meaningful, select the right algorithms, and train and optimize a personalization model that is customized for your data and accessible via an API. All data analyzed by Amazon Personalize is kept private and secure, and only used for your customized recommendations. The resulting models are yours and yours alone.

With a single API call, you can make recommendations for your users and personalize the customer experience, driving more engagement, higher conversion, and increased performance on marketing campaigns. Domino’s Pizza, for instance, is using Amazon Personalize to deliver customized communications such as promotional deals through their digital properties. Sony Interactive Entertainment uses Personalize with Amazon SageMaker to automate and accelerate their machine learning development and drive more effective personalization at scale.

Personalize is like having your own Amazon.com machine learning personalization team at your beck and call, 24 hours a day.

Introducing Amazon Personalize

Amazon Personalize can make recommendations based on your historical data stored in Amazon S3, or on streaming data sent in real-time by your applications, or on both.

This gives customers a lot of flexibility to build recommendation solutions. For instance, you could build an initial recommender based on historical data, and retrain it periodically when you’ve accumulated enough live events. Alternatively, if you have no historical data to start from, you could ingest events for a while, and then build your recommender.

Having covered historical data in my previous blog post, I will focus on ingesting live events this time.

The high-level process looks like this:

  1. Create a dataset group in order to store events sent by your application.
  2. Create an interaction dataset and define its schema (no data is needed at this point).
  3. Create an event tracker in order to send events to Amazon Personalize.
  4. Start sending events to Amazon Personalize.
  5. Select a recommendation recipe, or let Amazon Personalize pick one for you thanks to AutoML.
  6. Create a solution, i.e. train the recipe on your dataset.
  7. Create a campaign and start recommending items.

Creating a dataset group

Let’s say we’d like to capture a click stream of movie recommendations. Using the first-time setup wizard, we create a dataset group to store these events. Here, let’s assume we don’t have any historical data to start with: all events are generated by the click stream, and are ingested using the event ingestion SDK.

Creating a dataset group just requires a name.

Then, we have to create the interaction dataset, which shows how users are interacting with items (liking, clicking, etc.). Of course, we need to define a schema describing the data: here, we’ll simply use the default schema provided by Amazon Personalize.

Optionally, we could now define an import job, in order to add historical data to the data set: as mentioned above, we’ll skip this step as all data will come from the stream.

Configuring the event tracker

The next step is to create the event tracker that will allow us to send streaming events to the dataset group.

After a minute or so, our tracker is ready. Please take note of the tracking id: we’ll need it to send events.

Creating the dataset

When Amazon Personalize creates an event tracker, it automatically creates a new dataset in the dataset group associated with the event tracker. This dataset has a well-defined schema storing the following information:

  • user_id and session_id: these values are defined by your application.
  • tracking_id: the event tracker id.
  • timestamp, item_id, event_type, event_value: these values describe the event itself and must be passed by your application.

Real-time events can be sent to this dataset in two different ways:

  • Server-side, via the AWS SDK: please note ingestion can happen from any source, whether your code is hosted inside of AWS (e.g. in Amazon EC2 or AWS Lambda) or outside.
  • With the AWS Amplify JavaScript library.

Let’s look at both options.

Sending real-time events with the AWS SDK

This is a very easy process: we can simply use the PutEvents API to send either a single event, or a list of up to 10 events. Of course, we could use any of the AWS SDKs: since my favourite language is Python, this is how we can send events using the boto3 SDK.

import boto3

personalize_events = boto3.client('personalize-events')

# The tracking, user, and session identifiers below are placeholders
personalize_events.put_events(
    trackingId = '<TRACKING_ID>',
    userId     = '<USER_ID>',
    sessionId  = '<SESSION_ID>',
    eventList  = [
        {
            "eventId": "event1",
            "sentAt": 1549959198,
            "eventType": "rating",
            "properties": '{"itemId": "123", "eventValue": "4"}'
        },
        {
            "eventId": "event2",
            "sentAt": 1549959205,
            "eventType": "rating",
            "properties": '{"itemId": "456", "eventValue": "2"}'
        }
    ]
)

In our application, we rated movie 123 as a 4, and movie 456 as a 2. Using the appropriate tracking identifier, we sent two Events to our event tracker:

  • eventId: an application-specific identifier.
  • sentAt: a timestamp, matching the timestamp property defined in the schema. This value is expressed in seconds since the Unix Epoch (1 January 1970 00:00:00.000 UTC), and is independent of any particular time zone.
  • eventType: the event type, matching the event_type property defined in the schema.
  • properties: the item id and event value, matching the item_id and event_value properties defined in the schema.

Here’s a similar code snippet in Java.

// Assumes that client, properties, and eventType have been initialized earlier
List<Event> eventList = new ArrayList<>();
eventList.add(new Event().withProperties(properties).withType(eventType));
PutEventsRequest request = new PutEventsRequest()
  .withTrackingId("<TRACKING_ID>")
  .withUserId("<USER_ID>")
  .withSessionId("<SESSION_ID>")
  .withEventList(eventList);
client.putEvents(request);

You get the idea!

Sending real-time events with AWS Amplify

AWS Amplify is a JavaScript library that makes it easy to create, configure, and implement scalable mobile and web apps powered by AWS. It’s integrated with the event tracking service in Amazon Personalize.

A couple of setup steps are required before we can send events. For the sake of brevity, please refer to these detailed instructions in the Amazon Personalize documentation:

  • Create an identity pool in Amazon Cognito, in order to authenticate users.
  • Configure the Amazon Personalize plug-in with the pool id and tracker id.

Once this is taken care of, we can send events to Amazon Personalize. We can still use any text string for event types, but please note that a couple of special types are available:

  • Identify lets you send the userId for a particular user to Amazon Personalize. The userId then becomes an optional parameter in subsequent calls.
  • MediaAutoTrack automatically calculates the play, pause and resume position for media events, and Amazon Personalize uses the position as event value.

Here is how to send some sample events with AWS Amplify:

// Associate subsequent events with a known user
Analytics.record({
    eventType: "Identify",
    properties: {
      "userId": "<USER_ID>"
    }
}, "AmazonPersonalize");

// Send a custom event (the type, item, and value are placeholders)
Analytics.record({
    eventType: "<EVENT_TYPE>",
    properties: {
      "itemId": "<ITEM_ID>",
      "eventValue": "<EVENT_VALUE>"
    }
}, "AmazonPersonalize");

// Let MediaAutoTrack derive the event value from the media element
Analytics.record({
    eventType: "MediaAutoTrack",
    properties: {
      "itemId": "<ITEM_ID>",
      "domElementId": "MEDIA DOM ELEMENT ID"
    }
}, "AmazonPersonalize");

As you can see, this is pretty simple as well.

Creating a recommendation solution

Now that we know how to ingest events, let’s define how our recommendation solution will be trained.

We first need to select a recipe, which is much more than an algorithm: it also includes predefined feature transformations, initial parameters for the algorithm as well as automatic model tuning. Thus, recipes remove the need to have expertise in personalization. Amazon Personalize comes with several recipes suitable for different use cases.

Still, if you’re new to machine learning, you may wonder which one of these recipes best fits your use case. No worries: as mentioned earlier, Amazon Personalize supports AutoML, a new technique that automatically searches for the optimal recipe, so let’s enable it. While we’re at it, let’s also ask Amazon Personalize to automatically tune recipe parameters.

All of this is very straightforward in the AWS console: as you’ll probably want to automate from now on, let’s use the AWS CLI instead.

$ aws personalize create-solution \
  --name jsimon-movieclick-solution \ 
  --perform-auto-ml --perform-hpo \
  --dataset-group-arn $DATASET_GROUP_ARN

Now we’re ready to train the solution. No servers to worry about: training takes place on fully-managed infrastructure.

$ aws personalize create-solution-version \
  --solution-arn $SOLUTION_ARN 

Once training is complete, we can use the solution version to create a recommendation campaign.

Deploying a recommendation campaign

Still no servers to worry about! In fact, campaigns scale automatically according to incoming traffic: we simply need to define the minimum number of transactions per second (TPS) that we want to support.

This number is used to size the initial fleet for hosting the model. It also impacts how much you will be charged for recommendations ($0.20 per TPS-hour). Here, I’m setting that parameter to 10, which means that I will initially be charged $2 per hour. If traffic exceeds 10 TPS, Personalize will scale up, increasing my bill according to the new TPS setting. Once traffic drops, Personalize will scale down, but it won’t go below my minimum TPS setting.

$ aws personalize create-campaign \
  --name jsimon-movieclick-campaign \
  --min-provisioned-tps 10 \
  --solution-version-arn $SOLUTION_VERSION_ARN

Should you later need to update the campaign with a new solution version, you can simply use the UpdateCampaign API and pass the ARN of the new solution version.
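That update is a single CLI call; here's a sketch, with the new solution version ARN as a placeholder:

$ aws personalize update-campaign \
  --campaign-arn $CAMPAIGN_ARN \
  --solution-version-arn $NEW_SOLUTION_VERSION_ARN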

Once the campaign has been deployed, we can quickly test that it’s able to recommend new movies.

Recommending new items in real-time

I don’t think this could be simpler: just pass the id of the user and receive recommendations.

$ aws personalize-rec get-recommendations \
--campaign-arn $CAMPAIGN_ARN \
--user-id 123 --query "itemList[*].itemId"
["1210", "260", "2571", "110", "296", "1193", ...]

At this point, we’re ready to integrate our recommendation model into an application. For example, a web application would have to implement the following steps to display a list of recommended movies:

  • Use the GetRecommendations API in our favorite language to invoke the campaign and receive movie recommendations for a given user (see the sketch after this list),
  • Read movie metadata from a backend (say, image URL, title, genre, release date, etc.),
  • Generate HTML code to be rendered in the user’s browser.
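Here's a minimal boto3 sketch of the first step above; the campaign ARN is a placeholder:

import boto3

personalize_runtime = boto3.client('personalize-runtime')

# Retrieve up to 10 recommended items for user 123
response = personalize_runtime.get_recommendations(
    campaignArn = '<CAMPAIGN_ARN>',
    userId      = '123',
    numResults  = 10
)

item_ids = [item['itemId'] for item in response['itemList']]
print(item_ids)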

Amazon Personalize in action

Actually, my colleague Jake Wells has built a web application recommending books. Using an open dataset containing over 19 million book reviews, Jake first used a notebook hosted on Amazon SageMaker to clean and prepare the data. Then, he trained a recommendation model with Amazon Personalize, and wrote a simple web application demonstrating the recommendation process. This is a really cool project, which would definitely be worthy of its own blog post!

Available now!

Whether you work with historical data or event streams, a few simple API calls are all it takes to train and deploy recommendation models. Zero machine learning experience is required, so please visit aws.amazon.com/personalize, give it a try and let us know what you think.

Amazon Personalize is available in the following regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), and EU (Ireland).

The service is also part of the AWS free tier. For the first two months after sign-up, you will be offered:
1. Data processing and storage: up to 20 GB per month
2. Training: up to 100 training hours per month
3. Prediction: up to 50 TPS-hours of real-time recommendations per month

We’re looking forward to your feedback!

Julien;

Amazon Managed Streaming for Apache Kafka (MSK) – Now Generally Available

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amazon-managed-streaming-for-apache-kafka-msk-now-generally-available/

I am always amazed at how our customers are using streaming data. For example, Thomson Reuters, one of the world’s most trusted news organizations for businesses and professionals, built a solution to capture, analyze, and visualize analytics data to help product teams continuously improve the user experience. Supercell, the social game company providing games such as Hay Day, Clash of Clans, and Boom Beach, is delivering in-game data in real-time, handling 45 billion events per day.

Since we launched Amazon Kinesis at re:Invent 2013, we have continually expanded the ways in which customers work with streaming data on AWS. Some of the available tools are:

  • Kinesis Data Streams, to capture, store, and process data streams with your own applications.
  • Kinesis Data Firehose, to transform and collect data into destinations such as Amazon S3, Amazon Elasticsearch Service, and Amazon Redshift.
  • Kinesis Data Analytics, to continuously analyze data using SQL or Java (via Apache Flink applications), for example to detect anomalies or for time series aggregation.
  • Kinesis Video Streams, to simplify processing of media streams.

At re:Invent 2018, we introduced in open preview Amazon Managed Streaming for Apache Kafka (MSK), a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data.

I am excited to announce that Amazon MSK is generally available today!

How it works

Apache Kafka (Kafka) is an open-source platform that enables customers to capture streaming data like click stream events, transactions, IoT events, application and machine logs, and have applications that perform real-time analytics, run continuous transformations, and distribute this data to data lakes and databases in real time. You can use Kafka as a streaming data store to decouple applications producing streaming data (producers) from those consuming streaming data (consumers).

While Kafka is a popular enterprise data streaming and messaging framework, it can be difficult to set up, scale, and manage in production. Amazon MSK takes care of these management tasks and makes it easy to set up, configure, and run Kafka, along with Apache ZooKeeper, in an environment following best practices for high availability and security.

Your MSK clusters always run within an Amazon VPC managed by the MSK service. Your MSK resources are made available to your own VPC, subnet, and security group through elastic network interfaces (ENIs) which will appear in your account, as described in the following architectural diagram:

Customers can create a cluster in minutes, use AWS Identity and Access Management (IAM) to control cluster actions, authorize clients using TLS private certificate authorities fully managed by AWS Certificate Manager (ACM), encrypt data in-transit using TLS, and encrypt data at rest using AWS Key Management Service (KMS) encryption keys.

Amazon MSK continuously monitors server health and automatically replaces servers when they fail, automates server patching, and operates highly available ZooKeeper nodes as a part of the service at no additional cost. Key Kafka performance metrics are published in the console and in Amazon CloudWatch. Amazon MSK is fully compatible with Kafka versions 1.1.1 and 2.1.0, so that you can continue to run your applications, use Kafka’s admin tools, and use Kafka-compatible tools and frameworks without having to change your code.

Based on our customer feedback during the open preview, Amazon MSK added many features, such as:

  • Encryption in-transit via TLS between clients and brokers, and between brokers
  • Mutual TLS authentication using ACM private certificate authorities
  • Support for Kafka version 2.1.0
  • 99.9% availability SLA
  • HIPAA eligible
  • Cluster-wide storage scale up
  • Integration with AWS CloudTrail for MSK API logging
  • Cluster tagging and tag-based IAM policy application
  • Defining custom, cluster-wide configurations for topics and brokers

AWS CloudFormation support is coming in the next few weeks.

Creating a cluster

Let’s create a cluster using the AWS management console. I give the cluster a name, select the VPC I want to use the cluster from, and the Kafka version.

I then choose the Availability Zones (AZs) and the corresponding subnets to use in the VPC. In the next step, I select how many Kafka brokers to deploy in each AZ. More brokers allow you to scale the throughput of a cluster by allocating partitions to different brokers.

I can add tags to search and filter my resources, apply IAM policies to the Amazon MSK API, and track my costs. For storage, I leave the default storage volume size per broker.

I choose to use encryption within the cluster and to allow both TLS and plaintext traffic between clients and brokers. For data at rest, I use the AWS-managed customer master key (CMK), but you can select a CMK in your account, using KMS, to have further control. You can use private TLS certificates to authenticate the identity of clients that connect to your cluster. This feature uses private certificate authorities (CAs) from ACM. For now, I leave this option unchecked.

In the advanced settings, I leave the default values. For example, I could have chosen a different instance type for my brokers here. Some of these settings can be updated using the AWS CLI.

I create the cluster and monitor the status from the cluster summary, including the Amazon Resource Name (ARN) that I can use when interacting via CLI or SDKs.

When the status is active, the client information section provides specific details to connect to the cluster, such as:

  • The bootstrap servers I can use with Kafka tools to connect to the cluster.
  • The ZooKeeper connect string (a list of hosts and ports).

I can get similar information using the AWS CLI:

  • aws kafka list-clusters to see the ARNs of your clusters in a specific region
  • aws kafka get-bootstrap-brokers --cluster-arn <ClusterArn> to get the Kafka bootstrap servers
  • aws kafka describe-cluster --cluster-arn <ClusterArn> to see more details on the cluster, including the ZooKeeper connect string

Quick demo of using Kafka

To start using Kafka, I create two EC2 instances in the same VPC, one will be a producer and one a consumer. To set them up as client machines, I download and extract the Kafka tools from the Apache website or any mirror. Kafka requires Java 8 to run, so I install Amazon Corretto 8.
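Here's a rough sketch of that client setup on Amazon Linux 2; the Java package name varies by distribution, so treat it as an assumption:

# Install a Java 8 runtime (package name is distribution-specific)
sudo yum install -y java-1.8.0-amazon-corretto

# Download and extract the Kafka 2.1.0 tools (Scala 2.11 build shown here)
wget https://archive.apache.org/dist/kafka/2.1.0/kafka_2.11-2.1.0.tgz
tar -xzf kafka_2.11-2.1.0.tgz
cd kafka_2.11-2.1.0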

On the producer instance, in the Kafka directory, I create a topic to send data from the producer to the consumer:

bin/kafka-topics.sh --create --zookeeper <ZookeeperConnectString> \
--replication-factor 3 --partitions 1 --topic MyTopic

Then I start a console-based producer:

bin/kafka-console-producer.sh --broker-list <BootstrapBrokerString> \
--topic MyTopic

On the consumer instance, in the Kafka directory, I start a console-based consumer:

bin/kafka-console-consumer.sh --bootstrap-server <BootstrapBrokerString> \
--topic MyTopic --from-beginning

Here’s a recording of a quick demo where I create the topic and then send messages from a producer (top terminal) to a consumer of that topic (bottom terminal):

Pricing and availability

Pricing is per Kafka broker-hour and per provisioned storage-hour. There is no cost for the ZooKeeper nodes used by your clusters. AWS data transfer rates apply for data transfer in and out of MSK. You will not be charged for data transfer within the cluster in a region, including data transfer between brokers and data transfer between brokers and ZooKeeper nodes.

You can migrate your existing Kafka cluster to MSK using tools like MirrorMaker (which comes with open source Kafka) to replicate data from your clusters into an MSK cluster.

Upstream compatibility is a core tenet of Amazon MSK. Our code changes to the Kafka platform are released back to open source.

Amazon MSK is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), EU (Frankfurt), EU (Ireland), EU (Paris), and EU (London).

I look forward to seeing how you are going to use Amazon MSK to simplify building and migrating streaming applications to the cloud!

Danilo;

Now Available – AWS IoT Things Graph

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-aws-iot-things-graph/

We announced AWS IoT Things Graph last November and described it as a tool to let you build IoT applications visually. Today I am happy to let you know that the service is now available and ready for you to use!

As you will see in a moment, you can represent your business logic in a flow composed of devices and services. Each web service and each type of device (sensor, camera, display, and so forth) is represented in Things Graph as a model. The models hide the implementation details that are peculiar to a particular brand or model of device, and allow you to build flows that can evolve along with your hardware. Each model has a set of actions (inputs), events (outputs), and states (attributes). Things Graph includes a set of predefined models, and also allows you to define your own. You can also use mappings as part of your flow to convert the output from one device into the form expected by other devices. After you build your flow, you can deploy it to the AWS Cloud or an AWS IoT Greengrass-enabled device for local execution. The flow, once deployed, orchestrates interactions between locally connected devices and web services.

Using AWS IoT Things Graph
Let’s take a quick walk through the AWS IoT Things Graph Console!

The first step is to make sure that I have models which represent the devices and web services that I plan to use in my flow. I click Models in the console navigation to get started:

The console outlines the three steps that I must follow to create a model, and also lists my existing models:

The presence of aws/examples in the URN for each of the devices listed above indicates that they are predefined, and part of the public AWS IoT Things Graph namespace. I click on Camera to learn more about this model; I can see the Properties, Actions, and Events:

The model is defined using GraphQL; I can view it, edit it, or upload a file that contains a model definition. Here’s the definition of the Camera:

This model defines an abstract Camera device. The model, in turn, can reference definitions for one or more actual devices, as listed in the Devices section:

Each of the devices is also defined using GraphQL. Of particular interest is the use of MQTT topics & messages to define actions:

Earlier, I mentioned that models can also represent web services. When a flow that references a model of this type is deployed, activating an action on the model invokes a Greengrass Lambda function. Here’s how a web service is defined:

Now I can create a flow. I click Flows in the navigation, and click Create flow:

I give my flow a name and enter a description:

I start with an empty canvas, and then drag nodes (Devices, Services, or Logic) to it:

For this demo (which is fully explained in the AWS IoT Things Graph User Guide), I’ll use a MotionSensor, a Camera, and a Screen:

I connect the devices to define the flow:

Then I configure and customize it. There are lots of choices and settings, so I’ll show you a few highlights, and refer you to the User Guide for more info. I set up the MotionSensor so that a change of state initiates this flow:

I also (not shown) configure the Camera to perform the Capture action, and the Screen to display it. I could also make use of the predefined Services:

I can also add Logic to my flow:

Like the models, my flow is ultimately defined in GraphQL (I can view and edit it directly if desired):

At this point I have defined my flow, and I click Publish to make it available for deployment:

The next steps are:

Associate – This step assigns an actual AWS IoT Thing to a device model. I select a Thing, and then choose a device model, and repeat this step for each device model in my flow:

Deploy – I create a Flow Configuration, target it at the Cloud or Greengrass, and use it to deploy my flow (read Creating Flow Configurations to learn more).

Things to Know
I’ve barely scratched the surface here; AWS IoT Things Graph provides you with a lot of power and flexibility and I’ll leave you to discover more on your own!

Here are a couple of things to keep in mind:

Pricing – Pricing is based on the number of steps executed (for cloud deployments) or deployments (for edge deployments), and is detailed on the AWS IoT Things Graph Pricing page.

API Access – In addition to console access, you can use the AWS IoT Things Graph API to build your models and flows.

Regions – AWS IoT Things Graph is available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Jeff;

 

 

New – Data API for Amazon Aurora Serverless

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-data-api-for-amazon-aurora-serverless/

If you have ever written code that accesses a relational database, you know the drill. You open a connection, use it to process one or more SQL queries or other statements, and then close the connection. You probably used a client library that was specific to your operating system, programming language, and your database. At some point you realized that creating connections took a lot of clock time and consumed memory on the database engine, and soon after found out that you could (or had to) deal with connection pooling and other tricks. Sound familiar?

The connection-oriented model that I described above is adequate for traditional, long-running programs where the setup time can be amortized over hours or even days. It is not, however, a great fit for serverless functions that are frequently invoked and that run for time intervals that range from milliseconds to minutes. Because there is no long-running server, there’s no place to store a connection identifier for reuse.

Aurora Serverless Data API
In order to resolve this mismatch between serverless applications and relational databases, we are launching a Data API for the MySQL-compatible version of Amazon Aurora Serverless. This API frees you from the complexity and overhead that come along with traditional connection management, and gives you the power to quickly and easily execute SQL statements that access and modify your Amazon Aurora Serverless Database instances.

The Data API is designed to meet the needs of both traditional and serverless apps. It takes care of managing and scaling long-term connections to the database and returns data in JSON form for easy parsing. All traffic runs over secure HTTPS connections. It includes the following functions:

ExecuteStatement – Run a single SQL statement, optionally within a transaction.

BatchExecuteStatement – Run a single SQL statement across an array of data, optionally within a transaction.

BeginTransaction – Begin a transaction, and return a transaction identifier. Transactions are expected to be short (generally 2 to 5 minutes).

CommitTransaction – End a transaction and commit the operations that took place within it.

RollbackTransaction – End a transaction without committing the operations that took place within it.

Each function must run to completion within 1 minute, and can return up to 1 megabyte of data.

Using the Data API
I can use the Data API from the Amazon RDS Console, the command line, or by writing code that calls the functions that I described above. I’ll show you all three in this post.

The Data API is really easy to use! The first step is to enable it for the desired Amazon Aurora Serverless database. I open the Amazon RDS Console, find & select the cluster, and click Modify:

Then I scroll down to the Network & Security section, click Data API, and Continue:

On the next page I choose to apply the settings immediately, and click Modify cluster:

Now I need to create a secret to store the credentials that are needed to access my database. I open the Secrets Manager Console and click Store a new secret. I leave Credentials for RDS selected, enter a valid database user name and password, optionally choose a non-default encryption key, and then select my serverless database. Then I click Next:

I name my secret and tag it, and click Next to configure it:

I use the default values on the next page, click Next again, and now I have a brand new secret:

Now I need two ARNs, one for the database and one for the secret. I fetch both from the console, first for the database:

And then for the secret:

The pair of ARNs (database and secret) provides me with access to my database, and I will protect them accordingly!

Using the Data API from the Amazon RDS Console
I can use the Query Editor in the Amazon RDS Console to run queries that call the Data API. I open the console and click Query Editor, and create a connection to the database. I select the cluster, enter my credentials, and pre-select the table of interest. Then I click Connect to database to proceed:

I enter a query and click Run, and view the results within the editor:

Using the Data API from the Command Line
I can exercise the Data API from the command line:

$ aws rds-data execute-statement \
  --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL" \
  --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1" \
  --database users \
  --sql "show tables" \
  --output json

I can use jq to pick out the part of the result that is of interest to me:

... | jq .records
[
  {
    "values": [
      {
        "stringValue": "users"
      }
    ]
  }
]

I can query the table and get the results (the SQL statement is "select * from users where userid='jeffbarr'"):

... | jq .records
[
  {
    "values": [
      {
        "stringValue": "jeffbarr"
      },
      {
        "stringValue": "Jeff"
      },
      {
        "stringValue": "Barr"
      }
    ]
  }
]

If I specify --include-result-metadata, the query also returns data that describes the columns of the result (I’ll show only the first one in the interest of frugality):

... | jq .columnMetadata[0]
{
  "type": 12,
  "name": "userid",
  "label": "userid",
  "nullable": 1,
  "isSigned": false,
  "arrayBaseColumnType": 0,
  "scale": 0,
  "schemaName": "",
  "tableName": "users",
  "isCaseSensitive": false,
  "isCurrency": false,
  "isAutoIncrement": false,
  "precision": 15,
  "typeName": "VARCHAR"
}

The Data API also allows me to wrap a series of statements in a transaction, and then either commit or rollback. Here’s how I do that (I’m omitting --secret-arn and --resource-arn for clarity):

$ ID=`aws rds-data begin-transaction --database users --output json | jq .transactionId`
$ echo $ID
"ATP6Gz88GYNHdwNKaCt/vGhhKxZs2QWjynHCzGSdRi9yiQRbnrvfwF/oa+iTQnSXdGUoNoC9MxLBwyp2XbO4jBEtczBZ1aVWERTym9v1WVO/ZQvyhWwrThLveCdeXCufy/nauKFJdl79aZ8aDD4pF4nOewB1aLbpsQ=="

$ aws rds-data execute-statement --transaction-id $ID --database users --sql "..."
$ ...
$ aws rds-data execute-statement --transaction-id $ID --database users --sql "..."
$ aws rds-data commit-transaction --transaction-id $ID

If I decide not to commit, I invoke rollback-transaction instead.

Using the Data API with Python and Boto
Since this is an API, programmatic access is easy. Here’s some very simple Python / Boto code:

import boto3

client = boto3.client('rds-data')

response = client.execute_statement(
    secretArn   = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL',
    database    = 'users',
    resourceArn = 'arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1',
    sql         = 'select * from users'
)

for user in response['records']:
  userid     = user[0]['stringValue']
  first_name = user[1]['stringValue']
  last_name  = user[2]['stringValue']
  print(userid + ' ' + first_name + ' ' + last_name)

And the output:

$ python data_api.py
jeffbarr Jeff Barr
carmenbarr Carmen Barr

Genuine, production-quality code would reference the table columns symbolically using the metadata that is returned as part of the response.
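Here's a sketch of what that could look like, asking for includeResultMetadata and mapping each record onto its column names (the ARNs are placeholders):

import boto3

client = boto3.client('rds-data')

response = client.execute_statement(
    secretArn             = '<SECRET_ARN>',
    resourceArn           = '<CLUSTER_ARN>',
    database              = 'users',
    sql                   = 'select * from users',
    includeResultMetadata = True
)

# Build one dict per row, keyed by column name instead of position
columns = [col['name'] for col in response['columnMetadata']]
for record in response['records']:
    row = dict(zip(columns, [list(value.values())[0] for value in record]))
    print(row['userid'], row)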

By the way, my Amazon Aurora Serverless cluster was configured to scale capacity all the way down to zero when not active. Here’s what the scaling activity looked like while I was writing this post and running the queries:

Now Available
You can make use of the Data API today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. There is no charge for the API, but you will pay the usual price for data transfer out of AWS.

Jeff;

New – AWS IoT Events: Detect and Respond to Events at Scale

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-iot-events-detect-and-respond-to-events-at-scale/

As you may have been able to tell from many of the announcements that we have made over the last four or five years, we are working to build a wide-ranging set of Internet of Things (IoT) services and capabilities. Here’s a quick recap:

October 2015 – AWS IoT Core – A fundamental set of Cloud Services for Connected Devices.

June 2017 – AWS Greengrass – The ability to Run AWS Lambda Functions on Connected Devices.

November 2017 – AWS IoT Device Management – Onboarding, Organization, Monitoring, and Remote Management of Connected Devices.

November 2017 – AWS IoT Analytics – Advanced Data Analysis for IoT Devices.

November 2017 – Amazon FreeRTOS – An IoT Operating System for Microcontrollers.

April 2018 – Greengrass ML Inference – The power to do Machine Learning Inference at the Edge.

August 2018 – AWS IoT Device Defender – A service that helps to Keep Your Connected Devices Safe.

Last November we also announced our plans to launch four new IoT Services:

You can use these services individually or together to build all sorts of powerful, connected applications!

AWS IoT Events Now Available
Today we are making AWS IoT Events available in production form in four AWS Regions. You can use this service to monitor and respond to events (patterns of data that identify changes in equipment or facilities) at scale. You can detect a misaligned robot arm, a motion sensor that triggers outside of business hours, an unsealed freezer door, or a motor that is running outside of tolerance, all with the goal of driving faster and better-informed decisions.

As you will see in a moment, you can easily create detector models that represent your devices, their states, and the transitions (driven by sensors and events, both known as inputs) between the states. The models can trigger actions when critical events are detected, allowing you to build robust, highly automated systems. Actions can, for example, send a text message to a service technician or invoke an AWS Lambda function.

You can access AWS IoT Events from the AWS IoT Event Console or by writing code that calls the AWS IoT Events API functions. I’ll use the Console, and I will start by creating a detector model. I click Create detector model to begin:

I have three options; I’ll go with the demo by clicking Launch demo with inputs:

This shortcut creates an input and a model, and also enables some “demo” functionality that sends data to the model. The model looks like this:

Before examining the model, let’s take a look at the input. I click on Inputs in the left navigation to see them:

I can see all of my inputs at a glance; I click on the newly created input to learn more:

This input represents the battery voltage measured from a device that is connected to a particular powerwallId:

Ok, let’s return to (and dissect) the detector model! I return to the navigation, click Detector models, find my model, and click it:

There are three Send options at the top; each one sends data (an input) to the detector model. I click on Send data for Charging to get started. This generates a message that looks like this; I click Send data to do just that:

Then I click Send data for Charged to indicate that the battery is fully charged. The console shows me the state of the detector:

Each time an input is received, the detector processes it. Let’s take a closer look at the detector. It has three states (Charging, Charged, and Discharging):

The detector starts out in the Charging state, and transitions to Charged when the Full_charge event is triggered. Here’s the definition of the event, including the trigger logic:

The trigger logic is evaluated each time an input is received (your IoT app must call BatchPutMessage to inform AWS IoT Events). If the trigger logic evaluates to a true condition, the model transitions to the new (destination) state, and it can also initiate an event action. This transition has no actions; I can add one (or more) by clicking Add action. My choices are:

  • Send MQTT Message – Send a message to an MQTT topic.
  • Send SNS Message – Send a message to an SNS target, identified by an ARN.
  • Set Timer – Set, reset, or destroy a timer. Times can be expressed in seconds, minutes, hours, days, or months.
  • Set Variable – Set, increment, or decrement a variable.

Returning (once again) to the detector, I can modify the states as desired. For example, I could fine-tune the Discharging aspect of the detector by adding a LowBattery state:

After I create my inputs and my detector, I Publish the model so that my IoT devices can use and benefit from it. I click Publish and fill in a few details:

The Detector generation method has two options. I can Create a detector for each unique key value (if I have a bunch of devices), or I can Create a single detector (if I have one device). If I choose the first option, I need to choose the key that separates one device from another.

Once my detector has been published, I can send data to it using AWS IoT Analytics, IoT Core, or from a Lambda function.
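For example, here's a boto3 sketch of sending an input message from code; the input name and payload fields are assumptions modeled on the battery-voltage demo above:

import json
import boto3

iotevents_data = boto3.client('iotevents-data')

# Send one message to the detector model's input
response = iotevents_data.batch_put_message(
    messages=[
        {
            'messageId': 'msg-0001',
            'inputName': 'PowerwallBatteryVoltage',   # placeholder input name
            'payload': json.dumps({
                'powerwallId': 'pw-1234',
                'batteryVoltage': 375
            }).encode('utf-8')
        }
    ]
)
print(response)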

Get Started Today
We are launching AWS IoT Events in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions and you can start using it today!

Jeff;