Tag Archives: announcements

Use MAP for Windows to Simplify your Migration to AWS

Post Syndicated from Fred Wurden original https://aws.amazon.com/blogs/compute/use-map-for-windows-to-simplify-your-migration-to-aws/

There’s no question that organizations today are facing disruption in their industries. In a previous blog post, I shared that such disruption often accelerates organizations’ decisions to move to the cloud. When these organizations migrate to the cloud, Windows workloads are often critical to their business, and these workloads require a performant, reliable, and secure cloud infrastructure. Customers tell us that reducing risk, building cloud expertise, and lowering costs are important factors when choosing that infrastructure.

Today, we are announcing the general availability of the Migration Acceleration Program (MAP) for Windows, a comprehensive program that helps you execute large-scale migrations and modernizations of your Windows workloads on AWS. We have millions of customers on AWS, and have spent the last 11 years helping Windows customers successfully move to our cloud. We’ve built a proven methodology, providing you with AWS services, tools, and expertise to help simplify the migration of your Windows workloads to AWS. MAP for Windows provides prescriptive guidance, consulting support from experts, tools, trainings, and service credits to help reduce the risk and cost of migrating to the cloud as you embark on your migration journey.

MAP for Windows also helps you along the pathways to modernize current and legacy versions of Windows Server and SQL Server to cloud native and open source solutions, enabling you to break free from commercial licensing costs. With the strong price-performance of open-source solutions and the proven reliability of AWS, you can innovate quickly while reducing your risk.

With MAP for Windows, you follow a simple three-step migration process:

  1. Assess Your Readiness: The migration readiness assessment helps you identify gaps along the six dimensions of the AWS Cloud Adoption Framework: business, process, people, platform, operations, and security. This assessment helps customers identify capabilities required in the migration. MAP for Windows also includes an Optimization and Licensing Assessment, which provides recommendations on how to optimize your licenses on AWS.
  2. Mobilize Your Resources: The mobilize phase helps you build an operational foundation for your migration, with the goal of fixing the capability gaps identified in the assessment phase. The mobilize phase accelerates your migration decisions by providing clear guidance on migration plans that improve the success of your migration.
  3. Migrate or Modernize Your Workloads: APN Partners and the AWS ProServe team help customers execute the large-scale migration plan developed during the mobilize phase. MAP for Windows also offers financial incentives to help you offset migration costs such as labor, training, and the expense of sometimes running two environments in parallel.

MAP for Windows includes support from AWS Professional Services and AWS Migration Competency Partners, such as Rackspace, 2nd Watch, Accenture, Cloudreach, Enimbos Global Services, Onica, and Slalom. Our MAP for Windows partners have successfully demonstrated completion of multiple large-scale migrations to AWS. They have received the APN Migration Competency Partner and the Microsoft Workloads Competency designations.

Learn about what MAP for Windows can do for you on this page. Learn also about the migration experiences of AWS customers. And contact us to discuss your Windows migration or modernization initiatives and apply to MAP for Windows.

About the Author

Fred Wurden is the GM of Enterprise Engineering (Windows, VMware, Red Hat, SAP, benchmarking), working to make AWS the most customer-centric cloud platform on Earth. Prior to AWS, Fred worked at Microsoft for 17 years, where he held positions spanning EU/DOJ engineering compliance for Windows and Azure, interoperability principles and partner engagements, and open source engineering. He lives with his wife and a few four-legged friends since his kids are all in college now.

CloudWatch Contributor Insights for DynamoDB – Now Generally Available

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/cloudwatch-contributor-insights-for-dynamodb-now-generally-available/

Amazon DynamoDB provides our customers a fully-managed key-value database service that can easily scale from a few requests per month to millions of requests per second. DynamoDB supports some of the world’s largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage. DynamoDB global tables replicate your data across multiple AWS Regions to give you fast, local access to data for your globally distributed applications. For use cases that require even faster access with microsecond latency, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache.

In November 2019, we announced Amazon CloudWatch Contributor Insights for Amazon DynamoDB in preview, and today, I am happy to announce that it is generally available in all AWS Regions.

Amazon CloudWatch Contributor Insights for Amazon DynamoDB

Amazon CloudWatch Contributor Insights, which also launched in November 2019, analyzes log data and creates time-series visualizations to provide a view of top contributors influencing system performance. You do this by creating Contributor Insights rules to evaluate CloudWatch Logs (including logs from AWS services) and any custom logs sent by your service or on-premises servers. For example, you can find bad hosts, identify the heaviest network users, or find the URLs that generate the most errors.
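
As an illustration of how such a rule might be created outside the console, here is a minimal sketch using the AWS SDK for Python (boto3). The log group name, field path, and rule name are placeholders, and the rule body follows the Contributor Insights rule syntax for JSON-formatted logs.

import json
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical rule: count requests per client IP in a JSON-formatted log group.
# The log group name and field path below are placeholders for illustration.
rule_definition = {
    "Schema": {"Name": "CloudWatchLogRule", "Version": 1},
    "LogGroupNames": ["/my-app/access-logs"],
    "LogFormat": "JSON",
    "Contribution": {"Keys": ["$.clientIp"], "Filters": []},
    "AggregateOn": "Count",
}

cloudwatch.put_insight_rule(
    RuleName="TopClientIps",
    RuleState="ENABLED",
    RuleDefinition=json.dumps(rule_definition),
)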

For developers building applications on top of DynamoDB, it’s useful to understand your database access patterns, such as traffic trends and frequently accessed keys, to help optimize DynamoDB costs and performance. You can create visualizations of these patterns with just a few clicks in the console using CloudWatch Contributor Insights for DynamoDB. DynamoDB automatically creates the required CloudWatch resources, then provides a summary view of the graphs. This summary view lives in the DynamoDB console, but you can also see the individual rule details in the CloudWatch console with other CloudWatch Contributor Insights rules, reports, and graphs of report data.

You can use these graphs to view traffic trends and pinpoint any hot keys in your DynamoDB tables.

How it works

Let’s see how it works and how it benefits developers. Here is a table in DynamoDB.

If we select any table, we can see its details. Click the new tab called [Contributor Insights].

CloudWatch Contributor Insights is initially DISABLED. You can enable it by selecting the upper [Contributor Insights] tab.

When you access the [Contributor Insights] tab, you can check its status. The activation process is very easy (this is one of my favorite points of this feature!) If you click [Manage Contributor Insights], a dialog box appears.

If you choose [Enabled] and click [Confirm], the dashboard appears, and CloudWatch Contributor Insights for DynamoDB will record every access to the table.

After a while, you will see table insights presented in several graphs.
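
If you prefer to script this rather than click through the console, the same setting can be toggled with the AWS SDK for Python (boto3); the table and index names below are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

# Enable CloudWatch Contributor Insights for a table (table name is a placeholder).
dynamodb.update_contributor_insights(
    TableName="my-table",
    ContributorInsightsAction="ENABLE",
)

# Optionally enable it for a specific global secondary index as well.
dynamodb.update_contributor_insights(
    TableName="my-table",
    IndexName="my-gsi",
    ContributorInsightsAction="ENABLE",
)

# Check the current status.
print(dynamodb.describe_contributor_insights(TableName="my-table")["ContributorInsightsStatus"])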

You can change the time range in the upper right corner of the dashboard, or simply click and drag a graph directly.

What does the dashboard tell us?

The dashboard shows us four metrics, which provide powerful insights for application performance tuning. DynamoDB creates separate visualizations for partition key vs. partition key + sort key, so if your table doesn’t have a sort key, you will only see two graphs, not all four.

  • Most Accessed Items (Partition Key only) – Identifies the partition keys of the most accessed items in your table or global secondary index.
  • Most Accessed Items (Partition Key + Sort Key) – Identifies the partition and sort keys of the most accessed items in your table or global secondary index.
  • Most Throttled Items (Partition Key only) – Identifies the partition keys of the most throttled items in your table or global secondary index.
  • Most Throttled Items (Partition Key + Sort Key) – Identifies the partition and sort keys of the most throttled items in your table or global secondary index.

Most Accessed Items

Let’s break down what these metrics and graphs mean, starting with the “Most Accessed Items” metrics. These metrics show the frequency with which a key is accessed, based on both read and write traffic.

Outliers in these graphs are your most frequently accessed, or hottest, keys. Many DynamoDB workloads have at least some imbalanced traffic, but you can use this graph to see whether your workload will bump against DynamoDB’s per-key limits. On the other hand, if you see several closely clustered lines without any obvious outliers, it indicates that your workload is relatively balanced across items over the given time window (great job balancing your workload!)

Most Throttled Items

The “Most Throttled Items” graphs show just that: throttle count over time for your most throttled keys. If you see no data in these graphs, your requests have not been throttled. If you see isolated points instead of connected lines, that indicates an item was throttled only for a brief period.

The blog article “Choosing the Right DynamoDB Partition Key” covers considerations and strategies for choosing the right partition key when designing a schema for Amazon DynamoDB. Choosing the right partition key is an important step in designing and building scalable and reliable applications on top of DynamoDB. You can also check our DynamoDB documentation page “Best Practices for Designing and Using Partition Keys Effectively”.

Integrating with CloudWatch Dashboard

This feature is integrated with CloudWatch for ease of use. You can add any of these graphs to an existing CloudWatch dashboard. Let’s see how to do it. Going back to the DynamoDB dashboard, click [Add to dashboard].
You are redirected to the CloudWatch Management Console and asked which dashboard to add the graph to.

You can choose any existing dashboard or create a new one. For example, I put these metrics into my existing test dashboard named [test20180321].

Activating the feature does not affect anything in your existing production environment. You can enable or disable it at any time.

Generally Available Today

This feature is generally available today in all AWS Regions.

– Kame;

 

Amazon Redshift update – ra3.4xlarge instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-redshift-update-ra3-4xlarge-instances/

Since we launched Amazon Redshift as a cloud data warehouse service more than seven years ago, tens of thousands of customers have built their workloads using it. We are always listening to your feedback, and in December last year we announced our third-generation RA3 node type, which gives you the ability to scale compute and storage separately. Previous-generation DS2 and DC2 nodes had a fixed amount of storage and required adding more nodes to your cluster to increase storage capacity. The new RA3 nodes let you determine how much compute capacity you need to support your workload and then scale the amount of storage based on your needs. The first member of the RA3 family was the ra3.16xlarge, which many customers told us was fantastic, but more capacity than their workloads needed.

Today we are adding a new smaller member to the RA3 family: the ra3.4xlarge.

The RA3 node type is based on AWS Nitro and includes support for Redshift managed storage. Redshift managed storage automatically manages data placement across tiers of storage and caches the hottest data in high-performance SSD storage while automatically offloading colder data to Amazon Simple Storage Service (S3). Redshift managed storage uses advanced techniques such as block temperature, data block age, and workload patterns to optimize performance.

RA3 nodes with managed storage are a great fit for analytics workloads that require massive storage capacity, such as operational analytics, where the subset of data that is most important constantly evolves over time. In the past, fixed storage limits created pressure to offload or archive old data to other storage, which made it difficult to maintain the operational analytics dataset while keeping the larger historical dataset available to query when needed.

The new ra3.4xlarge node provides 12 vCPUs, 96 GiB of RAM, and addresses up to 64 TB of managed storage. A cluster can contain up to 32 of these instances, for a total of 2,048 TB of storage (that’s 2 petabytes!).

The differences between ra3.16xlarge and ra3.4xlarge nodes are summarized in the table below.

Instance       vCPU   Memory    Addressable Storage   I/O        Price (US East (N. Virginia))
ra3.4xlarge    12     96 GiB    64 TB RMS             2 GB/sec   $3.26 per hour
ra3.16xlarge   48     384 GiB   64 TB RMS             8 GB/sec   $13.04 per hour

To create a new cluster, I can use the Redshift AWS Management Console or the AWS Command Line Interface (CLI). In the console, I click Create Cluster and choose ra3.4xlarge instances.

If you have a DS2 or DC2 instance-based cluster, you can create a new RA3 cluster to evaluate the new instances with managed storage. Use a recent snapshot of your Redshift DS2 or DC2 cluster to create a new cluster based on ra3.4xlarge instances, and keep the two clusters running in parallel to evaluate the compute needs of your application.
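
For example, here is a minimal sketch of that restore step using the AWS SDK for Python (boto3); the cluster identifier, snapshot identifier, and node count are placeholders you would replace with your own values.

import boto3

redshift = boto3.client("redshift")

# Restore a recent snapshot of an existing DS2/DC2 cluster into a new RA3 cluster.
# Identifiers and node count are placeholders; pick a NumberOfNodes that fits your workload.
redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-ra3-eval",
    SnapshotIdentifier="analytics-ds2-2020-04-01",
    NodeType="ra3.4xlarge",
    NumberOfNodes=4,
)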

You can resize your RA3 cluster at any time by using elastic resize to add or remove compute capacity. If elastic resize is not available for your chosen configuration, you can do a classic resize instead.
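
An elastic resize can likewise be triggered programmatically; a minimal boto3 sketch with placeholder values looks like this:

import boto3

redshift = boto3.client("redshift")

# Elastic resize: change the number of nodes in the cluster (identifier and size are placeholders).
redshift.resize_cluster(
    ClusterIdentifier="analytics-ra3-eval",
    NumberOfNodes=8,
    Classic=False,  # set True to fall back to a classic resize
)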

RA3 instances are now available in 14 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Canada (Central), and South America (São Paulo).

Prices vary from one Region to another, starting at $3.26/hr/node in US East (N. Virginia). Check the Amazon Redshift pricing page for details.

— seb

AWS DeepComposer – Now Generally Available With New Features

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/aws-deepcomposer-now-generally-available-with-new-features/

AWS DeepComposer, a creative way to get started with machine learning, was launched in preview at AWS re:Invent 2019. Today, I’m extremely happy to announce that DeepComposer is now available to all AWS customers, and that it has been expanded with new features.

A primer on AWS DeepComposer
If you’re new to AWS DeepComposer, here’s how to get started.

  • Log into the AWS DeepComposer console.
  • Learn about the service and how it uses generative AI.
  • Record a short musical tune, using either the virtual keyboard in the console, or a physical keyboard available for order on Amazon.com.
  • Select a pretrained model for your favorite genre.
  • Use this model to generate a new polyphonic composition based on your tune.
  • Play the composition in the console.
  • Export the composition, or share it on SoundCloud.

Now let’s look at the new features, which make it even easier to get started with generative AI.

Learning Capsules
DeepComposer is powered by Generative Adversarial Networks (aka GANs, research paper), a neural network architecture built specifically to generate new samples from an existing data set. A GAN pits two different neural networks against each other to produce original digital works based on sample inputs: with DeepComposer, you can train and optimize GAN models to create original music.
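
To make the idea of two networks competing concrete, here is a deliberately tiny GAN training loop in PyTorch. This is only a toy illustration of the general technique, not how DeepComposer or its music models are implemented; the network shapes, hyperparameters, and random “real” data are all arbitrary assumptions.

import torch
import torch.nn as nn

# Toy GAN: a generator maps random noise to fake samples, a discriminator tries to
# tell real samples from fakes, and the two networks are trained against each other.
noise_dim, sample_dim = 16, 32
generator = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, sample_dim))
discriminator = nn.Sequential(nn.Linear(sample_dim, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # 1) Discriminator update: real samples should score 1, generated fakes 0.
    fake = generator(torch.randn(n, noise_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(n, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(n, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    # 2) Generator update: try to make the discriminator score fakes as real.
    fake = generator(torch.randn(n, noise_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(n, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# "Real" data here is just random vectors standing in for encoded music.
print(train_step(torch.randn(8, sample_dim)))

In a real music GAN such as MuseGAN, the random vectors would be replaced by encoded musical tracks, and the two small networks by much larger, music-specific architectures.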

Until now, developers interested in growing their skills in GANs haven’t had an easy way to get started. To help them, regardless of their background in ML or music, we are building a collection of easy learning capsules that introduce key concepts and show how to train and evaluate GANs. This includes a hands-on lab with step-by-step instructions and code to build a GAN model.

Once you’re familiar with GANs, you’ll be ready to move on to training your own model!

In-console Training
You now have the ability to train your own generative model right in the DeepComposer console, without having to write a single line of machine learning code.

First, let’s select a GAN architecture:

  • MuseGAN, by Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang and Yi-Hsuan Yang (research paper, GitHub): MuseGAN has been specifically designed for generating music. The generator in MuseGAN is composed of a shared network that learns a high-level representation of the song, and a series of private networks that learn how to generate individual music tracks.
  • U-Net, by Olaf Ronneberger, Philipp Fischer and Thomas Brox (research paper, project page): U-Net has been extremely successful in the image translation domain (e.g. converting winter images to summer images), and it can also be used for music generation. It’s a simpler architecture than MuseGAN, and therefore easier for beginners to understand. If you’re curious what’s happening under the hood, you can learn more about the U-Net architecture in this Jupyter notebook.

Let’s go with MuseGAN, and give the new model a name.

Next, I just have to pick the dataset I want to train my model on.

Optionally, I can also set hyperparameters (i.e. training parameters), but I’ll go with default settings this time. Finally, I click on ‘Start training’, and AWS DeepComposer fires up a training job, taking care of all the infrastructure and machine learning setup for me.

About 8 hours later, the model has been trained, and I can use it to generate compositions. Here, I can use the new ‘rhythm assist’ feature, which helps correct the timing of musical notes in your input and makes sure notes are in time with the beat.

Getting started
AWS DeepComposer is available today in the US East (N. Virginia) region.

The service includes a 12-month Free Tier for all AWS customers, so you can generate 500 compositions using our sample models at no cost.

In addition to the Free Tier, ordering the keyboard from Amazon.com in the US, and linking it to the DeepComposer console will get you another 3 months of free trial!

picture of underside of the keyboard

Give AWS DeepComposer a try, and let us know what you think! You can send your feedback through your usual AWS Support contacts, or on the AWS Forum for DeepComposer.

– Julien

AWS Step Functions support in Visual Studio Code

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/aws-step-functions-support-in-visual-studio-code/

The AWS Toolkit for Visual Studio Code has been installed over 115,000 times since launching in July 2019. We are excited to announce toolkit support for AWS Step Functions, enabling you to define, visualize, and create your Step Functions workflows without leaving VS Code.

Version 1.8 of the toolkit provides two new commands in the Command Palette to help you define and visualize your workflows. The toolkit also provides code snippets for seven different Amazon States Language (ASL) state types and additional service integrations to speed up workflow development. Automatic linting detects errors in your state machine as you type, and provides tooltips to help you correct the errors. Finally, the toolkit allows you to create or update Step Functions workflows in your AWS account without leaving VS Code.

Defining a new state machine

To define a new Step Functions state machine, first open the VS Code Command Palette by choosing Command Palette from the View menu. Enter Step Functions to filter the available options and choose AWS: Create a new Step Functions state machine.

Screen capture of the Command Palette in Visual Studio Code with the text ">AWS Step Functions" entered

Creating a new Step Functions state machine in VS Code

A dialog box appears with several options to help you get started quickly. Select Hello world to create a basic example using a series of Pass states.

A screen capture of the Visual Studio Code Command Palette "Select a starter template" dialog with "Hello world" selected

Selecting the “Hello world” starter template

VS Code creates a new Amazon States Language file containing a workflow with examples of the Pass, Choice, Fail, Wait, and Parallel states.

A screen capture of a Visual Studio Code window with a "Hello World" example state machine

The “Hello World” example state machine

Pass states allow you to define your workflow before building the implementation of your logic with Task states. This lets you work with business process owners to ensure you have the workflow right before you start writing code. For more information on the other state types, see State Types in the ASL documentation.

Save your new workflow by choosing Save from the File menu. VS Code automatically applies the .asl.json extension.

Visualizing state machines

In addition to helping define workflows, the toolkit also enables you to visualize your workflows without leaving VS Code.

To visualize your new workflow, open the Command Palette and enter Preview state machine to filter the available options. Choose AWS: Preview state machine graph.

A screen capture of the Visual Studio Code Command Palette with the text ">Preview state machine" entered and the option "AWS: Preview state machine graph" highlighted

Previewing the state machine graph in VS Code

The toolkit renders a visualization of your workflow in a new tab to the right of your workflow definition. The visualization updates automatically as the workflow definition changes.

A screen capture of a Visual Studio Code window with two side-by-side tabs, one with a state machine definition and one with a preview graph for the same state machine

A state machine preview graph

Modifying your state machine definition

The toolkit provides code snippets for 12 different ASL states and service integrations. To insert a code snippet, place your cursor within the States object in your workflow and press Ctrl+Space to show the list of available states.

A screen capture of a Visual Studio Code window with a code snippet insertion dialog showing twelve Amazon States Language states

Code snippets are available for twelve ASL states

In this example, insert a newline after the definition of the Pass state, press Ctrl+Space, and choose Map State to insert a code snippet with the required structure for an ASL Map State.

Debugging state machines

The toolkit also includes features to help you debug your Step Functions state machines. Visualization is one feature, as it allows the builder and the product owner to confirm that they have a shared understanding of the relevant process.

Automatic linting is another feature that helps you debug your workflows. For example, when you insert the Map state into your workflow, a number of errors are detected, underlined in red in the editor window, and highlighted in red in the Minimap. The visualization tab also displays an error to inform you that the workflow definition has errors.

A screen capture of a Visual Studio Code window with a tooltip dialog indicating an "Unreachable state" error

A tooltip indicating an “Unreachable state” error

Hovering over an error opens a tooltip with information about the error. In this case, the toolkit is informing you that MapState is unreachable. Correct this error by changing the value of Next in the Pass state above from Hello World Example to MapState. The red underline automatically disappears, indicating the error has been resolved.

To finish reconciling the errors in your workflow, cut all of the following states from Hello World Example? through Hello World and paste into MapState, replacing the existing values of MapState.Iterator.States. The workflow preview updates automatically, indicating that the errors have been resolved. The MapState is indicated by the three dashed lines surrounding most of the workflow.

A Visual Studio Code window displaying two tabs, an updated state machine definition and the automatically-updated preview of the same state machine

Automatically updating the state machine preview after changes

Creating and updating state machines in your AWS account

The toolkit enables you to publish your state machine directly to your AWS account without leaving VS Code. Before publishing a state machine to your account, ensure that you establish credentials for your AWS account for the toolkit.

Creating a state machine in your AWS account

To publish a new state machine to your AWS account, bring up the VS Code Command Palette as before. Enter Publish to filter the available options and choose AWS: Publish state machine to Step Functions.

Screen capture of the Visual Studio Command Palette with the command "AWS: Publish state machine to Step Functions" highlighted

Publishing a state machine to AWS Step Functions

Choose Quick Create from the dialog box to create a new state machine in your AWS account.

Screen Capture from a Visual Studio Code flow to publish a state machine to AWS Step Functions with "Quick Create" highlighted

Publishing a state machine to AWS Step Functions

Select an existing execution role for your state machine to assume. This role must already exist in your AWS account.

For more information on creating execution roles for state machines, please visit Creating IAM Roles for AWS Step Functions.

Screen capture from Visual Studio Code showing a selection execution role dialog with "HelloWorld_IAM_Role" selected

Selecting an IAM execution role for a state machine

Provide a name for the new state machine in your AWS account, for example, Hello-World. The name must be from one to 80 characters, and can use alphanumeric characters, dashes, or underscores.

Screen capture from a Visual Studio Code flow entering "Hello-World" as a state machine name

Naming your state machine

Press the Enter or Return key to confirm the name of your state machine. The Output console opens, and the toolkit displays the result of creating your state machine. The toolkit provides the full Amazon Resource Name (ARN) of your new state machine on completion.

Screen capture from Visual Studio Code showing the successful creation of a new state machine in the Output window

Output of creating a new state machine

You can check creation for yourself by visiting the Step Functions page in the AWS Management Console. Choose the newly-created state machine and the Definition tab. The console displays the definition of your state machine along with a preview graph.

Screen capture of the AWS Management Console showing the newly-created state machine

Viewing the new state machine in the AWS Management Console

Updating a state machine in your AWS account

It is common to change workflow definitions as you refine your application. To update your state machine in your AWS account, choose Quick Update instead of Quick Create. Select your existing workflow.

A screen capture of a Visual Studio Code dialog box with a single state machine displayed and highlighted

Selecting an existing state machine to update

The toolkit displays “Successfully updated state machine” and the ARN of your state machine in the Output window on completion.
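
If you later want to script the same create and update operations outside the toolkit, for example in a deployment pipeline, a minimal sketch with the AWS SDK for Python (boto3) looks like the following. The definition, state machine name, account ID, and role ARN are placeholders; the execution role must already exist, just as in the toolkit flow.

import json
import boto3

sfn = boto3.client("stepfunctions")

# A minimal pass-only definition, standing in for the contents of your .asl.json file.
definition = {
    "StartAt": "Hello",
    "States": {"Hello": {"Type": "Pass", "Result": "Hello, world!", "End": True}},
}

# Create the state machine (equivalent in spirit to the toolkit's Quick Create).
response = sfn.create_state_machine(
    name="Hello-World",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/HelloWorld_IAM_Role",  # placeholder ARN; role must exist
)
print(response["stateMachineArn"])

# Later, push an updated definition (equivalent in spirit to Quick Update).
sfn.update_state_machine(
    stateMachineArn=response["stateMachineArn"],
    definition=json.dumps(definition),
)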

Summary

In this post, you learn how to use the AWS Toolkit for VS Code to create and update Step Functions state machines in your local development environment. You discover how sample templates, code snippets, and automatic linting can accelerate your development workflows. Finally, you see how to create and update Step Functions workflows in your AWS account without leaving VS Code.

Install the latest release of the toolkit and start building your workflows in VS Code today.

 

Announcing AWS Lambda support for .NET Core 3.1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/announcing-aws-lambda-supports-for-net-core-3-1/

This post is courtesy of Norm Johanson, Senior Software Development Engineer, AWS SDKs and Tools.

From today, you can develop AWS Lambda functions using .NET Core 3.1. You can deploy to Lambda by setting the runtime parameter value to dotnetcore3.1. Version 1.17.0.0 of the AWS Toolkit for Visual Studio and version 4.0.0 of the .NET Core Global Tool Amazon.Lambda.Tools are also available today. These make it easy to build and deploy your .NET Core 3.1 Lambda functions.

New features of .NET Core 3.1

.NET Core 3.1 brings many new runtime features to Lambda including C# 8.0 and F# 4.7 support, .NET Standard 2.1 support, new JSON serializer, and a new ReadyToRun feature for ahead-of-time compilation. There are also new versions of the .NET Lambda tooling and libraries. These include the Amazon.Lambda.AspNetCoreServer package, which allows you to run ASP.NET Core 3.1
projects as Lambda functions.

New Lambda JSON serializer

.NET Core Lambda functions support JSON serialization of input and return parameters. Use this feature by registering a serializer in your Lambda code. Typically, this is done with an assembly attribute like the following, which registers the JsonSerializer class from the Amazon.Lambda.Serialization.Json NuGet package as the serializer:

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

Amazon.Lambda.Serialization.Json uses the popular NuGet package Newtonsoft.Json for serialization. Newtonsoft.Json is a powerful serializer with many built-in features. This makes it a large assembly to add to your .NET Core Lambda functions.

Starting with .NET Core 3.0, a new JSON serializer called System.Text.Json is built into the .NET Core framework. This serializer is focused on the core features of serialization and built for performance. To take advantage of this new serializer, use the new NuGet package Amazon.Lambda.Serialization.SystemTextJson. Testing with this new serializer shows significant
improvements to Lambda cold start performance. The new Lambda blueprints available in Visual Studio or dotnet new via Amazon.Lambda.Templates default to this new serializer using the following assembly attribute.

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.LambdaJsonSerializer))]

Most .NET Lambda event packages work with either of the AWS serializer packages. For performance and simplicity reasons, the new versions of Amazon.Lambda.APIGatewayEvents and Amazon.Lambda.AspNetCoreServer only use the newer, faster Amazon.Lambda.Serialization.SystemTextJson for JSON serialization when targeting .NET Core 3.1.

ReadyToRun for better cold start performance

.NET Core 3.0 introduces a new build concept called ReadyToRun, which is also available in .NET Core 3.1. ReadyToRun performs much of the work of the just-in-time compiler used by the .NET runtime. If your project contains large amounts of code or large dependencies like the AWS SDK for .NET, this feature can significantly reduce cold start performance. It has less effect on small
functions using only the .NET Core base library.

To use ReadyToRun for Lambda, you must package your .NET Lambda function on Linux. You can use a Linux environment like an EC2 Linux instance, or CodeBuild, which also recently added .NET Core 3.1 support. Once you are in a Linux environment using .NET Lambda tooling, enable ReadyToRun by setting the --msbuild-parameters switch:

"/p:PublishReadyToRun=true --self-contained false"

For example, to deploy a function with ReadyToRun enabled using the .NET Core Global Tool for Lambda Amazon.Lambda.Tools, use:

dotnet lambda deploy-function R2RExample --msbuild-parameters "/p:PublishReadyToRun=true --self-contained false"

To avoid setting the switch on the command line, set msbuild-parameters as a property in the aws-lambda-tools-defaults.json file.

Updated AWS Mock .NET Lambda Test Tool

Lambda test tool

With .NET Core 2.1, AWS released the AWS .NET Core Mock Lambda Test Tool. This makes it easy to debug .NET Core Lambda functions. If you are using the AWS Toolkit for Visual Studio, the toolkit automatically installs or updates the test tool, and configures your launchSettings.json file. With the toolkit, you can use F5 debugging when the project opens.

For .NET Core 3.1, this tool offers new features. First, the way the tool loads .NET Lambda code internally has been redesigned. Previously, assemblies in customer code could collide with the test tool’s assemblies. Now the Lambda code is loaded in a separate AssemblyLoadContext, preventing this collision.

The test tool is an ASP.NET Core application that loads and executes the Lambda code. This allows the debugger that is currently attached to the test tool to debug the loaded Lambda code. Pressing F5 opens the web interface, allowing you to select the function, payload, and other parameters. Once everything is set, choose Execute to run the code inside the test tool’s process. To improve the debug turnaround cycle, there is a new switch: --no-ui. This skips the web interface after code changes, making it faster to debug your code.

My workflow for this tool is to use the web interface for the initial debug session, then save the request JSON. After the initial debug session, I edit the launchSettings.json file, which looks like this, to set up the port for the web interface:


{
  "profiles": {
    "Mock Lambda Test Tool": {
      "commandName": "Executable",
      "commandLineArgs": "--port 5050",
      "workingDirectory": ".\\bin\\$(Configuration)\\netcoreapp3.1",
      "executablePath": "C:\\Users\\%USERNAME%\\.dotnet\\tools\\dotnet-lambda-test-tool-3.1.exe"
    }
  }
}

I update this to:


{
  "profiles": {
    "Mock Lambda Test Tool": {
      "commandName": "Executable",
      "commandLineArgs": "—no-ui --payload SavedRequest",
      "workingDirectory": ".\\bin\\$(Configuration)\\netcoreapp3.1",
      "executablePath": "C:\\Users\\%USERNAME%\\.dotnet\\tools\\dotnet-lambda-test-tool-3.1.exe"
    }
  }
}

This uses the saved request as the input payload and skips the web interface.

For more information about this feature, see the new Documentation tab in the test tool after launching. Here is a demonstration of my debug workflow:

Lambda debug workflow

Amazon Linux 2

.NET Core 3.1, like Ruby 2.7, Python 3.8, Node.js 10 and 12, and Java 11, is based on an Amazon Linux 2 execution environment. Amazon Linux 2 provides a secure, stable, and high-performance execution environment to develop and run cloud and enterprise applications.

Migrate to .NET Core 3.1

To migrate existing .NET Core 2.1 Lambda functions to the new 3.1 runtime, follow the steps below:

  1. Open the csproj or fsproj file.
    • Set the TargetFramework element to netcoreapp3.1.
  2. Open the aws-lambda-tools-defaults.json file.
    • If it exists, set the function-runtime field to dotnetcore3.1
    • If it exists, set the framework field to netcoreapp3.1. If you remove the field, the value is inferred from the project file.
  3. If it exists, open the serverless.template file.
    • For any AWS::Lambda::Function or AWS::Serverless::Function resources, set the Runtime property to dotnetcore3.1.
  4. Update all Amazon.Lambda.* NuGet package references to the latest versions.

To use the new JSON serializer, follow these steps:

  1. Remove the NuGet package reference to Amazon.Lambda.Serialization.Json.
  2. Add the NuGet package reference to Amazon.Lambda.Serialization.SystemTextJson.
  3. In your code, where the LambdaSerializer attribute registers the JSON serializer, change the parameter to Amazon.Lambda.Serialization.SystemTextJson.LambdaJsonSerializer.

Conclusion

There is a blueprint in Visual Studio for detecting labels for images uploaded to S3:

Detect image labels

By converting this blueprint to .NET Core 3.1 and using the new JSON serializer and ReadyToRun features, the cold start time is reduced by 40% when using 256 MB of memory. Performance improvements vary, so be sure to try these new features in your Lambda functions.

Start building .NET Core 3.1 Lambda functions with the latest versions of the AWS Toolkit for Visual Studio or the .NET Core Global Tool Amazon.Lambda.Tools. If you are not using .NET Core Lambda tooling, specify dotnetcore3.1 as the runtime value in your preferred tool to deploy Lambda functions.

We would like to hear your feedback for AWS .NET Lambda support. Contact the AWS .NET Team for Lambda questions through our .NET Lambda GitHub repository.

Amazon Detective – Rapid Security Investigation and Analysis

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-detective-rapid-security-investigation-and-analysis/

Almost five years ago, I blogged about a solution that automatically analyzes AWS CloudTrail data to generate alerts upon sensitive API usage. It was a simple and basic solution for security analysis and automation. But demanding AWS customers have multiple AWS accounts and collect data from multiple sources, and simple searches based on regular expressions are not enough to conduct in-depth analysis of suspected security-related events. Today, when a security issue is detected, such as compromised credentials or unauthorized access to a resource, security analysts cross-analyze several data logs to understand the root cause of the issue and its impact on the environment. In-depth analysis often requires scripting and ETL to connect the dots between data generated by multiple siloed systems. It requires skilled data engineers to answer basic questions such as “is this normal?”. Analysts use Security Information and Event Management (SIEM) tools, third-party libraries, and data visualization tools to validate, compare, and correlate data to reach their conclusions. To further complicate matters, new AWS accounts and new applications are constantly introduced, forcing analysts to constantly reestablish baselines of normal behavior and to understand new patterns of activity every time they evaluate a new security issue.

Amazon Detective is a fully managed service that empowers users to automate the heavy lifting involved in processing large quantities of AWS log data to determine the cause and impact of a security issue. Once enabled, Detective automatically begins distilling and organizing data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud (VPC) Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment.

At re:Invent 2019, we announced a preview of Amazon Detective. Today, it is our pleasure to announce its availability to all AWS customers.

Amazon Detective uses machine learning models to produce graphical representations of your account behavior and helps you to answer questions such as “is this an unusual API call for this role?” or “is this spike in traffic from this instance expected?”. You do not need to write code, configure anything, or tune your own queries.

To get started with Amazon Detective, I open the AWS Management Console, I type “detective” in the search bar and I select Amazon Detective from the provided results to launch the service. I enable the service and I let the console guide me to configure “member” accounts to monitor and the “master” account in which to aggregate the data. After this one-time setup, Amazon Detective immediately starts analyzing AWS telemetry data and, within a few minutes, I have access to a set of visual interfaces that summarize my AWS resources and their associated behaviors such as logins, API calls, and network traffic. I search for a finding or resource from the Amazon Detective Search bar and, after a short while, I am able to visualize the baseline and current value for a set of metrics.

I select the resource type and ID and start to browse the various graphs.

I can also investigate an Amazon GuardDuty finding by using the native integrations within the GuardDuty and AWS Security Hub consoles. I click the “Investigate” link from any finding from GuardDuty and jump directly into the Amazon Detective console, which provides related details, context, and guidance to investigate and to respond to the issue. In the example below, GuardDuty reports unauthorized access that I decide to investigate:

The Amazon Detective console opens:

I scroll down the page to check the graph of failed API calls. I click a bar in the graph to get the details, such as the IP addresses where the calls originated:

Once I know the source IP addresses, I click New behavior: AWS role and observe where these calls originated from to compare with the automatically discovered baseline.

Amazon Detective works across your AWS accounts. It is a multi-account solution that aggregates data and findings from up to 1,000 AWS accounts into a single security-owned “master” account, making it easy to view behavioral patterns and connections across your entire AWS environment.

There are no agents, sensors, or additional software to deploy in order to use the service. Amazon Detective retrieves, aggregates, and analyzes data from Amazon GuardDuty, AWS CloudTrail, and Amazon VPC Flow Logs. Amazon Detective collects existing logs directly from AWS without touching your infrastructure, so it has no impact on cost or performance.

Amazon Detective can be administered via the AWS Management Console or via the Amazon Detective management APIs. The management APIs enable you to build Amazon Detective into your standard account registration, enablement, and deployment processes.
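
For example, a minimal sketch of enabling Detective and inviting member accounts with the AWS SDK for Python (boto3) might look like this; the account ID, email address, and invitation message are placeholders.

import boto3

detective = boto3.client("detective")

# Create the behavior graph in the security-owned "master" account.
graph_arn = detective.create_graph()["GraphArn"]

# Invite member accounts into the graph (account ID and email are placeholders).
detective.create_members(
    GraphArn=graph_arn,
    Message="Please accept the invitation to the Detective behavior graph.",
    Accounts=[
        {"AccountId": "111122223333", "EmailAddress": "security@example.com"},
    ],
)

# List the members and their invitation status.
print(detective.list_members(GraphArn=graph_arn)["MemberDetails"])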

Amazon Detective is a regional service. I activate the service in every AWS Region in which I want to analyze findings. All data is processed in the AWS Region where it is generated. Amazon Detective maintains data analytics and log summaries in the behavior graph for a 1-year rolling period from the date of log ingestion. This allows for visual analysis and deep dives over a large data set for a long period of time. When I disable the service, all data is expunged to ensure no data remains.

There are no additional charges or upfront commitments required to use Amazon Detective. We charge per GB of data ingested from AWS CloudTrail, Amazon VPC Flow Logs, and Amazon GuardDuty findings. Amazon Detective offers a 30-day free trial. As usual, check the pricing page for the details.

Amazon Detective is available in all commercial AWS Regions, except China. You can start to use it today.

— seb

New – Use AWS IAM Access Analyzer in AWS Organizations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-use-aws-iam-access-analyzer-in-aws-organizations/

Last year at AWS re:Invent 2019, we released AWS Identity and Access Management (IAM) Access Analyzer, which helps you understand who can access resources by analyzing permissions granted using policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues.

AWS IAM Access Analyzer uses automated reasoning, a form of mathematical logic and inference, to determine all possible access paths allowed by a resource policy. We call these analytical results provable security, a higher level of assurance for security in the cloud.

Today I am pleased to announce that you can create an analyzer in the AWS Organizations master account or a delegated member account with the entire organization as the zone of trust. Now, for each analyzer, you can set the zone of trust to be either a particular account or an entire organization, defining the logical bounds the analyzer uses as the basis for its findings. This helps you quickly identify when resources in your organization can be accessed from outside of your AWS organization.

AWS IAM Access Analyzer for AWS Organizations – Getting started
You can enable IAM Access Analyzer in your organization with one click in the IAM console. Once enabled, IAM Access Analyzer analyzes policies and reports a list of findings for resources that grant public or cross-account access from outside your AWS organization, both in the IAM console and through APIs.

When you create an analyzer on your organization, it recognizes your organization as a zone of trust, meaning all accounts within the organization are trusted to have access to AWS resources. Access Analyzer then generates findings that identify access to your resources from outside of the organization.

For example, if you create an analyzer for your organization, it provides active findings for resources such as S3 buckets in your organization that are accessible publicly or from outside the organization.

When policies change, IAM Access Analyzer automatically triggers a new analysis and reports new findings based on the policy changes. You can also trigger a re-evaluation manually. You can download the details of findings into a report to support compliance audits.

Analyzers are specific to the Region in which they are created. You need to create a unique analyzer for each Region where you want to enable IAM Access Analyzer.

You can create multiple analyzers for your entire organization in your organization’s master account. Additionally, you can choose a member account in your organization as a delegated administrator for IAM Access Analyzer. When you choose a member account as the delegated administrator, that member account has permission to create analyzers within the organization. Individual accounts can also create analyzers to identify resources accessible from outside those accounts.
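
As a rough sketch of how this could be automated with the AWS SDK for Python (boto3), the following registers a delegated administrator, creates an organization-wide analyzer, and lists its findings. The service principal string, account ID, and analyzer name are assumptions to adapt to your own setup.

import boto3

# Optional: delegate administration of IAM Access Analyzer to a member account.
# Run this from the organization's master account; the service principal is an assumption.
organizations = boto3.client("organizations")
organizations.register_delegated_administrator(
    AccountId="111122223333",
    ServicePrincipal="access-analyzer.amazonaws.com",
)

# Create an analyzer with the entire organization as the zone of trust,
# then list its findings (analyzer name is a placeholder).
accessanalyzer = boto3.client("accessanalyzer")
analyzer_arn = accessanalyzer.create_analyzer(
    analyzerName="org-analyzer",
    type="ORGANIZATION",
)["arn"]

for finding in accessanalyzer.list_findings(analyzerArn=analyzer_arn)["findings"]:
    print(finding["resource"], finding["status"])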

IAM Access Analyzer sends an event to Amazon EventBridge for each generated finding, for a change to the status of an existing finding, and when a finding is deleted, so you can monitor IAM Access Analyzer findings with EventBridge. All IAM Access Analyzer actions are also logged by AWS CloudTrail, and findings are sent to AWS Security Hub. Using the information collected by CloudTrail, you can determine the request that was made to Access Analyzer, the IP address from which the request was made, who made the request, when it was made, and additional details.

Now available!
This integration is available in all AWS Regions where IAM Access Analyzer is available. There is no extra cost for creating an analyzer with an organization as the zone of trust. You can learn more through the AWS re:Invent 2019 talks Dive Deep into IAM Access Analyzer and Automated Reasoning on AWS. Take a look at the feature page and the documentation to learn more.

Please send us feedback either in the AWS forum for IAM or through your usual AWS support contacts.

Channy;

Now Open – Third Availability Zone in the AWS Canada (Central) Region

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/now-open-third-availability-zone-in-the-aws-canada-central-region/

When you start an EC2 instance, or store data in an S3 bucket, it’s easy to underestimate what an AWS Region is. Right now, we have 22 across the world, and while they look like dots on a global map, they are architected to let you run applications and store data with high availability and fault tolerance. In fact, each of our Regions is made up of multiple data centers, which are geographically separated into what we call Availability Zones (AZs).

Today, I am very happy to announce that we added a third AZ to the AWS Canada (Central) Region to support our customer base in Canada.

This third AZ provides customers with additional flexibility to architect scalable, fault-tolerant, and highly available applications, and will support additional AWS services in Canada. We opened the Canada (Central) Region in December 2016, just over 3 years ago, and we’ve more than tripled the number of available services as we bring on this third AZ.
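
To see which Availability Zones your account can use in the Canada (Central) Region once the new AZ is live, you can query EC2 directly; here is a small sketch using the AWS SDK for Python (boto3).

import boto3

ec2 = boto3.client("ec2", region_name="ca-central-1")

# List the Availability Zones currently available to your account in Canada (Central).
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])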

Each AZ is in a separate and distinct geographic location, with enough distance to significantly reduce the risk of a single event impacting availability in the Region, yet near enough for business continuity applications that require rapid failover and synchronous replication. For example, our Canada (Central) Region is located in the Montreal area of Quebec, and the upcoming new AZ will be on the mainland, more than 45 km/28 miles away from the next-closest AZ as the crow flies.

Where we place our Regions and AZs is a deliberate and thoughtful process that takes into account not only latency or distance, but also risk profiles. To keep the risk profile low, we look at decades of data related to floods and other environmental factors before we settle on a location. Montreal was heavily impacted in 1998 by a massive ice storm that crippled the power grid and brought down more than 1,000 transmission towers, leaving four million people in neighboring provinces and some areas of New York and Maine without power. To ensure that AWS infrastructure can withstand inclement weather such as this, half of the AZ interconnections use underground cables and are out of the reach of potential ice storms. In this way, every AZ is connected to the other two AZs by at least one 100% underground fiber path.

We’re excited to bring a new AZ to Canada to serve our incredible customers in the region. Here are some examples from different industries, courtesy of my colleagues in Canada:

Healthcare – AlayaCare delivers cloud-based software to home care organizations across Canada and all over the world. As a home healthcare technology company, they need in-country data centers to meet regulatory requirements.

Insurance – Aviva is delivering a world-class digital experience to its insurance clients in Canada and the expansion of the AWS Region is welcome as they continue to move more of their applications to the cloud.

E-Learning – D2L leverages various AWS Regions around the world, including Canada, to deliver a seamless experience for their clients. They have been on AWS for more than four years, and recently completed an all-in migration.

With this launch, AWS now has 70 AZs within 22 geographic Regions around the world, with five more Regions coming. We are continuously looking at expanding our infrastructure footprint globally, driven largely by customer demand.

To see how we use AZs in Amazon, have a look at this article on Static stability using Availability Zones by Becky Weiss and Mike Furr. It’s part of the Amazon Builders’ Library, a place where we share what we’ve learned over the years.

For more information on our global infrastructure, and the custom hardware we use, check out this interactive map.

Danilo



Working From Home? Here’s How AWS Can Help

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/working-from-home-heres-how-aws-can-help/

Just a few weeks and so much has changed. Old ways of living, working, meeting, greeting, and communicating are gone for a while. Friendly handshakes and warm hugs are not healthy or socially acceptable at the moment.

My colleagues and I are aware that many people are dealing with changes in their work, school, and community environments. We’re taking measures to support our customers, communities, and employees to help them to adjust and deal with the situation, and will continue to do more.

Working from Home
With people in many cities and countries now being asked to work or learn from home, we believe that some of our services can help to make the transition from the office or the classroom to the home just a bit easier. Here’s an overview of our solutions:

Amazon WorkSpaces lets you launch virtual Windows and Linux desktops that can be accessed anywhere and from any device. These desktops can be used for remote work, remote training, and more.

Amazon WorkDocs makes it easy for you to collaborate with others, also from anywhere and on any device. You can create, edit, share, and review content, all stored centrally on AWS.

Amazon Chime supports online meetings with up to 100 participants (growing to 250 later this month), including chats and video calls, all from a single application.

Amazon Connect lets you set up a call or contact center in the cloud, with the ability to route incoming calls and messages to tens of thousands of agents. You can use this to provide emergency information or personalized customer service, while the agents are working from home.

Amazon AppStream lets you deliver desktop applications to any computer. You can deliver enterprise, educational, or telemedicine apps at scale, including those that make use of GPUs for computation or 3D rendering.

AWS Client VPN lets you set up secure connections to your AWS and on-premises networks from anywhere. You can give your employees, students, or researchers the ability to “dial in” (as we used to say) to your existing network.

Some of these services have special offers designed to make it easier for you to get started at no charge; others are already available to you under the AWS Free Tier. You can learn more on the home page for each service, and on our new Remote Working & Learning page.

You can sign up for and start using these services without talking to us, but we are here to help if you need more information or help choosing the right service(s) for your needs. Here are some points of contact:

If you are already an AWS customer, your Technical Account Manager (TAM) and Solutions Architect (SA) will be happy to help.

Some Useful Content
I am starting a collection of other AWS-related content that will help you use these services and work from home as efficiently as possible. Here's what I have so far:

If you create something similar, share it with me and I’ll add it to my list.

Please Stay Tuned
This is, needless to say, a dynamic and unprecedented situation and we are all learning as we go.

I do want you to know that we’re doing our best to help. If there’s something else that you need, please do not hesitate to reach out. Go through your normal AWS channels first, but contact me if you are in a special situation and I’ll do my best!

Jeff;

 

Improving Transparency of AWS Elastic Beanstalk

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/improving-transparency-of-aws-elastic-beanstalk/

This post is courtesy of David LaBissoniere, Software Development Manager, AWS Elastic Beanstalk.

Today I want to discuss two recent announcements from the AWS Elastic Beanstalk team which improve transparency into our planning and development. We launched a new public roadmap, and we shifted to developing the Elastic Beanstalk command line interface (EB CLI) on GitHub as a community-involved open source project.

Public Roadmap

In January, we launched an experimental public roadmap on GitHub, joining other teams like AWS container services, AWS CloudFormation, and AWS App Mesh. The roadmap allows us to be more transparent about our priorities, and enables you to directly influence them. You can propose a feature by opening a GitHub issue, or comment on existing issues. 2020 is shaping up to be a significant year for us, and as we continue to invest in the service, we want customer input to help direct our focus.

The roadmap itself is built as a GitHub project board and contains five columns:

Just Shipped — Launched and available for production use.
Public Beta — Available in a preview form but not yet recommended for production usage.
Coming Soon — Launching soon, generally within the next one to three months.
We’re Working On It — In progress, but further out.
Researching — We’re interested in this feature but are still thinking about the best way to implement it.

Screen capture of the AWS Elastic Beanstalk project board on GitHub

Please feel free to create a GitHub issue for a feature you want us to support, or give a thumbs-up to existing issues. We’d also love to hear from you in the issue comments about how you’d like to use a particular feature or how you think it should work. While the roadmap doesn’t include every single item we are working on, it does include many of the regular incremental launches customers rely on, for example, new platform runtime updates like PHP 7.3 or .NET Core 3.1. We’re starting out with a subset of our planned and in-flight work, and expect to gradually expand our use of the roadmap over the course of the year.

EB CLI on GitHub

A popular way to use Elastic Beanstalk is our command line interface, the EB CLI. As of January 16, it is hosted on GitHub as an Apache 2.0-licensed open source project. We plan to do nearly all of our CLI development openly on GitHub and welcome pull requests from the community. Many customers rely on the EB CLI as part of their development and deployment workflows. We hope to improve transparency into this critical tool by open-sourcing it, and we also hope you join us in improving it.
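
If you want to kick the tires on the open source project, here is a minimal sketch of installing and using the EB CLI from source; the repository URL and steps are assumptions based on this announcement, not official installation instructions.

# Work in an isolated virtual environment
python3 -m venv ebcli-env && source ebcli-env/bin/activate

# Install the released package from PyPI...
pip install awsebcli

# ...or install straight from the GitHub source to try out unreleased changes
# (repository name assumed from the announcement)
git clone https://github.com/aws/aws-elastic-beanstalk-cli.git
cd aws-elastic-beanstalk-cli && pip install . && cd ..

# Verify the install and use the CLI as usual
eb --version
eb init     # configure an application in the current directory
eb create   # create an environment and deploy to it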

We’re thrilled to start off the year with these two announcements. Watch the roadmap for more announcements in this space!

Host Your Apps with AWS Amplify Console from the AWS Amplify CLI

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/host-your-apps-with-aws-amplify-console-from-the-aws-amplify-cli/

Have you tried out AWS Amplify and AWS Amplify Console yet? In my opinion, they provide one of the fastest ways to get a new web application from idea to prototype on AWS. So what are they? AWS Amplify is an opinionated framework for building modern applications, with a toolchain for easily adding services like authentication (via Amazon Cognito), storage (via Amazon Simple Storage Service (S3)), or GraphQL APIs, all via a command-line interface. AWS Amplify Console makes continuous deployment and hosting for your modern web apps easy. It supports hosting the frontend and backend assets for single page app (SPA) frameworks including React, Angular, Vue.js, Ionic, and Ember. It also supports static site generators like Gatsby, Eleventy, Hugo, VuePress, and Jekyll.

With today’s launch, hosting options available from the AWS Amplify CLI now include Amplify Console in addition to S3 and Amazon CloudFront. By using Amplify Console, you can take advantage of features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains.

Initializing an Amplify App

Let’s take a look at a quick example. We’ll be deploying a static site demo of Amazon Transcribe. I’ve already got the AWS Command Line Interface (CLI) installed, as well as the AWS Amplify CLI. I’ve forked and then cloned the sample code to my local machine. In the following gif, you can see the initialization process for an AWS Amplify app. (I sped things up a little for the gif. It might take a few seconds for your app to be created.)

Terminal session showing the "amplify init" workflow

Now that I’ve got my app initialized, I can add additional services. Let’s add some hosting via AWS Amplify Console. After choosing Amplify Console for hosting, I can pick manual deployment or continuous deployment using a git-based workflow.
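
For readers following along in a terminal, the CLI side of this step looks roughly like the sketch below. The prompt wording varies between Amplify CLI versions, so treat it as an outline rather than a transcript.

# Add Amplify Console hosting to an existing Amplify project
amplify add hosting
# ? Select the plugin module to execute: Hosting with Amplify Console
# ? Choose a type: Continuous deployment (Git-based deployments)
#   (a browser window opens to connect the Git repository)

# If you choose "Manual deployment" instead, publish from the CLI:
amplify publish   # builds the frontend and uploads the artifacts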

Continuous Deployment

First, I’m going to set up continuous deployment so that changes to our git repo will trigger a build and deploy.

A screenshot of a terminal session adding Amplify Console to an Amplify project

The workflow for configuring continuous deployment requires a quick browser session. First, I select our git provider. The forked repo is on GitHub, so I need to authorize Amplify Console to use my GitHub account.

Screenshot of git provider selection

Once a provider is authorized, I choose the repo and branch to watch for changes.

Screenshot of repo and branch selection

AWS Amplify Console auto-detected the correct build settings, based on the contents of package.json.

Screenshot of build settings

Once I've confirmed the settings, the initial build and deploy will start. Then any changes to the selected git branch will result in additional builds and deploys. Now I need to finish the workflow in the CLI, and I need the ARN of the new Amplify Console app for that. In the browser, under App Settings and then General, I copy the ARN and then paste it into my terminal, and check the status.

A screenshot of a terminal window where the app ARN is being set

A quick check of the URL in my browser confirms that the app has been successfully deployed.

A screenshot of the sample app we deployed in this post

Manual Deploys

Manual deploys with Amplify Console also provide a bunch of useful features. The CLI can now manage front-end environments, making it easy to add a test or dev environment. It’s also easy to add URL redirects and rewrites, or add a username/password via HTTP Basic Auth.

Configuring manual deploys is straightforward. Just set your environment name. When it's time to deploy, run amplify publish and the build scripts defined during the initialization of the project will run. The generated artifact is then uploaded automatically.

A screenshot of a terminal window where manual deploys are configured

With manual deployments, you can set up multiple frontend environments (e.g. dev and prod) directly from the CLI. To create a new dev environment, run amplify env add (name it dev) and amplify publish. This will create a second frontend environment in Amplify Console. To view all your frontend and backend environments, run amplify console from the CLI to open your Amplify Console app.
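
Laid out as commands, that flow is roughly the following sketch; the environment name dev is just an example.

# Create a new environment (when prompted, name it dev) and deploy to it
amplify env add
amplify publish

# List environments, or jump to the Amplify Console app in the browser
amplify env list
amplify console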

Ever since using AWS Amplify Console for the first time a few weeks ago, it has become my go-to way to deploy applications, especially static sites. I’m excited to see the simplicity of hosting with AWS Amplify Console extended to the Amplify CLI, and I hope you are too. Happy building!

— Brandon

Vulkan is coming to Raspberry Pi: first triangle

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/vulkan-raspberry-pi-first-triangle/

Following on from our recent announcement that Raspberry Pi 4 is OpenGL ES 3.1 conformant, we have some more news to share on the graphics front. We have started work on a much requested feature: an open-source Vulkan driver!

Vulkan

Standards body Khronos describes Vulkan as “a new generation graphics and compute API that provides high-efficiency, cross-platform access to modern GPUs”. The Vulkan API has been designed to better accommodate modern GPUs and address common performance bottlenecks in OpenGL, providing graphics developers with new means to squeeze the best performance out of the hardware.

First triangle

The “first triangle” image is something of a VideoCore graphics tradition: while I arrived at Broadcom too late to witness the VideoCore III version, I still remember the first time James and Gary were able to get a flawless, single-tile, RGB triangle out of VideoCore IV in simulation. So, without further ado, here’s the VideoCore VI Vulkan version.

First triangle out of Vulkan

Before you get too excited, remember that this is just the start of the development process for Vulkan on Raspberry Pi. While there have been community efforts in the direction of Vulkan support (originally on VideoCore IV) as far back as 2018, Igalia has only been working on this new driver for a few weeks, and we still have a very long development roadmap ahead of us before we can put an actual driver in the hands of our users. So don’t hold your breath, and instead look forward to more news from us and Igalia as they make further development progress.

The post Vulkan is coming to Raspberry Pi: first triangle appeared first on Raspberry Pi.

Announcing CloudTrail Insights: Identify and Respond to Unusual API Activity

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/announcing-cloudtrail-insights-identify-and-respond-to-unusual-api-activity/

Building software in the cloud makes it easy to instrument systems for logging from the very beginning. With tools like AWS CloudTrail, tracking every action taken on AWS accounts and services is straightforward, providing a way to find the event that caused a given change. But not all log entries are useful. When things are running smoothly, those log entries are like the steady, reassuring hum of machinery on a factory floor. When things start going wrong, that hum can make it harder to hear which piece of equipment has gone a bit wobbly. The same is true with large scale software systems: the volume of log data can be overwhelming. Sifting through those records to find actionable information is tedious. It usually requires a lot of custom software or custom integrations, and can result in false positives and alert fatigue when new services are added.

That’s where software automation and machine learning can help. Today, we’re launching AWS CloudTrail Insights in all commercial AWS regions. CloudTrail Insights automatically analyzes write management events from CloudTrail trails and alerts you to unusual activity. For example, if there is an increase in TerminateInstance events that differs from established baselines, you’ll see it as an Insight event. These events make finding and responding to unusual API activity easier than ever.

Enabling AWS CloudTrail Insights

CloudTrail tracks user activity and API usage. It provides an event history of AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. With the launch of AWS CloudTrail Insights, you can enable machine learning models that detect unusual activity in these logs with just a few clicks. AWS CloudTrail Insights will analyze historical API calls, identifying usage patterns and generating Insight Events for unusual activity.

Screenshot showing how to enable CloudTrail Insights

You can also enable Insights on a trail from the AWS Command Line Interface (CLI) by using the put-insight-selectors command:

$ aws cloudtrail put-insight-selectors --trail-name trail_name --insight-selectors '[{"InsightType": "ApiCallRateInsight"}]'

Once enabled, CloudTrail Insights sends events to the S3 bucket specified on the trail details page. Events are also sent to CloudWatch Events, and optionally to a CloudWatch Logs log group, just like other CloudTrail events. This gives you options when it comes to alerting, from sophisticated rules that respond to CloudWatch Events to custom AWS Lambda functions. After enabling Insights, historical events for the trail will be analyzed. Anomalous usage patterns found will appear in the CloudTrail console within 30 minutes.

Using CloudTrail Insights

In this post we'll take a look at some AWS CloudTrail Insights events from the AWS Console. If you'd like to view Insights events from the AWS CLI, you can use the CloudTrail LookupEvents call with the event-category parameter.

$ aws cloudtrail lookup-events --event-category insight [--max-items] [--lookup-attributes]

Quickly scanning the list of CloudTrail Insights, the RunInstances event jumps out at me. Spinning up more EC2 instances can be expensive, and I've definitely misconfigured things before and created more instances than I needed, so I want to take a closer look. Let's filter the list down to just these events and see what we can learn from AWS CloudTrail Insights.

Let’s dig in to the latest event.

Here we see that over the course of one minute, there was a spike in RunInstances API call volume. From the Insights graph, we can see the raw event as JSON.

{
    "Records": [
        {
            "eventVersion": "1.07",
            "eventTime": "2019-11-07T13:25:00Z",
            "awsRegion": "us-east-1",
            "eventID": "a9edc959-9488-4790-be0f-05d60e56b547",
            "eventType": "AwsCloudTrailInsight",
            "recipientAccountId": "-REDACTED-",
            "sharedEventID": "c2806063-d85d-42c3-9027-d2c56a477314",
            "insightDetails": {
                "state": "Start",
                "eventSource": "ec2.amazonaws.com",
                "eventName": "RunInstances",
                "insightType": "ApiCallRateInsight",
                "insightContext": {
                    "statistics": {
                        "baseline": {
                            "average": 0.0020833333},
                        "insight": {
                            "average": 6}
                    }
                }
            },
            "eventCategory": "Insight"},
        {
            "eventVersion": "1.07",
            "eventTime": "2019-11-07T13:26:00Z",
            "awsRegion": "us-east-1",
            "eventID": "33a52182-6ff8-49c8-baaa-9caac16a96ce",
            "eventType": "AwsCloudTrailInsight",
            "recipientAccountId": "-REDACTED-",
            "sharedEventID": "c2806063-d85d-42c3-9027-d2c56a477314",
            "insightDetails": {
                "state": "End",
                "eventSource": "ec2.amazonaws.com",
                "eventName": "RunInstances",
                "insightType": "ApiCallRateInsight",
                "insightContext": {
                    "statistics": {
                        "baseline": {
                            "average": 0.0020833333},
                        "insight": {
                            "average": 6},
                        "insightDuration": 1}
                }
            },
            "eventCategory": "Insight"}
    ]}

Here we can see that the baseline API call volume is 0.002. That means that there's usually roughly one call to RunInstances every 500 minutes, so the activity we see in the graph is definitely not normal. By clicking over to the CloudTrail Events tab we can see the individual events that are grouped into this Insight event. It looks like this was probably a normal EC2 autoscaling activity, but I still want to dig in and confirm.
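
If you prefer to check the numbers from the command line, a quick jq pass over the raw Insight event shown above pulls out the same comparison; the file name here is hypothetical.

# Compare the unusual call rate to the baseline using the "End" record of the
# Insight event saved above (insight-event.json is a made-up file name)
jq -r '.Records[]
       | select(.insightDetails.state == "End")
       | .insightDetails.insightContext.statistics
       | "baseline=\(.baseline.average) insight=\(.insight.average) ratio=\(.insight.average / .baseline.average)"' insight-event.json
# baseline=0.0020833333 insight=6 ratio=2880.0000...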

By expanding an event in this tab and clicking “View Event,” I can head directly to the event in CloudTrail for more information. After reviewing the event metadata and associated EC2 and IAM resources, I’ve confirmed that while this behavior was unusual, it’s not a cause for concern. It looks like autoscaling did what it was supposed to and that the correct type of instance was created.

Things to Know

Before you get started, here are some important things to know:

  • CloudTrail Insights costs $0.35 for every 100,000 write management events analyzed for each Insight type. At launch, API call volume insights are the only type available.
  • Activity baselines are scoped to the region and account in which the CloudTrail trail is operating.
  • After an account enables Insights events for the first time, if unusual activity is detected, you can expect to receive the first Insights events within 36 hours of enabling Insights.
  • New unusual activity is logged as it is discovered, sending Insight Events to your destination S3 buckets and the AWS console within 30 minutes in most cases.

Let me know if you have any questions or feature requests, and happy building!

— Brandon

 

The Raspberry Pi Foundation and Bebras

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/bebras-partnership/

We are delighted to announce a new partnership that will ensure the long-term growth and success of the free, annual UK Bebras Computational Thinking Challenge.

Bebras UK logo

‘Bebras’ means ‘beaver’ in Lithuanian; Prof. Valentina Dagiene named the competition after this hard-working, intelligent, and lively animal.

The Raspberry Pi Foundation has teamed up with Oxford University to support the Bebras Challenge, which every November invites students to use computational thinking to solve classical computer science problems re-worked into accessible and interesting questions.

Bebras is:

  • Open to students aged 6 to 18 (and it’s quite good fun for adults too)
  • A great whole-school activity
  • Completely free
  • Easy to sign up to and take part in online
  • Open for two weeks every November; this year it runs from 4 to 15 November and you’ve still got until 31 October to register!

Woman teacher and female students at a computer

Why should I get involved in the Bebras Challenge?

Bebras is an international challenge that started in Lithuania in 2004. Participating in Bebras is a great way to engage students of all ages in the fun of problem solving, and to give them an insight into computing and what it’s all about. Computing principles are highlighted in the answers, so Bebras can be quite educational for teachers too.

Male teacher and female student at a computer
Male teacher and male students at a computer
Woman teacher and female student at a laptop

The UK became involved in Bebras for the first time in 2013, and the number of participating students has increased from 21,000 in the first year to 202,000 last year. Internationally, more than 2.78 million learners took part in 2018.

  • Bebras runs from 4 to 15 November this year
  • The challenge takes 40 minutes to complete
  • Use the practice questions on the website to get your students used to what they'll encounter in the challenge
  • All the marking is done for you
  • The results are sent to you the week after the challenge ends, along with an answer booklet, so that you can go through the answers with your learners
  • The highest-achieving students in each age group are invited to Oxford University to take part in the second round over a weekend in January

To give you a taste of what Bebras involves, try this example question!

You’ve still got three more days to sign up for this year’s Bebras Challenge.

Support computational thinking at your school throughout the year with Bebras

The annual challenge is only one part of the equation: questions from previous years are available as a resource with which teachers can create self-marking quizzes to use with their classes! This means you can support the computational thinking part of the school curriculum throughout the whole year.

Male teacher and male students at a computer

 

You can also use the Bebras App to try 100 computational thinking problems, and download sets of Bebras Cards for primary schools.

Follow @bebrasuk to stay up to date with what’s on offer for you.

The post The Raspberry Pi Foundation and Bebras appeared first on Raspberry Pi.

Improve Your App Testing With Amplify Console’s Pull Requests Previews and Cypress Testing

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/improve-your-app-testing-with-amplify-consoles-pull-requests-previews-and-cypress-testing/

Amplify Console allows developers to easily configure a Git-based workflow for continuous deployment and hosting of fullstack serverless web apps. Fullstack serverless apps comprise backend resources such as GraphQL APIs, Data and File Storage, Authentication, or Analytics, integrated with a frontend framework such as React, Gatsby, or Angular. You can read more about the Amplify Console in a previous article I wrote.

Today, we are announcing the ability to create preview URLs and to run end-to-end tests on pull requests before releasing code to production.

Pull Request previews
You can now configure Amplify Console to deploy your application to a unique URL every time a developer submits a pull request to your Git repository. The preview URL is completely different from the one used by the production site. You can see how changes look before merging the pull request into the main branch of your code repository, triggering a new release in the Amplify Console. For fullstack apps with backend environments provisioned via the Amplify CLI, every pull request spins up an ephemeral backend that is deleted when the pull request is closed. You can test changes in complete isolation from the production environment. Amplify Console creates backend infrastructure for pull requests on private Git repositories only, to avoid incurring extra costs from unsolicited pull requests.

To learn how it works, let’s start a web application with a cloud-based authentication backend, and deploy it on Amplify Console. I first create a React application (check here to learn how to install React).

npx create-react-app amplify-console-demo                                                
cd amplify-console-demo

I initialize the Amplify environment (learn how to install the Amplify CLI first). I add a cloud-based authentication backend powered by Amazon Cognito, accepting all the default answers proposed by the Amplify CLI.

npm install aws-amplify aws-amplify-react
amplify init
amplify add auth
amplify push

I then modify src/App.js to add the front end authentication user interface. The code is available in the AWS Amplify documentation. Once ready, I start the local development server to test the application locally.

npm run start

I point my browser to http://localhost:8080 to verify the scaffolding (the screenshot below is taken from my AWS Cloud9 development environment). I click Create account to create a user, verify the SignUp flow, and authenticate to the app.

After signing up, I see the application page.

There are two important details to note. First, I am using a private GitHub repository. Amplify Console only creates backend infrastructure on pull requests for private repositories, to avoid creating unnecessary infrastructure for unsolicited pull requests. Second, the Amplify Console build process looks for dependencies in package-lock.json only. This is why I added the Amplify packages with npm and not with yarn.

When I am happy with my app, I push the code to a GitHub repo (let’s assume I already did git remote add origin ...).

git add amplify
git commit -am "initial commit"
git push origin master

The next step consists of configuring Amplify Console to build and deploy my app on every git commit. I log in to the Amplify Console, click Connect App, choose GitHub as the repository provider, and click Continue (the first time I do this, I need to authenticate on GitHub, using my GitHub username and password).

I select my repository and the branch I want to use as source:

Amplify Console detects the type of project and proposes a build file. I select the name of my environment (dev). The first time I use Amplify Console, I follow the instructions to create a new service role. This role authorises Amplify Console to access AWS backend services on my behalf.

I click Next. I review the settings and click Save and Deploy. After a few seconds or minutes, my application is ready. I can point my browser to the deployment URL and verify the app is working correctly.

Now, let's enable previews for pull requests. Click Preview on the left menu and Enable Previews. To enable the previews, Amplify Console requires an app to be installed in my GitHub account. I follow the instructions provided by the console to configure my GitHub account. Once set up, I select a branch and click Manage to enable or disable the pull request previews. (At any time, I can uninstall the Amplify app from my GitHub account by visiting the Applications section of my GitHub account's settings.)

Now that the mechanism is in place, let’s create a pull request.

I edit App.js directly on GitHub. I customize the withAuthenticator component to change the color of the Sign In button from orange to green. I save the changes and I create a pull request.

On the Pull Request detail page, I click Show all checks to get the status of the Amplify Console test. I see AWS Amplify Console Web Preview in progress. Amplify Console creates a full backend environment to test the pull request, then builds and deploys the frontend.

Eventually, I see All checks have passed and a green mark. I click Details to get the preview URL. In case of an error, you can see the detailed log file of the build phase in the Amplify Console.

I can also check the status of the preview in the Amplify Console.

I point my browser to the preview URL to test my change. I can see the green Sign In button instead of the orange one.

When I try to authenticate using the username and password I created previously, I receive a User does not exist error message, because this preview URL points to a different backend than the main application. I can see two Cognito user pools in the Cognito console, one for each environment.

I can control who can access the preview URL using access control settings similar to those I use for the main URL.

When I am happy with the proposed changes, I merge the pull request on GitHub to trigger a new build and to deploy the change to the production environment. Amplify Console deletes the preview environment upon merging. The ephemeral backend environment created for the pull request also gets deleted.

Cypress testing
In addition to previewing changes before merging them to the main branch, we also added the capability to run end-to-end tests during your build process. You can use your favorite test framework to add unit or end-to-end tests to your application and automatically run the tests during the build phase. When you use the Cypress test framework, Amplify Console detects the tests in your source tree and automatically adds a testing phase to your application's build process.

Only projects that pass all tests are pushed down your pipeline to the deployment phase. You can learn more about this and follow the step-by-step instructions we posted a few weeks ago.
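
If you have not used Cypress before, getting a first end-to-end run working locally is quick; here is a sketch (the spec path is just an example, and Amplify Console then picks the tests up during the build).

# Add Cypress to the project and run the suite headlessly, as a CI build would
npm install --save-dev cypress
npx cypress open                                            # interactive runner, scaffolds example specs
npx cypress run --spec "cypress/integration/auth.spec.js"   # headless run (example spec path)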

These two additions to Amplify Console allow you to gain higher confidence in the robustness of your pipeline and the quality of the code delivered to your production environment.

Availability
Web previews are available in all Regions where AWS Amplify Console is available today, at no additional cost on top of the regular Amplify Console pricing. With the AWS Free Usage Tier, you can get started for free. Upon sign up, new AWS customers receive 1,000 build minutes per month for the build and deploy feature, and 15 GB served per month and 5 GB data storage per month for the hosting.

— seb

Learn From Your VPC Flow Logs With Additional Meta-Data

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/learn-from-your-vpc-flow-logs-with-additional-meta-data/

Flow Logs for Amazon Virtual Private Cloud enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow Logs data can be published to Amazon CloudWatch Logs or Amazon Simple Storage Service (S3).

Since we launched VPC Flow Logs in 2015, you have been using it for a variety of use cases, like troubleshooting connectivity issues across your VPCs, intrusion detection, anomaly detection, or archival for compliance purposes. Until today, VPC Flow Logs provided information that included source IP, source port, destination IP, destination port, action (accept, reject), and status. Once enabled, a VPC Flow Log entry looks like the one below.

While this information was sufficient to understand most flows, it required additional computation and lookup to match IP addresses to instance IDs or to guess the directionality of the flow to come to meaningful conclusions.

Today we are announcing the availability of additional meta-data that you can include in your Flow Logs records to better understand network flows. The enriched Flow Logs allow you to simplify your scripts, or remove the need for post-processing altogether, by reducing the number of computations or lookups required to extract meaningful information from the log data.

When you create a new VPC Flow Log, in addition to existing fields, you can now choose to add the following meta-data:

  • vpc-id: the ID of the VPC containing the source Elastic Network Interface (ENI).
  • subnet-id: the ID of the subnet containing the source ENI.
  • instance-id: the Amazon Elastic Compute Cloud (EC2) instance ID of the instance associated with the source interface. When the ENI is placed by AWS services (for example, AWS PrivateLink, NAT Gateway, Network Load Balancer, etc.), this field will be “-”.
  • tcp-flags: the bitmask for TCP flags observed within the aggregation period. For example, FIN is 0x01 (1), SYN is 0x02 (2), ACK is 0x10 (16), SYN + ACK is 0x12 (18), etc. (the bits are specified in the “Control Bits” section of RFC 793, “Transmission Control Protocol Specification”).
    This helps you understand who initiated or terminated the connection. TCP uses a three-way handshake to establish a connection. The connecting machine sends a SYN packet to the destination, the destination replies with a SYN + ACK, and, finally, the connecting machine sends an ACK. In the Flow Logs, the handshake is shown as two lines, with tcp-flags values of 2 (SYN) and 18 (SYN + ACK). ACK is reported only when it is accompanied by SYN (otherwise it would be too much noise for you to filter out).
  • type: the type of traffic: IPv4, IPv6, or Elastic Fabric Adapter.
  • pkt-srcaddr: the packet-level IP address of the source. You typically use this field in conjunction with srcaddr to distinguish between the IP address of an intermediate layer through which traffic flows, such as a NAT gateway.
  • pkt-dstaddr: the packet-level destination IP address, similar to the previous field, but for destination IP addresses.

To create a VPC Flow Log, you can use the AWS Management Console, the AWS Command Line Interface (CLI), or the CreateFlowLogs API, and select which additional fields to include and the order in which you want them to appear, for example:

Or using the AWS Command Line Interface (CLI) as below:

$ aws ec2 create-flow-logs --resource-type VPC \
                            --region eu-west-1 \
                            --resource-ids vpc-12345678 \
                            --traffic-type ALL  \
                            --log-destination-type s3 \
                            --log-destination arn:aws:s3:::sst-vpc-demo \
                            --log-format '${version} ${vpc-id} ${subnet-id} ${instance-id} ${interface-id} ${account-id} ${type} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${pkt-srcaddr} ${pkt-dstaddr} ${protocol} ${bytes} ${packets} ${start} ${end} ${action} ${tcp-flags} ${log-status}'

# be sure to replace the bucket name and VPC ID !

{
    "ClientToken": "1A....HoP=",
    "FlowLogIds": [
        "fl-12345678123456789"
    ],
    "Unsuccessful": [] 
}

Enriched VPC Flow Logs are delivered to S3. We will automatically add the required S3 bucket policy to authorize VPC Flow Logs to write to your S3 bucket. VPC Flow Logs does not capture real-time log streams for your network interface; it might take several minutes to begin collecting and publishing data to the chosen destinations. Your logs will eventually be available on S3 at s3://<bucket name>/AWSLogs/<account id>/vpcflowlogs/<region>/<year>/<month>/<day>/
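
Once the first delivery lands, you can sanity-check it from the CLI. Here is a sketch that reuses the demo bucket from the command above.

# List the delivered flow log objects (bucket name from the example above)
aws s3 ls --recursive s3://sst-vpc-demo/AWSLogs/

# Download the gzipped log files and peek at a few records
aws s3 cp s3://sst-vpc-demo/AWSLogs/ ./flowlogs --recursive --exclude "*" --include "*.log.gz"
find ./flowlogs -name "*.log.gz" -exec gunzip -c {} + | head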

An SSH connection from my laptop with IP address 90.90.0.200 to an EC2 instance would appear like this:

3 vpc-exxxxxx2 subnet-8xxxxf3 i-0bfxxxxxxaf eni-08xxxxxxa5 48xxxxxx93 IPv4 172.31.22.145 90.90.0.200 22 62897 172.31.22.145 90.90.0.200 6 5225 24 1566328660 1566328672 ACCEPT 18 OK
3 vpc-exxxxxx2 subnet-8xxxxf3 i-0bfxxxxxxaf eni-08xxxxxxa5 48xxxxxx93 IPv4 90.90.0.200 172.31.22.145 62897 22 90.90.0.200 172.31.22.145 6 4877 29 1566328660 1566328672 ACCEPT 2 OK

172.31.22.145 is the private IP address of the EC2 instance, the one you see when you type ifconfig on the instance. All flags are OR-ed during the aggregation period. When the connection is short, both SYN and FIN (3), as well as SYN + ACK and FIN (19), will likely be set on the same lines.
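
A quick way to convince yourself of the flag arithmetic is to OR the individual bit values in the shell:

# TCP flag bits: FIN=0x01, SYN=0x02, ACK=0x10
echo $(( 0x02 | 0x10 ))           # SYN + ACK       -> 18
echo $(( 0x02 | 0x01 ))           # SYN + FIN       -> 3
echo $(( 0x02 | 0x10 | 0x01 ))    # SYN + ACK + FIN -> 19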

Once a Flow Log is created, you cannot add additional fields or modify the structure of the log; this ensures you will not accidentally break scripts consuming this data. Any modification will require you to delete and recreate the VPC Flow Log. There is no additional cost to capture the extra information in the VPC Flow Logs; normal VPC Flow Logs pricing applies. Remember that enriched VPC Flow Log records might consume more storage when you select all fields, so we recommend selecting only the fields relevant to your use cases.

Enriched VPC Flow Logs are available in all Regions where VPC Flow Logs is available, and you can start using them today.

— seb

PS: I heard from the team they are working on adding additional meta-data to the logs, stay tuned for updates.

Amazon Transcribe Streaming Now Supports WebSockets

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/amazon-transcribe-streaming-now-supports-websockets/

I love services like Amazon Transcribe. They are the kind of just-futuristic-enough technology that excites my imagination the same way that magic does. It's incredible that we have accurate, automatic speech recognition for a variety of languages and accents, in real-time. There are so many use cases, and nearly all of them are intriguing. Until now, the Amazon Transcribe Streaming API has been available using HTTP/2 streaming. Today, we're adding WebSockets as another integration option for bringing real-time voice capabilities to the things you build.

In this post, we are going to transcribe speech in real-time using only client-side JavaScript in a browser. But before we can build, we need a foundation. We’ll review just enough information about Amazon Transcribe, WebSockets, and the Amazon Transcribe Streaming API to broadly explain the demo. For more detailed information, check out the Amazon Transcribe docs.

If you are itching to see things in action, you can head directly to the demo, but I recommend taking a quick read through this post first.

What is Amazon Transcribe?

Amazon Transcribe applies machine learning models to convert speech in audio to text transcriptions. One of the most powerful features of Amazon Transcribe is the ability to perform real-time transcription of audio. Until now, this functionality has been available via HTTP/2 streams. Today, we’re announcing the ability to connect to Amazon Transcribe using WebSockets as well.

For real-time transcription, Amazon Transcribe currently supports British English (en-GB), US English (en-US), French (fr-FR), Canadian French (fr-CA), and US Spanish (es-US).

What are WebSockets?

WebSockets are a protocol built on top of TCP, like HTTP. While HTTP is great for short-lived requests, it hasn't historically been good at handling situations that require persistent real-time communications. While an HTTP connection is normally closed at the end of the message, a WebSocket connection remains open. This means that messages can be sent bi-directionally with no bandwidth or latency added by handshaking and negotiating a connection. WebSocket connections are full-duplex, meaning that the server and client can both transmit data at the same time. They were also designed for cross-domain usage, so there's no messing around with cross-origin resource sharing (CORS) as there is with HTTP.

HTTP/2 streams solve a lot of the issues that HTTP had with real-time communications, and the first Amazon Transcribe Streaming API uses HTTP/2. WebSocket support opens Amazon Transcribe Streaming up to a wider audience, and makes integrations easier for customers that might have existing WebSocket-based integrations or knowledge.

How the Amazon Transcribe Streaming API Works

Authorization

The first thing we need to do is authorize an IAM user to use Amazon Transcribe Streaming WebSockets. In the AWS Management Console, attach the following policy to your user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "transcribestreaming",
            "Effect": "Allow",
            "Action": "transcribe:StartStreamTranscriptionWebSocket",
            "Resource": "*"
        }
    ]
}
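
If you would rather attach this policy from the command line than the console, an equivalent sketch looks like this; the user name is hypothetical, and it assumes the JSON above is saved to a local file.

# Attach the inline policy above to an IAM user (user name is hypothetical)
aws iam put-user-policy \
    --user-name transcribe-demo-user \
    --policy-name transcribestreaming \
    --policy-document file://transcribe-streaming-policy.json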

Authentication

Transcribe uses AWS Signature Version 4 to authenticate requests. For WebSocket connections, you use a pre-signed URL that contains all of the necessary information as query parameters. This gives us an authenticated endpoint that we can use to establish our WebSocket.

Required Parameters

All of the required parameters are included in our pre-signed URL as part of the query string. These are:

  • language-code: The language code. One of en-US, en-GB, fr-FR, fr-CA, es-US.
  • sample-rate: The sample rate of the audio, in Hz. Max of 16000 for en-US and es-US, and 8000 for the other languages.
  • media-encoding: Currently only pcm is valid.
  • vocabulary-name: Amazon Transcribe allows you to define custom vocabularies for uncommon or unique words that you expect to see in your data. To use a custom vocabulary, reference it here.

Audio Data Requirements

There are a few things that we need to know before we start sending data. First, Transcribe expects audio to be encoded as PCM data. The sample rate of a digital audio file relates to the quality of the captured audio. It is the number of times per second (Hz) that the analog signal is checked in order to generate the digital signal. For high-quality data, a sample rate of 16,000 Hz or higher is recommended. For lower-quality audio, such as a phone conversation, use a sample rate of 8,000 Hz. Currently, US English (en-US) and US Spanish (es-US) support sample rates up to 48,000 Hz. Other languages support rates up to 16,000 Hz.

In our demo, the file lib/audioUtils.js contains a downsampleBuffer() function for reducing the sample rate of the incoming audio bytes from the browser, and a pcmEncode() function that takes the raw audio bytes and converts them to PCM.

Request Format

Once we've got our audio encoded as PCM data with the right sample rate, we need to wrap it in an envelope before we send it across the WebSocket connection. Each message consists of three headers, followed by the PCM-encoded audio bytes in the message body. The entire message is then encoded as a binary event stream message and sent. If you've used the HTTP/2 API before, there's one difference that I think makes using WebSockets a bit more straightforward: you don't need to cryptographically sign each chunk of audio data you send.

Response Format

The messages we receive follow the same general format: they are binary-encoded event stream messages, with three headers and a body. But instead of audio bytes, the message body contains a Transcript object. Partial responses are returned until a natural stopping point in the audio is determined. For more details on how this response is formatted, check out the docs and have a look at the handleEventStreamMessage() function in main.js.

Let’s See the Demo!

Now that we've got some context, let's try out a demo. I've deployed it using AWS Amplify Console – take a look, or push the button to deploy your own copy. Enter the Access Key ID and Secret Access Key for the IAM user you authorized earlier, hit the Start Transcription button, and start speaking into your microphone.

Deploy to Amplify Console

The complete project is available on GitHub. The most important file is lib/main.js. This file defines all our required dependencies, wires up the buttons and form fields in index.html, accesses the microphone stream, and pushes the data to Transcribe over the WebSocket. The code has been thoroughly commented and will hopefully be easy to understand, but if you have questions, feel free to open issues on the GitHub repo and I'll be happy to help. I'd like to extend a special thanks to Karan Grover, Software Development Engineer on the Transcribe team, for providing the code that formed the basis of this demo.

New Regions, New Features, and a New Web Site

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/new-regions-new-features-and-a-new-web-site/

It’s a busy time here on the Digital User Engagement Team at AWS!

Last week, we made Amazon Pinpoint available in the Asia Pacific (Mumbai) and Asia Pacific (Sydney) AWS Regions. This is great news for new Pinpoint customers in these areas of the globe who were previously concerned with issues related to latency and data residency. Existing Amazon Pinpoint customers can also use these new Regions to increase availability and create geographical redundancy.

On Tuesday of this week, we also launched two exciting improvements to the Amazon Pinpoint console. The first improvement is a tool that you can use to import customer segments in just a few clicks. Previously, if you wanted to import customer data into Pinpoint, you had to save the data in a CSV or JSON file, upload it to an S3 bucket, create a segment in Pinpoint, and enter the full path to the S3 bucket. Now, you can drag and drop files right into the segment importer. To learn more, see the Pinpoint User Guide.
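
If you script your imports rather than using the console, the same operation is exposed through the CreateImportJob API. Here is a hedged CLI sketch: the application ID, bucket, role, and segment name are placeholders, and the request fields reflect my reading of the API reference rather than the new console workflow.

# Import endpoints from a CSV file in S3 and define a segment from them
# (all identifiers below are placeholders)
aws pinpoint create-import-job \
    --application-id 1234567890abcdef1234567890abcdef \
    --import-job-request '{
        "Format": "CSV",
        "S3Url": "s3://my-pinpoint-imports/customers.csv",
        "RoleArn": "arn:aws:iam::111122223333:role/PinpointSegmentImport",
        "DefineSegment": true,
        "SegmentName": "imported-customers",
        "RegisterEndpoints": true
    }'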

The other new feature that we released this week is an improved email editor. Our previous email editor only allowed you to include a limited set of HTML tags in your emails. With our new editor, however, you can include any HTML tags that you want. The new editor also includes a helpful side-by-side view that renders your message in real-time, as shown in the following image.

Users who don’t want to work with HTML code can also use the Design view to create and modify emails in an intuitive, WYSIWYG interface. For more information, see the Pinpoint User Guide.

Finally, we launched a new website for Amazon Pinpoint at https://aws.amazon.com/pinpoint. On our new site, you can learn more about the capabilities of Amazon Pinpoint. You’ll find in-depth information about all of the features, channels, and use cases that Amazon Pinpoint supports.

Every day, we’re amazed by the things that our customers do with Amazon Pinpoint. We hope these changes help you do even more incredible things!

Compute Module 3+ on sale now from $25

Post Syndicated from James Adams original https://www.raspberrypi.org/blog/compute-module-3-on-sale-now-from-25/

Today we bring you the latest iteration of the Raspberry Pi Compute Module series: Compute Module 3+ (CM3+). This newest version of our flexible board for industrial applications offers over ten times the ARM performance, twice the RAM capacity, and up to eight times the Flash capacity of the original Compute Module.

Raspberry Pi Compute Module 3+

A long time ago…

On 7 April 2014 we launched the original Compute Module (CM1), with a Broadcom BCM2835 application processor, a single-core ARM11 at 700MHz, 512MB of RAM, and 4GB of eMMC Flash. Although it seems like yesterday, that was nearly half a decade ago! At that point I had no kids, looked significantly younger (probably because I had no kids), and had more hair (fortunately I’m still better off in that department than Eben). [This is fair – Ed.]

Just under three years later we launched Compute Module 3 (CM3) based on the quad-core BCM2837A1, and now, almost exactly two years on, we bring you the CM3+.

The Compute Module has evolved

While we’ve greatly improved the performance, RAM capacity, and Flash capacity of the Compute Module, some things remain the same: CM3+ is an evolution of CM3 and CM1, bringing new features while keeping the form factor, electrical compatibility, price point, and ease of use of the earlier products.

Our aim for the Compute Module was to deliver the core Raspberry Pi technology in a form factor that allowed others to incorporate it into their own products cheaply and easily. If someone wanted to create a Raspberry Pi-based product but found the Model A or B Raspberry Pi boards did not fit their needs, they could use a Compute Module, create a simple low-tech carrier PCB, and make their own thing.

It’s for enterprises of all sizes

We limit the price so that the “maker in a shed” is not disadvantaged when producing only a few hundred products relative to professionals with much larger production runs. The Compute Module takes care of the high-tech bits (fine-pitched BGAs, high-speed memory interfaces, and core power supply), allowing the designer to focus on the differentiating features they really care about. The eMMC Flash device on a Compute Module is more reliable and robust than normal SD cards, so it is more suited to industrial applications. The Compute Module also provides more interfaces than the regular Raspberry Pi, supporting two cameras and two displays, as well as extra GPIO.

A Compute Module 3+ inserted into a Compute Module IO board

CM3+ in CMIO board

CM1 and CM3 have proven very popular, with sales increasing steadily. We don’t generally get to see what the majority of our module customers are using them for, because they’re often companies that understandably want to keep the insides of their products secret, but one nice example application is Revolution Pi from Kunbus. Many NEC digital-signage displays incorporate a socket for CM3, and there are some excellent community efforts too, of which our current favourite is this nifty dual camera board. We’ve also seen enterprising companies start offering turnkey design services using the Compute Module, such as that offered by Kunst Engineering.

So what is Compute Module 3+?

CM3+ is derived from the CM3 board, but incorporates the improved thermal design and Broadcom BCM2837B0 application processor from Raspberry Pi 3B+. This means that, with the exception of a small increase in z-height, CM3+ is a drop-in replacement for CM3 from an electrical and form-factor perspective. Note that due to power-supply limitations the maximum processor speed remains at 1.2GHz, compared to 1.4GHz for Raspberry Pi 3B+.

One of the most frequent requests from users and customers is for Compute Module variants with more on-board Flash memory. CM1 and CM3 both came with 4GB of Flash, and although we are fans of the Henry Ford philosophy of customer choice (“you can have any colour, as long as it’s black”), it was obvious that there was a need for more official options.

With CM3+ we are making available three different eMMC Flash sizes, in addition to a Flash-less “Lite” variant, all at competitive prices:

Product       Unit price
CM3+/Lite     $25
CM3+/8GB      $30
CM3+/16GB     $35
CM3+/32GB     $40

As CM3+ is a new product, it will need a recent version of the Raspberry Pi firmware (and operating system such as Raspbian) to operate correctly.
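
On an existing image, bringing the firmware and OS up to date is the usual apt workflow; a quick sketch of the commands (as found on Raspbian at the time) is below.

# Bring the OS and firmware packages up to date, then check what is running
sudo apt update
sudo apt full-upgrade -y
vcgencmd version    # reports the firmware build date
uname -a            # reports the running kernel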

Thermals

Due to the improved PCB thermal design and BCM2837B0 processor, the CM3+ has better thermal behaviour under load. It has more thermal mass and can draw heat away from the processor faster than CM3. This can translate into lower average temperatures and/or longer sustained operation under heavy load before the processor hits 80°C and begins to reduce its clock speed.

Note that CM3+ will still output the same amount of heat as CM3 for any given application, so performance (and particularly sustained performance) will depend heavily on the design of the carrier PCB and enclosure. As always, we recommend that product designers pay careful attention to thermal performance under expected use cases.

Having characterised the behaviour of the new product, we have broadened the rated ambient temperature range to -20°C to 70°C.

Development Kit

We are also releasing a refreshed Compute Module 3+ Development Kit today. This kit contains 1 x Lite and 1 x 32GB CM3+ module, a Compute Module IO board, camera and display adapters, jumper wires, and a programming cable.

Updated datasheet

Our Compute Module datasheets have been updated to include a new one for CM3+.

Long-term availability

CM3+ will be available until at least January 2026.

We are also moving the “legacy” CM1, CM3 and CM3 Lite products to “not recommended for new designs” status. They will continue to be available until at least January 2023 as previously stated, but we recommend customers use CM3+ for new designs, and where possible move existing designs to CM3+ for improved performance and longer availability.

Compute Module 3+ is, like Raspberry Pi 3B+, the last in a line of 40nm-based Raspberry Pi products. We feel that it’s a fitting end to the line, rolling in the best bits of Raspberry Pi 3B+ and providing users with more design flexibility in an all‑round better product. We hope you enjoy it.

The post Compute Module 3+ on sale now from $25 appeared first on Raspberry Pi.