
New for AWS Control Tower – Region Deny and Guardrails to Help You Meet Data Residency Requirements

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-requirements/

Many customers, such as those in highly regulated industries and the public sector, want to have control over where their data is stored and processed. AWS already offers many tools and features to comply with local laws and regulations, but we want to provide a simplified way to translate data residency requirements into controls that can be applied to single- and multi-account environments.

Starting today, you can use AWS Control Tower to deploy data residency preventive and detective controls, referred to as guardrails. These guardrails will prevent provisioning resources in unwanted AWS Regions by restricting access to AWS APIs through service control policies (SCPs) built and managed by AWS Control Tower. In this way, content cannot be created or transferred outside of your selected Regions at the infrastructure level. In this context, content can be software (including machine images), data, text, audio, video, or images hosted on AWS for processing or storage. For example, AWS customers in Germany can deny access to AWS services in Regions outside of Frankfurt with the exception of global services such as AWS Identity and Access Management (IAM) and AWS Organizations.

AWS Control Tower also offers guardrails to further control data residency in underlying AWS service options, for example, blocking Amazon Simple Storage Service (Amazon S3) cross-region replication or blocking the creation of internet gateways.

The AWS account used for managing AWS Control Tower is not restricted by the new Region deny settings. That account can be used for remediation if you have data in an unwanted Region before enabling Region deny.

Detective guardrails are implemented via AWS Config rules and can further detect unexpected configuration changes that should not be allowed.

You still retain a shared responsibility model for data residency at the application level, but these controls can help you restrict what infrastructure and application teams can do on AWS.

Using Data Residency Guardrails in AWS Control Tower
To use the new data residency guardrails, you need to have created a landing zone using AWS Control Tower. See Plan your AWS Control Tower landing zone for more information.

To see all the new controls that are available, I select Guardrails on the left pane of the AWS Control Tower console and then find those in the Data Residency category. I sort results by Behavior. Guardrails that have a Prevention behavior are implemented as SCPs. Those that have a Detection behavior are implemented as AWS Config rules.

Console screenshot.

The most interesting guardrail is probably the one denying access to AWS based on the requested AWS Region. I choose it from the list and find that it differs from the other guardrails: it affects all Organizational Units (OUs), and it cannot be activated here but only in the landing zone settings.

Console screenshot.

Below the Overview, in the Guardrail components, there is a link to the full SCP for this guardrail, and I can see the list of AWS APIs that will still be allowed in non-governed Regions when this setting is enabled. Depending on your requirements, some of those services, such as Amazon CloudFront or AWS Global Accelerator, can be further limited by a custom SCP.

In the Landing zone settings, the Region deny guardrail is currently not enabled. I choose Modify settings and then enable the Region deny settings.

Console screenshot.

Below the Region deny settings, there is the list of AWS Regions governed by the landing zone. Those will be the only Regions allowed when I enable Region deny.

Console screenshot.

In my case, I have four governed Regions, two in the US and two in Europe:

  • US East (N. Virginia), which is also the home Region for the landing zone
  • US West (Oregon)
  • Europe (Ireland)
  • Europe (Frankfurt)

I choose Update landing zone at the bottom. The update of the landing zone takes a few minutes to complete. Now, the vast majority of the AWS APIs are blocked if they are not directed to one of those governed Regions. Let’s do a few tests.

Testing Region Deny in a Sandbox Account
Using AWS Single Sign-On, I copy the AWS credentials to use the sandbox account with AWSAdministratorAccess permissions. In a terminal, I paste the commands setting the environment variables to use those credentials.

Console screenshot.

Now, I try to start a new Amazon Elastic Compute Cloud (Amazon EC2) instance in US East (Ohio), one of the non-governed Regions. In a landing zone, the default VPC is replaced by a VPC managed by AWS Control Tower. To start the instance, I need to specify a VPC subnet. Let’s find a subnet ID that I can use.

aws ec2 describe-subnets --query 'Subnets[0].SubnetId' --region us-east-2

An error occurred (UnauthorizedOperation) when calling the DescribeSubnets operation:
You are not authorized to perform this operation.

As expected, I am not authorized to perform this operation in US East (Ohio). Let’s try to start an EC2 instance without passing the subnet ID.

aws ec2 run-instances --image-id ami-0dd0ccab7e2801812 --region us-east-2 \
    --instance-type t3.small                                     

An error occurred (UnauthorizedOperation) when calling the RunInstances operation:
You are not authorized to perform this operation.
Encoded authorization failure message: <ENCODED MESSAGE>

Again, I am not authorized. More information is included in the encoded authorization failure message that I can decode as described in this article:

aws sts decode-authorization-message --encoded-message <ENCODED MESSAGE>

The decoded message (which I have omitted for brevity) tells me that there was an explicit deny on my request and includes the full SCP that caused the deny. This information is really useful for debugging these kinds of errors.
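
If you prefer to script the decoding step, the same call can be made with the AWS SDK. Here is a minimal sketch in Python using boto3; the encoded message value is a placeholder for the one returned by the failed call:

import json

import boto3

sts = boto3.client("sts")

# <ENCODED MESSAGE> stands for the value returned by the failed RunInstances call
response = sts.decode_authorization_message(EncodedMessage="<ENCODED MESSAGE>")

# The decoded message is a JSON string that includes the SCP statement that denied the request
print(json.dumps(json.loads(response["DecodedMessage"]), indent=2))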

Now, let’s try in US East (N. Virginia), one of the four governed Regions.

aws ec2 describe-subnets --query 'Subnets[0].SubnetId' --region us-east-1
"subnet-0f3580c0c5e56c210"

This time, the command returns the subnet ID of the first subnet returned by the request. Let’s start an instance in US East (N. Virginia) using this subnet.

aws ec2 run-instances --image-id  ami-04ad2567c9e3d7893 --region us-east-1 \
    --instance-type t3.small --subnet-id subnet-0f3580c0c5e56c210

As expected, it works, and I can see the EC2 instance running in the console.

Console screenshot.

Similarly, APIs for other AWS services are limited by the Region deny settings. For example, I can’t create an S3 bucket in a non-governed Region.

Console screenshot.

When I try to create the bucket, I get an access denied error.

Console screenshot.

As expected, the creation of an S3 bucket works in a governed Region.

Even if someone gives this account access to a bucket in a non-governed Region, I would not be able to copy any data into that bucket.
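
To reproduce the S3 test from code instead of the console, here is a minimal sketch with boto3; the bucket name is hypothetical:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-east-2")  # a non-governed Region

try:
    # The bucket name is just an example
    s3.create_bucket(
        Bucket="my-region-deny-test-bucket",
        CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
    )
except ClientError as error:
    # With Region deny enabled, this call fails with an access denied error
    print(error.response["Error"]["Code"])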

Other preventive guardrails can enforce data residency, for example:

  • Disallow cross-region networking for Amazon EC2, Amazon CloudFront, and AWS Global Accelerator
  • Disallow internet access for an Amazon VPC instance managed by a customer
  • Disallow Amazon Virtual Private Network (VPN) connections

Now, let’s see how detective guardrails work.

Testing Detective Guardrails in a Sandbox Account
I enable the following guardrails for all accounts in the sandbox OU:

  • Detect whether Amazon EBS snapshots are restorable by all AWS accounts
  • Detect whether public routes exist in the route table for an internet gateway

Now, I want to see what happens if I go against these guardrails. In the EC2 console, I create an EBS snapshot for the volume of the EC2 instance I started before. Then, I modify permissions to share it with all AWS accounts.

Console screenshot.
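
For reference, the same noncompliant snapshot sharing can be scripted. Here is a minimal sketch with boto3, where the volume ID is a placeholder and the public sharing is intentional only because this is a sandbox test:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a snapshot of an existing volume (the volume ID is a placeholder)
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Detective guardrail test",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Make the snapshot restorable by all AWS accounts (this is what the guardrail detects)
ec2.modify_snapshot_attribute(
    SnapshotId=snapshot["SnapshotId"],
    Attribute="createVolumePermission",
    OperationType="add",
    GroupNames=["all"],
)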

Then, in the VPC console, I create an internet gateway, attach it to the AWS Control Tower managed VPC, and update the route table of one of the private subnets to use the internet gateway.

Console screenshot.

After a few minutes, the noncompliant resources in the sandbox account are found by the detective guardrails.

Console screenshot.

I look at the information provided by the guardrails and update my configuration to fix the issues. In a multi-account setup, I’d contact the account owner and ask for remediation.

Availability and Pricing
You can use data-residency guardrails to control resources in any AWS Region. To create a landing zone, you should start from one of the Regions where AWS Control Tower is offered. For more information, see the AWS Regional Services List. There is no additional cost for this feature. You pay the costs of other services used, such as AWS Config.

This feature provides you with a framework of controls and guidance for setting up a multi-account environment that addresses data residency requirements. Depending on your use case, you may use any subset of the new data residency guardrails.

Set up guardrails based on your data residency requirements with AWS Control Tower.

Danilo

Announcing Amazon SageMaker Canvas – a Visual, No Code Machine Learning Capability for Business Analysts

Post Syndicated from Alex Casalboni original https://aws.amazon.com/blogs/aws/announcing-amazon-sagemaker-canvas-a-visual-no-code-machine-learning-capability-for-business-analysts/

For an organization facing business problems and dealing with data on a daily basis, the ability to build systems that can predict business outcomes is very important. This ability lets you solve problems and move faster by automating slow processes and embedding intelligence into your IT systems.

But how do you make sure that all teams and individual decision makers in the organization are empowered to create these machine learning (ML) systems at scale, and without depending on other data science and data engineering teams? As a business user or data analyst, you’d like to build and use prediction systems based on the data that you analyze and process every day, without having to learn about hundreds of algorithms, training parameters, evaluation metrics, and deployment best practices.

Today, I’m excited to announce the general availability of Amazon SageMaker Canvas, a new visual, no-code capability that allows business analysts to build ML models and generate accurate predictions without writing code or requiring ML expertise. Its intuitive user interface lets you browse and access disparate data sources in the cloud or on premises, combine datasets with the click of a button, train accurate models, and then generate new predictions once new data is available.

SageMaker Canvas leverages the same technology as Amazon SageMaker to automatically clean and combine your data, create hundreds of models under the hood, select the best performing one, and generate new individual or batch predictions. It supports multiple problem types such as binary classification, multi-class classification, numerical regression, and time series forecasting. These problem types let you address business-critical use cases, such as fraud detection, churn reduction, and inventory optimization, without writing a single line of code.

SageMaker Canvas in Action
Imagine that I’m an e-commerce manager who needs to predict whether or not a product will be shipped on time. The datasets at my disposal consist of a product catalog and the historical shipping dataset, both in CSV format.

First, I enter the SageMaker Canvas application where all of my models and datasets are created and inspected.

I select Import, and upload two CSV files: ProductData.csv and ShippingData.csv. I have 120 products and 10,000 shipping records.

I could also fetch data from Amazon Simple Storage Service (Amazon S3) or connect to other cloud or on-premises data sources, such as Amazon Redshift or Snowflake. For this use case, I prefer to upload 1.6 MB of data directly from my computer.

Before confirming the import, I have a chance to preview the two datasets, their columns, and their respective values. For example, each product has a ComputerBrand, ScreenSize, and PackageWeight. In addition to useful columns such as ShippingOrigin, OrderDate, and ShippingPriority, each record in the shipping dataset also contains OnTimeDelivery, which is either On Time or Late. This column will be used by SageMaker Canvas to generate a prediction model based on historical data.

After a few seconds of processing, the datasets are ready, and I decide to join them to create a single dataset containing both product and shipping information. This is an optional step that often lets you increase the precision of a prediction model.

Now I can simply drag and drop the two datasets: SageMaker Canvas will automatically identify the shared ProductId column and apply an Inner Join transformation.

The join preview lets me visualize the resulting columns, identify missing or invalid values, and optionally deselect unwanted columns.

I select Save joined data and provide a new name for this joined dataset, which now includes 16 columns and 10,000 records.

Next, I want to create a model and start by selecting New model in the Models section on the left menu. I call it On Time Prediction Model.

The first step is selecting a dataset.

I select a target column that my model will predict: OnTimeDelivery.

SageMaker Canvas shows me the value distribution and already recommends the most appropriate model type: two categories classification.

Before proceeding with the model training, I have the option to generate an analysis report. This analysis gives me two very important pieces of information: the estimated accuracy and the impact of each column.

The estimated accuracy of 99.9% gives me confidence, but then I notice that the highest impact is provided by the ActualShippingDays column. Unfortunately, this column is not available in advance and I can’t use it for my predictions. So I deselect it and run the analysis again.

The new estimated accuracy is 94.2%, which is still pretty high. The most impactful columns are ShippingPriority, YShippingDistance, XShippingDistance, and Carrier. This is great because all of this information is available in advance and can be used for a prediction. On the other hand, product-related columns, such as PackageWeight and ScreenSize, have very small impacts on the prediction. This means that in the future I could simplify the overall process by feeding only shipping information into the training and prediction phases.

I’m happy with the analysis insights. Therefore, I decide to proceed and build a prediction model by selecting the Standard build option.

Now I can go for a walk, attend a few productive meetings, or simply spend some time with family. SageMaker Canvas is doing all of the work for me, training hundreds of models behind the scenes. It will select the best performing one, so that I can start generating accurate predictions in a couple of hours. Of course, the training duration will vary depending on the dataset size and problem type.

After about an hour and a half, the model is ready and the console lets me analyze its accuracy and the column impacts visually. I’m also happy to see that the model predicts the correct value 95.8% of the time, which is even higher than the estimated accuracy.

Optionally, I could also inspect advanced metrics such as Precision, Recall, F1 Score, and so on. These metrics help me understand how the model is performing and what kind of false positives and false negatives I can expect from this model.

From here, I could share the model into Amazon SageMaker Studio or continue using the Canvas UI to generate new predictions.

I decide to continue with the intuitive UI and select Predict. Now I can work with individual records or with a dataset for batch predictions.

When selecting Single prediction, SageMaker Canvas simplifies my life and lets me start from an existing record. I modify the column values and get immediate feedback on the prediction and the corresponding feature importance.

This quick feedback loop and intuitive UI allows me to use the ML model without having to write custom code. In case I decide to integrate the model into an automated production system, the Amazon SageMaker Studio integration lets me share the model easily with other data scientists in my team.

Generally Available Today
SageMaker Canvas is generally available in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Europe (Ireland). You can start using it with your local datasets, as well as data already stored on Amazon S3, Amazon Redshift, or Snowflake. With just a few clicks, you’ll prepare and join your datasets, analyze estimated accuracy, verify which columns are impactful, train the best performing model, and generate new individual or batch predictions. We’re excited to hear your feedback and help you solve even more business problems with ML.

Alex

Amazon Kinesis Data Streams On-Demand – Stream Data at Scale Without Managing Capacity

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/amazon-kinesis-data-streams-on-demand-stream-data-at-scale-without-managing-capacity/

Today we are launching Amazon Kinesis Data Streams On-demand, a new capacity mode. This capacity mode eliminates capacity provisioning and management for streaming workloads.

Kinesis Data Streams is a fully managed, serverless service for real-time processing of streamed data at massive scale. Kinesis Data Streams can take any amount of data, from any number of sources, and scale up and down as needed. Creating a new data stream has been easy since we announced Kinesis Data Streams in November 2013: to get started, you only need to specify the number of shards with which to provision your stream.

Shards are the way to define capacity in Kinesis Data Streams. Each shard can ingest 1 MB/s and 1,000 records/second and egress up to 2 MB/s. You can add or remove shards using the Kinesis Data Streams APIs to adjust the stream capacity to the throughput needs of your workloads. This lets you make sure that producer and consumer applications don’t experience any throttling.

As customers adopt data streaming broadly, workloads with data traffic that can increase by millions of events in a few minutes are becoming more common. For these volatile traffic patterns, customers carefully plan capacity, monitor throughput, and in some cases develop processes that automatically change the Kinesis Data Streams stream capacity.

Kinesis Data Streams On-Demand Mode
That is why today we are announcing Kinesis Data Streams On-demand. This new capacity mode eliminates the need to provision and manage capacity for streaming data. With Kinesis Data Streams On-demand, capacity scales automatically in response to varying data traffic. Customers are charged per gigabyte of data written, read, and stored in the stream, in a pay-per-throughput fashion.

Data streams in the on-demand mode have the same high durability, high availability, low latency, security, and deep AWS integrations that Kinesis Data Streams already provides. Moreover, there are no new APIs to write or read data. All existing Kinesis Data Streams integrations work in the on-demand mode.

Kinesis Data Streams uses the partition key to distribute data across shards. That is why, when using Kinesis Data Streams On-demand, you still must specify a partition key for each record to write data into a data stream, just as you do today in provisioned mode. In Kinesis Data Streams On-demand, the data stream automatically adapts to handle uneven data distribution patterns. However, you must still make sure that no single partition key exceeds a shard’s limits; if one does, those writes are throttled and you can retry the requests.
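
As a reminder, writing a record in on-demand mode looks exactly like it does in provisioned mode. Here is a minimal sketch with boto3; the stream name and payload are illustrative:

import json

import boto3

kinesis = boto3.client("kinesis")

# The partition key determines which shard receives the record,
# just as it does in provisioned mode
kinesis.put_record(
    StreamName="my-on-demand-stream",
    Data=json.dumps({"event": "click", "user_id": "42"}).encode("utf-8"),
    PartitionKey="user-42",
)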

When a new data stream is created using Kinesis Data Streams On-demand, it gets created with the default capacity of 4 MB/s and 4,000 records per second for writes. Kinesis Data Streams On-demand can automatically scale up to 200 MB/s and 200,000 records per second for writes.

Kinesis Data Streams On-demand accommodates up to double its previous peak write throughput observed in the last 30 days. As your data stream’s write throughput hits a new peak, Kinesis Data Streams automatically scales the stream’s capacity.

For example, if your data stream has a write throughput that varies between 10 MB/s and 40 MB/s, Kinesis Data Streams will make sure that you can easily burst to double the peak—80 MB/s. And, if later on that same data stream reaches a new peak of 50 MB/s, then Kinesis Data Streams will make sure that there is enough capacity to ingest 100 MB/s. However, write throttling can occur if your traffic grows more than double the previous peak in less than 15 minutes.

When to Use Kinesis Data Streams On-demand
On-demand mode is great for customers that have an unknown or variable workload, or who simply don’t want to deal with capacity management. On-demand mode works best for workloads that have even partition key distribution. For example, you run a mobile game that has variable traffic through the week or day, as customers play mostly on nights or weekends. Or, you run a streaming platform that hosts live shows, and you see a sudden increase in demand depending on the guests you have.

In addition, you can switch between on-demand and provisioned mode twice a day. For example, say you run an e-commerce site with predictable traffic, but starting next month there will be many marketing campaigns launched globally, and you don’t know the impact that those will have on the site traffic. Switch your data streams to on-demand mode, and you can enjoy automated capacity planning and management.

Get Started with Kinesis Data Streams On-demand
You can create a new data stream in on-demand mode from the AWS Management Console, AWS SDKs, AWS Command Line Interface (CLI), or AWS CloudFormation.

To create one from the console, visit the Kinesis console and choose Create data stream. When selecting the capacity mode, choose On-demand.

Creating a data stream

At the end of the page, all of the settings for the new data stream are presented. These settings can be changed after the data stream has been created.

Data stream settings
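
If you prefer the SDK over the console, here is a minimal sketch with boto3 that creates a data stream in on-demand mode and switches an existing stream's capacity mode; the stream name and ARN are placeholders:

import boto3

kinesis = boto3.client("kinesis")

# Create a new data stream in on-demand capacity mode (no shard count needed)
kinesis.create_stream(
    StreamName="my-on-demand-stream",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)

# An existing provisioned stream can be switched to on-demand mode in place
kinesis.update_stream_mode(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/my-provisioned-stream",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)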

Let’s See This in Action!
For this demo, I want to show you how the new Kinesis Data Streams capability works. This situation is best described by looking at the following Amazon CloudWatch graphs. The green line represents the bytes ingested successfully into the stream, and the red line shows the percentage of traffic that is throttled.

First, we will start with a stream provisioned with five shards. For the first three minutes, we are sending a load of 4 MB/s. You can see that the stream can handle the load.

At the time stamp 21:19, we increase the load to 12 MB/s. Now the stream cannot handle the load, and the throttles start (the red line climbs to 60 percent of requests being throttled).

Increase the load on a provisioned stream

At the time stamp 21:23, we change the stream capacity from provisioned to on-demand. You can do that on the fly without affecting the stream. Notice that it takes a very short time for the stream to handle the load after converting from one capacity mode to the other.

In a few minutes (time stamp 21:24) the throttles start to drop as the stream starts scaling up. The stream capacity doubles to 10 shards first (time stamp 21:26), and the stream keeps scaling up until each shard has a load of less than 0.5 MB/s. In this way, if the stream suddenly receives double the amount of load, then it has the capacity ready to handle it.

Change to on-demand mode

At the time stamp 21:26, the load in the stream is increased to 18 MB/s. You can see the green line climbing to 350,000 records – there are no throttles, and the stream ends this demo with 40 open shards. This means that if suddenly the stream receives a load of 40 MB/s, then it could handle it with no problem.

Increase the load

Available Now!
Amazon Kinesis Data Streams On-demand is available globally in all commercial Regions.

You can learn more about the capacity modes in the Amazon Kinesis Data Streams Developer Guide.

Marcia

Introducing Amazon Redshift Serverless – Run Analytics At Any Scale Without Having to Manage Data Warehouse Infrastructure

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/introducing-amazon-redshift-serverless-run-analytics-at-any-scale-without-having-to-manage-infrastructure/

We’re seeing the use of data analytics expanding among new audiences within organizations, for example with users like developers and line of business analysts who don’t have the expertise or the time to manage a traditional data warehouse. Also, some customers have variable workloads with unpredictable spikes, and it can be very difficult for them to constantly manage capacity.

With Amazon Redshift, you use SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes. Today, I am happy to introduce the public preview of Amazon Redshift Serverless, a new capability that makes it super easy to run analytics in the cloud with high performance at any scale. Just load your data and start querying. There is no need to set up and manage clusters. You pay for the duration in seconds when your data warehouse is in use, for example, while you are querying or loading data. There is no charge when your data warehouse is idle.

Amazon Redshift Serverless automatically provisions the right compute resources for you to get started. As your demand evolves with more concurrent users and new workloads, your data warehouse scales seamlessly and automatically to adapt to the changes. You can optionally specify the base data warehouse size to have additional control on cost and application-specific SLAs.

With the new serverless option, you can continue to query data in other AWS data stores, such as Amazon Simple Storage Service (Amazon S3) data lakes and Amazon Aurora and Amazon Relational Database Service (RDS) databases.

Amazon Redshift Serverless is ideal when it is difficult to predict compute needs, such as for variable workloads, periodic workloads with idle time, and steady-state workloads with spikes. This approach is also a good fit for ad hoc analytics where you want to get started quickly, and for test and development environments.

Let’s see how this works in practice.

Using Amazon Redshift Serverless
I go to the Amazon Redshift console and choose the new serverless option. The first time, I set up the serverless endpoint and configure networking and security.

I confirm the default settings that use all subnets in my default Amazon Virtual Private Cloud (VPC) and its default security group. Data is always encrypted, and I use the default AWS-owned key. Optionally, I can customize all settings. I can associate, now or later, the AWS Identity and Access Management (IAM) roles that give permissions to access other AWS resources, for example, to load data from an S3 bucket. The configuration of the serverless endpoint will be shared by all my serverless data warehouses in the same AWS account and Region.

Console screenshot.

To query data, I use Amazon Redshift Query Editor V2, a new free web-based tool that we made available a few months back. The query editor provides quick access to a few sample datasets to make it easy to learn Amazon Redshift’s SQL capabilities: TPC-H, TPC-DS, and tickit, a dataset containing information on ticket sales for events.

For a quick test, I use the tickit sample dataset so I don’t need to load any data. I prepare a query to get the list of tickets sold per date, sorted to see the dates with the most sales first:

SELECT caldate, sum(qtysold) as sumsold
FROM   tickit.sales, tickit.date
WHERE  sales.dateid = date.dateid 
GROUP BY caldate
ORDER BY sumsold DESC;

By using the web-based query editor, I don’t need to configure a SQL client or set up the network permissions to reach the serverless endpoint. Instead, I just write my SQL query and run it.

Console screenshot.

I am a visual person. I enable the Chart option on the right of the result table and select a bar chart.

Console screenshot.

Satisfied with the clarity of the chart, I export it as an image file. In this way, I can quickly share it or include it in a report.

Bar chart

Amazon Redshift Serverless supports the same rich SQL functionality as Amazon Redshift, such as support for semi-structured data. I can use any JDBC/ODBC-compliant tool or the Amazon Redshift Data API to query my data. To migrate data, I can take a snapshot of an Amazon Redshift provisioned cluster and restore it as serverless. Then, I just need to update my SQL applications to use the new serverless endpoint.
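
For example, the same tickit query can be run programmatically through the Amazon Redshift Data API. Here is a minimal sketch with boto3; the workgroup and database names are assumptions for this sketch and will differ in your environment:

import time

import boto3

data_api = boto3.client("redshift-data")

# Submit the query to the serverless endpoint
# (workgroup and database names are placeholders)
statement = data_api.execute_statement(
    WorkgroupName="default",
    Database="sample_data_dev",
    Sql="""
        SELECT caldate, sum(qtysold) AS sumsold
        FROM tickit.sales, tickit.date
        WHERE sales.dateid = date.dateid
        GROUP BY caldate
        ORDER BY sumsold DESC;
    """,
)

# Wait for the statement to finish, then fetch the result set
while data_api.describe_statement(Id=statement["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

result = data_api.get_statement_result(Id=statement["Id"])
print(result["Records"][:5])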

Availability and Pricing
Amazon Redshift Serverless is available in public preview in the following AWS Regions: US East (N. Virginia), US West (N. California, Oregon), Europe (Frankfurt, Ireland), Asia Pacific (Tokyo).

With Amazon Redshift Serverless, you pay separately for the compute and storage you use. Compute capacity is measured in Redshift Processing Units (RPUs), and you pay for the workloads in RPU-hours with per-second billing. For storage, you pay for data stored in Amazon Redshift-managed storage and storage used for snapshots, similar to what you’d pay with a provisioned cluster using RA3 instances.

To control your costs, you can specify usage limits and define actions that Amazon Redshift automatically takes if those limits are reached. You can specify usage limits in RPU-hours, associated with a daily, weekly, or monthly duration. Setting higher usage limits can improve the overall throughput of the system, especially for workloads that need to handle high concurrency while maintaining consistently high performance.

Compute resources automatically shut down behind the scenes when there is no activity and resume when you load data or queries come in. When accessing your S3 data lake via the new serverless endpoint, you do not pay for Amazon Redshift Spectrum separately. You have a unified serverless experience and pay for data lake queries in RPU-seconds as well. For more information, see the Amazon Redshift pricing page.

The serverless endpoint is configured at the AWS account level. If you have multiple teams or projects and want to manage costs separately, you can use separate AWS accounts. You can share data between your provisioned clusters and serverless endpoint, and between serverless endpoints across accounts.

To help you get practice, we provide you upfront with $500 in AWS credits to try the Amazon Redshift Serverless public preview. You get the credits when you first create a database with Amazon Redshift Serverless. These credits are used to cover your costs for compute, storage, and snapshot usage of Amazon Redshift Serverless only.

Start using Amazon Redshift Serverless today to run and scale analytics without having to provision and manage data warehouse clusters.

Danilo

AWS Lake Formation – General Availability of Cell-Level Security and Governed Tables with Automatic Compaction

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-lake-formation-general-availability-of-cell-level-security-and-governed-tables-with-automatic-compaction/

A data lake can help you break down data silos and combine different types of analytics into a centralized repository. You can store all of your structured and unstructured data in this repository. However, setting up and managing data lakes involve a lot of manual, complicated, and time-consuming tasks. AWS Lake Formation makes it easy to set up a secure data lake in days instead of weeks or months.

Today, I am excited to share the general availability of some new features that simplify even further loading data, optimizing storage, and managing access to a data lake:

  • Governed Tables – A new type of Amazon Simple Storage Service (Amazon S3) table that makes it simple and reliable to ingest and manage data at any scale. Governed tables support ACID transactions that let multiple users concurrently and reliably insert and delete data across multiple governed tables. ACID transactions also let you run queries that return consistent and up-to-date data. In case of errors in your extract, transform, and load (ETL) processes, or during an update, changes are not committed and will not be visible.
  • Storage Optimization with Automatic Compaction for governed tables – When this option is enabled, Lake Formation automatically compacts small S3 objects in your governed tables into larger objects to optimize access via analytics engines, such as Amazon Athena and Amazon Redshift Spectrum. By using automatic compaction, you don’t have to implement custom ETL jobs that read, merge, and compress data into new files, and then replace the original files.
  • Granular Access Control with Row and Cell-Level Security – You can control access to specific rows and columns in query results and within AWS Glue ETL jobs based on the identity of who is performing the action. In this way, you don’t have to create (and keep updated) subsets of your data for different roles and jurisdictions. This works for both governed and traditional S3 tables.

Using Governed Tables, ACID Transactions, and Automatic Compaction
In the Lake Formation console, I can enable governed data access and management at table creation. Automatic compaction is enabled by default, and it can be disabled using the AWS Command Line Interface (CLI) or AWS SDKs.

Console screenshot.

Governed tables have a manifest that tracks the S3 objects that are part of the table’s data. I can use the UpdateTableObjects API to keep the manifest updated when adding new objects to the table, and I can call it using the AWS CLI and SDKs. This API is implicitly used by the AWS Glue ETL library.

Moreover, I have access to new Lake Formation APIs to start, commit, or cancel a transaction. I can use these APIs to wrap data loading and data transformation so that queries see consistent and up-to-date data.
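
For reference, here is a minimal sketch with boto3 that wraps the registration of a newly written S3 object in a transaction; the database, table, and object details are placeholders, and the ETag and size must match the object that was actually written:

import boto3

lakeformation = boto3.client("lakeformation")

# Start a transaction, register the new S3 object with the governed table,
# and commit; cancel the transaction if anything goes wrong
transaction_id = lakeformation.start_transaction(TransactionType="READ_AND_WRITE")["TransactionId"]
try:
    lakeformation.update_table_objects(
        DatabaseName="my_database",
        TableName="my_governed_table",
        TransactionId=transaction_id,
        WriteOperations=[{
            "AddObject": {
                "Uri": "s3://my-bucket/data/part-0001.parquet",
                "ETag": "example-etag",
                "Size": 1048576,
            }
        }],
    )
    lakeformation.commit_transaction(TransactionId=transaction_id)
except Exception:
    lakeformation.cancel_transaction(TransactionId=transaction_id)
    raise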

Using Row and Cell-Level Security
There are many use cases where, for a table, you want to restrict access to specific columns, rows, or a combination that depends on the role of the user accessing the data. For example, a company with offices in the US, Germany, and France can create a filter for analysts based in the European Union (EU) to limit access to EU-based customers.

Console screenshot.

The filter can enforce that some columns, such as date of birth (dob) and phone, are not accessible to those analysts. Moreover, access to individual rows can be filtered by using filter expressions. You can configure row filter expressions with a SQL-compatible syntax based on the open-source PartiQL language. In this case, only rows with country equal to Germany or France (country='DE' OR country='FR') are visible.

Console screenshot.
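
This kind of filter can also be created programmatically. Here is a minimal sketch with boto3 that mirrors the example above; the catalog ID, database, table, and filter names are placeholders:

import boto3

lakeformation = boto3.client("lakeformation")

# Hide the dob and phone columns and only expose rows for EU-based customers
# (catalog ID, database, table, and filter names are placeholders)
lakeformation.create_data_cells_filter(
    TableData={
        "TableCatalogId": "123456789012",
        "DatabaseName": "my_database",
        "TableName": "customers",
        "Name": "eu-analysts-filter",
        "RowFilter": {"FilterExpression": "country='DE' OR country='FR'"},
        "ColumnWildcard": {"ExcludedColumnNames": ["dob", "phone"]},
    }
)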

Availability and Pricing
These new features are available today in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio), and Asia Pacific (Tokyo).

When querying governed tables, or tables secured with row and cell-level security, you pay by the amount of data scanned (with a 10 MB minimum). When using governed tables, transaction metadata is charged by the number of S3 objects tracked, and you pay for the number of transaction requests. Automatic compaction is charged based on the data processed. For more information, see the AWS Lake Formation pricing page.

While implementing these features, we introduced a new Lake Formation Storage API that is integrated with tools such as AWS Glue, Amazon Athena, Amazon Redshift Spectrum, and Amazon QuickSight. You can use this storage API directly in your applications to query tables with a SQL-like syntax (joins are not supported) and get the benefits of governed tables and cell-level security.

See the detailed blog series published during the preview to learn more:

Effective data lakes using AWS Lake Formation

Take advantage of these new features to simplify the creation and management of your data lake.

Danilo

New – Amazon EBS Snapshots Archive

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshots-archive/

I am pleased to announce the availability of Amazon EBS Snapshots Archive, a new storage tier for the long-term retention of Amazon Elastic Block Store (EBS) snapshots of your EBS volumes.

In a nutshell, EBS is an easy-to-use high-performance block storage service for your Amazon Elastic Compute Cloud (Amazon EC2) instances. An EBS volume mounted to your EC2 instances lets you boot an operating system and store data for your most performance-demanding workloads. You may use EBS snapshots to create point-in-time copies of your volume data. The first snapshot of a volume contains all of the data written into that volume. Subsequent snapshots are incremental. Snapshots are stored on Amazon Simple Storage Service (Amazon S3), and they may be shared between AWS accounts and AWS Regions.

The ability to take frequent snapshots and easily restore volumes makes EBS snapshots an obvious choice for your data management strategy, alongside other backup options. The incremental nature of snapshots makes them cost-effective for daily and weekly backups that need immediate restores. However, you have told us that business compliance and regulatory needs mean you must retain some EBS snapshots for longer periods of time (months or years), for example, snapshots taken at the end of a project, or snapshots for test and development preserved for future project releases. The vast majority of these snapshots are taken and never read, and for them you are looking to lower your storage costs. Today, to benefit from lower storage costs, you may have written complex scripts involving temporary EC2 instances to restore snapshots, mount the corresponding volumes, and transfer the data to lower-cost storage tiers, such as Amazon Glacier.

EBS Snapshots Archive provides a low-cost storage tier to archive full, point-in-time copies of EBS Snapshots that you must retain for 90 days or more for regulatory and compliance reasons, or for future project releases. Now, you can easily archive and manage EBS Snapshots, thereby eliminating the need for custom scripts and third-party tools to manage these snapshots. This lets you move your rarely accessed snapshots to EBS Snapshots Archive to achieve up to 75% lower storage costs, and avoid licensing costs for third-party tools. Furthermore, you can retrieve an archived snapshot within 24-72 hours, and, once restored, use the snapshot to recover an EBS volume.

As per usual, let me show you how it works.

How to Get Started
I have a snapshot available in the US East (N. Virginia) Region, and I want to archive this snapshot for compliance reasons. I open the AWS Management Console, navigate to EC2, then to Snapshots. I select the snapshot I want to archive, and select the Actions menu. I select the Archive snapshot menu option.

EBS Snapshot Archive - create snapshot

I carefully read the confirmation message :-), and I select Archive snapshot.

EBS Snapshot Archive - create snapshot - confirmation

I may monitor the progress of the archive operation with the new Storage Tier tab at the bottom of the screen. After some time, depending on the size of the snapshot, the Tiering status becomes ✅ Archival completed.

EBS Snapshot Archive - create snapshot - archival completed

Archived snapshots stay visible in the console. The new Storage tier column indicates the tier used for storage (Standard or Archive).

How do I Restore a Volume?
Restoring a volume from EBS Snapshots Archive is a two-step process. First, I retrieve the snapshot from EBS Snapshots Archive to its original snapshot ID, using the RestoreSnapshotTier API call or the management console. It takes between 24 and 72 hours to retrieve the snapshot from the archive, depending on the snapshot size. Once retrieved, the snapshot appears as a regular snapshot in my account. At this stage, I hydrate the retrieved snapshot into an EBS volume using the default snapshot restore or Fast Snapshot Restore (FSR) for expedited restores, just like usual.

A CloudWatch event is generated when the snapshot is restored. You may listen to this event to avoid polling the status with the API.

A CreateVolume API call on an archived snapshot will fail. You must restore a snapshot from archive before you use it to create a volume.
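
For reference, both the archive and the restore steps can be scripted. Here is a minimal sketch with boto3, where the snapshot ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")

# Move a snapshot to the archive tier (the snapshot ID is a placeholder)
ec2.modify_snapshot_tier(SnapshotId="snap-0123456789abcdef0", StorageTier="archive")

# Later, restore it temporarily from the archive for 30 days;
# pass PermanentRestore=True instead to keep it in the standard tier
ec2.restore_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",
    TemporaryRestoreDays=30,
)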

Using the AWS Management Console, I select the snapshot that I want to restore, I select the Actions menu, and then I select the Restore snapshot from archive menu option.

EBS Snapshot Archive - create snapshot - restore archive

I have the choice to restore the snapshot permanently, or just temporarily. At the end of the temporary duration, the standard tier snapshot is deleted, and only the archive is preserved.

EBS Snapshot Archive - create snapshot - restore archive - confirmation

After a while, depending on the snapshot size, the archive is restored to standard storage and may be used to recreate a volume, just like usual. I may monitor the progress of the retrieval and the lifetime for temporarily restored archives in the new Storage tier tab in the bottom half of the screen. Temporary restored snapshots may be kept for up to 180 days.

Pricing and Availability
EBS Snapshots Archive is available for you today in 17 AWS Regions. At the time of launch, it is not available in the two Regions in China, Asia Pacific (Seoul), Asia Pacific (Osaka), Canada (Central), and South America (São Paulo).

As per usual, you pay as-you-go, with no minimum or fixed fees. There are two metrics that influence EBS Snapshots Archive billing: data storage and data retrieval. We charge you $0.0125 per GB-month of stored data and $0.03 per GB retrieved. You are charged for a 90-day period at minimum. This means that if you delete a snapshot archive or permanently restore it less than 90 days after creation, then we charge for the full 90-day period. The EBS pricing page has the details.

Go ahead and start to configure your long-term storage for EBS snapshots today.

— seb

New – AWS Control Tower Account Factory for Terraform

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/

AWS Control Tower makes it easier to set up and manage a secure, multi-account AWS environment. AWS Control Tower uses AWS Organizations to create what is called a landing zone, bringing ongoing account management and governance based on our experience working with thousands of customers.

If you use AWS CloudFormation to manage your infrastructure as code, you can customize your AWS Control Tower landing zone using Customizations for AWS Control Tower, a solution that helps you deploy custom templates and policies to individual accounts and organizational units (OUs) within your organization.

But what if you use Terraform to manage your AWS infrastructure?

Today, I am happy to share the availability of AWS Control Tower Account Factory for Terraform (AFT), a new Terraform module maintained by the AWS Control Tower team that allows you to provision and customize AWS accounts through Terraform using a deployment pipeline. The source code for the deployment pipeline can be stored in AWS CodeCommit, GitHub, GitHub Enterprise, or BitBucket. With AFT, you can automate the creation of fully functional accounts that have access to all the resources they need to be productive. The module works with Terraform open source, Terraform Enterprise, and Terraform Cloud.

Architectural diagram.

Let’s see how this works in practice.

Using AWS Control Tower Account Factory for Terraform
First, I create a main.tf file that uses the AWS Control Tower Account Factory for Terraform (AFT) module:

module "aft" {
  source = "[email protected]:aws-ia/terraform-aws-control_tower_account_factory.git"

  # Required Parameters
  ct_management_account_id    = "123412341234"
  log_archive_account_id      = "234523452345"
  audit_account_id            = "345634563456"
  aft_management_account_id   = "456745674567"
  ct_home_region              = "us-east-1"
  tf_backend_secondary_region = "us-west-2"

  # Optional Parameters
  terraform_distribution = "oss"
  vcs_provider           = "codecommit"

  # Optional Feature Flags
  aft_feature_delete_default_vpcs_enabled = false
  aft_feature_cloudtrail_data_events      = false
  aft_feature_enterprise_support          = false
}

The first six parameters are required. As a prerequisite, I need to pass the IDs of four AWS accounts in my AWS organization:

  • ct_management_account_id – AWS Control Tower management account
  • log_archive_account_id – Log Archive account
  • audit_account_id – Audit account
  • aft_management_account_id – AFT management account

Then, I have to pass two AWS Regions:

  • ct_home_region – The Region from which this module will be executed. This must be the same Region where AWS Control Tower is deployed.
  • tf_backend_secondary_region – The primary Region for the backend is the same as the AFT Region, and this parameter defines the secondary Region to replicate to. AFT creates this backend to track its own state, and it is also used by Terraform when using the open-source version.

The other parameters are optional and are set to their default value in the previous main.tf file:

  • terraform_distribution – To select between Terraform open source (default), Enterprise, or Cloud
  • vcs_provider – To choose the version control system to use between AWS CodeCommit (default), GitHub, GitHub Enterprise, or BitBucket.

These feature flags are disabled by default and can be omitted unless you want to enable them:

  • aft_feature_delete_default_vpcs_enabled – To automatically delete the default VPC for new accounts.
  • aft_feature_cloudtrail_data_events – To enable AWS CloudTrail data events for new accounts. Be aware that this option, usually required for compliance in highly regulated environments, can have an impact on your costs.
  • aft_feature_enterprise_support – To automatically enroll new accounts with Enterprise Support (if you have an Enterprise Support Plan).

First, I initialize the project and download the plugins:

terraform init

Then, I use AWS Single Sign-On to log in with the AWS Control Tower management account and start the deployment:

terraform apply

I confirm with a yes and, after some time, the deployment is complete.

Now, I use AWS SSO again to log in with the AFT management account. In the AWS CodeCommit console, I find four repositories that I can use to customize the accounts created with AFT.

Console screenshot.

These repositories are used by pipelines managed by AWS CodePipeline to automate the account creation:

  • aft-account-request – This is where I place requests for accounts provisioned and managed by AFT.
  • aft-global-customizations – I can use this repository to customize all provisioned accounts with customer-defined resources. The resources can be created through Terraform or through Python.
  • aft-account-customizations – Here, I can customize provisioned accounts depending on the value of the account_customizations_name parameter in the aft-account-request repository. In this way, I can create different sets of customizations depending on the role the account will be used for.
  • aft-account-provisioning-customizations – This repository uses AWS Step Functions to customize the provisioning process for new accounts and simplify the integration with additional environments. State machines can use AWS Lambda functions, Amazon Elastic Container Service (Amazon ECS) or AWS Fargate tasks, custom activities hosted either on AWS or on-premises, or Amazon Simple Notification Service (SNS) and Amazon Simple Queue Service (SQS) to communicate with external applications.

Currently, these four repositories are all empty. To start, I use the code in the sources/aft-customizations-repos folder in the GitHub repo of the AFT Terraform module.

Using the example in the aft-account-request repository, I prepare a template to create a couple of AWS accounts. One of the two accounts is for a software developer.

To help software developers be productive quickly, I create a specific account customization. In the template, I set the parameter account_customizations_name equal to developer-customization.

Then, in the aft-account-customizations repository, I create a developer-customization folder where I put a Terraform template to automatically create an AWS Cloud9 EC2-based development environment for new accounts of that type. Optionally, I can extend that with my Python code, for example, to invoke internal or external APIs. Using this approach, all new accounts for software developers will have their development environment ready as they go through the delivery pipeline.

I push the changes to the main branch (first for the aft-account-customizations repository, then for the aft-account-request). This triggers the execution of the pipeline. After a few minutes, the two new accounts are ready to be used.

You can customize accounts created by AFT based on your unique requirements. For example, you can provide each account with its own specific security setup (such as IAM roles or security groups) and storage (for example, pre-configured Amazon Simple Storage Service (Amazon S3) buckets).

Availability and Pricing
AWS Control Tower Account Factory for Terraform (AFT) works in any Region where AWS Control Tower is available. There are no additional costs when using AFT. You pay for the services used by the solution. For example, when you set up AWS Control Tower, you will begin to incur costs for AWS services configured to set up your landing zone and mandatory guardrails.

When building this solution, we worked together with HashiCorp. Armon Dadgar, HashiCorp Co-Founder and CTO, told us: “Managing cloud environments with hundreds or thousands of users can be a complex and time-consuming process. Using a software delivery pipeline integrating Terraform and AWS Control Tower makes it easier to achieve consistent governance and compliance requirements across all accounts.”

The pipeline provides an account creation process that monitors when account provisioning is complete and then triggers additional Terraform modules to enhance the account with further customizations. You can configure the pipeline to use your own custom Terraform modules or pick from pre-published Terraform modules for common products and configurations.

Simplify and standardize AWS account creation using AWS Control Tower Account Factory for Terraform.

Danilo

Announcing AWS Data Exchange for APIs: Find, Subscribe to, and Use Third-party APIs with Consistent Authentication

Post Syndicated from Alex Casalboni original https://aws.amazon.com/blogs/aws/data-exchange-for-apis-find-subscribe-use-third-party-apis-consistent-authentication/

Data is at the center of many processes and products, whether it’s a large-scale dataset used to train machine learning models, a relational database, or an API-based integration. AWS Data Exchange lets you discover, subscribe to, and use hundreds of file-based datasets via Amazon Simple Storage Service (Amazon S3) offered by third parties such as Reuters, Foursquare, Change Healthcare, Vortexa, IMDb, and many more. Additionally, AWS Data Exchange for Amazon Redshift makes it even easier to ingest third-party data in your Amazon Redshift data warehouse, without any manual processing or transformation.

However, in many cases your data projects require more than static datasets because you need frequent and synchronous retrieval of small amounts of information – for example, you might need to fetch a stock price every hour. Data APIs let you answer specific questions quickly, without having to build ad hoc data pipelines to ingest, process, and analyze bulk datasets. But each API provider has its own SDK, documentation, and authentication mechanism, and varies in ease of use, which makes this harder than it needs to be.

Today, I’m happy to announce the general availability of AWS Data Exchange for APIs, a new capability that lets you find, subscribe to, and use third-party APIs with consistent access through AWS SDKs, as well as consistent AWS-native authentication and governance. This simplifies the lives of developers and IT administrators who have to integrate and secure access to multiple third-party APIs.

Now you can make RESTful or GraphQL API calls directly to AWS Data Exchange and receive synchronous responses that contain the information you need, using the AWS SDK in the programming language of your choice. We take care of integrating with the API provider, implementing proper authentication, managing the API subscription, and ensuring charges appear on your AWS bill. You can manage API access centrally with AWS Identity and Access Management (IAM).

As a data provider, you make your API discoverable by millions of AWS customers by listing it in the AWS Data Exchange catalog using an OpenAPI specification and fronting it with an Amazon API Gateway endpoint.

AWS Data Exchange for APIs in Action
First, I look for an API product in the AWS Data Exchange catalog, review its subscription terms, support information, and auto-renewal. Each API product might include multiple public or private subscription offers and periods.

I select Subscribe and a couple of minutes later I’m successfully subscribed.

Within the API product, I select an entitled data set and its latest revision.

Each API revision contains one or more API assets that correspond to a specific API endpoint and a unique Asset ARN.

AWS Data Exchange takes care of invoking API endpoints with the correct authentication.

All I need to do is check the Integration notes, which include instructions and code snippets based on the AWS Command Line Interface (CLI).

Of course, I could implement the very same API call with my favorite programming language using one of the AWS SDKs.

For example, here’s how I’d implement a simple wrapper function in Python:

import json

import boto3

adx = boto3.client('dataexchange')

def get_api_response(path, method="GET", querystring=None, headers=None, body=None):
    # QueryStringParameters and RequestHeaders are plain dictionaries of strings,
    # and the request body must be passed as a string
    return adx.send_api_asset(
        DataSetId="4b3fbabc31171662851531b8576a3411",
        RevisionId="e8e78e921af12c76499edc40f92e3082",
        AssetId="557d858c317efdfb5b6c9a2860ec4a03",
        Method=method,
        Path=path,
        QueryStringParameters=querystring or {},
        RequestHeaders=headers or {},
        Body=json.dumps(body or {}),
    )

Please note that there are no hard-coded credentials in the code above because all the authorization happens via AWS Identity and Access Management (IAM).
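
For illustration, calling the wrapper might look like this; the path and query string are hypothetical and depend on the API product you subscribed to:

# Hypothetical endpoint and parameters; check the Integration notes of your API product
response = get_api_response("/v1/prices", querystring={"symbol": "AMZN"})

# The synchronous response body is returned as a string
print(response["Body"])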

And that’s how you make your first API call via AWS Data Exchange for APIs.

Available Today
AWS Data Exchange for APIs is generally available in all AWS Regions where AWS Data Exchange is available. We’re looking forward to helping you simplify and centralize the management and governance of third-party APIs while we take care of the undifferentiated heavy lifting for you.

Today you can start integrating third-party APIs such as Infutor, Variety Business Intelligence, IMDb, PeopleDataLabs, Neustar, Experian, Foursquare, PredictHQ, WeatherTrends International, and many more.

If you’re a developer, check out the new AWS Data Exchange for APIs documentation to learn more about subscribing and using APIs. If you’re an API provider, check out the new publishing documentation to learn more about publishing new APIs on the AWS Data Exchange catalog.

Alex

Improved, Automated Vulnerability Management for Cloud Workloads with a New Amazon Inspector

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/improved-automated-vulnerability-management-for-cloud-workloads-with-a-new-amazon-inspector/

Amazon Inspector is a service used by organizations of all sizes to automate security assessment and management at scale. Amazon Inspector helps organizations meet security and compliance requirements for workloads deployed to AWS, scanning for unintended network exposure, software vulnerabilities, and deviations from application security best practices.

Since the original launch of Amazon Inspector in 2015, vulnerability management for cloud customers has changed considerably. Over the last six years, the team delivered several new customer-requested features, including assessment reporting, support for proxy environments, and integration with Amazon CloudWatch Metrics. However, the team also recognized that there were new requirements to meet – enabling frictionless deployment at scale, support for an expanded set of resource types needing assessment, and a critical need to detect and remediate at speed. Today I’m happy to announce a new Amazon Inspector, able to meet these requirements with the following features:

  • Continual, automated assessment scans—replaces periodic, manual scanning.
  • Automated resource discovery – once enabled, the new Amazon Inspector automatically discovers all running Amazon Elastic Compute Cloud (Amazon EC2) instances and Amazon Elastic Container Registry repositories.
  • New support for container-based workloads—workloads are now assessed on both EC2 and container infrastructure.
  • Integration with AWS Organizations—allowing security and compliance teams to enable and take advantage of Amazon Inspector across all accounts in an organization.
  • Removal of the stand-alone Amazon Inspector scanning agent—assessment scanning now uses the widely deployed AWS Systems Manager agent, eliminating the need for a separate agent installation.
  • Improved risk scoring—a highly contextualized risk score is now generated for each finding by correlating Common Vulnerabilities and Exposures (CVE) metadata with environmental factors for resources, such as network accessibility. This makes it easier to identify the most critical vulnerabilities to address as a priority.
  • Integration with Amazon EventBridge—integrate with event management and workflow systems such as Splunk and Jira, and trigger automated remediation, for example, system patching using Systems Manager or virtual machine image rebuilds using EC2 Image Builder (see the sketch after this list).
  • Integration with AWS Security Hub—helping your teams to more easily identify those resources with critical vulnerabilities or deviations from security best practices.
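As a rough illustration of the EventBridge integration mentioned above, the following boto3 sketch creates a rule that forwards high-severity Inspector findings to an SNS topic for triage. The event source and detail-type strings, the topic ARN, and the rule name are my assumptions rather than values taken from this post, so verify them against the Amazon Inspector documentation:

import json

import boto3

events = boto3.client("events")

# Assumed event pattern for findings emitted by the new Amazon Inspector
pattern = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Finding"],
    "detail": {"severity": ["CRITICAL", "HIGH"]},
}

events.put_rule(
    Name="inspector-critical-findings",  # hypothetical rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

events.put_targets(
    Rule="inspector-critical-findings",
    Targets=[{
        "Id": "notify-security-team",
        "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",  # hypothetical topic
    }],
)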

Automatically Assessing your Workloads with Amazon Inspector
Tens of thousands of vulnerabilities exist, with new ones being discovered and made public on a regular basis. With this continually growing threat, manual assessment can leave customers unaware of an exposure, and thus potentially vulnerable, between assessments. Additionally, customers with manual processes for managing their inventories of application resources, deploying stand-alone security agents on those resources, and scheduling periodic assessments may find the whole process to be a costly and time-consuming exercise. And that’s before they sift through the mass of assessment findings to determine the most critical issues to address.

With the new Amazon Inspector, all you need to do is enable the service. It will auto-discover and start continual assessment of your EC2 and your Amazon Elastic Container Registry-based container workloads to evaluate your security posture, even as the underlying resources change.

EC2 instances are discovered and assessed for unintended exposure to external networks and software vulnerabilities using the Systems Manager agent, already included by default in images provided by AWS for instance management, automated patching, and more. Container-based workloads are assessed as the images are pushed to Amazon Elastic Container Registry. Without needing additional software or agents, container images and EC2 instances are assessed in near real time when an event occurs.

Automated assessment is driven by changes in workload configuration and newly published vulnerabilities to ensure resources are only assessed when needed. The new Amazon Inspector collects events from over 50 vulnerability intelligence sources, including CVE, the National Vulnerability Database (NVD), and MITRE. Images that may be affected by a newly identified entry, for example, a new CVE notification, will be automatically rescanned. Image rescanning is enabled for 30 days from the date an image is pushed to the registry. You can also enable an option to only scan on image push and not perform subsequent rescans.

Summary page for Amazon Inspector

Selecting either Accounts, Instances, or Repositories from your Dashboard page takes you to a detail summary for the selected resource. Below, I’m viewing summary data for EC2 instances across a couple of accounts.

Viewing instances scanned by Amazon Inspector across accounts

If vulnerabilities are found, you receive actionable assessment findings in a report. Starting today, these findings are summarized with enhanced risk scoring and improved resource detail to help you prioritize the most at-risk resources needing to be addressed. Also new today, the Amazon Inspector console has been redesigned to surface all findings and recommendations for remediation.

Vulnerabilities in container images are also sent to Amazon Elastic Container Registry to be summarized for the owner. And, as I noted earlier, new integrations with AWS Security Hub and Amazon EventBridge allow findings to be sent downstream for additional visibility and remediation by automated workflows. For example, automation can be created to isolate instances, trigger system patching, software image rebuilds, and more. The availability of multiple integration points makes it easier for security and application teams to collaborate to manage remediation. Below, I’m viewing findings from Amazon Inspector in the AWS Security Hub console.

Viewing findings from Amazon Inspector in the Security Hub console

Assessments can result in hundreds of thousands of findings, or more, that need to be filtered and sifted to determine the most critical ones to act on. Also available today, organizations can determine which findings they consider acceptable and mark those findings for temporary or permanent suppression. This helps reduce the volume of alerts, further assisting with prioritization and automated remediation. Suppression filters can be set from several screens. Rules specify one or more filters, such as Severity, that cause matching findings to be removed from display. When defining rules, a list is shown of the findings that will be suppressed, helping you fine-tune the filter values to match your specific needs.

Setting up suppression rules for Amazon Inspector findings
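Suppression rules can also be managed programmatically. Below is a minimal sketch using the boto3 inspector2 client; the filter name, reason, and criteria are hypothetical, and the exact shape of filterCriteria should be checked against the current SDK documentation:

import boto3

inspector2 = boto3.client("inspector2")

# Hypothetical rule: suppress informational findings for a specific account
inspector2.create_filter(
    action="SUPPRESS",
    name="suppress-informational-findings",
    reason="Accepted risk reviewed by the security team",
    filterCriteria={
        "severity": [{"comparison": "EQUALS", "value": "INFORMATIONAL"}],
        "awsAccountId": [{"comparison": "EQUALS", "value": "111122223333"}],
    },
)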

I mentioned earlier that the new Amazon Inspector implements a contextualized risk assessment score for findings. The screenshot below shows an example of Amazon Inspector’s risk assessment score compared to a generic Common Vulnerability Scoring System (CVSS) score. Contextual risk assessment takes into account additional factors, such as accessibility from the internet and ease of exploitability, to make the score more meaningful. In the image below, Amazon Inspector’s risk assessment score is lower than the CVSS score because the attack vector requires network access, and Amazon Inspector knows that this particular resource has no network access. The vulnerability identified in GNOME GLib is therefore difficult to exploit, so the risk score was lowered.

Risk assessment score

Start a Free Trial with Amazon Inspector Today
The new Amazon Inspector is available now in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo) Regions.

Amazon Inspector offers a free 15-day trial, so you can put it to work to see how Amazon Inspector can help your security and compliance teams reduce operational complexity and cost associated with managing resource inventories, stand-alone security agents, and repetitive manual assessments.

— Steve

Announcing AWS Well-Architected Custom Lenses: Extend the Well-Architected Framework with Your Internal Best Practices

Post Syndicated from Alex Casalboni original https://aws.amazon.com/blogs/aws/well-architected-custom-lenses-internal-best-practices/

We launched the AWS Well-Architected Framework back in 2015 to help you review workloads against architectural best practices across the pillars of operational excellence, security, reliability, performance efficiency, and cost optimization. In 2017, we extended the framework with the concept of “lenses” to provide guidance for specific workload types and scenarios, such as the Serverless Lens, the SaaS Lens, and the Foundational Technical Review (FTR) Lens for APN Partners. In 2018, we launched the AWS Well-Architected Tool, a self-service tool designed to help you review AWS workloads at any time, without the need for an AWS Solutions Architect.

Today, I’m happy to announce the general availability of AWS Well-Architected Custom Lenses, a new feature of the AWS Well-Architected Tool that lets you bring your own best practices to complement the existing framework based on your industry, operational plans, and internal processes. Custom Lenses provide a consolidated view and a consistent way to measure and improve your workloads on AWS without relying on external spreadsheets or third-party systems.

In addition to AWS Well-Architected Lenses, now you can create and share custom lenses and include them in your workload reviews, ultimately tailoring the review to your organizational needs. For example, you could define a custom lens to review your workloads against PCI compliance, SOC 2 compliance, or other national or industry regulations. As an AWS Partner, you might include ad-hoc best practices in your custom lenses when reviewing workloads with customers from different industries and segments, ultimately making the review process easier, faster, and more comprehensive.

How to Define a new Custom Lens
You author a new custom lens by editing a JSON preset template, where you define questions, choices, helpful resources, improvement plans, and risk rules.

Here’s how it works: download the template from the AWS Well-Architected Tool, work on it locally, and then re-upload it.

The JSON structure is composed of multiple pillars. Each pillar might contain multiple questions, each with its own choices and risk rules.

Your JSON file will look like this:

{
    "schemaVersion": "2021-11-01",
    "name": "My Test Lens",
    "description": "This is a description of my test lens.",
    "pillars": [
        {
            "id": "pillar_red",
            "name": "Red Pillar",
            "questions": [
                {
                    "id": "pillar_1_q1",
                    "title": "How do you get started with this pillar?",
                    "description": "Optional description.",
                    "choices": [
                        {
                            "id": "choice1",
                            "title": "Best practice #1",
                            "helpfulResource": {
                                "displayText": "This is helpful text for the first choice.",
                                "url": "https://aws.amazon.com"
                            },
                            "improvementPlan": {
                                "displayText": "This is text that will be shown for improvement of this choice."
                            }
                        },
                        {
                            "id": "choice2",
                            "title": "Best practice #2",
                            ...
                        }
                    ],
                    "riskRules": [
                        {
                            "condition": "choice1 && choice2",
                            "risk": "NO_RISK"
                        },
                        {
                            "condition": "choice1 && !choice2",
                            "risk": "MEDIUM_RISK"
                        },
                        {
                            "condition": "default",
                            "risk": "HIGH_RISK"
                        }
                    ]
                }
            ]
            ...
        },
        ...
    ]
}

Once you’re ready to submit your JSON file, proceed with the upload.

And don’t worry about making it perfect on the first try. You’ll be able to improve it and add new versions.
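If you prefer to script this step, the Well-Architected Tool API can upload the same JSON. Here’s a minimal sketch using boto3; the file name is hypothetical, and you should verify the operation name (import_lens) and its parameters against the current SDK documentation:

import uuid

import boto3

wellarchitected = boto3.client("wellarchitected")

# my-custom-lens.json is a hypothetical file containing the lens definition shown above
with open("my-custom-lens.json") as f:
    lens_json = f.read()

response = wellarchitected.import_lens(
    JSONString=lens_json,
    ClientRequestToken=str(uuid.uuid4()),
)
print(response)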

AWS Well-Architected Custom Lenses in Action
You find the list of custom lenses and their latest version in the new Custom Lenses section.

Each custom lens has an owner and can be shared with multiple AWS accounts too.

Before using this new custom lens in a workload review, you’ll need to publish it and assign it a version.

Select Publish lens and provide a version name such as 1.0.

Now you can create a new workload review and apply both AWS-owned lenses and your own custom lenses, in addition to the main framework.

During the workload review, you will go through each pillar and question of the custom lens, using the same user interface provided by the AWS Well-Architected Tool.

Last but not least, you can share your custom lens with other AWS Identity and Access Management (IAM) principals such as AWS accounts, IAM users, and IAM roles.
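Publishing and sharing can also be scripted. The sketch below assumes the create_lens_version and create_lens_share operations behave as their console counterparts do; the lens alias and account ID are placeholders, so check the names and parameters against the current SDK before relying on them:

import uuid

import boto3

wellarchitected = boto3.client("wellarchitected")

lens_alias = "arn:aws:wellarchitected:us-east-1:111122223333:lens/my-test-lens"  # placeholder

# Publish version 1.0 of the custom lens
wellarchitected.create_lens_version(
    LensAlias=lens_alias,
    LensVersion="1.0",
    IsMajorVersion=True,
    ClientRequestToken=str(uuid.uuid4()),
)

# Share the published lens with another AWS account
wellarchitected.create_lens_share(
    LensAlias=lens_alias,
    SharedWith="444455556666",  # placeholder account ID
    ClientRequestToken=str(uuid.uuid4()),
)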

Available Today at No Charge
Custom Lenses are available today in all AWS Regions where the AWS Well-Architected Tool is available, at no cost. You can define up to five custom lenses and share them across AWS Accounts, in addition to the existing Well-Architected Framework and AWS-owned Lenses.

Check out the technical documentation here.

We’re looking forward to hearing your feedback and iterating quickly to improve the authoring and sharing experience based on your needs.

Alex

Announcing Pull Through Cache Repositories for Amazon Elastic Container Registry

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-pull-through-cache-repositories-for-amazon-elastic-container-registry/

Organizations, development teams, and individual developers who have chosen to use containers to host their applications may prefer, or may be required, to source all images from Amazon Elastic Container Registry to take advantage of its high availability and security. To satisfy those requirements, customers have had to take on the burden of manually pulling images from public registries into their private Amazon Elastic Container Registry repositories and then keeping them in sync. This adds operational complexity and maintenance costs, which impacts developer productivity. Additionally, some registries may limit or restrict how frequently images can be downloaded. When those limits are reached, image pulls are throttled or even rejected, causing build errors that affect developers and the release velocity of their business.

Today, we are announcing pull through cache repository support in Amazon Elastic Container Registry for publicly accessible registries that do not require authentication. Pull through cache repositories offer developers the improved performance, security, and availability of Amazon Elastic Container Registry for container images that they source from public registries. Images in pull through cache repositories are automatically kept in sync with the upstream public registries, thereby eliminating the manual work of pulling images and periodically updating them.

Pull through cache repositories provide the benefits of the built-in security capabilities in Amazon Elastic Container Registry, such as AWS PrivateLink enabling you to keep all of the network traffic private, image scanning to detect vulnerabilities, encryption with AWS Key Management Service (KMS) keys, cross-region replication, and lifecycle policies. When enabled, cross-region replication is designed to automatically distribute updated images to additional Regions. All you need to do is update the pull URL so that the image is downloaded from the relevant Region.

When consuming images from pull through cache repositories, download throttling is also no longer a problem for developers, as well as the build and deployment infrastructure that supports their applications. While Amazon Elastic Container Registry is designed to automatically keep the cache repository in sync, you can also manually sync a repository at any time. And, if you wish, the automatic sync can be turned off.

Getting Started with Amazon Elastic Container Registry Pull Through Cache Repositories
Setting up pull through cache repositories is a simple process. For the following example, I’m using Amazon Elastic Container Registry Public in the South America (São Paulo) Region as my upstream registry.

First, I must modify my private registry’s settings to add a rule that references the upstream, publicly accessible registry (multiple rules can be set if I need additional upstream registries). In the Amazon Elastic Container Registry console, I begin by selecting Private registry, and then select Edit in the Pull through cache panel to change settings. This takes me to the Pull through cache configuration page, where I select Add rule.

On the Create pull through cache rule page, I choose the upstream registry, which is ECR Public in this example. I also must set a namespace that I’ll use when referring to images in my pull commands. For this example, I’ll accept the suggested namespace, ecr-public.

Configuring ECR Public as the upstream registry

Selecting Save takes me back to the Pull through cache configuration page where my newly configured rule is listed. Now, I’m ready to utilize the cache repository when pulling images.

Newly configured rule for an upstream registry

To reference an image, I must specify the namespace that I chose in the pull URL, using the URL format <accountId>.dkr.ecr.<region>.amazonaws.com/<namespace>/<sourcerepo>:<tag>. When images are pulled, the cache repository associated with the namespace is checked for the image. In my case, the cache repository doesn’t exist yet, but I don’t have to create it myself. The image is fetched from the upstream repository in the public registry associated with the namespace, and then stored in a new cache repository that is created for me automatically.

In the command prompt session below, I first authenticate with my registry, and then pull an Amazon Linux 2 image from Amazon Elastic Container Registry Public into the cache:

C:\ aws ecr get-login-password --region sa-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.sa-east-1.amazonaws.com/ecr-public
Login Succeeded
C:\ docker pull 111122223333.dkr.ecr.sa-east-1.amazonaws.com/ecr-public/amazonlinux/amazonlinux:latest
latest: Pulling from ecr-public/amazonlinux/amazonlinux
e11e8d46e102: Pull complete
Digest: sha256:916dbbb288948b54c94b5b9f0769085aa601d4468d099e90d8a7da5cfa551b50
Status: Downloaded newer image for 111122223333.dkr.ecr.sa-east-1.amazonaws.com/ecr-public/amazonlinux/amazonlinux:latest
111122223333.dkr.ecr.sa-east-1.amazonaws.com/ecr-public/amazonlinux/amazonlinux:latest

In my Amazon Elastic Container Registry console, a check of the Repositories page shows that a new private repository has been created containing the image I pulled, together with an indication that a pull through cache is active.

Pulled image in the cache repository

Working with images and the pull through cache repository is just as straightforward in Dockerfiles. All I need to do is reference the image using the namespace in the pull URL. If the image is not in the cache repository, it will be pulled and stored there for me. Cached images are checked once every 24 hours to verify whether the cached image is still the latest version, with the timer based on the last pull time of the cached image.

Start using Pull through Cache Repositories Today
Pull through cache repositories for Amazon Elastic Container Registry are available for you to take advantage of today in all commercial AWS Regions. There is no charge for using pull through cache repositories; only standard Amazon Elastic Container Registry pricing for storage and data transfer applies. You can find more details on the Amazon Elastic Container Registry pricing page. Learn more about pull through cache repositories in the Amazon Elastic Container Registry User Guide, and get started today.

— Steve

Introducing Amazon Braket Hybrid Jobs – Set Up, Monitor, and Efficiently Run Hybrid Quantum-Classical Workloads

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/introducing-amazon-braket-hybrid-jobs-set-up-monitor-and-efficiently-run-hybrid-quantum-classical-workloads/

I find quantum computing fascinating! At its simplest level, it extends the concept of bits, that have 0 or 1 values, with quantum bits, or qubits, that can have a combination of two different (quantum) states.

Two characteristics make qubits really interesting:

  • When you look at the value of a qubit, you get only one of the two possible states with a probability that depends on how its own states are combined.
  • Multiple qubits can be “connected” together (this is called quantum entanglement) so that by changing the state of one, even just by reading its value, you alter the states of the others.

These characteristics come from low-level properties described by quantum mechanics, a fundamental theory in physics that describes the physical properties of nature at atomic and subatomic scales. Luckily, we don’t need a degree in quantum mechanics to use quantum computing, in the same way that we don’t need to be experts in semiconductors to use an ordinary computer.

Using qubits, researchers are designing new algorithms that have the potential to be much faster than what classical computers can achieve. To help speed up scientific research and software development for quantum computing, we introduced Amazon Braket at re:Invent 2019. A fully managed quantum computing service, Amazon Braket allows you to build, test, and run quantum algorithms on simulators and quantum computers.

Hybrid Algorithms and Quantum Processing Units (QPUs)
Quantum algorithms, which would be transformational in many different areas, require the execution of hundreds of thousands to millions of quantum gates. Unfortunately, the current generation of QPUs suffers from noise, creating errors that limit operations to only a few hundred or a few thousand gates before the errors take over.

To help solve this, we can take inspiration from machine learning: instead of using fixed quantum circuits, the logic that implements the algorithm, we let the algorithm “learn” by adjusting the parameters that tune the circuit to have a better chance of solving a given problem by adapting to the noise in a particular device (think of them as “self-learning quantum algorithms”).

This is similar to computer vision: instead of hand-crafting the features to distinguish a dog from a cat (which is notoriously difficult for a computer), machine learning algorithms “learn” the right features by iteratively adjusting parameters of a neural network.

A rapidly emerging area of research in quantum computing uses QPUs, the processors used by quantum computers, in the same way as GPUs are used in machine learning: Quantum circuits are parameterized, initialized with some values, and then run on the QPU. Like the weights in a neural network, these parameters are then iteratively adjusted based on the results of the computation. These so-called hybrid algorithms rely on rapid, iterative computations between classical computers and QPUs.

Architectural diagram.
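Conceptually, the classical-quantum loop looks something like the following schematic Python sketch. Everything here is hypothetical pseudostructure rather than Braket API code: run_parameterized_circuit stands in for a QPU or simulator call, and the update rule is a generic finite-difference step.

import random

def run_parameterized_circuit(params):
    # Hypothetical stand-in for submitting a parameterized circuit to a QPU
    # or simulator and returning a cost estimated from the measurement results
    return sum(p * p for p in params) + random.gauss(0, 0.01)  # noisy toy cost

def hybrid_optimization(num_params=4, steps=50, lr=0.1, eps=0.01):
    params = [random.uniform(-1, 1) for _ in range(num_params)]
    for _ in range(steps):
        cost = run_parameterized_circuit(params)
        # Classical step: estimate gradients by finite differences and update,
        # just like adjusting weights in a neural network
        grads = []
        for i in range(num_params):
            shifted = params.copy()
            shifted[i] += eps
            grads.append((run_parameterized_circuit(shifted) - cost) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

print(hybrid_optimization())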

To run hybrid algorithms, you need to manually set up a classical infrastructure, install the required software, and manage the interaction between your quantum and classical compute processes for the duration of your hybrid algorithm. You then need to build custom monitoring solutions to visualize the progress of your algorithm to make sure it converges to the solution as expected or intervene if necessary to adjust the parameters of the algorithm.

Another big challenge is that QPUs are shared, inelastic resources, and you compete with others for access. This can slow down the execution of your algorithm. A single large workload from another customer can bring the algorithm to a halt, potentially extending your total runtime for hours. This is not only inconvenient but also impacts the quality of the results because today’s QPUs need periodic re-calibration, which can invalidate the progress of a hybrid algorithm. In the worst case, the algorithm fails, wasting budget and time.

Introducing Amazon Braket Hybrid Jobs
Today, I am happy to introduce Amazon Braket Hybrid Jobs, a new capability of Amazon Braket that simplifies the process of setting up, monitoring, and efficiently executing hybrid quantum-classical algorithms. Jobs are fully managed so you can avoid extensive infrastructure and software management and confidently execute your algorithms quickly and predictably, with on-demand priority access to QPUs.

When you create a job, Amazon Braket spins up the job instance (providing a CPU environment based on an Amazon Elastic Compute Cloud (Amazon EC2) instance), executes the algorithm (using quantum hardware or simulators), and releases the resources once the job is completed so that you only pay for what you use. You can also define custom metrics for algorithms, which are automatically logged by Amazon CloudWatch and displayed in near real-time in the Amazon Braket console as the algorithm runs. This provides you with live insights into how your algorithm is progressing, creating the opportunity to adjust your algorithm as necessary and innovate more quickly.

Architectural diagram.

To run hybrid algorithms as jobs, you can define your algorithm using the Amazon Braket SDK or with PennyLane, an open-source library for hybrid quantum computing. Let’s see how that works in practice with a couple of examples.

Using Amazon Braket Hybrid Jobs
Before building a trainable quantum algorithm, let’s get started by running a series of fixed quantum operations, which we’ll refer to as quantum tasks. I use Python and the Amazon Braket SDK to define a circuit that constructs what is called a Bell state, a state that has a fifty-fifty chance of resolving to each of two states. It’s the quantum computing equivalent of tossing a coin.

Here’s the content of the algorithm_script.py file:

import os

from braket.aws import AwsDevice
from braket.circuits import Circuit
from braket.jobs import save_job_result


def start_here():

    print("Test job started!")

    device = AwsDevice(os.environ["AMZN_BRAKET_DEVICE_ARN"])

    results = []
    
    bell = Circuit().h(0).cnot(0, 1)
    for count in range(5):
        task = device.run(bell, shots=100)
        print(task.result().measurement_counts)
        results.append(task.result().measurement_counts)

    save_job_result({ "measurement_counts": results })
    
    print("Test job completed!")

This script uses the environment variable AMZN_BRAKET_DEVICE_ARN to instantiate the device that I select when creating the job.

Quantum computing is probabilistic. For this reason, circuits need to be evaluated multiple times to get accurate results. A single run is called a shot. The higher the number of shots, the better the accuracy of the result. In this case, the circuit is run for 100 shots.
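To put the shot count in perspective: for a fifty-fifty outcome measured over 100 shots, the standard deviation of each count is roughly sqrt(100 × 0.5 × 0.5) = 5. So counts such as 44/56 or 49/51, like the ones shown later in this post, are exactly the kind of spread you should expect.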

I use the save_job_result function to store the results of my job so that I can analyze them at the end.

In the Amazon Braket console, I choose Jobs on the left panel and then Create job. To start, I give the job a name.

Console screenshot.

Then, I pass the file with the algorithm. The CPU component of the hybrid algorithm runs in a container, and I can choose which container image to use. For example, I can use a pre-built container image that includes software my algorithm depends on, such as PennyLane, TensorFlow, or PyTorch, or bring my own custom image. I select the Base container image because I don’t have external dependencies.

I leave all other settings at their default values. In this way, I use the SV1 simulator, rather than quantum hardware, to run the quantum tasks.
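The console isn’t the only way to create a job: the Braket SDK can do it programmatically, as in this sketch. I’m assuming the AwsQuantumJob.create signature, the module:function entry point convention, and the SV1 device ARN shown below, so check them against the current SDK documentation before relying on them.

from braket.aws import AwsQuantumJob

# Assumed SV1 simulator ARN and create() signature; verify against the Braket SDK docs
job = AwsQuantumJob.create(
    device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    source_module="algorithm_script.py",
    entry_point="algorithm_script:start_here",
)
print(job.arn)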

After some time, the job has completed, and I follow the link to the Amazon Simple Storage Service (Amazon S3) console to download the result. As expected, for each of the five tasks, the results show that the proportion of the 00 and 11 states is roughly 50:50. The proportions vary slightly because of the probabilistic nature of quantum computing.

{
    "braketSchemaHeader": {
        "name": "braket.jobs_data.persisted_job_data",
        "version": "1"
    },
    "dataDictionary": {
        "measurement_counts": [
            {
                "00": 51,
                "11": 49
            },
            {
                "00": 44,
                "11": 56
            },
            {
                "11": 51,
                "00": 49
            },
            {
                "00": 56,
                "11": 44
            },
            {
                "00": 49,
                "11": 51
            }
        ]
    },
    "dataFormat": "plaintext"
}

This example is quite basic because I am not running any classical logic other than initiating tasks. To see the real value, let’s see how it works with a hybrid algorithm where we tweak the parameters of the quantum circuit iteratively from task to task.

Using Amazon Braket Hybrid Jobs with Hybrid Algorithms
For a more advanced example, I use a well-known example of an actual hybrid algorithm, called the quantum approximate optimization algorithm (QAOA), included in the examples provided by Amazon Braket when creating a notebook from the Braket console. QAOA is a quantum algorithm that produces approximate solutions for combinatorial optimization problems. You can also find the example in this GitHub repo.

In this case, I am using QAOA to solve the Max-Cut problem: when partitioning the nodes of a graph into two sets, what is the maximum number of edges connecting nodes in different sets? For example, in the figure below, there are six nodes connected by eight edges. The thick yellow line partitions the nodes into two sets by crossing six edges.

In the QAOA example, the tuning of parameters that are used to run the successive rounds of quantum tasks is optimized in a classical computing environment (such as an EC2 instance) using tools like TensorFlow or PyTorch. In one of the notebook cells, I can choose which interface to use to tune the parameters as well as the other hyperparameters in a similar way to what I’d do for machine learning training.

Braket Hybrid Jobs then coordinates running the classical and quantum computing parts of the algorithm and the exchange of parameters and results between them. I can just sit back and watch my algorithm converge, ready to retrieve my results from S3, as before, for deeper analysis.

Running Hybrid Algorithms in Local Mode
To test and debug hybrid algorithms quickly, the Amazon Braket SDK can run jobs in local mode. With local mode, Braket jobs are run locally on your machine (for example, your laptop). In this way, you can get fast feedback and iterate quickly during the development of your algorithms.

To run a job in local mode, you just need to replace AwsQuantumJob with LocalQuantumJob. Note that AwsQuantumJob is imported from braket.aws, while LocalQuantumJob is imported from braket.jobs.local.
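Continuing the earlier sketch, switching to local mode only changes the class and the import; the device ARN and entry point below remain my assumptions:

from braket.jobs.local import LocalQuantumJob

# Runs the same algorithm script on my machine instead of a managed job instance
job = LocalQuantumJob.create(
    device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    source_module="algorithm_script.py",
    entry_point="algorithm_script:start_here",
)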

Availability and Pricing
Amazon Braket Hybrid Jobs are available today in all AWS Regions where Amazon Braket is available. For more information, see the AWS Regional Services List.

With Amazon Braket Hybrid Jobs, you only pay for the resources you use. There is no need to deploy, configure, and manage classical infrastructure, making it easy to experiment and improve algorithms iteratively. For more information, see the Amazon Braket pricing page.

Instead of relying on theoretical studies, you can start to use quantum computers as the primary tool to understand and improve hybrid algorithms and test their applicability for industry and research use cases. In this way, you can focus on your research and not deal with setting up and coordinating these different compute resources for your experiments.

During the development of this new capability, we talked with customers and partners to understand their needs. “As application developers, Braket Hybrid Jobs gives us the opportunity to explore the potential of hybrid variational algorithms with our customers,” says Vic Putz, head of engineering at QCWare. “We are excited to extend our integration with Amazon Braket and the ability to run our own proprietary algorithms libraries in custom containers means we can innovate quickly in a secure environment. The operational maturity of Amazon Braket and the convenience of priority access to different types of quantum hardware means we can build this new capability into our stack with confidence.”

Simplify running hybrid quantum-classical workloads with Amazon Braket Hybrid Jobs.

Danilo

New – Real-User Monitoring for Amazon CloudWatch

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloudwatch-rum/

Way back in 2009 I wrote a blog post titled New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch. In that post I talked about how Amazon CloudWatch helps you to build applications that are highly scalable and highly available, and noted that it gives you cost-effective real-time visibility into your metrics, with no deployment and no maintenance. Since that launch, we have added many new features to CloudWatch, all with that same goal in mind. For example, last year I showed you how you could Use CloudWatch Synthetics to Monitor Sites, API Endpoints, Web Workflows, and More.

Real-User Monitoring (RUM)
The next big challenge (and the one that we are addressing today) is monitoring web applications with the goal of understanding performance and providing an optimal experience for your users. Because of the number of variables involved—browser type, browser configuration, user location, connectivity, and so forth—synthetic testing can only go so far. What really matters to your users is the experience that they receive, and that’s what we want to help you to deliver!

Amazon CloudWatch RUM will help you to collect the metrics that give you the insights to identify, understand, and improve this experience. You simply register your application, add a snippet of JavaScript to the header of each page, and deploy. The snippet runs as your users step through each page of your application and sends the data to RUM for consolidation and analysis. You can use this tool on its own or in conjunction with Amazon CloudWatch ServiceLens and AWS X-Ray.

CloudWatch RUM in Action
To get started, I open the CloudWatch Console and navigate to RUM. Then I click Add app monitor:

I give my monitor a name and specify the domain that hosts my application:

Then I choose the events that I want to monitor and collect, and specify the percentage of sessions to sample. My personal blog does not get a lot of traffic, so I will collect all of the sessions. I can also choose to store data in Amazon CloudWatch Logs in order to keep it around for more than the 30 days provided by CloudWatch RUM:

Finally, I opt to create a new Cognito identity pool and add a tag. If I want to use CloudWatch ServiceLens and X-Ray, I can expand Active tracing and enable X-Ray. My app does not make any API requests, so I will not do that. I finish by clicking Add app monitor:

The console then shows me the JavaScript code snippet that I need to insert into the <head> element of my application:

I save the snippet, click Done, and then edit my application (my somewhat neglected personal blog in this case) to add the code snippet. I am using Jekyll, and added the snippet to my blog template:

Then I wait for some traffic to arrive. When I return to the RUM Console, I can see all of my app monitors. I click MonitorMyBlog to learn more:

Then I can explore the aggregated timing data and the other information that has been collected. There’s far more than I have space to show today, so feel free to try this out on your own and do a deeper dive. Each of the tabs contains multiple filters and options to help you to zoom in on areas of interest: specific pages, locations, browsers, user journeys, and so forth.

The Performance tab shows the vital signs for my application, followed by additional information:

The vital signs are apportioned into three levels (Positive, Tolerable, and Frustrating):

The screen above contains a metric (largest contentful paint) that was new to me. As Philip Walton explains it, “Largest Contentful Paint (LCP) is an important user-centered metric for measuring perceived load speed because it marks the point in the page load timeline when the page’s main content has likely loaded.”

I can also see the time consumed by the steps that the browser takes when loading a page:

And I can see average load time by time of day:

I can also see all of this information on a page-by-page basis:

The Browsers & Devices tab also shows a lot of interesting and helpful data. For example, I can learn more about the browsers that are used to access my blog, again with the page-by-page option:

I can also view the user journeys (page sequences) through my blog. Based on this information, it looks like I need to do a better job of leading users from one page to another:

As I noted earlier, there’s a lot of interesting and helpful information here, and you should check it out on your own.

Available Now
CloudWatch RUM is available now and you can start using it today in ten AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). You pay $1 for every 100K events that are collected.

Jeff;

New – Amazon CloudWatch Evidently – Experiments and Feature Management

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/cloudwatch-evidently/

As a developer, I am excited to announce the availability of Amazon CloudWatch Evidently. This is a new Amazon CloudWatch capability that makes it easy for developers to introduce experiments and feature management in their application code. CloudWatch Evidently may be used for two similar but distinct use cases: implementing dark launches, also known as feature flags, and A/B testing.

Feature flags are a software development technique that lets you enable or disable features without needing to deploy your code. It decouples feature deployment from the release: features in your code are deployed in advance of the actual release and stay hidden behind if-then-else statements. At runtime, your application code queries a remote service, and the service decides the percentage of users who are exposed to the new feature. You can also configure the application behavior for specific customers, such as your beta testers.

When you use feature flags, you can deploy new code in advance of your launch and then progressively introduce the new feature to a fraction of your customers. During the launch, you monitor your technical and business metrics. As long as all goes well, you may increase traffic to expose the new feature to additional users. If something goes wrong, you can modify the server-side routing with just one click or API call to present only the old (and working) experience to your customers. This lets you revert the user experience without requiring a rollback deployment.

A/B Testing shares many similarities with feature flags while still serving a different purpose. A/B tests consist of a randomized experiment with multiple variations. A/B testing lets you compare multiple versions of a single feature, typically by testing the response of a subject to variation A against variation B, and determining which of the two is more effective. For example, let’s imagine an e-commerce website (a scenario we know quite well at Amazon). You might want to experiment with different shapes, sizes, or colors for the checkout button, and then measure which variation has the most impact on revenue.

The infrastructure required to conduct A/B testing is similar to the one required by feature flags. You deploy multiple scenarios in your app, and you control how to route part of the customer traffic to one scenario or the other. Then, you perform deep dive statistical analysis to compare the impacts of variations. CloudWatch Evidently assists in interpreting and acting on experimental results without the need for advanced statistical knowledge. You can use the insights provided by Evidently’s statistical engine, such as anytime p-value and confidence intervals for decision-making while an experiment is in progress.

At Amazon, we use feature flags extensively to control our launches and A/B testing to experiment with new ideas. We’ve acquired years of experience building developer tools and libraries and maintaining and operating experimentation services at scale. Now you can benefit from our experience.

CloudWatch Evidently uses the terms “launches” for feature flags and “experiments” for A/B testing, and so do I in the rest of this article.

Let’s see how it works from an application developer point of view.

Launches in Action
For this demo, I use a simple Guestbook web application. So far, the guest book page is read-only, and comments are entered from our back-end only. I developed a new feature to let customers enter their comments on the guestbook page. I want to launch this new feature progressively over a week and keep the ability to revert the change back if it impacts important technical or business metrics (such as p95 latency, customer engagement, page views, etc.). Users are authenticated, and I will segment users based on their user ID.

Before launch:
Evidently - experiment off
After launch:
Evidently - experiment on

Create a Project
Let’s start by configuring Evidently. I open the AWS Management Console and navigate to CloudWatch Evidently. Then, I select Create a project.

Evidently - create project

 

I enter a Project name and Description.

Evidently lets you optionally store events to CloudWatch logs or S3, so that you can move them to systems such as Amazon Redshift to perform analytical operations. For this demo, I choose not to store events. When done, I select Create project.

Evidently - create project second part

Add a Feature
Next, I create a feature for this project by selecting Add feature. I enter a Feature name and Feature description. Then, I define my Feature variations. In this example, there are two variations, and I use a Boolean type: true indicates the guestbook is editable and false indicates it is read-only. Variation types can be boolean, double, long, or string.

Evidently - create feature

I may define overrides. Overrides let me pre-define the variation for selected users. I want the user “seb”, my beta tester, to always receive the editable variation.

Evidently - Create feature - overrides

The console shares the JavaScript and Java code snippets to add into my application.

Evidently - code snippet

Talking about code snippets, let’s look at the changes at the code level.

Instrument my Application Code
I use a simple web application for this demo. I coded this application using JavaScript. I use the AWS SDK for JavaScript and Webpack to package my code. I also use JQuery to manipulate the DOM to hide or show elements. I designed this application to use standard JavaScript and a minimum number of frameworks to make this example inclusive to all. Feel free to use higher level tools and frameworks, such as React or Angular for real-life projects.

I first initialize the Evidently client. Just like other AWS Services, I have to provide an access key and secret access key for authentication. Let’s leave the authentication part out for the moment. I added a note at the end of this article to discuss the options that you have. In this example, I use Amazon Cognito Identity Pools to receive temporary credentials.

// Initialize the Amazon CloudWatch Evidently client
const evidently = new AWS.Evidently({
    endpoint: EVIDENTLY_ENDPOINT,
    region: 'us-east-1',
    credentials: fromCognitoIdentityPool({
        client: new CognitoIdentityClient({ region: 'us-west-2' }),
        identityPoolId: IDENTITY_POOL_ID
    }),
});

Armed with this client, my code may invoke the EvaluateFeature API to make decisions about the variation to display to customers. The entityId is any string-based attribute to segment my customers. It might be a session ID, a customer ID, or even better, a hash of these. The featureName parameter contains the name of the feature to evaluate. In this example, I pass the value EditableGuestBook.

const evaluateFeature = async (entityId, featureName) => {

    // API request structure
    const evaluateFeatureRequest = {
        // entityId for calling evaluate feature API
        entityId: entityId,
        // Name of my feature
        feature: featureName,
        // Name of my project
        project: "AWSNewsBlog",
    };

    // Evaluate feature
    const response = await evidently.evaluateFeature(evaluateFeatureRequest).promise();
    console.log(response);
    return response;
}

The response contains the assignment decision from Evidently, as based on traffic rules defined on the server-side.

{
    "details": {
        "launch": "EditableGuestBook",
        "group": "V2"
    },
    "reason": "LAUNCH_RULE_MATCH",
    "value": { "boolValue": false },
    "variation": "readonly"
}

The last part consists of hiding or displaying part of the user interface based on the value received above. Using basic JQuery DOM manipulation, it would be something like the following:

window.aws.evaluateFeature(entityId, 'EditableGuestBook').then((response, error) => {
    if (response.value.boolValue) {
        console.log('Feature Flag is on, showing guest book');
        $('div#guestbook-add').show();
    } else {
        console.log('Feature Flag is off, hiding guest book');
        $('div#guestbook-add').hide();
    }
});

Create a Launch
Now that the feature is defined on the server-side, and the client code is instrumented, I deploy the code and expose it to my customers. At a later stage, I may decide to launch the feature. I navigate back to the console, select my project, and select Create Launch. I choose a Launch name and a Launch description for my launch. Then, I select the feature I want to launch.

Evidently - create launch

In the Launch Configuration section, I configure how much traffic is sent to each variation. I may also schedule the launch with multiple steps. This lets me plan different steps of routing based on a schedule. For example, on the first day, I may choose to send 10% of the traffic to the new feature, and on the second day 20%, etc. In this example, I decide to split the traffic 50/50.

Evidently - launch configuration

Finally, I may define up to three metrics to measure the performance of my variations. Metrics are defined by applying rules to data events.

Evidently - Custom Metrics

Again, I have to instrument my code to send these metrics with the PutProjectEvents API from Evidently. Once my launch is created, the EvaluateFeature API returns different values for different values of entityId (users in this demo).

At any moment, I may change the routing configuration. Moreover, I also have access to a monitoring dashboard to observe the distribution of my variations and the metrics for each variation.

Evidently - launch monitoring

I am confident that your real-life launch graph will get more data than mine did, as I just created it to write this post.

A/B Testing
Doing an A/B test is similar. I create a feature to test, and I create an Experiment. I configure the experiment to route part of the traffic to variation 1, and then the other part to variation 2. When I am ready to launch the experiment, I explicitly select Start experiment.

Evidently - start experiment

In this experiment, I am interested in sending custom metrics. For example:

// pageLoadTime custom metric
const timeSpendOnHomePageData = `{
   "details": {
      "timeSpendOnHomePage": ${timeSpendOnHomePageValue}
   },
   "userDetails": { "userId": "${randomizedID}", "sessionId": "${randomizedID}" }
}`;

const putProjectEventsRequest = {
   project: 'AWSNewsBlog',
   events: [
    {
        timestamp: new Date(),
        type: 'aws.evidently.custom',
        data: JSON.parse(timeSpendOnHomePageData)
    },
   ],
};

evidently.putProjectEvents(putProjectEventsRequest).promise().then(res => {});

Switching to the Results page, I see raw values and graph data for Event Count, Total Value, Average, Improvement (with 95% confidence interval), and Statistical significance. The statistical significance describes how certain we are that the variation has an effect on the metric as compared to the baseline.

These results are generated throughout the experiment and the confidence intervals and the statistical significance are guaranteed to be valid anytime you want to view them. Additionally, at the end of the experiment, Evidently also generates a Bayesian perspective of the experiment that provides information about how likely it is that a difference between the variations exists.

The following two screenshots show graphs for the average value of two metrics over time, and the improvement for a metric within a 95% confidence interval.

Evidently - experiment monitoring - average values

Evidently - experiment monitoring - improvement

Additional Thoughts
Before we wrap-up, I’d like to share some additional considerations.

First, it is important to understand that I choose to demo Evidently in the context of front-end application development. However, you may use Evidently with any application type: front-end web or mobile, back-end API, or even machine learning (ML). For example, you may use Evidently to deploy two different ML models and conduct experiments just like I showed above.

Second, just like with other AWS Services, the Evidently API is available in all of our AWS SDKs. This lets you use EvaluateFeature and other APIs from nine programming languages: C++, Go, Java, JavaScript (and TypeScript), .NET, Node.js, PHP, Python, and Ruby. The AWS SDKs for Rust and Swift are in the making.

Third, for a front-end application like the one I demoed here, it is important to consider how to authenticate calls to the Evidently API. Hard-coding access keys and secret access keys is not an option. For the front-end scenario, I suggest that you use Amazon Cognito Identity Pools to exchange user identity tokens for temporary access and secret keys. User identity tokens may be obtained from Cognito User Pools or third-party authentication systems, such as Active Directory, Login with Amazon, Login with Facebook, Login with Google, Sign in with Apple, or any system compliant with OpenID Connect or SAML. Cognito Identity Pools also allow for anonymous access, with no identity token required. Cognito Identity Pools vend temporary tokens associated with IAM roles, and you must allow calls to the evidently:EvaluateFeature API in your policies.

Finally, when using feature flags, plan for code cleanup time during your sprints. Once a feature is launched, you might consider removing calls to EvaluateFeature API and the if-then-else logic used to initially hide the feature.

Pricing and Availability
Amazon CloudWatch Evidently is generally available in nine AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), and Europe (Stockholm). As usual, we will gradually extend to other Regions in the coming months.

Pricing is pay-as-you-go with no minimum or recurring fees. CloudWatch Evidently charges your account based on Evidently events and Evidently analysis units. Evidently analysis units are generated from Evidently events, based on rules you have created in Evidently. For example, a user checkout event may produce two Evidently analysis units: checkout value and the number of items in cart. For more information about pricing, see Amazon CloudWatch Pricing.

Start experimenting with CloudWatch Evidently today!

— seb

New for AWS Compute Optimizer – Resource Efficiency Metrics to Estimate Savings Opportunities and Performance Risks

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-compute-optimizer-resource-efficiency-metrics-to-estimate-savings-opportunities-and-performance-risks/

By applying the knowledge drawn from Amazon’s experience running diverse workloads in the cloud, AWS Compute Optimizer identifies workload patterns and recommends optimal AWS resources.

Today, I am happy to share that AWS Compute Optimizer now delivers resource efficiency metrics alongside its recommendations to help you assess how efficiently you are using AWS resources:

  • A dashboard shows you savings and performance improvement opportunities at the account level. You can dive into resource types and individual resources from the dashboard.
  • The Estimated monthly savings (On-Demand) and Savings opportunity (%) columns estimate the possible savings for over-provisioned resources. You can sort your recommendations using these two columns to quickly find the resources on which to focus your optimization efforts.
  • The Current performance risk column estimates the bottleneck risk with the current configuration for under-provisioned resources.

These efficiency metrics are available for Amazon Elastic Compute Cloud (Amazon EC2), AWS Lambda, and Amazon Elastic Block Store (EBS) at the resource and AWS account levels.

For multi-account environments, Compute Optimizer continuously calculates resource efficiency metrics at the individual account level in an AWS organization to help identify teams with low cost-efficiency or possible performance risks. This lets you create goals and track progress over time. You can quickly understand just how resource-efficient teams and applications are, easily prioritize recommendation evaluation and adoption by each engineering team, and establish a mechanism that drives a cost-aware culture and accountability across teams.

Using Resource Efficiency Metrics in AWS Compute Optimizer
You can opt in using the AWS Management Console or the AWS Command Line Interface (CLI) to start using Compute Optimizer. You can enroll the account that you’re currently signed in to or all of the accounts within your organization. Depending on your choice, Compute Optimizer analyzes resources that are in your individual account or for each account in your organization, and then generates optimization recommendations for those resources.
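For example, here is a small boto3 sketch of the opt-in step. Enrolling all the accounts in your organization is optional, and you should confirm the parameters against the current compute-optimizer API documentation:

import boto3

compute_optimizer = boto3.client("compute-optimizer")

# Opt in the current account; set includeMemberAccounts=True from the
# management account to enroll every account in the organization
response = compute_optimizer.update_enrollment_status(
    status="Active",
    includeMemberAccounts=False,
)
print(response)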

To see your savings opportunity in Compute Optimizer, you should also opt in to AWS Cost Explorer and enable the rightsizing recommendations in the AWS Cost Explorer preferences page. For more details, see Getting started with rightsizing recommendations.

I already enrolled some time ago, and in the Compute Optimizer console I see the overall savings opportunity for my account.

Console screenshot.

Below that, I have a recap of the performance improvement opportunity. This includes an overview of the under-provisioned resources, as well as the performance risks that they pose by resource type.

Console screenshot.

Let’s dive into some of those savings. In the EC2 instances section, Compute Optimizer found 37 over-provisioned instances.

Console screenshot.

I follow the 37 instances link to get recommendations for those resources, and then sort the table by Estimated monthly savings (On-Demand) descending.

Console screenshot.

On the right, in the same table, I see the current instance type, the recommended instance type based on Compute Optimizer estimates, the difference in pricing, and whether there are platform differences between the current and recommended instance types.

Console screenshot.

I can select each instance to further drill down into the metrics collected, as well as the other possible instance types suggested by Compute Optimizer.

Back to the Compute Optimizer Dashboard, in the Lambda functions section, I see that eight functions have under-provisioned memory.

Console screenshot.

Again, I follow the 8 functions link to get recommendations for those resources, and then sort the table by Current performance risk. In my case, the risk is always low, but different values can help prioritize your activities.

Console screenshot.

Here, I see the current and recommended configured memory for those Lambda functions. I can select each function to get a view of the metrics collected. Choosing the memory allocated to Lambda functions is an optimization process that balances speed (duration) and cost. See Profiling functions with AWS Lambda Power Tuning in the documentation for more information.
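
The Lambda recommendations can be retrieved in the same way. A short sketch, assuming the response fields (currentMemorySize, currentPerformanceRisk, memorySizeRecommendationOptions) match the current Boto3 documentation:

```python
import boto3

compute_optimizer = boto3.client("compute-optimizer")

response = compute_optimizer.get_lambda_function_recommendations(maxResults=50)
for rec in response["lambdaFunctionRecommendations"]:
    options = rec.get("memorySizeRecommendationOptions", [])
    recommended_memory = options[0]["memorySize"] if options else None
    print(
        rec["functionArn"],
        "current:", rec.get("currentMemorySize"), "MB",
        "recommended:", recommended_memory, "MB",
        "risk:", rec.get("currentPerformanceRisk"),
    )
```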

Availability and Pricing
You can use resource efficiency metrics with AWS Compute Optimizer in any AWS Region where it is offered. For more information, see the AWS Regional Services List. There is no additional charge for this new capability. See the AWS Compute Optimizer pricing page for more information.

This new feature lets you implement a periodic workflow to optimize your costs:

  • You can start by reviewing savings opportunities for all of your accounts to identify which accounts have the highest savings opportunity (see the sketch after this list).
  • Then, you can drill into those accounts with the highest savings opportunity. You can refer to the estimated monthly savings to see which recommendations can drive the largest absolute cost impact.
  • Finally, you can communicate optimization opportunities and priority order to the teams using those accounts.
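
For the first step, the GetRecommendationSummaries API gives you that account-level view. A minimal sketch, assuming you call it from the organization’s management account; the account IDs below are placeholders.

```python
import boto3

compute_optimizer = boto3.client("compute-optimizer")

# Placeholder member account IDs; replace with your own.
account_ids = ["111111111111", "222222222222"]

response = compute_optimizer.get_recommendation_summaries(accountIds=account_ids)
for summary in response["recommendationSummaries"]:
    print(summary["accountId"], summary["recommendationResourceType"])
    for item in summary.get("summaries", []):
        # For example: Optimized, Overprovisioned, Underprovisioned counts.
        print("  ", item["name"], item["value"])
```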

Start using AWS Compute Optimizer today to find and prioritize savings opportunities in your AWS account or organization.

Danilo

New for AWS Compute Optimizer – Enhanced Infrastructure Metrics to Extend the Look-Back Period to Three Months

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-compute-optimizer-enhanced-infrastructure-metrics-to-extend-the-look-back-period-to-three-months/

By using machine learning to analyze historical utilization metrics, AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance. Over-provisioning resources can lead to unnecessary infrastructure costs, and under-provisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources: Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Block Store (EBS) volumes, and AWS Lambda functions, based on your utilization data. Today, I am happy to share that AWS Compute Optimizer now supports recommendation preferences where you can opt in or out of features that enhance resource-specific recommendations.

For EC2 instances, AWS Compute Optimizer analyzes Amazon CloudWatch metrics from the past 14 days to generate recommendations. For this reason, recommendations weren’t relevant for a subset of workloads that had monthly or quarterly patterns. For those workloads, you had to look for unoptimized resources and determine the right resource configurations over a longer period of time. This can be time-consuming and requires deep cloud expertise, especially for large organizations.

With the launch of recommendation preferences, Compute Optimizer now offers enhanced infrastructure metrics, a new paid recommendation preference feature that enhances recommendation quality for EC2 instances and Auto Scaling groups. Activating it extends the metrics look-back period to three months. You can activate enhanced infrastructure metrics for individual resources or at the AWS account or AWS organization level.

Let’s see how that works in practice.

Using Enhanced Infrastructure Metrics with AWS Compute Optimizer
Here, I am using the management account of my AWS organization to see organization-level preferences. In the left pane of the Compute Optimizer console, I choose Accounts. Here, there is a new section to set up Organization level preferences for enhanced infrastructure metrics. The console warns me that this is a paid feature.

I want to activate enhanced infrastructure metrics for EC2 instances running in the US East (N. Virginia) Region for all accounts in my organization. I choose the Edit button. For Resource type, I select EC2 instances. For Region, I select US East (N. Virginia). I check that the flag is active and save.

Console screenshot.

If I select one of the AWS accounts on this page, I can choose View preferences and override the setting for that specific account. For example, I can deactivate enhanced infrastructure metrics for the accounts that I use for testing, because EC2 instances there are created automatically by a CI/CD pipeline and are usually terminated within a few hours.

Console screenshot.

In the console Dashboard, I look at the overall recommendations for EC2 instances and Auto Scaling groups.

Console screenshot.

In the EC2 instances box, I choose View recommendations and then one of the instances. With the Edit button, I can activate or deactivate enhanced infrastructure metrics for this specific resource. Here, I can also see whether, considering the settings at the organization, account, and resource levels, enhanced infrastructure metrics is actually active for this specific EC2 instance. I see Active (pending) here because I’ve just changed the setting, and it may take a few hours for Compute Optimizer to consider my updated preferences in its recommendations.

Console screenshot.

Below, I see the recommended options for the instance. Considering the current workload, I should change instance type and size from c3.2xlarge to r5d.large and save some money.

Console screenshot.

In a few hours, Compute Optimizer updates its recommendations based on the latest three months of CloudWatch metrics. In this way, I get better suggestions for workloads that have monthly or quarterly activities.

Availability and Pricing
You can activate enhanced infrastructure metrics on the AWS Compute Optimizer account preferences page for all the accounts in your organization or for individual accounts. If you need more granular control, you can activate (or deactivate) it for an individual resource (Auto Scaling group or EC2 instance) on the resource detail page. You can also activate enhanced infrastructure metrics using the AWS Command Line Interface (CLI) or AWS SDKs.
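
For example, here is a minimal Boto3 sketch of activating the preference at the organization level; the scope name and value are my assumptions based on the current API reference, so check them against the PutRecommendationPreferences documentation.

```python
import boto3

compute_optimizer = boto3.client("compute-optimizer")

# Activate enhanced infrastructure metrics for EC2 instance
# recommendations across the whole organization (run this from the
# management account or a delegated administrator account).
compute_optimizer.put_recommendation_preferences(
    resourceType="Ec2Instance",
    scope={"name": "Organization", "value": "ALL_ACCOUNTS"},  # assumed values
    enhancedInfrastructureMetrics="Active",
)

# Read back the effective preference to confirm the change.
preferences = compute_optimizer.get_recommendation_preferences(
    resourceType="Ec2Instance",
    scope={"name": "Organization", "value": "ALL_ACCOUNTS"},
)
print(preferences.get("recommendationPreferencesDetails"))
```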

Default preferences in Compute Optimizer (with 14-day look-back) are free. Enabling enhanced infrastructure metrics costs $0.0003360215 per resource per hour and is charged based on the number of hours per month the resource is running. For a resource running a full 31-day month, that’s $0.25. For more information, see the Compute Optimizer pricing page.

Use enhanced infrastructure metrics to generate recommendations with Compute Optimizer based on metrics from the past three months.

Danilo

New – AWS Migration Hub Refactor Spaces Helps to Incrementally Refactor Your Applications

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-aws-migration-hub-refactor-spaces-helps-to-incrementally-refactor-your-applications/

I am excited to announce the preview of AWS Migration Hub Refactor Spaces, a new capability of AWS Migration Hub to let you refactor existing applications into distributed applications, typically based on microservices.

There are multiple reasons why you want to refactor existing applications. You might want to make your code more modular, use more modern frameworks, use different data storage, etc. In general, when refactoring, your objective is to make your application easier to maintain and evolve over time. Other benefits might include handling larger workloads, increasing resiliency, or lowering costs. But let’s face it, refactoring is hard. I usually compare refactoring to changing the engines, cabin seats, and entertainment system of a plane while keeping the plane in the air, fully loaded with passengers, and without having them notice any change.

When talking with customers who have successfully been through these refactoring projects, we noticed a common pattern: the Strangler Fig design pattern.

strangler fig

Strangler figs are a family of plants that grow their roots down from the tops of the trees that host them, eventually enveloping or replacing their host. Author Martin Fowler first coined the term to describe a migration design pattern. The idea is “to gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled”.

How Can I Apply This Plant Behavior To My Application Migration?
Inspired by this family of plants, I might want to extract capabilities from a monolithic application and rewrite them as microservices. Then, I incrementally route traffic away from the old to the new. Over time, all of the requests are routed to microservices, and the existing application is retired.

While effective, this approach to application transformation creates hurdles. I must create the required infrastructure to separate the existing applications and the microservices. In the AWS cloud, this often involves creating multiple AWS accounts, so teams or services can more easily operate independently. Having multiple accounts is the most efficient way to separate concerns and billing across teams. When dealing with multiple AWS accounts, I must maintain the networking infrastructure that connects my existing application and the new services together. Furthermore, I must create a routing control system to route traffic gradually from the old application to the new services in different accounts. Creating and managing that infrastructure at scale is complex. It introduces additional risks and costs to the refactor project.

How Refactor Spaces Helps
AWS Migration Hub Refactor Spaces takes care of the heavy lifting for me. First, it lays down the networking infrastructure to enable connectivity between multiple AWS accounts. Second, it creates and manages a mechanism to route API calls away from my legacy application.

Let’s imagine I have a monolithic application that I want to refactor. The application is made of a web-based front-end using ReactJS. The front-end application is hosted on Amazon Simple Storage Service (Amazon S3) and distributed through Amazon CloudFront. The front-end makes API calls to a monolithic application developed in NodeJS or Python and deployed on several EC2 instances. The API uses a relational database, because this is how we have stored data since the company was founded.

The architecture of this application is illustrated by the following diagram.

refactor spaces - monolith

Each API has a distinct URI. For example the /cart API handles the shopping basket, the /order API handles the ordering system, etc. I apply the strangler fig pattern and decide to extract the /cart capabilities to a set of new microservices. I create an AWS account for these microservices. I develop and deploy a set of AWS Lambda functions to implement the cart management functionalities. I chose to use Amazon DynamoDB for the shopping basket data storage because of its low latency at scale.

The schema of my new architecture is shown in the following diagram:

Refactor Spaces - target architecture

But now I have two challenges. First, I have to design, code, and deploy a routing mechanism to route API calls made by the front-end application to the correct back-end: either the monolith or the new microservices. This service will likely be deployed into a distinct AWS account. Then, I have to configure network connectivity between these multiple AWS accounts.

This is where Refactor Spaces comes into the picture.

Introducing AWS Migration Hub Refactor Spaces
Refactor Spaces makes it easy to manage application refactoring by taking care of the two challenges I just described: the routing of the API calls and the network connectivity between AWS accounts. It is made of Environments, Services, and an Application proxy. Let’s see it in action.

I open the AWS Management Console, navigate to AWS Migration Hub, and select Refactor Spaces.

I first create a Refactor Spaces Environment. An Environment is a multi-account network fabric consisting of peered VPCs. This lets AWS resources in service VPCs added to the environment communicate directly across AWS accounts. It also provides a unified view of networking and services across accounts.

In Create environment, I give my environment a name and a description, and then select Next.

Refactor Spaces - Create environment

Then, I define my application. I give my application a name, and select the VPC where the proxy will be deployed.

An application is a container for services. It has a proxy that defines routes. The proxy lets your front-end application use a single endpoint to contact multiple services. All of the traffic hits the single proxy endpoint, and then it’s routed to the appropriate service based on your rules.

Refactor Space - Create application
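
If you prefer to script these first two steps, here is a sketch using the Boto3 migration-hub-refactor-spaces client. The names, description, VPC ID, and proxy settings are placeholders, and the request shapes reflect my understanding of the current API, so treat this as an illustration of the flow rather than a drop-in script.

```python
import boto3

refactor_spaces = boto3.client("migration-hub-refactor-spaces")

# Step 1: create the environment (the multi-account network fabric).
environment = refactor_spaces.create_environment(
    Name="unicorn-shop",  # placeholder
    Description="Strangler fig refactor of the shop monolith",
    NetworkFabricType="TRANSIT_GATEWAY",
)
environment_id = environment["EnvironmentId"]

# Step 2: create the application and its Amazon API Gateway proxy
# in the VPC where the proxy should be deployed (placeholder VPC ID).
application = refactor_spaces.create_application(
    EnvironmentIdentifier=environment_id,
    Name="shop-app",  # placeholder
    VpcId="vpc-0123456789abcdef0",
    ProxyType="API_GATEWAY",
    ApiGatewayProxy={"EndpointType": "REGIONAL", "StageName": "prod"},
)
print(application["ApplicationId"])
```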

You may want to use multiple AWS accounts as explained before. Typically, an application is made of one AWS account that hosts the Refactor Spaces Application proxy, one or more AWS accounts that host the legacy application, and one AWS account for the first microservice. Therefore, I invite the other AWS account owners to join this Refactor Spaces environment. I add one principal per AWS account. Refactor Spaces doesn’t reinvent the wheel here; it leverages AWS Resource Access Manager (RAM) to share the environment.

This step is optional. Refactor Spaces also works within a single AWS account, and it is possible to share the environment with other AWS accounts at a later stage.

I enter the AWS account IDs as Principals, and then select Next.

Refactor Space - Shared Accounts

Finally, I review my choices and select Create & share environment (not shown here).

Assuming that the microservices are ready to use, the next step is registering them as Refactor Spaces Services. Refactor Spaces Services are entities that provide business capabilities, typically microservices. These services are reachable through unique endpoints, and they can interoperate across accounts in a Refactor Spaces Environment. In this example, there are four services:

  • The monolithic app. This is the default service where Refactor Spaces routes all API calls initiated by the front-end.
  • Three microservices to implement the /cart capability. I decided to refactor this capability with three distinct services: AddItem, RemoveItem, and ListItems.

A Refactor Spaces Service may target any compute resource type: EC2, containers deployed on AWS Fargate, an Application Load Balancer, an AWS Lambda function, etc.

I select Create service from the left menu. The service configuration is in three steps. First, I select the Refactor Spaces Environment and Application where I want to define this service. Second, I give my service a name and a description. And third, I select the service endpoint: either an HTTP/HTTPS URL in a VPC, or a Lambda function.

The monolithic application is the default route where the Refactor Spaces Application proxy routes all of the API calls, unless otherwise specified. I enter / as Source path and select Include child paths. Then, I make sure Match all is selected for HTTP verbs.

When finished, I select Create service. I repeat this process for each of my microservices. For this demo, I create four Refactor Spaces Services in total.

AWS Migration Hub refactor Spaces - create service
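
Here is the equivalent API sketch for registering the default monolith service and one of the Lambda-backed microservices; the environment and application identifiers, URL, VPC ID, and function ARN are placeholders.

```python
import boto3

refactor_spaces = boto3.client("migration-hub-refactor-spaces")

ENV_ID = "env-0123456789abcdef"  # placeholder identifiers
APP_ID = "app-0123456789abcdef"

# The monolithic application, reachable over HTTP inside its VPC.
monolith = refactor_spaces.create_service(
    EnvironmentIdentifier=ENV_ID,
    ApplicationIdentifier=APP_ID,
    Name="legacy-monolith",
    EndpointType="URL",
    UrlEndpoint={"Url": "http://10.0.1.10:8080"},  # placeholder private endpoint
    VpcId="vpc-0123456789abcdef0",
)

# One of the /cart microservices, implemented as a Lambda function.
add_item = refactor_spaces.create_service(
    EnvironmentIdentifier=ENV_ID,
    ApplicationIdentifier=APP_ID,
    Name="AddItem",
    EndpointType="LAMBDA",
    LambdaEndpoint={"Arn": "arn:aws:lambda:us-east-1:111111111111:function:AddItem"},
)
print(monolith["ServiceId"], add_item["ServiceId"])
```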

The last step defines the routing rules for the Refactor Spaces Application proxy. When configured, the proxy becomes the new API endpoint for my front-end application. The only change that I have to make in my front-end application is to point it at the Refactor Spaces Application proxy URI. The proxy routes API calls to Services according to a route definition. An Application proxy supports routing to all compute platforms with public or private visibility. At the moment, private endpoints must be referenced through a public DNS name or their private IP address. Each API call is evaluated against the set of routes configured in the proxy. When a path matches a rule, the request is sent to the target service configured for that path. Proxies have a default route that forwards requests to a default service if they don’t match any of the path rules.

I select the service that I just created. Then, I enter the route Source path and the HTTP Verb to support. When my service expects subpaths (such as /cart/123), I make sure to select Include child paths, as well.

Refactor Spaces - Define route

I repeat this process for the ListItems and RemoveItem microservices. They are invoked with different HTTP verbs: GET and DELETE, respectively.
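
The routes can also be created through the API. The sketch below defines the three /cart routes, one HTTP verb per microservice; the service identifiers are placeholders and the UriPathRoute shape is my assumption based on the current API reference.

```python
import boto3

refactor_spaces = boto3.client("migration-hub-refactor-spaces")

ENV_ID = "env-0123456789abcdef"  # placeholder identifiers
APP_ID = "app-0123456789abcdef"

# Map each cart microservice to the HTTP verb it handles.
cart_routes = [
    ("svc-additem00000000", "POST"),
    ("svc-listitems000000", "GET"),
    ("svc-removeitem00000", "DELETE"),
]

for service_id, method in cart_routes:
    refactor_spaces.create_route(
        EnvironmentIdentifier=ENV_ID,
        ApplicationIdentifier=APP_ID,
        ServiceIdentifier=service_id,
        RouteType="URI_PATH",
        UriPathRoute={
            "SourcePath": "/cart",
            "IncludeChildPaths": True,  # also match paths such as /cart/123
            "Methods": [method],
            "ActivationState": "ACTIVE",
        },
    )
```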

Based on this configuration, Refactor Spaces creates and manages the following architecture for me. The Refactor Spaces Application proxy and network fabric are deployed in a separate AWS account. I might further configure the Amazon API Gateway based on the needs of my monolithic application or microservices.

Refactor Spaces - final architecture

The final change is to the application front-end. I modify its configuration to point to the Refactor Spaces Application proxy endpoint instead of the monolith’s endpoint. From now on, Refactor Spaces routes API calls to the monolith by default. It routes the /cart calls for the GET, POST, and DELETE verbs to my new microservices implemented as Lambda functions.

Over time, I will repeat this process to move other capabilities out of the monolithic application, one by one, until the old monolith is strangled and replaced by the new microservices architecture.

Pricing and Availability
AWS Migration Hub Refactor Spaces is available today in the following ten AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), Europe (London), and Europe (Stockholm). As usual, we’re looking forward to expanding to additional Regions in the future.

This new capability is available today as an open preview, and no registration is necessary. You can start using it today. There is no charge for using Refactor Spaces during the preview period. However, you may be charged for the resources that it provisions in your AWS accounts: Amazon API Gateway, AWS Transit Gateway, and Network Load Balancer. The pricing details are available on the AWS Migration Hub pricing page. Billing will start when Refactor Spaces becomes generally available.

Go and start refactoring your applications today!

— seb

Top Announcements of AWS re:Invent 2021

Post Syndicated from AWS News Blog Team original https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2021/

Welcome to AWS re:Invent! From Nov. 29-Dec. 3, 2021, we’ll update this page daily with the most noteworthy launches from our biggest event of the year. AWS Chief Evangelist Jeff Barr and our team of AWS developer advocates from around the globe share the news and offer helpful tips for getting started with all the latest AWS releases.

More ways to learn:

(This post was last updated: 12:42 a.m., PST, Nov. 29, 2021.)


Quick category links: Internet of Things | Security

Internet of Things

Preview – AWS IoT RoboRunner for Building Robot Fleet Management Applications

AWS IoT RoboRunner is a new robotics service that makes it easier for enterprises to build and deploy applications that help fleets of robots work seamlessly together.

Security

Amazon CodeGuru Reviewer Introduces Secrets Detector to Identify Hardcoded Secrets and Secure Them with AWS Secrets Manager
The new Amazon CodeGuru Reviewer Secrets Detector is an automated tool that helps developers detect secrets in source code or configuration files, such as passwords, API keys, SSH keys, and access tokens.


Amazon CodeGuru Reviewer Introduces Secrets Detector to Identify Hardcoded Secrets and Secure Them with AWS Secrets Manager

Post Syndicated from Alex Casalboni original https://aws.amazon.com/blogs/aws/codeguru-reviewer-secrets-detector-identify-hardcoded-secrets/

Amazon CodeGuru helps you improve code quality and automate code reviews by scanning and profiling your Java and Python applications. CodeGuru Reviewer can detect potential defects and bugs in your code. For example, it suggests improvements regarding security vulnerabilities, resource leaks, concurrency issues, incorrect input validation, and deviation from AWS best practices.

One of the most well-known security practices is the centralization and governance of secrets, such as passwords, API keys, and credentials in general. Like many other developers facing a strict deadline, I’ve often taken shortcuts when managing and consuming secrets in my code, using plaintext environment variables or hard-coding static secrets during local development, and then inadvertently committing them. Of course, I’ve always regretted it and wished there was an automated way to detect and secure these secrets across all my repositories.

Today, I’m happy to announce the new Amazon CodeGuru Reviewer Secrets Detector, an automated tool that helps developers detect secrets in source code or configuration files, such as passwords, API keys, SSH keys, and access tokens.

These new detectors use machine learning (ML) to identify hardcoded secrets as part of your code review process, ultimately helping you to ensure that all new code doesn’t contain hardcoded secrets before being merged and deployed. In addition to Java and Python code, secrets detectors also scan configuration and documentation files. CodeGuru Reviewer suggests remediation steps to secure your secrets with AWS Secrets Manager, a managed service that lets you securely and automatically store, rotate, manage, and retrieve credentials, API keys, and all sorts of secrets.

This new functionality is included as part of the CodeGuru Reviewer service at no additional cost and supports the most common API providers, such as AWS, Atlassian, Datadog, Databricks, GitHub, Hubspot, Mailchimp, Salesforce, SendGrid, Shopify, Slack, Stripe, Tableau, Telegram, and Twilio. Check out the full list here.

Secrets Detectors in Action
First, I select CodeGuru from the AWS Secrets Manager console. This new flow lets me associate a new repository and run a full repository analysis with the goal of identifying hardcoded secrets.

Associating a new repository only takes a few seconds. I connect my GitHub account, and then select a repository named hawkcd, which contains a few Java, C#, JavaScript, and configuration files.

A few minutes later, my full repository is successfully associated and the full scan is completed. I could also have a look at a demo repository analysis called DemoFullRepositoryAnalysisSecrets. You’ll find this demo in the CodeGuru console, under Full repository analysis, in your AWS Account.

I select the repository analysis and find 42 recommendations, including one recommendation for a hardcoded secret (you can filter recommendations by Type=Secrets). CodeGuru Reviewer identified a hardcoded AWS Access Key ID in a .travis.yml file.
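
You can also run the analysis and read the findings programmatically. A sketch with Boto3 follows; the repository association ARN is a placeholder, and I filter on the category name client-side because I haven’t verified the exact value that the API returns for secrets findings.

```python
import boto3

codeguru = boto3.client("codeguru-reviewer")

# Placeholder ARN of a repository that is already associated.
association_arn = (
    "arn:aws:codeguru-reviewer:us-east-1:111111111111:association:example"
)

# Start a full repository analysis on the main branch.
review = codeguru.create_code_review(
    Name="full-scan-secrets-demo",
    RepositoryAssociationArn=association_arn,
    Type={"RepositoryAnalysis": {"RepositoryHead": {"BranchName": "main"}}},
)
review_arn = review["CodeReview"]["CodeReviewArn"]

# Once the review reaches the Completed state, list its recommendations.
recommendations = codeguru.list_recommendations(CodeReviewArn=review_arn)
for rec in recommendations["RecommendationSummaries"]:
    category = rec.get("RecommendationCategory", "")
    if "secret" in category.lower():  # assumed category naming
        print(rec["FilePath"], rec.get("StartLine"), rec["Description"][:80])
```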

The recommendation highlights the importance of storing these secrets securely, provides a link to learn more about the issue, and suggests rotating the identified secret to make sure that it can’t be reused by malicious actors in the future.

CodeGuru Reviewer lets me jump to the exact file and line of code where the secret appears, so that I can dive deeper, understand the context, verify the file history, and take action quickly.

Last but not least, the recommendation includes a Protect your credential button that lets me jump quickly to the AWS Secrets Manager console and create a new secret with the proper name and value.

I’m going to remove the plaintext secret from my source code and update my application to fetch the secret value from AWS Secrets Manager. In many cases, you can keep the current configuration structure and use existing parameters to store the secret’s name instead of the secret’s value.

Once the secret is securely stored, AWS Secrets Manager also provides me with code snippets that fetch my new secret in many programming languages using the AWS SDKs. These snippets let me save time and include the necessary SDK call, as well as the error handling, decryption, and decoding logic.
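
In Python, for example, the generated snippet boils down to a call like this; the secret name is a placeholder, and in practice you would keep the retry and error handling that the console snippet includes.

```python
import json
import boto3

secrets_manager = boto3.client("secretsmanager")

# Placeholder secret name created from the CodeGuru Reviewer recommendation.
response = secrets_manager.get_secret_value(SecretId="demo/travis/aws-access-key")

# Secrets stored as key/value pairs are returned as a JSON string.
secret = json.loads(response["SecretString"])
access_key_id = secret.get("AWS_ACCESS_KEY_ID")
```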

I’ve shown you how to run a full repository analysis, and of course the same analysis can be performed continuously on every new pull request to help you prevent hardcoded secrets and other issues from being introduced in the future.

Available Today with CodeGuru Reviewer
CodeGuru Reviewer Secrets Detector is available in all Regions where CodeGuru Reviewer is available, at no additional cost.

If you’re new to CodeGuru Reviewer, you can try it for free for 90 days with repositories up to 100,000 lines of code. Connecting your repositories and starting a full scan takes only a couple of minutes, whether your code is hosted on AWS CodeCommit, BitBucket, or GitHub. If you’re using GitHub, check out the GitHub Actions integration as well.

You can learn more about Secrets Detector in the technical documentation.

Alex

Preview – AWS IoT RoboRunner for Building Robot Fleet Management Applications

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/preview-aws-iot-roborunner-for-building-robot-fleet-management-applications/

In 2018, we launched AWS RoboMaker, a cloud-based simulation service that enables robotics developers to run, scale, and automate simulation without managing any infrastructure. As we worked with robot developers and operators, we have repeatedly heard that they face challenges in operating different robot types in their automation efforts, including autonomous guided vehicles (AGV), autonomous mobile vehicles (AMR), and robotic manipulators.

Many customers use different types of robots, often from different vendors, in a single facility. Robot operators want access to the unified data required to build applications that work across a fleet of robots. However, when a new robot is added to an autonomous operation, complex and time-consuming software integration work is required to connect the robot control software to work management systems.

Today, we are launching a public preview of AWS IoT RoboRunner, a new robotics service that makes it easier for enterprises to build and deploy applications that help fleets of robots work seamlessly together. AWS IoT RoboRunner lets you connect your robots and work management systems, thereby enabling you to orchestrate work across your operation through a single system view.

This new service builds on the same technology used in Amazon fulfillment centers, and now we are excited to make it available to all developers to build advanced robotics applications for their businesses.

AWS IoT RoboRunner in Action
To get started with AWS IoT RoboRunner, you create a single facility (for example, a site name and location) in the AWS Management Console. Behind the scenes, AWS IoT RoboRunner automatically creates centralized repositories for storing facility, robot, destination, and task data. Then, the robots working on this site are set up as a “Fleet”, and each individual robot is set up in AWS IoT RoboRunner as a “Robot” within a fleet.

You can download the Fleet Gateway Library to develop integration code for connecting your robots and work management systems with AWS IoT RoboRunner, so that you can send and receive data from individual robot fleets. You can also develop your first robotics management application using the Task Manager Library, deploying the Task Manager code as an AWS Lambda function and the Fleet Gateway code on premises as an AWS IoT Greengrass component.

To enable a single-system view of the robots, the status of the systems, and the progress of tasks on the same interface, AWS IoT RoboRunner provides APIs that let you build a user application. AWS IoT RoboRunner provides sample applications for allocating tasks to robot fleets so that you can get started quickly. You can customize the task allocation code with business requirements that align with your use case.

Learn more by reading Getting started with AWS IoT RoboRunner in the AWS IoT RoboRunner Developer Guide. Watch a quick introductory video about AWS IoT RoboRunner for more information.

Try Public Preview Now
AWS IoT RoboRunner is now available in public preview, and you can start using it today in the US East (N. Virginia) and Europe (Frankfurt) Regions. There is no additional cost to use the service during the preview period.

You can send feedback to [email protected], the AWS forum for AWS IoT, or through your usual AWS Support contacts.

Channy