New – Amazon QuickSight Q Answers Natural-Language Questions About Business Data

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/amazon-quicksight-q-to-answer-ad-hoc-business-questions/

We launched Amazon QuickSight as the first Business Intelligence (BI) service with Pay-per-Session pricing. Today, we are happy to announce the preview of Amazon QuickSight Q, a Natural Language Query (NLQ) feature powered by machine learning (ML). With Q, business users can now use QuickSight to ask questions about their data using everyday language and receive accurate answers in seconds.

For example, in response to questions such as, “What is my year-to-date year-over-year sales growth?” or “Which products grew the most year-over-year?”, Q automatically parses the questions to understand the intent, retrieves the corresponding data, and returns the answer in the form of a number, chart, or table in QuickSight. Q uses state-of-the-art ML algorithms to understand the relationships across your data and build indexes to provide accurate answers. Also, since Q does not require BI teams to pre-build data models on specific datasets, you can ask questions across all your data.

The Need for Q
Traditionally, BI engineers and analysts create dashboards to make it easier for business users to view and monitor key metrics. When a new business question arises and no answers are found in the data displayed on an existing dashboard, the business user must submit a data request to the BI Team, which is often thinly staffed, and wait several weeks for the question to be answered and added to the dashboard.

A sales manager looking at a dashboard that outlines daily sales trends may want to know what their overall sales were for last week, in comparison to last month, the previous quarter, or the same time last year. They may want to understand how absolute sales compare to growth rates, or how growth rates are broken down by different geographies, product lines, or customer segments to identify new opportunities for growth. This may require a BI team to reconstruct the data, create new data models, and answer additional questions. This process can take from a few days to a few weeks. Such specific data requests increase the workload for BI teams that may be understaffed, increase the time spent waiting for answers, and frustrate business users and executives who need the data to make timely decisions.

How Q Works
To ask a question, you simply type your question into the QuickSight Q search bar. Once you start typing in your question, Q provides autocomplete suggestions with key phrases and business terms to speed up the process. It also automatically performs spell check, and acronym and synonym matching, so you don’t have to worry about typos or remember the exact business terms in the data. Q uses natural language understanding techniques to extract business terms (e.g., revenue, growth, allocation, etc.) and intent from your questions, retrieves the corresponding data from the source, and returns the answers in the form of numbers and graphs.

Q further learns from user interactions within the organization to continually improve accuracy. For example, if Q doesn’t understand a phrase in a question, such as what “my product” refers to, Q prompts the user to choose from a drop-down menu of suggested options in the search bar. Q then remembers the phrase for next time, thus improving accuracy with use. If you ask a question about all your data, Q provides an answer using that data. Users are not limited to questions confined to a pre-defined dashboard and can ask any question relevant to their business.

Let’s see a demo. Assume a company has a sales dashboard like the one below.

Dashboard of QuickSight

The business users of the dashboard can drill down and slice and dice the data simply by typing their questions on the Q search bar above.

Let’s use the Q search bar to ask a question, “Show me last year’s weekly sales in California.” Q generates numbers and a graph within seconds.

Generated dashboard

You can click “Looks good” or “Not quite right” on the answer. When clicking “Not quite right,” you can submit your feedback to your BI team to help improve Q. You can also investigate the answer further. Let’s add “versus New York” to the end of the question and hit enter. A new answer will pop up.

Generated new graph

Next, let’s dig deeper into California. Type in “What are the best selling categories in California?”

categories detail

With Q, you can easily change the presentation. Let’s see another diagram for the same question.

line chart

Next, let’s take a look at the biggest industry, “Finance.” Type in “Show me the sales growth % week over week in the Finance sector” to Q, and specify “Line chart” to check weekly sales revenue growth.

Sales revenue is growing, but with pronounced peaks and troughs. With these insights, you might now consider how to stabilize sales for a better profit structure.

Getting Started with Amazon QuickSight Q
Once Q is enabled, a new “Q Topics” link appears on the left navigation bar. Topics are a collection of one or more datasets and are meant to represent a subject area that users can ask questions about. For example, a marketing team may have Q Topics for “Ad Spending,” “Email Campaign,” “Website Analytics,” and others. Additionally, as an author, you can:

  • Add friendly names, synonyms, and descriptions to datasets and columns to improve Q’s answers.
  • Share the Topic to your users so they can ask questions about the Topic.
  • See questions your users are asking, how Q answered these questions, and improve upon the answer.

Select Topics, and set the Topic name and its Description.

setting up topics

After clicking the Continue button, you can add datasets to a topic in two ways: You can add one or more datasets directly to your topic by selecting Add datasets, or you can import all the datasets in an existing dashboard into your topic by selecting Import dashboard.

The next step is to make your datasets natural-language friendly. Generally, names of datasets and columns are based on technical naming conventions and do not reflect how they are referred to by end users. Q relies heavily on names to match the right dataset and column with the terms used in questions. Therefore, such technical names must be converted to user-friendly names to ensure that they can be mapped correctly. Below are examples:

  • Dataset name: D_CUST_DLY_ORD_DTL → Friendly name: Customer Daily Order Details
  • Column name: pdt_cd → Friendly name: Product Code

Also, you can set up synonyms for each column so users can use the terms they are most comfortable with. For example, some users might input the term “client” or “segment” instead of “industry.” Q can suggest the correct name as the query is typed, but BI operators can also set up synonyms for frequently used words. Click “Topics” in the left pane and choose the topic for which you want to set synonyms.

Then, choose “Datasets.”

Now, we can set a Friendly Name or synonyms as Aliases, such as “client” for “Customer,” or “Segment” for “Industry.”

setting up Friendly Name

After adding synonyms, a user can save the changes and start asking questions in the Q search bar.

Amazon QuickSight Q Preview Available Today
Q is available in preview in US East (N. Virginia), US West (Oregon), US East (Ohio), and Europe (Ireland). Getting started with Q is just a few clicks away from QuickSight. You can use Q with AWS data sources such as Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon Athena, and Amazon S3; with third-party commercial sources such as SQL Server, Teradata, and Snowflake; or with business applications such as Salesforce, ServiceNow, Adobe Analytics, and Excel. Q works with all data sources supported by QuickSight.

Learn more about Q and get started with the preview today.

– Kame

 

New- Amazon DevOps Guru Helps Identify Application Errors and Fixes

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/amazon-devops-guru-machine-learning-powered-service-identifies-application-errors-and-fixes/

Today, we are announcing Amazon DevOps Guru, a fully managed operations service that makes it easy for developers and operators to improve application availability by automatically detecting operational issues and recommending fixes. DevOps Guru applies machine learning informed by years of operational excellence from Amazon.com and Amazon Web Services (AWS) to automatically collect and analyze data such as application metrics, logs, and events to identify behavior that deviates from normal operational patterns.

Once a behavior is identified as an operational problem or risk, DevOps Guru alerts developers and operators to the details of the problem so they can quickly understand the scope of the problem and possible causes. DevOps Guru provides intelligent recommendations for fixing problems, saving you time resolving them. With DevOps Guru, there is no hardware or software to deploy, and you only pay for the data analyzed; there is no upfront cost or commitment.

Distributed/Complex Architecture and Operational Excellence
As applications become more distributed and complex, operators need more automated practices to maintain application availability and reduce the time and effort spent on detecting, debugging, and resolving operational issues. Application downtime, for example, as caused by misconfiguration, unbalanced container clusters, or resource depletion, can result in significant revenue loss to an enterprise.

In many cases, companies must invest developer time in deploying and managing multiple monitoring tools, such as metrics, logs, traces, and events, and storing them in various locations for analysis. Developers or operators also spend time developing and maintaining custom alarms to alert them to issues such as sudden spikes in load balancer errors or unusual drops in application request rates. When a problem occurs, operators receive multiple alerts related to the same issue and spend time combining alerts to prioritize those that need immediate attention.

How DevOps Guru Works
The DevOps Guru machine learning models leverage AWS expertise in running highly available applications for the world’s largest e-commerce business for the past 20 years. DevOps Guru automatically detects operational problems, details the possible causes, and recommends remediation actions. It provides customers with a single console experience to search and visualize operational data, integrating data from multiple sources such as Amazon CloudWatch, AWS Config, AWS CloudTrail, AWS CloudFormation, and AWS X-Ray, and reducing the need to use multiple tools.

Getting Started with DevOps Guru
Activating DevOps Guru is as easy as accessing the AWS Management Console and clicking Enable. When enabling DevOps Guru, you can select the IAM role. You’ll then choose the AWS resources to analyze, which may include all resources in your AWS account or just specific AWS CloudFormation stacks. Finally, you can set an Amazon SNS topic if you want to send notifications from DevOps Guru via SNS.
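
If you prefer to script the same setup, the DevOps Guru API exposes it as well. Here is a sketch using the AWS CLI; the stack name and topic ARN are hypothetical placeholders:

$ aws devops-guru update-resource-collection --action ADD \
      --resource-collection '{"CloudFormation":{"StackNames":["my-app-stack"]}}'
$ aws devops-guru add-notification-channel \
      --config '{"Sns":{"TopicArn":"arn:aws:sns:us-east-1:123456789012:devops-guru-alerts"}}'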

DevOps Guru starts to accumulate logs and analyze your environment; this can take several hours. Let’s assume we have a simple serverless architecture as shown in this illustration.

When the system has an error, the operator needs to investigate whether the error came from Amazon API Gateway, AWS Lambda, or Amazon DynamoDB. They must then determine the root cause and how to fix the issue. With DevOps Guru, the process becomes simple.

When a developer accesses the DevOps Guru management console, they will see a list of insights, each a collection of related anomalies created during the analysis of the AWS resources configured within your application (in this case, Amazon API Gateway, AWS Lambda, and Amazon DynamoDB). Each insight contains observations, recommendations, and contextual data you can use to better understand and resolve the operational problem.

The list below shows the insight name, the status (closed or ongoing), severity, and when the insight was created. Without checking any logs, you can immediately see that in the most recent issue (line 1), a problem with a Lambda function within your stack was the cause of the issue, and it was related to duration. If the issue were still occurring, the status would be listed as Ongoing. Since this issue was temporary, the status is showing Closed.

Insights

Let’s look deeper at the most recent anomaly by clicking through the first insight link. There are two tabs: Aggregated metrics and Graphed anomalies.

Aggregated metrics display metrics that are related to the insight. Operators can see which AWS CloudFormation stack created the resource that emitted the metric, the name of the resource, and its type. The red lines on a timeline indicate spans of time when a metric emitted unusual values. In this case, the operator can see the specific time of day on Nov 24 when the anomaly occurred for each metric.

Graphed anomalies display detailed graphs for each of the insight’s anomalies. Operators can investigate and look at an anomaly at the resource level and per statistic. The graphs are grouped by metric name.

metrics

By reviewing aggregated and graphed anomalies, an operator can see when the issue occurred, whether it is still ongoing, as well as the resources impacted. It appears the increased Lambda duration had a corresponding impact on API Gateway causing timeouts and resulted in 5XX errors in API Gateway.

DevOps Guru also provides Relevant events, which are related to activities that changed your application’s configuration, as illustrated below.

Events

We can now see that a configuration change happened 2 hours before this issue occurred. If we click the point on the graph at 20:30 on 11/24, we can learn more and see the details of that change.

If you click through to the Ops event, the AWS CloudTrail logs show that the configuration change was twofold: 1) a change in the provisioned concurrency on a Lambda function, and 2) a reduction in the integration timeout on an API Gateway integration.

recommendations to fix

The recommendations tell the operator to evaluate the provisioned concurrency for Lambda and how to troubleshoot errors in API Gateway. After further evaluation, the operator will discover this is exactly correct. The root cause is a mismatch between the Lambda provisioned concurrency setting and the API Gateway integration latency timeout. When the Lambda configuration was updated in the last deployment, it altered how this application responded to burst traffic, and it no longer fit within the API Gateway timeout window. This error is unlikely to have been found in unit testing and will occur repeatedly if the configurations are not updated.

DevOps Guru can send alerts about anomalies to operators via Amazon SNS, and it is integrated with AWS Systems Manager OpsCenter, enabling customers to receive insights directly within OpsCenter and quickly diagnose and remediate issues.

Available for Preview Today
Amazon DevOps Guru is available for preview in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo). To learn more about DevOps Guru, please visit our web site and technical documentation, and get started today.

– Kame

 

 

New for AWS Lambda – Container Image Support

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/

With AWS Lambda, you upload your code and run it without thinking about servers. Many customers enjoy the way this works, but if you’ve invested in container tooling for your development workflows, it’s not easy to use the same approach to build applications using Lambda.

To help you with that, you can now package and deploy Lambda functions as container images of up to 10 GB in size. In this way, you can also easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data intensive workloads. Just like functions packaged as ZIP archives, functions deployed as container images benefit from the same operational simplicity, automatic scaling, high availability, and native integrations with many services.

We are providing base images for all the supported Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby) so that you can easily add your code and dependencies. We also have base images for custom runtimes based on Amazon Linux that you can extend to include your own runtime implementing the Lambda Runtime API.

You can deploy your own arbitrary base images to Lambda, for example images based on Alpine or Debian Linux. To work with Lambda, these images must implement the Lambda Runtime API. To make it easier to build your own base images, we are releasing Lambda Runtime Interface Clients implementing the Runtime API for all supported runtimes. These implementations are available via native package managers, so that you can easily pick them up in your images, and are being shared with the community using an open source license.

We are also releasing as open source a Lambda Runtime Interface Emulator that enables you to perform local testing of the container image and check that it will run when deployed to Lambda. The Lambda Runtime Interface Emulator is included in all AWS-provided base images and can be used with arbitrary images as well.

Your container images can also use the Lambda Extensions API to integrate monitoring, security and other tools with the Lambda execution environment.

To deploy a container image, you select one from an Amazon Elastic Container Registry repository. Let’s see how this works in practice with a couple of examples, first using an AWS-provided image for Node.js, and then building a custom image for Python.

Using the AWS-Provided Base Image for Node.js
Here’s the code (app.js) for a simple Node.js Lambda function generating a PDF file using the PDFKit module. Each time it is invoked, it creates a new letter containing random data generated by the faker.js module. The output of the function uses the Amazon API Gateway response syntax to return the PDF file.

const PDFDocument = require('pdfkit');
const faker = require('faker');
const getStream = require('get-stream');

exports.lambdaHandler = async (event) => {

    const doc = new PDFDocument();

    const randomName = faker.name.findName();

    doc.text(randomName, { align: 'right' });
    doc.text(faker.address.streetAddress(), { align: 'right' });
    doc.text(faker.address.secondaryAddress(), { align: 'right' });
    doc.text(faker.address.zipCode() + ' ' + faker.address.city(), { align: 'right' });
    doc.moveDown();
    doc.text('Dear ' + randomName + ',');
    doc.moveDown();
    for(let i = 0; i < 3; i++) {
        doc.text(faker.lorem.paragraph());
        doc.moveDown();
    }
    doc.text(faker.name.findName(), { align: 'right' });
    doc.end();

    const pdfBuffer = await getStream.buffer(doc);
    const pdfBase64 = pdfBuffer.toString('base64');

    const response = {
        statusCode: 200,
        headers: {
            'Content-Length': Buffer.byteLength(pdfBase64),
            'Content-Type': 'application/pdf',
            'Content-disposition': 'attachment;filename=test.pdf'
        },
        isBase64Encoded: true,
        body: pdfBase64
    };
    return response;
};

I use npm to initialize the package and add the three dependencies I need in the package.json file. In this way, I also create the package-lock.json file. I am going to add it to the container image to have a more predictable result.

$ npm init
$ npm install pdfkit
$ npm install faker
$ npm install get-stream

Now, I create a Dockerfile to build the container image for my Lambda function, starting from the AWS-provided base image for the nodejs12.x runtime:

FROM amazon/aws-lambda-nodejs:12
COPY app.js package*.json ./
RUN npm install
CMD [ "app.lambdaHandler" ]

The Dockerfile adds the source code (app.js) and the files describing the package and its dependencies (package.json and package-lock.json) to the base image. Then, I run npm to install the dependencies. I set the CMD to the function handler, but this could also be set later as a parameter override when configuring the Lambda function.

I use the Docker CLI to build the random-letter container image locally:

$ docker build -t random-letter .

To check if this is working, I start the container image locally using the Lambda Runtime Interface Emulator:

$ docker run -p 9000:8080 random-letter:latest

Now, I test a function invocation with cURL. Here, I am passing an empty JSON payload.

$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

If there are errors, I can fix them locally. When it works, I move to the next step.

To upload the container image, I create a new ECR repository in my account and tag the local image to push it to ECR. To help me identify software vulnerabilities in my container images, I enable ECR image scanning.

$ aws ecr create-repository --repository-name random-letter --image-scanning-configuration scanOnPush=true
$ docker tag random-letter:latest 123412341234.dkr.ecr.sa-east-1.amazonaws.com/random-letter:latest
$ aws ecr get-login-password | docker login --username AWS --password-stdin 123412341234.dkr.ecr.sa-east-1.amazonaws.com
$ docker push 123412341234.dkr.ecr.sa-east-1.amazonaws.com/random-letter:latest

Here I am using the AWS Management Console to complete the creation of the function. You can also use the AWS Serverless Application Model, which has been updated to add support for container images.
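
Alternatively, the function can be created from the image with the AWS CLI. A minimal sketch, assuming an existing Lambda execution role (the role ARN here is a placeholder):

$ aws lambda create-function --function-name random-letter \
      --package-type Image \
      --code ImageUri=123412341234.dkr.ecr.sa-east-1.amazonaws.com/random-letter:latest \
      --role arn:aws:iam::123412341234:role/lambda-execution-role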

In the Lambda console, I click on Create function. I select Container image, give the function a name, and then Browse images to look for the right image in my ECR repositories.

Screenshot of the console.

After I select the repository, I use the latest image I uploaded. When I select the image, Lambda translates that to the underlying image digest (shown to the right of the tag in the image below). You can see the digests of your images locally with the docker images --digests command. In this way, the function keeps using the same image even if the latest tag is later reassigned to a newer one, and you are protected from unintentional deployments. You can update the image used by the function in the function code. Updating the function configuration has no impact on the image used, even if the tag was reassigned to another image in the meantime.

Screenshot of the console.

Optionally, I can override some of the container image values. I am not doing this now, but in this way I can create images that can be used for different functions, for example by overriding the function handler in the CMD value.

Screenshot of the console.

I leave all other options to their default and select Create function.

When creating or updating the code of a function, the Lambda platform optimizes new and updated container images to prepare them to receive invocations. This optimization takes a few seconds or minutes, depending on the size of the image. After that, the function is ready to be invoked. I test the function in the console.

Screenshot of the console.

It’s working! Now let’s add API Gateway as a trigger. I select Add Trigger and add the API Gateway using an HTTP API. For simplicity, I leave the authentication of the API open.

Screenshot of the console.

Now, I click on the API endpoint a few times and download a few random letters.

Screenshot of the console.

It works as expected! Here are a few of the PDF files that are generated with random data from the faker.js module.

Output of the sample application.

 

Building a Custom Image for Python
Sometimes you need to use your own custom container images, for example to follow your company guidelines or to use a runtime version that we don’t provide a base image for.

In this case, I want to build an image to use Python 3.9. The code (app.py) of my function is very simple: it just says hello and reports the version of Python being used.

import sys
def handler(event, context): 
    return 'Hello from AWS Lambda using Python' + sys.version + '!'

As I mentioned before, we are sharing with you open source implementations of the Lambda Runtime Interface Clients (which implement the Runtime API) for all the supported runtimes. In this case, I start with a Python image based on Alpine Linux. Then, I add the Lambda Runtime Interface Client for Python (link coming soon) to the image. Here’s the Dockerfile:

# Define global args
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.9"
ARG DISTRO_VERSION="3.12"

# Stage 1 - bundle base image + runtime
# Grab a fresh copy of the image
FROM python:${RUNTIME_VERSION}-alpine${DISTRO_VERSION} AS python-alpine
# Install the GNU C++ standard library (Alpine uses musl, but dependencies are compiled and linked with GCC in the build stage)
RUN apk add --no-cache \
    libstdc++

# Stage 2 - build function and dependencies
FROM python-alpine AS build-image
# Install aws-lambda-cpp build dependencies
RUN apk add --no-cache \
    build-base \
    libtool \
    autoconf \
    automake \
    libexecinfo-dev \
    make \
    cmake \
    libcurl
# Include global args in this stage of the build
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function
COPY app/* ${FUNCTION_DIR}
# Optional – Install the function's dependencies
# RUN python${RUNTIME_VERSION} -m pip install -r requirements.txt --target ${FUNCTION_DIR}
# Install Lambda Runtime Interface Client for Python
RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}

# Stage 3 - final runtime image
# Grab a fresh copy of the Python image
FROM python-alpine
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie
COPY entry.sh /
ENTRYPOINT [ "/entry.sh" ]
CMD [ "app.handler" ]

The Dockerfile this time is more elaborate, building the final image in three stages, following the Docker best practice of multi-stage builds. You can use this three-stage approach to build your own custom images:

  • Stage 1 builds the base image with the runtime, Python 3.9 in this case, plus the GNU C++ runtime library needed by the dependencies compiled in stage 2.
  • Stage 2 installs the build tools and the Lambda Runtime Interface Client, and builds the function and its dependencies.
  • Stage 3 creates the final image by adding the output of stage 2 to the base image built in stage 1. Here I am also adding the Lambda Runtime Interface Emulator, but this is optional; see below.

I create the entry.sh script below to use it as ENTRYPOINT. It executes the Lambda Runtime Interface Client for Python. If the execution is local, the Runtime Interface Client is wrapped by the Lambda Runtime Interface Emulator.

#!/bin/sh
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
    exec /usr/bin/aws-lambda-rie /usr/local/bin/python -m awslambdaric
else
    exec /usr/local/bin/python -m awslambdaric
fi
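
Before starting the container, I build and tag the image; the tag is simply the local name I chose, matching the run command below:

$ docker build -t lambda/python:3.9-alpine3.12 .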

Now, I can use the Lambda Runtime Interface Emulator to check locally if the function and the container image are working correctly:

$ docker run -p 9000:8080 lambda/python:3.9-alpine3.12

Not Including the Lambda Runtime Interface Emulator in the Container Image
It’s optional to add the Lambda Runtime Interface Emulator to a custom container image. If I don’t include it, I can test locally by installing the Lambda Runtime Interface Emulator in my local machine following these steps:

  • In Stage 3 of the Dockerfile, I remove the commands adding the Lambda Runtime Interface Emulator (aws-lambda-rie) and the entry.sh script. I don’t need the entry.sh script in this case.
  • I use this ENTRYPOINT to start by default the Lambda Runtime Interface Client:
    ENTRYPOINT [ "/usr/local/bin/python", “-m”, “awslambdaric” ]
  • I run these commands to install the Lambda Runtime Interface Emulator in my local machine, for example under ~/.aws-lambda-rie:
mkdir -p ~/.aws-lambda-rie
curl -Lo ~/.aws-lambda-rie/aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie
chmod +x ~/.aws-lambda-rie/aws-lambda-rie

When the Lambda Runtime Interface Emulator is installed on my local machine, I can mount it when starting the container. The command to start the container locally now is (assuming the Lambda Runtime Interface Emulator is at ~/.aws-lambda-rie):

$ docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
       --entrypoint /aws-lambda/aws-lambda-rie \
       lambda/python:3.9-alpine3.12 /usr/local/bin/python -m awslambdaric app.handler

Testing the Custom Image for Python
Either way, when the container is running locally, I can test a function invocation with cURL:

$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

The output is what I am expecting!

"Hello from AWS Lambda using Python3.9.0 (default, Oct 22 2020, 05:03:39) \n[GCC 9.3.0]!"

I push the image to ECR and create the function as before. Here’s my test in the console:

Screenshot of the console.

My custom container image based on Alpine is running Python 3.9 on Lambda!

Available Now
You can use container images to deploy your Lambda functions today in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Europe (Ireland), Europe (Frankfurt), and South America (São Paulo). We are working to add support in more Regions soon. Container image support is offered in addition to ZIP archives, and we will continue to support the ZIP packaging format.

There are no additional costs to use this feature. You pay for the ECR repository and the usual Lambda pricing.

You can use container image support in AWS Lambda with the console, AWS Command Line Interface (CLI), AWS SDKs, AWS Serverless Application Model, and solutions from AWS Partners, including Aqua Security, Datadog, Epsagon, HashiCorp Terraform, Honeycomb, Lumigo, Pulumi, Stackery, Sumo Logic, and Thundra.

This new capability opens up new scenarios, simplifies the integration with your development pipeline, and makes it easier to use custom images and your favorite programming platforms to build serverless applications.

Learn more and start using container images with AWS Lambda.

Danilo

New – Amazon EBS gp3 Volume Lets You Provision Performance Apart From Capacity

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/new-amazon-ebs-gp3-volume-lets-you-provision-performance-separate-from-capacity-and-offers-20-lower-price/

Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon EC2 instances for both throughput- and transaction-intensive workloads of all sizes. With existing general purpose solid state drive (SSD) gp2 volumes, performance scales with storage capacity. By provisioning larger storage volume sizes, you can improve application input/output operations per second (IOPS) and throughput.

However, some applications, such as MySQL, Cassandra, and Hadoop clusters, require high performance but not high storage capacity. Customers want to meet the performance requirements of these types of applications without paying for more storage capacity than they need.

Today I would like to tell you about gp3, a new type of SSD EBS volume that lets you provision performance independent of storage capacity, and offers a 20% lower price than existing gp2 volume types.

New gp3 Volume Type

With EBS, customers can choose from multiple volume types based on the unique needs of their applications. We introduced general purpose SSD gp2 volumes in 2014 to offer SSD performance at a very low price. gp2 provides an easy and cost-effective way to meet the performance and throughput requirements of many applications our customers use, such as virtual desktops, medium-sized databases such as SQL Server and Oracle DB, and development and testing environments.

That said, some customers need higher performance. Because the basic idea behind gp2 is that the larger the capacity, the faster the IOPS, customers may end up provisioning more storage capacity than desired. Even though gp2 offers a low price point, customers end up paying for storage they don’t need.

The new gp3 is the seventh EBS volume type. It lets customers independently increase IOPS and throughput without having to provision additional block storage capacity, paying only for the resources they need.

gp3 is designed to provide a predictable baseline of 3,000 IOPS and 125 MiB/s of throughput regardless of volume size. It is ideal for applications that require high performance at a low cost, such as MySQL, Cassandra, virtual desktops, and Hadoop analytics. Customers looking for higher performance can scale up to 16,000 IOPS and 1,000 MiB/s for an additional fee. The maximum throughput of gp3 is 4 times that of gp2 volumes.

How to Switch From gp2 to gp3

If you’re currently using gp2, you can easily migrate your EBS volumes to gp3 using Amazon EBS Elastic Volumes, an existing feature of Amazon EBS. Elastic Volumes allows you to modify the volume type, IOPS, and throughput of your existing EBS volumes without interrupting your Amazon EC2 instances. Also, when you create a new Amazon EBS volume, Amazon EC2 instance, or Amazon Machine Image (AMI), you can choose the gp3 volume type. New AWS customers receive 30 GiB of gp3 storage with the baseline performance at no charge for 12 months.
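
For example, a single AWS CLI call can migrate an existing volume in place; the volume ID and performance values below are placeholders:

$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 \
      --volume-type gp3 --iops 4000 --throughput 250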

Available Today

The gp3 volume type is available in all AWS Regions. You can access the AWS Management Console to launch your first gp3 volume.

For more information, see Amazon Elastic Block Store and get started with gp3 today.

– Kame

 

Preview: AWS Proton – Automated Management for Container and Serverless Deployments

Post Syndicated from Alex Casalboni original https://aws.amazon.com/blogs/aws/preview-aws-proton-automated-management-for-container-and-serverless-deployments/

Today, we are excited to announce the public preview of AWS Proton, a new service that helps you automate and manage infrastructure provisioning and code deployments for serverless and container-based applications.

Maintaining hundreds – or sometimes thousands – of microservices with constantly changing infrastructure resources and configurations is a challenging task for even the most capable teams.

AWS Proton enables infrastructure teams to define standard templates centrally and make them available for developers in their organization. This allows infrastructure teams to manage and update infrastructure without impacting developer productivity.

How AWS Proton Works
The process of defining a service template involves the definition of cloud resources, continuous integration and continuous delivery (CI/CD) pipelines, and observability tools. AWS Proton will integrate with commonly used CI/CD pipelines and observability tools such as CodePipeline and CloudWatch. It also provides curated templates that follow AWS best practices for common use cases such as web services running on AWS Fargate or stream processing apps built on AWS Lambda.

Infrastructure teams can visualize and manage the list of service templates in the AWS Management Console.

This is what the list of templates looks like.

AWS Proton also collects information about the deployment status of the application, such as the last date it was successfully deployed. When a template changes, AWS Proton identifies all the existing applications using the old version and allows infrastructure teams to upgrade them to the most recent definition, while monitoring application health during the upgrade so it can be rolled back in case of issues.

This is what a service template looks like, with its versions and running instances.

Once service templates have been defined, developers can select and deploy services in a self-service fashion. AWS Proton will take care of provisioning cloud resources, deploying the code, and health monitoring, while providing visibility into the status of all the deployed applications and their pipelines.

This way, developers can focus on building and shipping application code for serverless and container-based applications without having to learn, configure, and maintain the underlying resources.

This is what the list of deployed services looks like.

Available in Preview
AWS Proton is now available in preview in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland); it’s free of charge, as you only pay for the underlying services and resources. Check out the technical documentation.

You can get started using the AWS Management Console here.

Alex

New for AWS Lambda – Functions with Up to 10 GB of Memory and 6 vCPUs

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-lambda-functions-with-up-to-10-gb-of-memory-and-6-vcpus/

AWS Lambda runs your code on a highly available and scalable compute infrastructure so that you can focus on what you want to build. Do you want to get the advantages of Lambda for workloads that are memory or computationally intensive? Wait no more!

Starting today, you can allocate up to 10 GB of memory to a Lambda function. This is more than a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. That means you can now have access to up to 6 vCPUs in each execution environment. In this way, your multithreaded and multiprocess applications run faster. Since Lambda charges are proportional to memory configured and function duration (GB-seconds), the additional costs for using more memory may be offset by lower duration. I have more on this in the example below.

With more memory and CPU power, and support for the AVX2 instruction set, new use cases — such as machine learning applications; batch and extract, transform, load (ETL) jobs; modelling; genomics; gaming; high-performance computing (HPC); and media processing — become easier to implement and scale with Lambda functions.

Let’s see how this works in practice!

Lambda Function Performance as Memory Increases
When I first wrote about the capability of mounting a shared Amazon Elastic File System (EFS) for Lambda functions, one of the examples I used was a function doing machine learning inference to classify images of birds. The function uses PyTorch to run the inference, applying a pre-trained machine learning model.

Now, I can execute the same function in the updated Lambda execution environment. Let’s see how increasing memory affects the duration of the function. Here are the results of using memory configurations between 1 and 10 GB. To get these numbers, I ran 20 invocations for each memory configuration. Then, I computed the average duration, discarding function initializations. To avoid possible outliers, I also excluded from the average the top and bottom 10% of reported durations. Based on the results, I estimated the charges I would have for 1 million invocations with each configuration.

Graph showing Function Duration and Charges for 1M Invocations as Memory Increases

As you can see, the function is able to use the additional CPU power that comes with more memory, decreasing the duration of the invocations. What is interesting is the impact of increasing memory on my costs.

Lambda charges are related to memory and duration, so if I increase memory and this is reducing duration by the same proportion, the overall charges are about the same. For example, looking at the graph above, when I configure 5 GB of memory, I have the same costs as when I have 1 GB of memory (about $61 for one million invocations), but the function is 5x faster. If I need lower latency, I can increase memory up to 10 GB, where the function is 7.6x faster and I pay a little more ($80 for one million invocations).

Depending on your code and business case, you can find out which memory configuration gives the optimal trade-off between cost and performance. To help you with that, my colleague and friend Alex Casalboni started the AWS Lambda Power Tuning project to help you optimize your Lambda functions in a data-driven way. This open source tool is really useful and has been improved by the support of many contributors. Give it a try!

In my tests, PyTorch is also using the optimizations of the Advanced Vector Extensions 2 (AVX2) instruction set, now available in the Lambda execution environment. With the AVX2 instruction set, the processor allows running a certain set of operations simultaneously. This is extremely beneficial for applications with operations that can run in parallel such as matrix multiplication. As a result, using AVX2 can improve performance by increasing CPU throughput per cycle. This typically helps compute intensive workloads such as machine learning inference, multimedia processing, scientific simulations, and financial modeling applications.

Available Now
AWS Lambda support for larger functions is available in Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Milan), EU (Paris), EU (Stockholm), South America (São Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

You can configure up to 10 GB of memory for new or existing Lambda functions using the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, and the AWS Serverless Application Model.
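
For example, updating an existing function to the new maximum with the AWS CLI looks like this (the function name is a placeholder):

$ aws lambda update-function-configuration --function-name my-function --memory-size 10240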

Here’s a snapshot of the new console experience. We replaced the slider with a field, and you can now configure memory in 1 MB increments (it was 64 MB increments before). In this way, the console works similarly to the Lambda API, which has always accepted memory configurations with 1 MB granularity.

There is no change in Lambda pricing: you pay for requests and usage, with duration and Provisioned Concurrency charged at a rate proportional to the amount of memory configured.

Start using Lambda functions with up to 10 GB of memory and 6 vCPUs today.

Danilo

New for AWS Lambda – 1ms Billing Granularity Adds Cost Savings

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-lambda-1ms-billing-granularity-adds-cost-savings/

What I like about AWS Lambda is that it lets you run code without provisioning or managing servers, and you pay only for what you use. Since we launched Lambda in 2014, you have been charged for the number of times your code is triggered (requests) and for the time your code executes, rounded up to the nearest 100ms (duration).

Starting today, we are rounding up duration to the nearest millisecond with no minimum execution time.

With this new pricing, you are going to pay less most of the time, but it’s going to be more noticeable when you have functions whose execution time is much lower than 100ms, such as low latency APIs.

For example, let’s look at a simple web app that I have running. In the Amazon CloudWatch Logs, for each invocation there is a REPORT line. To improve readability, I am breaking the REPORT line into three lines here:

REPORT RequestId: 35a7e0cb-4902-490d-b8d3-eb315dded660
Duration: 27.40 ms  Billed Duration: 100 ms Memory Size: 1024 MB  Max Memory Used: 472 MB

With 1ms billing granularity that becomes:

REPORT RequestId: a24d03b5-429d-4ca3-a490-878a52a0182f
Duration: 27.55 ms  Billed Duration: 28 ms Memory Size: 1024 MB  Max Memory Used: 472 MB

My application doesn’t have a lot of traffic, so let’s do a simple production scenario. Let’s say I have 100,000 users for a web/mobile app. I expect each user to call this function via the web/mobile app about 20 times per day. The duration of those invocations is on average 28ms. Each month, I should expect:

  • 100,000 users * 20 invocations * 30 days = 60 million invocations.

Let’s estimate the costs in US East (N. Virginia). For simplicity, I am not considering the Lambda free tier.

The Lambda monthly request charges are unchanged:

  • 60 million invocations * $0.20 per 1M requests = $12

To that, I have to add compute charges based on duration.

The Lambda monthly compute charges with the old 100ms rounded up pricing would have been:

  • 60 million invocations * 100ms * 1G memory * $0.0000166667 for every GB-second = $100

With the new 1ms billing granularity, the duration costs are:

  • 60 million invocations * 28ms * 1G memory * $0.0000166667 for every GB-second = $28

For this scenario, the overall costs, including request and compute charges, are much lower ($40) than before ($112).
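
The same arithmetic is easy to check in a few lines of Python, using the prices quoted above:

invocations = 100_000 * 20 * 30           # 60 million invocations per month
request_price = 0.20 / 1_000_000          # $0.20 per 1M requests
gb_second_price = 0.0000166667            # compute price per GB-second
memory_gb = 1.0

requests = invocations * request_price                            # $12
old_compute = invocations * 0.100 * memory_gb * gb_second_price   # ~$100 (100ms rounding)
new_compute = invocations * 0.028 * memory_gb * gb_second_price   # ~$28 (1ms granularity)

print(requests + old_compute, requests + new_compute)             # ~112.0 vs ~40.0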

With this pricing, there is now more of an incentive to optimize the duration of functions even if it is already well below 100ms. Your engineering efforts can reduce costs even more.

If you increase memory to get more CPU power and speed up your functions, you now get the benefit of a lower billed duration below 100ms as well. That means that increasing performance and reducing latency is going to be cheaper than before.

We are applying 1ms billing granularity for duration, including when you have Provisioned Concurrency enabled, in all AWS Regions with the exception of those based in China starting with the December 2020 billing period. Regions in China will get the change from January.

Enjoy the new pricing!

Danilo

Amazon Elastic Container Registry Public: A New Public Container Registry

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/amazon-ecr-public-a-new-public-container-registry/

In November, we announced that we intended to create a public container registry, and today at AWS re:Invent, we followed through on that promise and launched Amazon Elastic Container Registry Public (ECR Public).

ECR Public allows you to store, manage, share, and deploy container images for anyone to discover and download globally. You have long been able to host private container images on AWS with Amazon Elastic Container Registry, and now with the release of ECR Public, you can host public ones too, enabling anyone (with or without an AWS account) to browse and pull your published container artifacts.

As part of the announcement, a new website allows you to browse and search for public container images, view developer provided details, and discover the commands you need to pull containers.

If you check the website out now, you will see we have hosted some of our container images on the registry, including the Amazon EKS Distro images. We also have hundreds of images from partners such as Bitnami, Canonical, and HashiCorp.

Publishing A Container
Before I can upload a container, I need to create a repository. There is now a Public tab in the Repositories section of the Elastic Container Registry console. In this tab, I click the button Create repository.

I am taken to the Create repository screen, where I select that I would like the repository to be public.

I give my repository the name news-blog and upload a logo that will be used on the gallery details page. I provide a description and select Linux as the Content type. There is also an option to select the CPU architectures that the image supports; if you have a multi-architecture image, you can select more than one architecture type. The content types are used for filtering on the gallery website, enabling people using the gallery to filter their searches by architecture and operating system support.

I enter some markdown in the About section. The text I enter here will be displayed on the gallery page and would ordinarily explain what’s included in the repository, any licensing details, or other relevant information. There is also a Usage section where I enter some sample text to explain how to use my image and that I created the repository for a demo. Finally, I click Create repository.

Back at the repository page, I now have one public repository. There is also a button that says View push commands. I click on this so I can learn more about how to push containers to the repository.

I follow the four steps contained in this section. The first step helps me authenticate with ECR Public so that I can push containers to my repository. Steps two, three, and four show me how to build, tag, and push my container to ECR Public.
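
The console shows the exact commands for your repository; they look roughly like this (r6g9m2o3 is the registry alias assigned to my account, as seen in the pull examples below):

$ aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
$ docker build -t news-blog .
$ docker tag news-blog:latest public.ecr.aws/r6g9m2o3/news-blog:latest
$ docker push public.ecr.aws/r6g9m2o3/news-blog:latest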

The application that I have containerized is a simple app that runs and outputs a terminal message. I use the docker CLI to push my container to my repository; it’s quite a small container, so it only takes a minute or two.

Once complete, I head over to the ECR Public gallery and can see that my image is now publicly available for anyone to pull.

Pulling A Container
You pull containers from ECR Public using the familiar docker pull command with the URL of the image.

You can easily find this URL on the ECR Public website, where the image URL is displayed along with other published information. Clicking on the URL copies the image URL to your clipboard for an easy copy-paste.

ECR Public image URLs are in the format public.ecr.aws/<namespace>/<image>:<tag>

For example, if I wanted to pull the image I uploaded earlier, I would open my terminal and run the following command (please note: you will need docker installed locally to run these commands).

docker pull public.ecr.aws/r6g9m2o3/news-blog:latest

I have now downloaded the Docker image onto my machine, and I can run the container using the following command:

docker run public.ecr.aws/r6g9m2o3/news-blog:latest

My application runs and writes a message from Jeff Barr. The container writes its log in color, because we wouldn’t want to miss out on Jeff’s glorious purple hair.

Nice to Know
ECR Public automatically replicates container images across two AWS Regions to reduce download times and improve availability. Therefore, using public images directly from ECR Public may simplify your build process if you were previously creating and managing local copies. ECR Public caches image layers in Amazon CloudFront, to improve pull performance for a global audience, especially for popular images.

ECR Public also has some nice integrations that will make working with containers easier on AWS. For example, ECR Public will automatically notify services such as AWS CodeBuild to rebuild an application when a public container image changes.

Pricing
All AWS customers will get 50 GB of free storage each month, and if you exceed that limit, you will pay nominal charges. Check out the pricing page for details.

Anyone who pulls images anonymously will get 500 GB of free data bandwidth each month, after which they can sign up or sign in to an AWS account to get more. Simply authenticating with an AWS account increases free data bandwidth up to 5 TB each month when pulling images from the internet.

Finally, workloads running in AWS will get unlimited data bandwidth from any region when pulling publicly shared images from ECR Public.

Verification and Namespaces
You can create a custom namespace, such as your organization or project name, to be used in your ECR Public URLs (public.ecr.aws/<namespace>), unless it’s a reserved namespace. Namespaces such as sagemaker and eks that identify AWS services are reserved. Namespaces that identify AWS Marketplace sellers are also reserved.

Available Today
ECR Public is available today and you can find out more over on the product page. Visit the gallery to explore and use available public container software and log into the ECR Console to share containers publicly.

Happy Containerizing!

— Martin

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/838684/rss

Security updates have been issued by Debian (libxstream-java, musl, mutt, pdfresurrect, vips, and zsh), Fedora (libuv, nodejs, thunderbird, and xen), openSUSE (libssh2_org, mutt, neomutt, and thunderbird), Oracle (firefox and thunderbird), Red Hat (firefox, rh-nodejs12-nodejs, rh-php73-php, and thunderbird), Scientific Linux (thunderbird), SUSE (libX11, mariadb, mutt, python-pip, python-setuptools, and python36), and Ubuntu (containerd, php-pear, and sniffit).

Coming Soon – Amazon EC2 G4ad Instances Featuring AMD GPUs for Graphics Workloads

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/new-amazon-ec2-g4ad-instances-featuring-amd-gpus-for-graphics-workloads/

Customers with high performance graphics workloads, such as game streaming, animation, and video rendering, are always looking for higher performance at lower cost. Today, I’m happy to announce that new Amazon Elastic Compute Cloud (EC2) instances in the G4 instance family are in the works and will be available soon to improve performance and reduce cost for graphics-intensive workloads. The new G4ad instances feature AMD’s latest Radeon Pro V520 GPUs and 2nd generation EPYC processors, and are the first in EC2 to feature AMD GPUs.

G4dn instances, released in 2019 and featuring NVIDIA T4 GPUs, were previously the most cost-effective GPU-based instances in EC2. G4dn instances are ideal for deploying machine learning models in production and also for graphics-intensive applications. However, compared to G4dn, the new G4ad instances enable up to 45% better price performance for graphics-intensive workloads, including the aforementioned game streaming, remote graphics workstations, and rendering scenarios. Compared to an equally-sized G4dn instance, G4ad instances offer up to 40% improvement in performance.

G4dn instances will continue to be the best option for small-scale machine learning (ML) training and GPU-based ML inference due to included hardware optimizations like Tensor Cores. Additionally, G4dn instances are still best suited for graphics applications that need access to NVIDIA libraries such as CUDA, CuDNN, and NVENC. However, when there is no dependency on NVIDIA’s libraries, we recommend customers try the G4ad instances to benefit from the improved price and performance.

AMD Radeon Pro V520 GPUs in G4ad instances support DirectX 11/12, Vulkan 1.1, and OpenGL 4.5 APIs. For operating systems, customers can choose from Windows Server 2016/2019, Amazon Linux 2, Ubuntu 18.04.3, and CentOS 7.7. G4ad instances can be purchased as On-Demand, Savings Plan, Reserved Instances, or Spot Instances. Three instance sizes are available, from g4ad.4xlarge with 1 GPU to g4ad.16xlarge with 4 GPUs, as shown below.

Instance Size   GPUs   GPU Memory (GB)   vCPUs   Memory (GB)   Storage (GB)   EBS Bandwidth (Gbps)   Network Bandwidth (Gbps)
g4ad.4xlarge    1      8                 16      64            600            Up to 3                Up to 10
g4ad.8xlarge    2      16                32      128           1200           3                      15
g4ad.16xlarge   4      32                64      256           2400           6                      25

The new G4ad instances will be available soon in US East (N. Virginia), US West (Oregon), and Europe (Ireland).

Learn more about G4ad instances.

— Steve

New EC2 M5zn Instances – Fastest Intel Xeon Scalable CPU in the Cloud

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ec2-m5zn-instances-fastest-intel-xeon-scalable-cpu-in-the-cloud/

We launched the compute-intensive z1d instances in mid-2018 for customers who asked us for extremely high per-core performance and a high memory-to-core ratio to power their front-end Electronic Design Automation (EDA), actuarial, and CPU-bound relational database workloads.

In order to address a complementary set of use cases, customers have asked us for an EC2 instance that will give them high per-core performance like z1d, with no local NVMe storage, higher networking throughput, and a reduced memory-to-vCPU ratio. They have indicated that if we built an instance with this set of attributes, it would be an excellent fit for workloads such as gaming, financial applications, simulation modeling applications such as those used in the automobile, aerospace, energy, and telecommunication industries, and High Performance Computing (HPC).

Introducing M5zn
Building on the success of the z1d instances, we are launching M5zn instances in seven sizes today. These instances use 2nd generation custom Intel® Xeon® Scalable (Cascade Lake) processors with a sustained all-core turbo clock frequency of up to 4.5 GHz. M5zn instances feature high frequency processing, are a variant of the general-purpose M5 instances, and are built on the AWS Nitro System. These instances also feature low latency 100 Gbps networking and the Elastic Fabric Adapter (EFA), in order to improve performance for HPC and communication-intensive applications.

Here are the M5zn instances (all VPC-only, HVM-only, and EBS-Optimized, with support for Optimize CPUs). As you can see, the memory-to-vCPU ratio on these instances is half that of the existing z1d instances:

Instance Name | vCPUs | RAM | Network Bandwidth | EBS-Optimized Bandwidth
m5zn.large | 2 | 8 GiB | Up to 25 Gbps | Up to 3.170 Gbps
m5zn.xlarge | 4 | 16 GiB | Up to 25 Gbps | Up to 3.170 Gbps
m5zn.2xlarge | 8 | 32 GiB | Up to 25 Gbps | 3.170 Gbps
m5zn.3xlarge | 12 | 48 GiB | Up to 25 Gbps | 4.750 Gbps
m5zn.6xlarge | 24 | 96 GiB | 50 Gbps | 9.500 Gbps
m5zn.12xlarge | 48 | 192 GiB | 100 Gbps | 19 Gbps
m5zn.metal | 48 | 192 GiB | 100 Gbps | 19 Gbps
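
If you want to confirm the published specs for a given size from the command line, the DescribeInstanceTypes API returns them. Here’s a small sketch using the AWS CLI (the --query expression simply trims the output to the fields shown in the table above):

aws ec2 describe-instance-types \
    --instance-types m5zn.12xlarge \
    --query "InstanceTypes[0].{vCPUs: VCpuInfo.DefaultVCpus, MemoryMiB: MemoryInfo.SizeInMiB, Network: NetworkInfo.NetworkPerformance}"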

The Nitro Hypervisor allows M5zn instances to deliver performance that is just about indistinguishable from bare metal. Other AWS Nitro System components such as the Nitro Security Chip and hardware-based processing for EBS increase performance, while VPC encryption provides greater security.

Things To Know
Here are a few “fun facts” about the M5zn instances:

Placement Groups – M5zn instances can be used in Cluster (for low latency and high network throughput), Spread (to keep critical instances separate from each other), and Partition (to reduce correlated failures) placement groups.
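
For example, here’s a minimal sketch (the AMI ID and key pair name are placeholders) that creates a cluster placement group and launches a pair of M5zn instances into it:

aws ec2 create-placement-group \
    --group-name m5zn-cluster \
    --strategy cluster

aws ec2 run-instances \
    --instance-type m5zn.12xlarge \
    --image-id ami-0123456789abcdef0 \
    --key-name my-key-pair \
    --count 2 \
    --placement GroupName=m5zn-cluster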

Networking – M5zn instances support the Elastic Network Adapter (ENA) with dedicated 100 Gbps network connections and a dedicated 19 Gbps connection to EBS. If you are building distributed ML or HPC applications for use on a cluster of M5zn instances, be sure to take a look at the Elastic Fabric Adapter (EFA). Your HPC applications can use the Message Passing Interface (MPI) to communicate efficiently at high speed while scaling to thousands of nodes.
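
To experiment with EFA, you attach an EFA network interface at launch time. Here’s a hedged sketch (the AMI, subnet, and security group IDs are placeholders):

aws ec2 run-instances \
    --instance-type m5zn.12xlarge \
    --image-id ami-0123456789abcdef0 \
    --key-name my-key-pair \
    --network-interfaces "DeviceIndex=0,InterfaceType=efa,SubnetId=subnet-0123456789abcdef0,Groups=sg-0123456789abcdef0"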

C-State Control – You can configure CPU Power Management on m5zn.6xlarge and m5zn.12xlarge instances. This is definitely an advanced feature, but one worth exploring in those situations where you need to squeeze every possible cycle of available performance from the instance.
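
As a sketch of what this can look like on Amazon Linux 2 (treat this as illustrative, and verify against the EC2 processor state control documentation before applying it), you can limit the deeper C-states with a kernel boot parameter:

# Append intel_idle.max_cstate=1 to the kernel command line,
# then rebuild the GRUB configuration and reboot.
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="intel_idle.max_cstate=1 /' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot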

NUMA – You can make use of Non-Uniform Memory Access on m5zn.12xlarge instances. This is also an advanced feature, but worth exploring in situations where you have an in-depth understanding of your application’s memory access patterns.
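
As a sketch, the numactl utility will show you the topology and let you bind a process to a single node (the application binary here is a placeholder):

# Install numactl, show the NUMA topology, and bind a process
# to the CPUs and memory of node 0.
sudo yum install -y numactl
numactl --hardware
numactl --cpunodebind=0 --membind=0 ./my-memory-bound-app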

To learn more about these and other features, visit the EC2 M5 Instances page.

Available Now
As you can see, the M5zn instances are a great fit for gaming, HPC, and simulation modeling workloads such as those used by the financial, automobile, aerospace, energy, and telecommunications industries.

You can launch M5zn instances today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo) Regions in On-Demand, Reserved Instance, Savings Plan, and Spot form. Dedicated Instances and Dedicated Hosts are also available.

Support is available in the EC2 Forum or via your usual AWS Support contact. The EC2 team is interested in your feedback and you can contact them at [email protected].

Jeff;


Coming Soon – EC2 C6gn Instances – 100 Gbps Networking with AWS Graviton2 Processors

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/coming-soon-ec2-c6gn-instances-100-gbps-networking-with-aws-graviton2-processors/

Based on the amazing feedback from customers such as Snap, NextRoll, Intuit, SmugMug, and Honeycomb, who are running their workloads on Amazon Elastic Compute Cloud (EC2) instances powered by AWS Graviton2, today we are announcing C6gn instances, an addition to our broad Arm-based Graviton2 portfolio. They deliver up to 100 Gbps of network bandwidth, up to 38 Gbps of Amazon Elastic Block Store (EBS) bandwidth, up to 40% higher packet processing performance, and up to 40% better price/performance versus comparable current-generation x86-based network-optimized instances.

Compared to C6g instances, this new instance type provides 4x higher network bandwidth, 4x higher packet processing performance, and 2x higher EBS bandwidth. This means that customers with workloads that need high network bandwidth, such as high performance computing (HPC), network appliances, real-time video communications, and data analytics, will be able to bring their biggest and most challenging applications to Arm and take advantage of the performance and cost optimizations.

C6gn instances will be available in 8 sizes:

Name | vCPUs | Memory (GiB) | Network Bandwidth (Gbps) | EBS Throughput (Gbps)
c6gn.medium | 1 | 2 | Up to 25 | Up to 9.5
c6gn.large | 2 | 4 | Up to 25 | Up to 9.5
c6gn.xlarge | 4 | 8 | Up to 25 | Up to 9.5
c6gn.2xlarge | 8 | 16 | Up to 25 | Up to 9.5
c6gn.4xlarge | 16 | 32 | 25 | 9.5
c6gn.8xlarge | 32 | 64 | 50 | 19
c6gn.12xlarge | 48 | 96 | 75 | 28.5
c6gn.16xlarge | 64 | 128 | 100 | 38

The new instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that maximize resource efficiency. C6gn instances support the Elastic Fabric Adapter (EFA) on the c6gn.16xlarge size for workloads that can take advantage of lower network latency (such as HPC and video processing) and use the Message Passing Interface (MPI) for highly scalable clusters. These new instances also fully support network frameworks like the Data Plane Development Kit (DPDK), making it easier to migrate network appliance workloads.
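
As a rough sketch of what an MPI job over EFA can look like in practice (assuming Open MPI built against Libfabric, and a hostfile listing your c6gn.16xlarge nodes; the file and binary names are placeholders):

# Select the EFA Libfabric provider, then launch an MPI job
# across the nodes listed in the hostfile.
export FI_PROVIDER=efa
mpirun --hostfile my-hosts -n 256 ./my_hpc_app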

Coming Soon
EC2 C6gn instances will be available later this month, making it easier to optimize costs for HPC and other workloads that require high network bandwidth and low latency. Let me know what you are going to build with them!

To get practice with the AWS Graviton2 architecture, you can try t4g.micro instances for free for up to 750 hours per month until March 31st, 2021.

Learn more about EC2 C6gn instances today.

Danilo

New – Amazon EC2 R5b Instances Provide 3x Higher EBS Performance

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r5b-instances-providing-3x-higher-ebs-performance/

In July 2018, we announced memory-optimized R5 instances for the Amazon Elastic Compute Cloud (Amazon EC2). R5 instances are designed for memory-intensive applications such as high-performance databases, distributed web scale in-memory caches, in-memory databases, real time big data analytics, and other enterprise applications.

R5 instances offer two different block storage options. R5d instances offer up to 3.6 TB of NVMe instance storage for applications that need access to high-speed, low-latency local storage. In addition, all R5 instances work with Amazon Elastic Block Store. Amazon EBS is an easy-to-use, high-performance, and highly available block storage service designed for use with Amazon EC2 for both throughput- and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows, are widely deployed on Amazon EBS.

Today, we are happy to announce the availability of R5b, a new addition to the R5 instance family. The new R5b instance is powered by the AWS Nitro System to provide the best network-attached storage performance available on EC2. This new instance offers up to 60 Gbps of EBS bandwidth and 260,000 I/O operations per second (IOPS).

Amazon EC2 R5b Instance
Many customers use R5 instances with EBS for large relational database workloads such as commerce platforms, ERP systems, and health record systems, and they rely on EBS to provide scalable, durable, and high availability block storage. These instances provide sufficient storage performance for many use cases, but some customers require higher EBS performance on EC2.

R5 instances provide bandwidth of up to 19 Gbps and maximum EBS performance of 80K IOPS, while the new R5b instances support bandwidth of up to 60 Gbps and EBS performance of 260K IOPS, providing 3x higher EBS-Optimized performance compared to R5 instances and enabling customers to lift and shift large relational database applications to AWS. R5b instances have the same vCPU-to-memory ratio and network performance as R5 instances.

Instance Name | vCPUs | Memory | EBS-Optimized Bandwidth (Mbps) | EBS-Optimized IOPS @ 16 KiB (IO/s)
r5b.large | 2 | 16 GiB | Up to 10,000 | Up to 43,333
r5b.xlarge | 4 | 32 GiB | Up to 10,000 | Up to 43,333
r5b.2xlarge | 8 | 64 GiB | Up to 10,000 | Up to 43,333
r5b.4xlarge | 16 | 128 GiB | 10,000 | 43,333
r5b.8xlarge | 32 | 256 GiB | 20,000 | 86,667
r5b.12xlarge | 48 | 384 GiB | 30,000 | 130,000
r5b.16xlarge | 64 | 512 GiB | 40,000 | 173,333
r5b.24xlarge | 96 | 768 GiB | 60,000 | 260,000
r5b.metal | 96 | 768 GiB | 60,000 | 260,000

Customers operating storage performance sensitive workloads can migrate from R5 to R5b to consolidate their existing workloads into fewer or smaller instances. This can reduce the cost of both infrastructure and licensed commercial software working on those instances. R5b instances are supported by Amazon RDS for Oracle and Amazon RDS for SQL Server, simplifying the migration path for large commercial database applications and improving storage performance for current RDS customers by up to 3x.

All Nitro-compatible AMIs support R5b instances; the EBS-backed HVM AMI must have the NVMe 1.0e and ENA drivers installed at R5b instance launch. R5b supports io1, io2 Block Express (in preview), gp2, gp3, sc1, st1, and standard volumes. R5b does not yet support standard io2 volumes or io1 volumes with Multi-Attach enabled; support for these is coming soon.
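
To take advantage of the higher limits, you provision high-IOPS EBS volumes and attach them to the instance. Here’s a hedged sketch with placeholder IDs; note that io1 volumes require at least 1 GiB of capacity per 50 provisioned IOPS, so a 64,000 IOPS volume must be at least 1,280 GiB:

aws ec2 create-volume \
    --volume-type io1 \
    --iops 64000 \
    --size 1280 \
    --availability-zone us-east-1a

aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf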

Available Today

R5b instances are available in the following Regions: US West (Oregon), Asia Pacific (Tokyo), US East (N. Virginia), US East (Ohio), Asia Pacific (Singapore), and Europe (Frankfurt). RDS on R5b is available in US East (Ohio), Asia Pacific (Singapore), and Europe (Frankfurt), with support in other Regions coming soon.

Learn more about EC2 R5 instances and get started with Amazon EC2 today.

– Kame;

EC2 Update – D3 / D3en Dense Storage Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-update-d3-d3en-dense-storage-instances/

We have launched several generations of EC2 instances with dense storage including the HS1 in 2012 and the D2 in 2015. As you can guess from the name, our customers use these instances when they need massive amounts of very economical on-instance storage for their data warehouses, data lakes, network file systems, Hadoop clusters, and the like. These workloads demand plenty of I/O and network throughput, but work fine with a high ratio of storage to compute power.

New D3 and D3en Instances
Today we are launching the D3 and D3en instances. Like their predecessors, they give you access to massive amounts of low-cost on-instance HDD storage. The D3 instances are available in four sizes, with up to 32 vCPUs and 48 TB of storage. Here are the specs:

Instance Name | vCPUs | RAM | HDD Storage | Aggregate Disk Throughput (128 KiB Blocks) | Network Bandwidth | EBS-Optimized Bandwidth
d3.xlarge | 4 | 32 GiB | 6 TB (3 x 2 TB) | 580 MiBps | Up to 15 Gbps | 850 Mbps
d3.2xlarge | 8 | 64 GiB | 12 TB (6 x 2 TB) | 1,100 MiBps | Up to 15 Gbps | 1,700 Mbps
d3.4xlarge | 16 | 128 GiB | 24 TB (12 x 2 TB) | 2,300 MiBps | Up to 15 Gbps | 2,800 Mbps
d3.8xlarge | 32 | 256 GiB | 48 TB (24 x 2 TB) | 4,600 MiBps | 25 Gbps | 5,000 Mbps

As you can see from the table above, the D3 instances are available in the same configurations as the D2 instances for easy migration. You’ll get 5% more memory per vCPU, a 30% boost in compute power, and 2.5x higher network performance if you migrate from D2 to D3. The instances provide low-cost dense storage that delivers high performance sequential access to large data sets. They are perfect for distributed file systems such as HDFS and MapR FS, big data analytical workloads, data warehouses, log processing, and data processing.

The D3en instances are available in six sizes, with up to 48 vCPUs and 336 TB of storage. Here are the specs:

Instance Name | vCPUs | RAM | HDD Storage | Aggregate Disk Throughput (128 KiB Blocks) | Network Bandwidth | EBS-Optimized Bandwidth
d3en.xlarge | 4 | 16 GiB | 28 TB (2 x 14 TB) | 500 MiBps | Up to 25 Gbps | 850 Mbps
d3en.2xlarge | 8 | 32 GiB | 56 TB (4 x 14 TB) | 1,000 MiBps | Up to 25 Gbps | 1,700 Mbps
d3en.4xlarge | 16 | 64 GiB | 112 TB (8 x 14 TB) | 2,000 MiBps | 25 Gbps | 2,800 Mbps
d3en.6xlarge | 24 | 96 GiB | 168 TB (12 x 14 TB) | 3,100 MiBps | 40 Gbps | 4,000 Mbps
d3en.8xlarge | 32 | 128 GiB | 224 TB (16 x 14 TB) | 4,100 MiBps | 50 Gbps | 5,000 Mbps
d3en.12xlarge | 48 | 192 GiB | 336 TB (24 x 14 TB) | 6,200 MiBps | 75 Gbps | 7,000 Mbps

The D3en instances have a high ratio of storage to vCPU, and are optimized for high throughput and high sequential I/O to very large data sets, with a cost-per-TB that is 80% lower than on D2 instances. D3en instances can host Lustre, BeeGFS, GPFS, and other distributed file systems, they can store your data lakes, and they can run your Amazon EMR, Spark, and Hadoop analytical workloads.
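
One common pattern for these workloads (sketched below, not prescriptive) is to stripe the local HDDs into a single RAID 0 array for maximum sequential throughput. On Nitro-based instances the local disks are exposed as NVMe block devices, though the exact device names will vary by instance and OS, so check lsblk first:

# Stripe the 24 local HDDs of a d3en.12xlarge into one RAID 0
# array, create a filesystem, and mount it (adjust device names
# to match what lsblk reports on your instance).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=24 /dev/nvme{1..24}n1
sudo mkfs.xfs /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data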

Both of the instance types are built on the AWS Nitro System and are powered by custom 2nd generation Intel® Xeon® Scalable (Cascade Lake) processors that can deliver all-core turbo performance of up to 3.1 GHz. The HDD storage is encrypted at rest using AES-256-XTS; traffic between D3 or D3en instances in the same VPC or within peered VPCs is encrypted using a 256-bit key.

Things to Know
Here are a couple of things that you should keep in mind regarding the D3 and D3en instances:

Regions – D3 instances are available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions; D3en instances are available in all of those Regions and also in the US East (Ohio) Region, with more Regions coming soon.

Purchase Options – You can purchase D3 and D3en instances in On-Demand, Savings Plan, Reserved Instance, Spot, and Dedicated Instance form.

AMIs – You must use AMIs that include the Elastic Network Adapter (ENA) and NVMe drivers.

Now Available
D3 and D3en instances are available now and you can start using them today!

Jeff;

Spread the joy of learning through making

Post Syndicated from original https://www.raspberrypi.org/blog/spread-the-joy-of-learning-through-making/

From our first prototype way back in 2006, to the very latest Raspberry Pi 400, everything we have built here at Raspberry Pi has been driven by a desire to inspire learning. I hope that each of you who uses our products discovers — or rediscovers — the joy of learning through making. The journey from technology consumer to technology creator can be a transformational one; today, on Giving Tuesday, I’m asking you to help even more young people make that journey.

Too few young people have the chance to learn how technology works and how to harness its power. Pre-existing disparities in access to computing education have been exacerbated by the coronavirus pandemic. At the Raspberry Pi Foundation, we’re on a mission to change this, and we’re working harder than ever to support young people and educators with free learning opportunities. Our partner CanaKit supports the Raspberry Pi Foundation’s mission, and they’ve extended the generous offer to match your donations up to a total of $5,000.

Alongside our low-cost, high-performance computers and free software, you may know that the Raspberry Pi Foundation provides free educational programmes including coding clubs and educator training for millions of people each year in dozens of countries. You might not know that the Raspberry Pi Foundation was founded as, and still remains, a nonprofit organisation. Our education mission is powered by dedicated volunteers, and our programmes are funded in part thanks to our customers who buy Raspberry Pi products, and in part by charitable donations from people like you.

A smiling girl holding a robot buggy in her lap

Every donation we receive makes an impact on the young people and educators who rely on the Raspberry Pi Foundation. Ryka, for example, is a 10-year-old who attends one of our CoderDojo clubs. Since March she’s been using our project guides and following our Digital Making at Home code-along live streams. Her parents tell us: 

“We were looking at ways to keep Ryka engaged during this lockdown period and came across Digital Making at Home. As a parent I can see that there has been discernible improvement in her abilities. We’ve noticed that she is engaged and takes interest in showing us what she was able to build. It has been a great use of her time.” 

– Parent of a young person who learns through our programmes

Ryka joins millions of learners in our community around the world, many of whom now rely on us more than ever with schools and extracurricular activities disrupted. Through the ongoing support of our donors and volunteers, we’ve been able to rise to the challenge of the pandemic.

Young coders and digital makers need our help in the year ahead as they take control of their computing education under challenging and uncertain circumstances. As a donor to the Raspberry Pi Foundation, you will be investing in our youngest generation of innovators and helping to create a spark in a young person’s life. On Giving Tuesday, I am grateful to each of you for the role you play in creating a world where everyone can learn, solve problems, and shape their future through the power of technology. 

PS Thank you again to our friends at CanaKit for doubling the impact of every donation, up to $5,000!


Manipulating Systems Using Remote Lasers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/manipulating-systems-using-remote-lasers.html

Many systems are vulnerable:

Researchers at the time said that they were able to launch inaudible commands by shining lasers — from as far as 360 feet — at the microphones on various popular voice assistants, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant.

[…]

They broadened their research to show how light can be used to manipulate a wider range of digital assistants — including Amazon Echo 3 — but also sensing systems found in medical devices, autonomous vehicles, industrial systems and even space systems.

The researchers also delved into how the ecosystem of devices connected to voice-activated assistants — such as smart locks, home switches, and even cars — fails under common security vulnerabilities that can make these attacks even more dangerous. The paper shows how using a digital assistant as the gateway can allow attackers to take control of other devices in the home: once an attacker takes control of a digital assistant, he or she can have the run of any device connected to it that also responds to voice commands. Indeed, these attacks can get even more interesting if these devices are connected to other aspects of the smart home, such as smart door locks, garage doors, computers and even people’s cars, they said.

Another article. The researchers will present their findings at Black Hat Europe — which, of course, will be happening virtually — on December 10.

New – Use Amazon EC2 Mac Instances to Build & Test macOS, iOS, ipadOS, tvOS, and watchOS Apps

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/

Throughout the course of my career I have done my best to stay on top of new hardware and software. As a teenager I owned an Altair 8800 and an Apple II. In my first year of college someone gave me a phone number and said “call this with modem.” I did, it answered “PENTAGON TIP,” and I had access to ARPANET!

I followed the emerging PC industry with great interest, voraciously reading every new issue of Byte, InfoWorld, and several other long-gone publications. In early 1983, rumor had it that Apple Computer would soon introduce a new system that was affordable, compact, self-contained, and very easy to use. Steve Jobs unveiled the Macintosh in January 1984 and my employer ordered several right away, along with a pair of the Apple Lisa systems that were used as cross-development hosts. As a developer, I was attracted to the Mac’s rich collection of built-in APIs and services, and still treasure my phone book edition of the Inside Macintosh documentation!

New Mac Instance
Over the last couple of years, AWS users have told us that they want to be able to run macOS on Amazon Elastic Compute Cloud (EC2). We’ve asked a lot of questions to learn more about their needs, and today I am pleased to introduce you to the new Mac instance!


The original (128 KB) Mac

Powered by Mac mini hardware and the AWS Nitro System, you can use Amazon EC2 Mac instances to build, test, package, and sign Xcode applications for the Apple platform including macOS, iOS, iPadOS, tvOS, watchOS, and Safari. The instances feature an 8th generation, 6-core Intel Core i7 (Coffee Lake) processor running at 3.2 GHz, with Turbo Boost up to 4.6 GHz. There’s 32 GiB of memory and access to other AWS services including Amazon Elastic Block Store (EBS), Amazon Elastic File System (EFS), Amazon FSx for Windows File Server, Amazon Simple Storage Service (S3), AWS Systems Manager, and so forth.

On the networking side, the instances run in a Virtual Private Cloud (VPC) and include ENA networking with up to 10 Gbps of throughput. With EBS optimization, and the ability to deliver up to 55,000 IOPS (16 KB block size) and 8 Gbps of throughput for data transfer, EBS volumes attached to the instances can deliver the performance needed to support I/O-intensive build operations.

Mac instances run macOS 10.14 (Mojave) and 10.15 (Catalina) and can be accessed via command line (SSH) or remote desktop (VNC). The AMIs (Amazon Machine Images) for EC2 Mac instances are EC2-optimized and include the AWS goodies that you would find on other AWS AMIs: An ENA driver, the AWS Command Line Interface (CLI), the CloudWatch Agent, CloudFormation Helper Scripts, support for AWS Systems Manager, and the ec2-user account. You can use these AMIs as-is, or you can install your own packages and create custom AMIs (the homebrew-aws repo contains the additional packages and documentation on how to do this).

You can use these instances to create build farms, render farms, and CI/CD farms that target all of the Apple environments that I mentioned earlier. You can provision new instances in minutes, giving you the ability to quickly & cost-effectively build code for multiple targets without having to own & operate your own hardware. You pay only for what you use, and you get to benefit from the elasticity, scalability, security, and reliability provided by EC2.

EC2 Mac Instances in Action
As always, I asked the EC2 team for access to an instance in order to put it through its paces. The instances are available in Dedicated Host form, so I started by allocating a host:

$ aws ec2 allocate-hosts --instance-type mac1.metal \
  --availability-zone us-east-1a --auto-placement on \
  --quantity 1 --region us-east-1

Then I launched my Mac instance from the command line (console, API, and CloudFormation can also be used):

$ aws ec2 run-instances --region us-east-1 \
  --instance-type mac1.metal \
  --image-id  ami-023f74f1accd0b25b \
  --key-name keys-jbarr-us-east  --associate-public-ip-address

I took Luna for a very quick walk, and returned to find that my instance was ready to go. I used the console to give it an appropriate name.

Then I connected to my instance.

From here I can install my development tools, clone my code onto the instance, and initiate my builds.

I can also start a VNC server on the instance and use a VNC client to connect to it.

Note that the VNC protocol is not considered secure, and this feature should be used with care. I used a security group that allowed access only from my desktop’s IP address.

I can also tunnel the VNC traffic over SSH; this is more secure and would not require me to open up port 5900.
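
Here’s what that tunnel looks like (the public DNS name is a placeholder); once it’s up, I point my VNC client at localhost:5900 instead of the instance itself:

# Forward local port 5900 to the VNC server on the instance,
# keeping all VNC traffic inside the encrypted SSH session.
ssh -i keys-jbarr-us-east.pem \
    -L 5900:localhost:5900 \
    ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com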

Things to Know
Here are a few fast facts about the Mac instances:

AMI Updates – We expect to make new AMIs available each time Apple releases major or minor versions of each supported OS. We also plan to produce AMIs with updated Amazon packages every quarter.

Dedicated Hosts – The instances are launched as EC2 Dedicated Hosts with a minimum tenancy of 24 hours. This is largely transparent to you, but it does mean that the instances cannot be used as part of an Auto Scaling Group.

Purchase Models – You can run Mac instances On-Demand and you can also purchase a Savings Plan.

Apple M1 Chip – EC2 Mac instances with the Apple M1 chip are already in the works, and planned for 2021.

Launch one Today
You can start using Mac instances in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions today, and check out this video for more information!

Jeff;


New in Amazon QuickSight – session capacity pricing for large scale deployments, embedding without user provisioning, and developer portal for embedded analytics

Post Syndicated from Jose Kunnackal original https://aws.amazon.com/blogs/big-data/new-in-amazon-quicksight-embedding-without-user-provisioning-session-capacity-pricing-and-embedded-developer-portal/

Amazon QuickSight Enterprise edition now offers a new, session capacity-based pricing model starting at $250/month, with annual commitment options that provide scalable pricing for embedded analytics and BI rollouts to 100s of 1000s of users. QuickSight now also supports embedding dashboards in apps, websites, and wikis without the need to provision and manage users (readers) in QuickSight, which utilizes this new pricing model. Lastly, we also have a new developer portal for embedded analytics that allows you to learn more about the different embedded solutions available with QuickSight and experience them first-hand.

Session Capacity Pricing

Amazon QuickSight’s new session capacity-based pricing model provides scalable pricing for large-scale deployments. Session capacity pricing allows developers, Independent Software Vendors (ISVs), and enterprises to benefit from lower per-session rates as they roll out embedded analytics and BI to 100s of 1000s of users. In such scenarios, average session consumption per user is usually low (<10 sessions/month) but aggregate session usage across users within the account is high. With session capacity pricing, sessions are simply charged in 30-minute blocks of usage starting with first access. Session capacity pricing is also required for embedding without user management, where per-user pricing is not meaningful.

Unlike traditional options for capacity pricing, which require a server with annual commitments, QuickSight’s session capacity pricing allows you to get started easily with a $250/month starter option. QuickSight’s session capacities do not slow down with increased user concurrency or higher analytical complexity of dashboards (both common in a server-based model), but instead automatically scale to ensure a consistent, fast end-user experience. After starting with the monthly option, you can move to one of QuickSight’s annual session capacity tiers as your session usage increases, ensuring that costs scale with growing usage. Any usage beyond the committed levels (monthly or annual) is charged at the indicated overage session rates, with no manual intervention needed for scaling. No more scrambling to add servers when you have bursts in usage, or simply greater success with your application or website.

Annual session capacities are billed for monthly usage of sessions, with consumption of all committed sessions expected by the end of the period. Annual session capacities allow sessions to be consumed across the year, providing flexibility in ramping up on production traffic and balancing session needs across busy and lean periods of the year (e.g., the first and last weeks of the month are busy; holidays may be slow or busy depending on the nature of the business). For more details on the new pricing options, visit the QuickSight pricing page.

Embedding without user provisioning

Before this launch, Amazon QuickSight supported embedding dashboards in apps and portals where each end user could be identified and provisioned in QuickSight, and charged using QuickSight’s per-user pricing model. This works well for situations where each end user can be uniquely identified, and often has permissions associated with their data access levels. The NFL chose QuickSight to embed dashboards in their next-gen stats portal and share insights with clubs, broadcasters, and editorial teams. 1000s of customers have since chosen to launch embedded dashboards in QuickSight for both their internal and external end users, including Blackboard, Comcast, Panasonic Avionics, and EHE Health. Customers can scale from 10s of users to 100s of 1000s without any server provisioning or management, and can also utilize embedded QuickSight authoring capabilities to enable self-service dashboard creation.

With today’s launch, QuickSight also enables use cases with dashboards for 100s of 1000s of readers where it is not possible (or is highly inconvenient) to provision and manage users.

Examples of these situations include embedded dashboards for sharing public statistics within a city or county, dashboards of company-wide statistics embedded on an internal corporate portal and available to all users, and embedded dashboards with the same data intended for 10s or 1000s of users per customer, department, or job location/site within an app.

Let’s take a look at a couple of examples, and then at how to embed these dashboards. First, a dashboard that shows a live stream of the three main stock indices (S&P 500, Dow Jones, and NASDAQ), using QuickSight’s newly launched connector to Amazon Timestream to provide a real-time view of the market indices in US Central Time.

Second, a dashboard showing industries and the count of firms in those industries, using a choropleth map at the state level with the ability to drill down to county-level data.

Both dashboards are set up so that they can be accessed without any user restrictions, and we don’t have to set up users to roll this out. You can see these dashboards, accessible to anyone, here. To try this out, you can use the AWS Command Line Interface (AWS CLI); note that the AWS CLI is used here simply to illustrate the process. For actual integration into a website or app, you have to use the AWS SDK to obtain an embeddable URL for every new visit to the page.

4 steps to embed a dashboard

  1. Configure an AWS Identity and Access Management (IAM) role for your application to use for embedding
    arn:aws:iam::xxxxxxxxxxxx:role/YourApplicationRole

  2. Attach the following policy to the role so the role can run the GetDashboardEmbedURL API for anonymous identity type:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Action":[
                "quicksight:GetDashboardEmbedUrl",
                "quickSight:GetAnonymousUserEmbedUrl"
             ],
             "Resource":[
                "replace with ARN for dashboard1",
                "replace with ARN for dashboard2",
                "replace with ARN for dashboard3"
             ]
          }
       ]
    }

  3. Run the GetDashboardEmbedURL API with IdentityType set to ANONYMOUS. This returns a response with the EmbedURL:
    INPUT
    aws quicksight get-dashboard-embed-url \
        --region us-west-2 \
        --namespace default \
        --dashboard-id c6e91d20-0909-4836-b87f-e4c115a88b65 \
        --identity-type ANONYMOUS \
        --aws-account-id 123456789012
        
    RESPONSE
    {
        "Status": 200,
        "EmbedUrl": "https://us-west-2.quicksight.aws.amazon.com/embed/bc973ae439ce45b49e011c9fc8c855ea/dashboards/c6e91d20-0909-4836-b87f-e4c115a88b65?code=AYABeJcLJ0WqjqtWBi0sdFZ2GP8AAAABAAdhd3Mta21zAEthcm46YXdzOmttczp1cy13ZXN0LTI6ODQ1MzU0MDA0MzQ0OmtleS85ZjYzYzZlOS0xMzI3LTQxOGYtODhmZi1kM2Y3ODExMzI5MmIAuAECAQB421ynKsVxdYWD7qmNX3Zzbra88wGZIZL-RXp78eF_lpIBMX2cuRvnCU-OpFLUps57PQAAAH4wfAYJKoZIhvcNAQcGoG8wbQIBADBoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDOfOUSMMuEepqj8bzAIBEIA77tMfykw7WnJT__2sRSInn0gymHK1_vXmBAWAlyG2mwcNsD-HGI3xNNUoaSEUdvFQ6c0XuFQAgLz8ufICAAAAAAwAABAAAAAAAAAAAAAAAAAAHkJ674fL1usbIVG0oIYfCv____8AAAABAAAAAAAAAAAAAAABAAAAm3wfjZv5rICOROeYqIPsu6jFmWxU6fBEnSHTBkaw4ZPLnIGr3Cr1HU0D7DJM90dmCQ6t9kTVOy2XdgwNm606yqoEhSjwq4OWU-_rjGilwbKpes_5uKZR0IZNh2SMqgUPuu4Q1z884FhHQmX3yRI_RxWEyTnjR2sajl1m6OQCgvRJ3kEeh3cB0wWSsSdcUeZt-iNxYRbckKa3Eb6viPXHYRs-Q_skcSTsjfJ6GQ%3D%3D&identityprovider=quicksight&isauthcode=true",
        "RequestId": "1c82321c-6934-45b0-83f4-a8ce2f641067"
    }

You can embed the returned URL in the IFRAME code of your application. Make sure your application is added to the allow list in QuickSight. This is a single-use URL and has to be generated dynamically by invoking get-dashboard-embed-url from the backend compute layer upon each load of the parent page.

  4. Embed the dashboard in your application with the following HTML code:
    <html>
    <head>
        <title>Basic Embed</title>
        <!-- You can download the latest QuickSight embedding SDK version from https://www.npmjs.com/package/amazon-quicksight-embedding-sdk -->
        <!-- Or you can do "npm install amazon-quicksight-embedding-sdk", if you use npm for javascript dependencies -->
        <script src="./quicksight-embedding-js-sdk.min.js"></script>
        <script type="text/javascript">
            var dashboard;
    
            function embedDashboard() {
                var containerDiv = document.getElementById("embeddingContainer");
                var options = {
                    // replace this dummy url with the one generated via embedding API
                    url: "https://us-east-1.quicksight.aws.amazon.com/sn/dashboards/dashboardId?isauthcode=true&identityprovider=quicksight&code=authcode",  
                    container: containerDiv,
                    scrolling: "no",
                    height: "700px",
     
                    footerPaddingEnabled: true
                };
                dashboard = QuickSightEmbedding.embedDashboard(options);
            }
        </script>
    </head>
    
    <body onload="embedDashboard()">
        <div id="embeddingContainer"></div>
    </body>
    
    </html> 

If you use NPM to manage your front-end dependencies, run npm install amazon-quicksight-embedding-sdk. Then you can use the following code to embed the URL in your application:

import { embedDashboard } from 'amazon-quicksight-embedding-sdk';

var options = {
    url: "https://us-west-2.quicksight.aws.amazon.com/embed/bc973ae439ce45b49e011c9fc8c855ea/dashboards/c6e91d20-0909-4836-b87f-e4c115a88b65?code=AYABeJcLJ0WqjqtWBi0sdFZ2GP8AAAABAAdhd3Mta21zAEthcm46YXdzOmttczp1cy13ZXN0LTI6ODQ1MzU0MDA0MzQ0OmtleS85ZjYzYzZlOS0xMzI3LTQxOGYtODhmZi1kM2Y3ODExMzI5MmIAuAECAQB421ynKsVxdYWD7qmNX3Zzbra88wGZIZL-RXp78eF_lpIBMX2cuRvnCU-OpFLUps57PQAAAH4wfAYJKoZIhvcNAQcGoG8wbQIBADBoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDOfOUSMMuEepqj8bzAIBEIA77tMfykw7WnJT__2sRSInn0gymHK1_vXmBAWAlyG2mwcNsD-HGI3xNNUoaSEUdvFQ6c0XuFQAgLz8ufICAAAAAAwAABAAAAAAAAAAAAAAAAAAHkJ674fL1usbIVG0oIYfCv____8AAAABAAAAAAAAAAAAAAABAAAAm3wfjZv5rICOROeYqIPsu6jFmWxU6fBEnSHTBkaw4ZPLnIGr3Cr1HU0D7DJM90dmCQ6t9kTVOy2XdgwNm606yqoEhSjwq4OWU-_rjGilwbKpes_5uKZR0IZNh2SMqgUPuu4Q1z884FhHQmX3yRI_RxWEyTnjR2sajl1m6OQCgvRJ3kEeh3cB0wWSsSdcUeZt-iNxYRbckKa3Eb6viPXHYRs-Q_skcSTsjfJ6GQ%3D%3D&identityprovider=quicksight&isauthcode=true",
    container: document.getElementById("embeddingContainer"),
    parameters: {
        country: "United States",
        states: [
            "California",
            "Washington"
        ]
    },
    scrolling: "no",
    height: "700px",
    width: "1000px",
    locale: "en-US",
    footerPaddingEnabled: true
};
const dashboardSession = embedDashboard(options);

If you want to embed multiple dashboards and switch between them, you can pass more dashboard IDs by using the additional-dashboard-ids option while generating the URL. This generates the URL with authorization for all specified dashboard IDs and launches the dashboard specified under the dashboard-id option. See the following code:

INPUT
aws quicksight get-dashboard-embed-url \
    --region us-west-2 \
    --namespace default \
    --dashboard-id c6e91d20-0909-4836-b87f-e4c115a88b65 \
    --identity-type ANONYMOUS \
    --aws-account-id 123456789012 \
    --additional-dashboard-ids dashboardid1 dashboardid2
    
RESPONSE
{
    "Status": 200,
    "EmbedUrl": "https://us-west-2.quicksight.aws.amazon.com/embed/bc973ae439ce45b49e011c9fc8c855ea/dashboards/c6e91d20-0909-4836-b87f-e4c115a88b65?code=AYABeJcLJ0WqjqtWBi0sdFZ2GP8AAAABAAdhd3Mta21zAEthcm46YXdzOmttczp1cy13ZXN0LTI6ODQ1MzU0MDA0MzQ0OmtleS85ZjYzYzZlOS0xMzI3LTQxOGYtODhmZi1kM2Y3ODExMzI5MmIAuAECAQB421ynKsVxdYWD7qmNX3Zzbra88wGZIZL-RXp78eF_lpIBMX2cuRvnCU-OpFLUps57PQAAAH4wfAYJKoZIhvcNAQcGoG8wbQIBADBoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDOfOUSMMuEepqj8bzAIBEIA77tMfykw7WnJT__2sRSInn0gymHK1_vXmBAWAlyG2mwcNsD-HGI3xNNUoaSEUdvFQ6c0XuFQAgLz8ufICAAAAAAwAABAAAAAAAAAAAAAAAAAAHkJ674fL1usbIVG0oIYfCv____8AAAABAAAAAAAAAAAAAAABAAAAm3wfjZv5rICOROeYqIPsu6jFmWxU6fBEnSHTBkaw4ZPLnIGr3Cr1HU0D7DJM90dmCQ6t9kTVOy2XdgwNm606yqoEhSjwq4OWU-_rjGilwbKpes_5uKZR0IZNh2SMqgUPuu4Q1z884FhHQmX3yRI_RxWEyTnjR2sajl1m6OQCgvRJ3kEeh3cB0wWSsSdcUeZt-iNxYRbckKa3Eb6viPXHYRs-Q_skcSTsjfJ6GQ%3D%3D&identityprovider=quicksight&isauthcode=true",
    "RequestId": "1c82321c-6934-45b0-83f4-a8ce2f641067"
}

In the preceding code, we assign our primary dashboard’s ID to the --dashboard-id option, and the other dashboard IDs to the --additional-dashboard-ids option as space-separated values. You can pass up to 20 dashboard IDs in this option.

The EmbedURL value in the response is the URL for the primary dashboard. You can embed this dashboard in your app, wiki, or website and use the JavaScript SDK to switch between dashboards. To switch to another dashboard without having to generate a fresh embed URL, invoke the navigateToDashboard function (available in our JavaScript library) with any of the dashboard IDs that were initially included in the get-dashboard-embed-url call. See the following example code:

var options = {
    dashboardId: "dashboardid1",
    parameters: {
        country: [
            "United States"
        ]
    }
};
dashboard.navigateToDashboard(options);

For more information about the JavaScript SDK, see the GitHub repo.

Embedding without users is now generally available to all Amazon QuickSight Enterprise Edition users and requires session capacity pricing. 

To embed dashboards without user provisioning, or simply to enable discounted QuickSight usage for large-scale deployments, you can opt in to session capacity pricing starting with the $250/month option via the “Manage QuickSight” > “Your subscriptions” page, which is accessible to QuickSight administrators.

Embedded Developer Portal

We have a new embedded developer portal at https://developer.quicksight.aws that allows you to quickly interact with three key embedded scenarios: 1) embedded dashboards accessible to anyone accessing a website or portal (no user provisioning required), 2) embedded dashboards accessible only to authenticated users, and 3) embedded dashboard authoring for power users of apps.

The interactive dashboards, code snippets, and setup instructions allow you to learn more about the rich capabilities of QuickSight embedding and make it easy to get started.

Summary

With the new embedding capability, QuickSight offers a modern, serverless approach to deploying dashboards and visualizations into websites, apps, and corporate portals in hours, without any user provisioning or management needed to scale to 100s of 1000s of users. Unlike traditional server-based models, QuickSight’s session capacity model allows you to start at a low $250/month, month-to-month price point. As usage grows, the available annual commitment models offer scalable pricing by reducing the per-session cost, enabling both ISVs and enterprises to roll out embedded QuickSight dashboards at large scale, whether for internal or external users. Finally, with QuickSight’s new developer portal, you have instant access to samples of interactive embedded QuickSight dashboards, as well as steps for integrating various embedded capabilities into your own websites, apps, and portals.


About the Authors

Jose Kunnackal John is Sr. Manager for Amazon QuickSight, AWS’ cloud-native, fully managed BI service. Jose started his career with Motorola, writing software for telecom and first responder systems. Later he was Director of Engineering at Trilibis Mobile, where he built a SaaS mobile web platform using AWS services. Jose is excited by the potential of cloud technologies and looks forward to helping customers with their transition to the cloud.


Kareem Syed-Mohammed is a Product Manager at Amazon QuickSight. He focuses on embedded analytics, APIs, and developer experience. Prior to QuickSight, he was a PM at AWS Marketplace and Amazon retail. Kareem started his career as a developer, and later was a PM for call center technologies, Local Expert, and Ads at Expedia. He worked as a consultant with McKinsey and Company for a short while.


Arun Santhosh is a Specialized World Wide Solution Architect for Amazon QuickSight. Arun started his career at IBM as a developer and progressed on to be an Application Architect. Later, he worked as a Technical Architect at Cognizant. Business Intelligence has been his core focus in these prior roles as well.
