All posts by Sébastien Stormacq

Introducing AWS Systems Manager Change Manager

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/introducing-systems-manager-change-manager/

Because you are constantly listening to feedback from your customers, you are iterating, innovating, and improving your applications and infrastructures. You continually modify your IT systems in the cloud. And let’s face it, changing something in a working system risks breaking things or introducing side effects that are sometimes unpredictable; it doesn’t matter how many tests you do. On the other hand, not making changes is stasis, followed by irrelevance, followed by death.

This is why organizations of all sizes and types have embraced a culture of controlling changes. Some organizations adopt change management processes such as the ones defined in ITIL v4. Some have adopted DevOps’ Continuous Deployment, or other methods. In any case, to support your change management processes, it is important to have tools.

Today, we are launching AWS Systems Manager Change Manager, a new change management capability for AWS Systems Manager. It simplifies the way ops engineers track, approve, and implement operational changes to their application configurations and infrastructures.

Using Change Manager has two primary advantages. First, it improves the safety of changes made to application configurations and infrastructures, reducing the risk of service disruptions: it makes operational changes safer by ensuring that only approved changes are implemented. Second, it is tightly integrated with other AWS services, such as AWS Organizations and AWS Single Sign-On, as well as with the Systems Manager change calendar and Amazon CloudWatch alarms.

Change Manager provides accountability with a consistent way to report and audit changes made across your organization, their intent, and who approved and implemented them.

Change Manager works across AWS Regions and multiple AWS accounts. It works closely with Organizations and AWS SSO to manage changes from a central point and to deploy them in a controlled way across your global infrastructure.

Terminology
You can use AWS Systems Manager Change Manager on a single AWS account, but most of the time, you will use it in a multi-account configuration.

The way you manage changes across multiple AWS accounts depends on how these accounts are linked together. Change Manager uses the relationships between your accounts defined in AWS Organizations. When using Change Manager, there are three types of accounts:

  • The management account – also known as the “main account” or “root account.” The management account is the root account in an AWS Organizations hierarchy, and it is where the organization is administered.
  • The delegated administrator account – A delegated administrator account is an account that has been granted permission to manage other accounts in Organizations. In the Change Manager context, this is the account from which change requests will be initiated. You will typically log in to this account to manage templates and change requests. Using a delegated administrator account allows you to limit connections made to the root account. It also allows you to enforce a least-privilege policy by using only the specific subset of permissions required by the changes.
  • The member accounts – Member accounts are accounts that are not the management account or a delegated administrator account, but are still included in Organizations. In my mental model for Change Manager, these would be the accounts that hold the resources where changes are deployed. A delegated administrator account would initiate a change request that would impact resources in a member account. System administrators are discouraged from logging directly into these accounts.

Let’s see how you can use AWS Systems Manager Change Manager by taking a short walk-through demo.

One-Time Configuration
In this scenario, I show you how to use Change Manager with multiple AWS accounts linked together with Organizations. If you are not interested in the one-time configuration, jump to the Create a Change Request section below.

There are four one-time configuration actions to take before using Change Manager: one in the root account and three in the delegated administrator account. In the root account, I use Quick Setup to define my delegated administrator account and to initially configure permissions on the accounts. In the delegated administrator account, I define the source of user identities, the users who have permission to approve change templates, and a change request template.

First, I ensure I have an Organization in place and my AWS accounts are organized in Organizational Units (OU). For the purpose of this simple example, I have three accounts: the root account, the delegated administrator account in the management OU and a member account in the managed OU. When ready, I use Quick Setup on the root account to configure my accounts. There are multiple paths leading to Quick Setup; for this demo, I use the blue banner on top of the Quick Setup console, and I click Setup Change Manager.

Change Manager Quick Setup

 

On the Quick Setup page, I enter the ID of the delegated administrator account if I haven’t defined it already. Then I choose the permissions boundary I grant to the delegated administrator account to perform changes on my behalf. This is the maximum set of permissions Change Manager receives to make changes. I will further restrict this permission set when I create change requests in a few minutes. In this example, I grant Change Manager permissions to call any ec2 API. This effectively authorizes Change Manager to run only changes related to EC2 instances.

Change Manager Quick Setup

Lower on the screen, I choose the set of accounts that are targets for my changes. I choose between Entire organization or Custom to select one or multiple OUs.

Change Manager Quick Setup 2

After a while, Quick Setup finishes configuring permissions on my AWS accounts, and I can move to the second part of the one-time setup.

Change Manager Quick Setup 3

Second, I switch to my delegated administrator account. Change Manager asks me how I manage users in my organization: with AWS Identity and Access Management (IAM) or AWS Single Sign-On? This defines where Change Manager pulls user identities from when I choose approvers. It is a one-time configuration option, but it can be changed at any time on the Change Manager Settings page.

Change Manager Settings

Third, on the same page, I define an Amazon Simple Notification Service (SNS) topic to receive notifications about template reviews. This channel is notified any time a template is created or modified, so that template approvers can review and approve templates. I also define the IAM (or SSO) users with permission to approve change templates (more about this in a minute).

Change Manager Template Reviewers
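If you prefer to script this part of the setup, here is a minimal boto3 sketch creating the review notification topic and an email subscription; the Region, topic name, and email address are assumptions used only for illustration.

import boto3

# Region, topic name, and email address are placeholders for this demo
sns = boto3.client("sns", region_name="us-east-1")

# Create (or return the existing) SNS topic used for template review notifications
topic = sns.create_topic(Name="change-manager-template-reviews")

# Subscribe a template approver's email address; the subscription must be confirmed by email
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="template-approver@example.com",
)
print("Review notifications will be sent to:", topic["TopicArn"])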

Optionally, you can use the existing AWS Systems Manager Change Calendar to define the periods where changes are not authorized, such as marketing events or holiday sales.

Finally, I define a change template. Every change request is created from a template. Templates define common parameters for all change requests based on them, such as the change request approvers, the actions to perform, and the SNS topic for progress notifications. You can enforce the review and approval of templates before they can be used. It makes sense to create multiple templates to handle different types of changes. For example, you can create one template for standard changes and one for emergency changes that override the change calendar. Or you can create different templates for different types of automation runbooks (documents).

To help you to get started, we created a template for you: the “Hello World” template. You can use it as a starting point to create a change request and test out your approval flow.

At any time, I can create my own template. Let’s imagine my system administrator team is frequently restarting EC2 instances. I create a template allowing them to create change requests to restart one or multiple instances. Using the delegated administrator account, I navigate to the Change Manager management console and click Create template.

Change Manager Create Template

In a nutshell, a template defines the list of authorized actions, where to send notifications, and who can approve the change request. Actions are defined as AWS Systems Manager runbooks. Emergency change templates allow change requests to bypass the change calendar I wrote about earlier. Under Runbook Options, I choose one or multiple runbooks allowed to run. For this example, I choose the AWS-RestartEC2Instance runbook.

I use the console to create the template, but templates are defined internally as YAML. I can edit the YAML using the Editor tab, or when I am using the AWS Command Line Interface (CLI) or API. This means I can version control them just like the rest of my infrastructure (as code).

Change Manager Create Template part 1

Just below, I document my template using Markdown-formatted text. I use this section to document the defining characteristics of the template and to provide any necessary instructions to the requestor, such as back-out procedures.

Change Manager Template Documentation

I scroll down the page and click Add Approver to define approvers. Approvers can be individual users or groups. The list of approvers is defined either at the template level or in the change request itself. I also choose to create an SNS topic to inform approvers when requests requiring their approval are created.

In the Monitoring section, I select the alarm that, when active, stops any change based on this template and initiates a rollback.

In the Notifications section, I select or create another SNS topic so that I am notified when the status of this template changes.

Change Manager Create Template part 2

Once I am done, I save the template and submit it for review.

Change Manager Submit Template for Review

Templates have to be reviewed and approved before they can be used. To approve the template, I sign in to the console as the template_approver user I defined earlier. As the template_approver user, I see pending approvals on the Overview tab. Alternatively, I navigate to the Templates tab and select the template I want to review. When I am done reviewing it, I click Approve.

Change Manager Approve Template

Voila, now we’re ready to create change requests based on this template. Remember that all the preceding steps are one-time configurations and can be amended at any time. When existing templates are modified, the changes go through a review and approval process again.

Create a Change Request
To create a change request on any account linked to the Organization, I open the AWS Systems Manager Change Manager console from the delegated administrator account and click Create request.

Change Manager Create Request

I choose the template I want to use and click Next.

Change Manager Select Template

I enter a name for this change request. The change is initiated immediately after all approvals are granted, or at an optional scheduled time that I specify. When the template allows it, I choose the approver for this change. In this example, the approver is defined by the template and cannot be changed. I click Next.

Change Manager Create CR part 1

On the next screen, there are multiple important configuration options, relating to the actual execution of the change:

  • Target location – lets me define in which target AWS accounts and AWS Regions I want to run this change.
  • Deployment target – lets me define which resources are the target of this change: one EC2 instance, or multiple instances identified by their tags, their resource groups, a list of instance IDs, or all EC2 instances.
  • Runbook parameters – lets me define the parameters I want to pass to my runbook, if any.
  • Execution role – lets me define the set of permissions I grant Systems Manager to deploy this change. The permission set must have the service changemanagement.ssm.amazonaws.com as the principal in its trust policy. Selecting a role allows me to grant the Change Manager runtime a different permission set than the one I have.

Here is an example allowing Change Manager to stop and start EC2 instances (you can scope it down to a specific AWS account, a specific Region, or specific instances):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}

And the associated trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "changemanagement.ssm.aws.internal"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

When I am ready, I click Next. On the last page, I review my data entry and click Submit for approval.

At this stage, the approver receives a notification, based on the SNS topic configured in the template. To continue this demo, I sign out of the console and sign in again as the cr_approver user, which I created, with permission to view and approve change requests.

As the cr_approver user, I navigate to the console, review the change request, and click Approve.

Change Manager Review Change Request

The change request status switches to Scheduled, and eventually turns green with a Success status. At any time, I can click the change request to check its status and to collect errors, if any.

Change Manager Dashboard with Succeeded Request

I click on the change request to see the details. In particular, the Timeline tab shows the history of this CR.

Change Management CR Timeline
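Everything I did in the console can also be scripted. The sketch below shows how a change request could be created with boto3 and the SSM StartChangeRequestExecution API, assuming a change template named MyRestartInstancesTemplate that allows the AWS-RestartEC2Instance runbook; the Region, template name, instance ID, and runbook parameters are assumptions that depend on your own setup.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # Region is an assumption

# Create a change request from an approved change template (names below are assumptions)
response = ssm.start_change_request_execution(
    DocumentName="MyRestartInstancesTemplate",          # the change template document
    ChangeRequestName="restart-web-server-001",
    Runbooks=[
        {
            "DocumentName": "AWS-RestartEC2Instance",    # runbook allowed by the template
            "Parameters": {"InstanceId": ["i-0123456789abcdef0"]},
        }
    ],
)
print("Change request started:", response["AutomationExecutionId"])

The request then goes through the same approval flow described above before the runbook actually executes.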

Availability and Pricing
AWS Systems Manager Change Manager is available today in all commercial AWS Regions, except mainland China. Pricing is based on two dimensions: the number of change requests you submit and the total number of API calls made, with the number of change requests being the main cost factor. We charge $0.29 per change request. Check the pricing page for more details.

You can evaluate Change Manager for free for 30 days, starting on your first change request.

As usual, let us know what you think, and get started today!

— seb

Amazon Connect – Now Smarter and More Integrated With Third-Party Tools

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-connect-smarter-and-more-integrated/

We launched Amazon Connect in 2017 and, since then, thousands of customers have created their own contact centers in the cloud. Amazon Connect makes it easy for non-technical customers to design interaction flows, manage agents, and track performance metrics.

For example, when I book a Best Western hotel room in Europe by phone, the call is managed by Amazon Connect. In the UK, the Post Office went from ideation to production rollout in just three weeks. In France, WebHelp, a global leader in Business Process Outsourcing, activated thousands of workstations and remote agents in just 72 hours.

Since I last blogged about Amazon Connect, the team has been continuously listening to your feedback and, today, I am happy to announce a new set of capabilities to make Amazon Connect smarter and more integrated with third-party tools.

We are using machine learning (ML) to make Amazon Connect smarter at analyzing conversations in real time, finding relevant information needed by contact center agents, and authenticating customers by the sound of their voice. The second set of capabilities makes Amazon Connect easier to integrate with third-party tools or services, to present unified customer profile information to contact center agents, and to make it easier to manage their tasks.

Let’s go into the details, one by one.

Contact Lens Real Time
Contact Lens for Amazon Connect is a set of machine learning (ML) capabilities that allows contact center supervisors to better understand the sentiment, trends, and compliance of customer conversations. It was first announced during re:Invent 2019 and has been available since July 2020. It allows you to effectively train agents, replicate successful interactions, and identify crucial company and product feedback.

Starting today, you can get real-time insights into the customer experience during live calls, such as a customer expressing dissatisfaction. Customer experience analytics and alerts for live calls are delivered in Amazon Connect’s real-time metrics dashboard. This makes it easy for supervisors to identify when to listen in on a critical call, provide guidance to the agent via chat, or have the agent transfer the call to them for assistance.

You, as the contact center manager, can define rules using specific terms such as “not happy,” “poor quality product,” and “cancel my subscription.” Contact Lens uses natural language processing (NLP) to perform intelligent matching to automatically detect variations of the spoken words even when the example phrases are limited.

Create Rules for real time analytics

Contact Lens analyzes in-progress calls in real time to detect when the rule criteria for a customer experience issue is met, and immediately creates an alert next to the live call in the Amazon Connect dashboard to notify supervisors of the situation.

Real Time alert based on rules

With this launch, we are adding 13 language variants to post-call analytics, in addition to the five already supported: English (United States), English (Great Britain), English (Australia), English (India), and Spanish (United States).

The new language variants for post-call analytics are: English (Ireland), English (Scotland), English (Wales), Spanish (Spain), French (Canada), French (France), Portuguese (Portugal), Portuguese (Brazil), German (Germany), German (Switzerland), Italian (Italy), Arabic (Gulf), and Hindi (India).

Contact Lens for Amazon Connect real-time analytics is available in four language variants: English (United States), English (Great Britain), English (Australia), and Spanish (United States). More language variants will be added at a later stage.

For more details visit this launch page.

Amazon Connect Wisdom (Preview)
Wisdom provides built-in agent assistance capabilities in Amazon Connect, including machine learning (ML) powered search and real-time recommendations, to quickly enable agents with relevant information for resolving customer issues.

As an agent, I can type questions or phrases in the Wisdom search box without having to guess which keyword to use. Wisdom understands what information I am searching for. It surfaces results in the agent’s preferred Amazon Connect application: the web-based one we provide, or the ones you have built.

Amazon Connect Wisdom search results

Wisdom comes with pre-built connectors to third-party knowledge repositories to provide the most relevant results to agents. Wisdom includes connectors for Salesforce and ServiceNow during the preview, with more to come at launch.

Wisdom can use Contact Lens real-time analytics to analyze the conversation as it happens. It detects the customer issue, finds related content in the connected repositories, and provides proactive recommendations to help the agent resolve it. For example, Wisdom can detect that a customer is talking about a problem with the handbag they bought last week, recommend an article that describes similar product defects, and provide instructions with a link to the order management application needed to initiate an exchange.

Wisdom is available in preview. You can sign up today or visit the launch page.

Amazon Connect Voice ID (Preview)
Amazon Connect Voice ID provides real-time caller authentication which makes voice interactions in contact centers more secure and efficient.

To effectively recognize me as “Sébastien,” Voice ID must first learn how I talk; this is the enrollment phase. It then compares the sound of my voice with the voiceprint enrolled earlier; this is the verification phase.

To comply with personal data protection laws, contact center agents capture my consent before using Voice ID.

During the enrollment phase, Voice ID listens to the call until it has captured 30 seconds of my voice. It then creates my voiceprint, which uniquely identifies me. A voiceprint is a mathematical representation that captures unique aspects of an individual’s voice, such as speech rhythm, pitch, intonation, and loudness. I do not need to say or repeat any specific phrases for Voice ID to create my voiceprint. Voice ID also provides an API that can be used to opt out a customer.

When I call back, Voice ID needs just 10 seconds of my voice to authenticate me. My voice can be captured as part of a typical interaction with the Interactive Voice Response (IVR) at the start of the call, or when I first start to talk with the agent. For example, when I am answering questions such as “What’s your first and last name?” and “What are you calling about?”, Voice ID uses this audio to generate my voiceprint again and compares it with the one enrolled earlier. Voice ID then generates an authentication score based on the confidence of the match. Contact center managers can use this score to create policies in Amazon Connect that let agents see a real-time result (“authenticated” or “not authenticated”) in their web-based application. Agents can then decide to proceed with the call or to ask for additional authentication credentials.

Amazon Connect Voice ID is available in preview. You can sign up today or visit the launch page.

Amazon Connect Customer Profiles
Customer Profiles is a unified profile for Amazon Connect that brings together customer information from disparate sources without having to build integrations or wrangle data.

Providing agents (or automated IVR systems) with accurate and unified customer profile information at the right moment helps them deliver better service to customers and resolve calls faster. Using Customer Profiles, agents no longer need to navigate out of Amazon Connect or switch between different applications to get the customer insights they need.

With just a few clicks, system administrators can integrate customer profile data from applications like Salesforce, ServiceNow, Zendesk, and Marketo, without having to build homegrown integrations. Setting up connectors for Customer Profiles requires no programming or data integration expertise.

Once enabled, Customer Profiles automatically detects customer records from these applications, then matches and deduplicates them. The result is accurate and up-to-date profiles displayed to agents within their Connect web-based experience.

Amazon Connect Customer Profile
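Customer Profiles also exposes APIs. As a hedged illustration, the sketch below searches a profile domain by phone number with boto3; the Region, domain name, search key, and phone number are assumptions, and your integration may use different identifiers.

import boto3

profiles = boto3.client("customer-profiles", region_name="us-east-1")  # Region is an assumption

# Search a Customer Profiles domain for a profile matching a phone number.
# The domain name, key name, and value below are placeholders for this illustration.
response = profiles.search_profiles(
    DomainName="my-connect-domain",
    KeyName="_phone",
    Values=["+15555550100"],
)

for profile in response.get("Items", []):
    print(profile.get("FirstName"), profile.get("LastName"), profile.get("ProfileId"))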

Learn more about Amazon Connect Customer Profiles by visiting the launch page.

Amazon Connect Tasks
Amazon Connect Tasks makes it easy to automate, track, and manage contact center agent tasks. It provides a single place for contact center managers to prioritize, assign, and track customer service tasks across the disparate applications used by agents, so that they are focused on the highest priority work of any type.

Tasks can be sourced from third-party applications, such as a CRM solution, or from business-specific systems. For example, you can programmatically create tasks for agents to follow up on a customer case in a third-party application like Salesforce, or to complete an action item in a business-specific application, such as processing a claim in an insurance system. You can also automate tasks that don’t require agent interaction, to ensure your agents spend more time focused on customers.

Using Amazon Connect tasks, agents no longer need to switch between applications to know what work should be completed, and with what priority. Agents can see all their assigned tasks right from the Amazon Connect contact control panel, the same web-based application they use to interact with customers over calls and chat. When a task is assigned, the agent receives a notification with the description of the task, and when required, links to any external applications needed to complete the action. Agents can also create tasks so that follow-up work is not forgotten, for example calling a customer back to provide a status update.

Amazon Connect Tasks screenshots: Incoming Tasks, Task Details, and Create a New Task

Amazon Connect Tasks provides pre-built connectors for Salesforce and Zendesk. With just a few clicks, you can set up rules to automatically create tasks based on pre-defined conditions, as shown in the following screenshot. It also provides an API to create tasks from any other application; see the sketch after the screenshot.

Amazon Connect Task Rules
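The API behind this is the Amazon Connect StartTaskContact operation. Below is a minimal boto3 sketch of creating a task programmatically; the Region, instance ID, contact flow ID, and reference URL are placeholders you would replace with your own values.

import boto3

connect = boto3.client("connect", region_name="us-east-1")  # Region is an assumption

# Create a task and route it through a contact flow; IDs and URL below are placeholders
response = connect.start_task_contact(
    InstanceId="11111111-2222-3333-4444-555555555555",
    ContactFlowId="66666666-7777-8888-9999-000000000000",
    Name="Follow up on case #1234",
    Description="Call the customer back with a status update",
    References={
        "Case": {"Value": "https://example.com/crm/case/1234", "Type": "URL"}
    },
)
print("Task created, contact ID:", response["ContactId"])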

Learn more about how to configure and to get started with Tasks by visiting the launch page.

Available Today
Three of these new capabilities are available today: Contact Lens real-time analytics, Customer Profiles, and Tasks. You must register for the preview program to test Wisdom and Voice ID.

Customer Profiles and Tasks are available in all AWS Regions where Amazon Connect is available: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (London). Contact Lens real-time analytics is available in US West (Oregon), US East (N. Virginia), and Asia Pacific (Sydney) at the moment. Wisdom is available in US East (N. Virginia) and US West (Oregon) during the preview, while Voice ID is available only in US West (Oregon) during the preview.

With Amazon Connect, you only pay for what you use. There are no required up-front payments, long-term commitments, or minimum monthly fees. The price metrics for these new capabilities are detailed on the Amazon Connect pricing page.

Should you need help adding any of these Amazon Connect capabilities to your contact flows, please reach out to one of the dozens of Amazon Connect partners available worldwide.

— seb

New – Attribute-Based Access Control with AWS Single Sign-On

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-attributes-based-access-control-with-aws-single-sign-on/

Starting today, you can pass user attributes in the AWS session when your workforce signs in to the cloud using AWS Single Sign-On. This gives you the centralized account access management of AWS Single Sign-On together with ABAC, and the flexibility to use AWS SSO, Active Directory, or an external identity provider as your identity source. To learn more about the advantages of ABAC policies on AWS, you can read my previous blog post on the subject.

Overview
On one side, system administrators configure user attributes in the AWS Single Sign-On identity repository or the managed Active Directory. System administrators can also configure an external identity provider, such as Okta, OneLogin, or PingFederate, to pass existing user attributes in the AWS sessions when their workforce federates into AWS. These attributes are known as session tags in AWS. On the other side, cloud administrators create fine-grained permissions policies so that your workforce only gets access to cloud resources with matching resource tags.

Creating policies based on matching attributes instead of functional roles helps to reduce the number of distinct permissions and roles you must create and manage in your AWS environment. For example, when developers Bob from team red and Alice from team blue sign in to AWS and assume the same AWS Identity and Access Management (IAM) role, they get distinct permissions to project resources tagged for their team. The identity system sends the team name attribute in the AWS session when Bob and Alice sign in to AWS. The role’s permissions grant access to project resources with matching team name tags. Now, if Bob moves to team blue and system administrators update his team name in their identity provider directory, Bob automatically gets access to team blue’s project resources without requiring any permission updates in IAM.

How to Configure AWS SSO to Map User Attributes
Before configuring AWS SSO, there are two important points to highlight. First, ABAC works with attributes from any identity source configured in AWS SSO: AWS SSO itself, a managed Active Directory, or an external identity provider. Second, there are two ways to pass attributes for access control to AWS SSO. You can either pass attributes directly in the SAML assertion using the prefix https://aws.amazon.com/SAML/Attributes/AccessControl, or use attributes that are in the AWS SSO identity store. The latter are configured by your AWS SSO administrator for users created in AWS SSO, synchronized in from an Active Directory, or synchronized in from an external identity provider using automatic provisioning (SCIM).

For this demo, I choose to use an external identity provider and SCIM.

I can enable ABAC in AWS using AWS SSO with three steps:

Step 1: I configure my identity source with the associated user identities and attributes in the external identity provider. As of today, AWS SSO supports identity synchronization via SCIM with Azure AD, Okta, OneLogin, and PingFederate. Check this page to get an up-to-date list. The specifics depend on each identity provider.

Step 2: I configure the SCIM attributes I want to use for access control using the new Access Control Attributes global setting in the AWS SSO console or API. This screen allows me to select attributes for access control from the identity source I configured in step 1.

Attributes for Access Control

Step 3: I author ABAC rules through permission sets and resource-based policies using the attributes I configured in Step 2. More about this in a minute.

Now, when my workforce federates into an AWS account using SSO, they get access to their AWS resources based on matching attributes.

Attributes are passed as session tags, as comma-separated key:value pairs. The total character length of all the attributes together must be less than or equal to 460 characters.
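Because of that limit, it can be useful to sanity-check the attributes you plan to send before enabling them. Here is a tiny, illustrative Python check based on the comma-separated key:value serialization described above; the exact encoding AWS uses may differ slightly, so treat it as an approximation.

# Illustrative check only: approximates the documented 460-character limit
# using the comma-separated key:value serialization described above.
def session_tags_length(attributes: dict) -> int:
    return len(",".join(f"{key}:{value}" for key, value in attributes.items()))

attributes = {"Department": "engineering", "CostCenter": "4242", "Team": "blue"}
length = session_tags_length(attributes)
print(length, "characters;", "OK" if length <= 460 else "over the 460-character limit")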

What Does a Policy Look Like?
I now can use user attributes in my permission sets using the aws:PrincipalTag condition key when creating access control rules. For example, I can tag all the resources in my organization with their respective department name, and use a single permission set that grants developers access only to their department resources. Now, whenever developers federate into the AWS account, AWS SSO creates a department session tag with the value received from the identity provider. The security policies allow them to only get access to the resources in their respective department. As the team adds more developers and resources to their project, I only have to tag resources with the correct department name. As a result, as the organization adds new resources and developers to departments, developers can only manage resources aligned to their department without needing any permission updates.

An ABAC SSO permission set policy might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:DescribeInstances"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances","ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Department": "${aws:PrincipalTag/Department}"
                }
            }
        }
    ]
}

This policy allows anyone to call DescribeInstances, but only users whose aws:PrincipalTag/Department tag value matches the EC2 instance’s ec2:ResourceTag/Department tag value are authorized to stop or start instances.

I attach this policy to an AWS account’s permission set. On the left side of the AWS Single Sign-On console, I click AWS Accounts and select the Permission sets tab. Then I click Create permission set. On the next screen, I select Create a custom permission set.

Create a custom permission set

I enter a name and a description, and I make sure Create a custom permissions policy is selected. Then I copy/paste the previous policy, which allows users to start and stop EC2 instances only when the instance’s Department tag value matches their own Department attribute.

Create Custom Policy for Permission Set

On the next screen, I enter some tags, then I review my configuration before clicking Create. Et voila, I am ready to go.
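If you manage permission sets as code, the same configuration can be scripted with the AWS SSO Admin APIs. The sketch below creates a permission set and attaches the inline ABAC policy shown earlier; the Region and instance ARN are placeholders, and you would retrieve your own instance ARN from the AWS SSO settings or the ListInstances API.

import json
import boto3

sso_admin = boto3.client("sso-admin", region_name="us-east-1")  # Region is an assumption
instance_arn = "arn:aws:sso:::instance/ssoins-EXAMPLE"          # placeholder: use your own instance ARN

# The same ABAC policy shown above, expressed as a Python dictionary
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ec2:DescribeInstances"], "Resource": "*"},
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Department": "${aws:PrincipalTag/Department}"
                }
            },
        },
    ],
}

# Create the permission set, then attach the inline policy to it
permission_set = sso_admin.create_permission_set(
    InstanceArn=instance_arn,
    Name="DepartmentEC2Operators",
    Description="Start/stop EC2 instances tagged with the caller's department",
)["PermissionSet"]

sso_admin.put_inline_policy_to_permission_set(
    InstanceArn=instance_arn,
    PermissionSetArn=permission_set["PermissionSetArn"],
    InlinePolicy=json.dumps(abac_policy),
)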

If you have existing federation configured with AWS Security Token Service, remember that external identity providers consider AWS SSO as a new application configuration. This means when you move from direct IAM federation to AWS SSO, you have to update your external identity provider configuration to connect with AWS SSO and to introduce attributes as session tags for this configuration.

Available Today
There is no additional charge to configure user attributes with AWS Single Sign-On. You can start to use it today in all AWS Regions where AWS SSO is available.

— seb

New – Multi-Factor Authentication with WebAuthn for AWS SSO

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/multi-factor-authentication-with-webauthn-for-aws-sso/

Starting today, you can add WebAuthn as a new multi-factor authentication (MFA) option to AWS Single Sign-On, in addition to the currently supported one-time password (OTP) and RADIUS authenticators. By adding support for WebAuthn, a W3C specification developed in coordination with the FIDO Alliance, you can now authenticate with a wide variety of interoperable authenticators provisioned by your system administrator or built into your laptops or smartphones. For example, you can now tap a hardware security key, touch a fingerprint sensor on your Mac, or use facial recognition on your mobile device or PC to authenticate into the AWS Management Console or AWS Command Line Interface (CLI).

With this addition, you can now self-register multiple MFA authenticators. Doing so allows you to authenticate on AWS with another device in case you lose or misplace your primary authenticator device. We make it easy for you to name your devices for long-term manageability.

WebAuthn two-factor authentication is available for identities stored in the AWS Single Sign-On internal identity store and those stored in Microsoft Active Directory, whether it is managed by AWS or not.

What are WebAuthn and FIDO2?

Before exploring how to configure two-factor authentication using your FIDO2-enabled devices, and discovering the user experience for web-based and CLI authentication, let’s recap how FIDO2, WebAuthn, and other specifications fit together.

FIDO2 is made of two core specifications: Web Authentication (WebAuthn) and Client To Authenticator Protocol (CTAP).

Web Authentication (WebAuthn) is a W3C standard that provides strong authentication based upon public key cryptography. Unlike traditional code-generator tokens or apps using the TOTP protocol, it does not require sharing a secret between the server and the client. Instead, it relies on a public key pair and digital signatures of unique challenges. The private key never leaves a secured device, the FIDO-enabled authenticator. When you try to authenticate to a website, this secured device interacts with your browser using the CTAP protocol.

WebAuthn is strong: Authentication is ideally backed by a secure element, which can safely store private keys and perform the cryptographic operations. It is scoped: A key pair is only useful for a specific origin, like browser cookies. A key pair registered at console.amazonaws.com cannot be used at console.not-the-real-amazon.com, mitigating the threat of phishing. Finally, it is attested: Authenticators can provide a certificate that helps servers verify that the public key did in fact come from an authenticator they trust, and not a fraudulent source.

To start to use FIDO2 authentication, you therefore need three elements: a website that supports WebAuthn, a browser that supports WebAuthn and CTAP protocols, and a FIDO authenticator. Starting today, the SSO Management Console and CLI now support WebAuthn. All modern web browsers are compatible (Chrome, Edge, Firefox, and Safari). FIDO authenticators are either devices you can use from one device or another (roaming authenticators), such as a YubiKey, or built-in hardware supported by Android, iOS, iPadOS, Windows, Chrome OS, and macOS (platform authenticators).

How Does FIDO2 Work?
When I first register my FIDO-enabled authenticator with AWS SSO, the authenticator creates a new set of public key credentials that can be used to sign a challenge generated by the AWS SSO console (the relying party). The public part of these new credentials, along with the signed challenge, is stored by AWS SSO.

When I later use WebAuthn as a second authentication factor, the AWS SSO console sends a challenge to my authenticator. The authenticator signs this challenge with the private key of the previously generated credentials and sends it back to the console. This way, the AWS SSO console can verify that I hold the required credentials.
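To make the challenge-and-signature idea concrete, here is a deliberately simplified sketch using the Python cryptography library. Real WebAuthn adds CBOR encoding, origin binding, counters, and attestation on top of this, so treat it as an illustration of the underlying public key cryptography only.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# "Registration": the authenticator creates a key pair; only the public key is shared
private_key = ec.generate_private_key(ec.SECP256R1())   # stays inside the authenticator
public_key = private_key.public_key()                    # stored by the relying party (AWS SSO)

# "Authentication": the relying party sends a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the private key...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the relying party verifies the signature with the stored public key
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: the user holds the registered authenticator")
except InvalidSignature:
    print("Signature invalid")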

How Do I Enable MFA With a Secure Device in the AWS SSO Console?
You, the system administrator, can enable MFA for your AWS SSO workforce when the user profiles are stored in AWS SSO itself, or in your Active Directory, whether self-managed or an AWS Directory Service for Microsoft Active Directory.

To let my workforce register their FIDO or U2F authenticators in self-service mode, I first navigate to Settings and click Configure under Multi-Factor Authentication. On the following screen, I make four changes. First, under Users should be prompted for MFA, I select Every time they sign in. Second, under Users can authenticate with these MFA types, I check Security Keys and built-in authenticators. Third, under If a user does not yet have a registered MFA device, I check Require them to register an MFA device at sign in. Finally, under Who can manage MFA devices, I check Users can add and manage their own MFA devices. I click Save Changes to save and return.

Configure SSO 2

That’s it. Now your workforce is prompted to register their MFA device the next time they authenticate.

What Is the User Experience?
As an AWS console user, I authenticate on the AWS SSO portal page URL that I received from my System Administrator. I sign in using my user name and password, as usual. On the next screen, I am prompted to register my authenticator. I check Security Key as device type. To use a biometric factor such as fingerprints or face recognition, I would click Built-in authenticator.

Register MFA Device

The browser asks me to generate a key pair and to send my public key. I can do that just by touching a button on my device, or by providing the registered biometric, e.g. TouchID or FaceID.

Register a security key

The browser confirms and shows me a last screen where I have the possibility to give a friendly name to my device, so I can remember which one is which. Then I click Save and Done.

Confirm device registration

From now on, every time I sign in, I am prompted to touch my security device or use biometric authentication on my smartphone or laptop. What happens behind the scenes is the server sending a challenge to my browser. The browser sends the challenge to the security device. The security device uses my private key to sign the challenge and to return it to the server for verification. When the server validates the signature with my public key, I am granted access to the AWS Management Console.

Additional verification required

At any time, I can register additional devices and manage my registered devices. On the AWS SSO portal page, I click MFA devices on the top-right part of the screen.

MFA device management

I can see and manage the devices registered for my account, if any. I click Register device to register a new device.

How to Configure SSO for the AWS CLI?
Once my devices are configured, I can configure SSO on the AWS Command Line Interface (CLI).

I first configure CLI SSO with aws configure sso, and I enter the SSO domain URL that I received from my system administrator. The CLI opens a browser where I can authenticate with my user name, password, and the second-factor authentication configured previously. The web console gives me a code that I enter back into the CLI prompt.

aws configure sso

When I have access to multiple AWS Accounts, the CLI lists them and I choose the one I want to use. This is a one-time configuration.

Once this is done, I can use the aws CLI as usual; the SSO authentication happens automatically behind the scenes. You are asked to re-authenticate from time to time, depending on the configuration set by your system administrator.
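The same SSO session also works for SDK code. Assuming aws configure sso created a named profile (the profile name below is a placeholder), a recent version of boto3 can use it directly:

import boto3

# "my-sso-profile" is a placeholder: use the profile name created by `aws configure sso`
session = boto3.Session(profile_name="my-sso-profile")

# Any client created from this session authenticates through AWS SSO behind the scenes
sts = session.client("sts")
print(sts.get_caller_identity()["Account"])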

Available today
Just like AWS Single Sign-On, FIDO2 second-factor authentication is provided to you at no additional cost, and is available in all AWS Regions where AWS SSO is available.

As usual, we welcome your feedback. The team told me they are working on other features to offer you additional authentication options in the near future.

You can start to use FIDO2 as second factor authentication for AWS Single Sign-On today. Configure it now.

— seb

Lightsail Containers: An Easy Way to Run your Containers in the Cloud

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/lightsail-containers-an-easy-way-to-run-your-containers-in-the-cloud/

When I deliver an introduction to the AWS Cloud for developers, I usually spend a bit of time mentioning and demonstrating Amazon Lightsail. It is by far the easiest way to get started on AWS. It allows you to get your application running on your own virtual server in a matter of minutes. Today, we are adding the possibility to deploy your container-based workloads on Amazon Lightsail. You can now deploy your container images to the cloud with the same simplicity and the same bundled pricing that Amazon Lightsail provides for your virtual servers.

Amazon Lightsail is an easy-to-use cloud service that offers you everything needed to deploy an application or website, for a cost-effective and easy-to-understand monthly plan. It is ideal for deploying simple workloads or websites, or for getting started with AWS. Typical Lightsail customers range from developers to small businesses and startups looking to get started quickly in the cloud. At any time, you can adopt the broader set of AWS services as you become more familiar with the AWS cloud.

Under the hood, Lightsail is powered by Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS), Application Load Balancer, and other AWS services. It offers the level of security, reliability, and scalability you are expecting from AWS.

When deploying to Lightsail, you can choose between six operating systems (four Linux distributions, FreeBSD, or Windows), seven applications (such as WordPress, Drupal, Joomla, and Plesk), and seven stacks (such as Node.js, LAMP, GitLab, and Django). But what about Docker containers?

Starting today, Amazon Lightsail offers a simple way for developers to deploy their containers to the cloud. All you need to provide is a Docker image for your containers, and Lightsail runs it for you in the cloud. Amazon Lightsail gives you an HTTPS endpoint that is ready to serve your application running in the cloud container. It automatically sets up a load-balanced TLS endpoint and takes care of the TLS certificate. It replaces unresponsive containers for you automatically, assigns a DNS name to your endpoint, maintains the old version until the new version is healthy and ready to go live, and more.

Let’s see how it works by deploying a simple Python web app as a container. I assume you have the AWS Command Line Interface (CLI) and Docker installed on your laptop. Python is not required on your laptop; it is only installed inside the container.

I first create a Python REST API using the Flask application framework. Any programming language and any framework that can run inside a container works too. I just chose Python and Flask because they are simple and elegant.

You can safely copy/paste the following commands:

mkdir helloworld-python
cd helloworld-python
# create a simple Flask application in helloworld.py
echo "

from flask import Flask, request
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class Greeting (Resource):
   def get(self):
      return { "message" : "Hello Flask API World!" }
api.add_resource(Greeting, '/') # Route_1

if __name__ == '__main__':
   app.run('0.0.0.0','8080')

"  > helloworld.py

Then I create a Dockerfile that contains the steps and information required to build the container image:

# create a Dockerfile
echo '
FROM python:3
ADD helloworld.py /
RUN pip install flask
RUN pip install flask_restful
EXPOSE 8080
CMD [ "python", "./helloworld.py"]
 '  > Dockerfile

Now I can build my container:

docker build -t lightsail-hello-world .

The build command outputs many lines while it builds the container; it eventually terminates with the following message (the actual ID differs):

Successfully built 7848e055edff
Successfully tagged lightsail-hello-world:latest

I test the container by launching it on my laptop:

docker run -it --rm -p 8080:8080 lightsail-hello-world

and connect a browser to localhost:8080

Testing Flask API in the container

When I am satisfied with my app, I push the container to Docker Hub.

docker tag lightsail-hello-world sebsto/lightsail-hello-world
docker login
docker push sebsto/lightsail-hello-world

Now that I have a container ready on Docker Hub, let’s create a Lightsail Container Service.

I point my browser to the Amazon Lightsail console. I can see container services already deployed and I can manage them. To create a new service, I click Create container service:

Lightsail Container Console

On the next screen, I select the size of the container I want to use, in terms of vCPU and memory available to my application. I also select the number of container instances I want to run in parallel for high availability or scalability reasons. I can change the number of container instances or their power (vCPU and RAM) at any time, without interrupting the service. Both these parameters impact the price AWS charges you per month. The price is indicated and dynamically adjusted on the screen, as shown on the following video.

Lightsail choose capacity

Slightly lower on the screen, I choose to skip the deployment for now. I give a name to the service (“hello-world”). I click Create container service.

Lightsail container name

Once the service is created, I click Create your first deployment to create a deployment. A deployment is a combination of a specific container image and version to be deployed on the service I just created.

I choose a name for my image and give the address of the image on Docker Hub, using the format user/<my container name>:tag. This is also where I can enter environment variables, port mappings, or a launch command.

My container offers a network service on port TCP 8080, so I add that port to the deployment configuration. The Open Ports configuration specifies which ports and protocols are open to other systems in my container’s network. Other containers or virtual machines can only connect to my container when the port is explicitly configured in the console or EXPOSEd in my Dockerfile. None of these ports are exposed to the public internet.

But in this example, I also want Lightsail to route the traffic from the public internet to this container. So, I add this container as an endpoint of the hello-world service I just created. The endpoint is automatically configured for TLS, there is no certificate to install or manage.

I can add up to 10 containers for one single deployment. When ready, I click Save and deploy.

Lightsail Deployment

After a while, my deployment is active and I can test the endpoint.

Lightsail Deployment Active

The endpoint DNS address is available on the top-right side of the console. If I must, I can configure my own DNS domain name.

Lightsail endpoint DNS

I open another tab in my browser and point it at the HTTPS endpoint URL:

Testing Container Deployment

When I need to deploy a new version, I use the console again to modify the deployment. I spare you the details of modifying the application code, building, and pushing a new version of the container. Let’s say I have my second container image version available under the name sebsto/lightsail-hello-world:v2. Back in the Amazon Lightsail console, I click Deployments, then Modify your Deployments. I enter the full name, including the tag, of the new version of the container image and click Save and Deploy.

Lightsail Deploy updated Version

After a while, the new version is deployed and automatically activated.

Lightsail deployment successful

I open a new tab in my browser and point it to the endpoint URI available in the top-right corner of the Amazon Lightsail console. I observe that the JSON response is different: it now has a version attribute with a value of 2.

lightsail v2 is deployed

When something goes wrong during a deployment, Amazon Lightsail automatically keeps the last deployment active to avoid any service interruption. I can also manually activate a previous deployment version to revert any undesired changes.

I just deployed my first container image from Docker Hub. I can also manage my services and deploy local container images from my laptop using the AWS Command Line Interface (CLI). To push container images to my Amazon Lightsail container service directly from my laptop, I must install the Lightsail Control (lightsailctl) plugin. (TL;DR: curl, cp, and chmod are your friends here; I also maintain a Dockerfile to use the CLI inside a container.)

To create, list, or delete a container service, I type:

aws lightsail create-container-service --service-name myservice --power nano --scale 1

aws lightsail get-container-services
{
   "containerServices": [{
      "containerServiceName": "myservice",
      "arn": "arn:aws:lightsail:us-west-2:012345678901:ContainerService/1b50c121-eac7-4ee2-9078-425b0665b3d7",
      "createdAt": "2020-07-31T09:36:48.226999998Z",
      "location": {
         "availabilityZone": "all",
         "regionName": "us-west-2"
      },
      "resourceType": "ContainerService",
      "power": "nano",
      "powerId": "",
      "state": "READY",
      "scale": 1,
      "privateDomainName": "",
      "isDisabled": false,
      "roleArn": ""
   }]
}

aws lightsail delete-container-service --service-name myservice

I can also use the CLI to deploy container images directly from my laptop. Be sure lightsailctl is installed.

# Build the new version of my image (v3)
docker build -t sebsto/lightsail-hello-world:v3 .

# Push the new image.
aws lightsail push-container-image --service-name hello-world --label hello-world --image sebsto/lightsail-hello-world:v3

After a while, I see the output:

Image "sebsto/lightsail-hello-world:v3" registered.
Refer to this image as ":hello-world.hello-world.1" in deployments.

I create an lc.json file to hold the details of the deployment configuration. It is aligned with the options I see in the console. I use the name given by the previous command in the image property:

{
  "serviceName": "hello-world",
  "containers": {
     "hello-world": {
        "image": ":hello-world.hello-world.1",
        "ports": {
           "8080": "HTTP"
        }
     }
  },
  "publicEndpoint": {
     "containerName": "hello-world",
     "containerPort": 8080
  }
}

Finally, I create a new service version with:
aws lightsail create-container-service-deployment --cli-input-json file://lc.json

I can query the deployment status with
aws lightsail get-container-services

...
"nextDeployment": {
   "version": 4,
   "state": "ACTIVATING",
   "containers": {
      "hello-world": {
      "image": ":hello-world.hello-world.1",
      "command": [],
      "environment": {},
      "ports": {
         "8080": "HTTP"
      }
     }
},
...

After a while, the status becomes ACTIVE, and I can test my endpoint.

curl https://hello-world.nxxxxxxxxxxx.lightsail.ec2.aws.dev/
{"message": "Hello Flask API World!", "version": 3}

If you plan to later deploy your container to Amazon ECS or Amazon Elastic Kubernetes Service, no changes are required. You can pull the container image from your repository, just like you do with Amazon Lightsail.

You can deploy your containers on Lightsail in all AWS Regions where Amazon Lightsail is available. As of today, this is US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Paris).

As usual with Amazon Lightsail, pricing is easy to understand and predictable. Amazon Lightsail Containers have a fixed price per month per container, depending on the size of the container (the vCPU/memory combination you use). You are charged for the prorated hours you keep the service running. The price per month is the maximum price you will be charged for running your service 24/7. Prices are identical in all AWS Regions. They range from $7 per month for a Nano container (512 MB of memory and 0.25 vCPU) to $160 per month for an X-Large container (8 GB of memory and 4 vCPU cores). This price includes not only the container itself, but also the load balancer, the DNS, and a generous data transfer tier. The details and prices for other AWS Regions are on the Lightsail pricing page.

I can’t wait to discover what solutions you will build and deploy on Amazon Lightsail Containers!

— seb

Majority of Alexa Now Running on Faster, More Cost-Effective Amazon EC2 Inf1 Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/

Today, we are announcing that the Amazon Alexa team has migrated the vast majority of their GPU-based machine learning inference workloads to Amazon Elastic Compute Cloud (EC2) Inf1 instances, powered by AWS Inferentia. This resulted in 25% lower end-to-end latency, and 30% lower cost compared to GPU-based instances for Alexa’s text-to-speech workloads. The lower latency allows Alexa engineers to innovate with more complex algorithms and to improve the overall Alexa experience for our customers.

AWS built AWS Inferentia chips from the ground up to provide the lowest-cost machine learning (ML) inference in the cloud. They power the Inf1 instances that we launched at AWS re:Invent 2019. Inf1 instances provide up to 30% higher throughput and up to 45% lower cost per inference compared to GPU-based G4 instances, which were, before Inf1, the lowest-cost instances in the cloud for ML inference.

Alexa is Amazon’s cloud-based voice service that powers Amazon Echo devices and more than 140,000 models of smart speakers, lights, plugs, smart TVs, and cameras. Today, customers have connected more than 100 million devices to Alexa. And every month, tens of millions of customers interact with Alexa to control their home devices (“Alexa, increase temperature in living room,” “Alexa, turn off bedroom”), to listen to radios and music (“Alexa, start Maxi 80 on bathroom,” “Alexa, play Van Halen from Spotify”), to be informed (“Alexa, what is the news?” “Alexa, is it going to rain today?”), or to be educated or entertained with 100,000+ Alexa Skills.

If you ask Alexa where she lives, she’ll tell you she is right here, but her head is in the cloud. Indeed, Alexa’s brain is deployed on AWS, where she benefits from the same agility, large-scale infrastructure, and global network we built for our customers.

How Alexa Works
When I’m in my living room and ask Alexa about the weather, I trigger a complex system. First, the on-device chip detects the wake word (Alexa). Once detected, the microphones record what I’m saying and stream the sound for analysis in the cloud. At a high level, there are two phases to analyze the sound of my voice. First, Alexa converts the sound to text. This is known as Automatic Speech Recognition (ASR). Once the text is known, the second phase is to understand what I mean. This is Natural Language Understanding (NLU). The output of NLU is an Intent (what the customer wants) and associated parameters. In this example (“Alexa, what’s the weather today?”), the intent might be “GetWeatherForecast” and the parameter can be my postcode, inferred from my profile.

This whole process uses Artificial Intelligence heavily to transform the sound of my voice to phonemes, phonemes to words, words to phrases, phrases to intents. Based on the NLU output, Alexa routes the intent to a service to fulfill it. The service might be internal to Alexa or external, like one of the skills activated on my Alexa account. The fulfillment service processes the intent and returns a response as a JSON document. The document contains the text of the response Alexa must say.
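To make this flow more concrete, here is a purely illustrative sketch, written as Python dictionaries, of the kind of intent and fulfillment response described above. The field names are hypothetical and are not Alexa’s actual wire format.

# Purely illustrative: hypothetical field names, not Alexa's actual wire format.
intent = {
    "intent": "GetWeatherForecast",            # what the customer wants
    "parameters": {"postalCode": "1000"},      # inferred from my profile
}

# The fulfillment service answers with a document containing the text Alexa must say.
fulfillment_response = {
    "outputText": "The weather today will be partly cloudy, "
                  "with highs of 16 degrees and lows of 8 degrees.",
}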

The last step of the process is to generate the voice of Alexa from the text. This is known as Text-To-Speech (TTS). As soon as the TTS starts to produce sound data, it is streamed back to my Amazon Echo device: “The weather today will be partly cloudy with highs of 16 degrees and lows of 8 degrees.” (I live in Europe; these are degrees Celsius 🙂 ). This Text-To-Speech process also heavily involves machine learning models to build a phrase that sounds natural in terms of pronunciation, rhythm, connections between words, intonation, etc.

Alexa is one of the most popular hyperscale machine learning services in the world, with billions of inference requests every week. Of Alexa’s three main inference workloads (ASR, NLU, and TTS), TTS workloads initially ran on GPU-based instances. But the Alexa team decided to move to the Inf1 instances as fast as possible to improve the customer experience and reduce the service compute cost.

What is AWS Inferentia?
AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.

AWS Inferentia can be used natively from popular machine learning frameworks like TensorFlow, PyTorch, and MXNet, with AWS Neuron. AWS Neuron is a software development kit (SDK) for running machine learning inference using AWS Inferentia chips. It consists of a compiler, runtime, and profiling tools that enable you to run high-performance and low-latency inference.
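To give you an idea of the developer experience, here is a minimal sketch of compiling a PyTorch model for Inferentia with the torch-neuron package from the Neuron SDK; ResNet-50 and the input shape are only examples, not a prescription.

# A minimal sketch, assuming the torch-neuron package from the AWS Neuron SDK
# is installed; ResNet-50 and the input shape are only examples.
import torch
import torch_neuron          # registers the torch.neuron namespace
from torchvision import models

model = models.resnet50(pretrained=True)
model.eval()
example = torch.zeros([1, 3, 224, 224])        # example input used for tracing

# Compile the model ahead of time for the NeuronCores of an Inf1 instance
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("resnet50_neuron.pt")

# On the Inf1 instance, load and run it like any TorchScript model
loaded = torch.jit.load("resnet50_neuron.pt")
prediction = loaded(example)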

Who Else is Using Amazon EC2 Inf1?
In addition to Alexa, Amazon Rekognition is also adopting AWS Inferentia. Running models such as object classification on Inf1 instances resulted in 8x lower latency and doubled throughput compared to running these models on GPU instances.

Customers, from Fortune 500 companies to startups, are using Inf1 instances for machine learning inference. For example, Snap Inc. incorporates machine learning (ML) into many aspects of Snapchat, and exploring innovation in this field is a key priority for them. Once they heard about AWS Inferentia, they collaborated with AWS to adopt Inf1 instances for their ML deployments, attracted by the performance and cost benefits. They started with their recommendation models inference, and are now looking forward to deploying more models on Inf1 instances in the future.

Condé Nast, one of the world’s leading media companies, saw a 72% reduction in cost of inference compared to GPU-based instances for its recommendation engine. And Anthem, one of the leading healthcare companies in the US, observed 2x higher throughput compared to GPU-based instances for its customer sentiment machine learning workload.

How to Get Started with Amazon EC2 Inf1
You can start using Inf1 instances today.

If you prefer to manage your own machine learning application development platforms, you can get started by either launching Inf1 instances with AWS Deep Learning AMIs, which include the Neuron SDK, or you can use Inf1 instances via Amazon Elastic Kubernetes Service or Amazon ECS for containerized machine learning applications. To learn more about running containers on Inf1 instances, read this blog to get started on ECS and this blog to get started on EKS.

The easiest and quickest way to get started with Inf1 instances is via Amazon SageMaker, a fully managed service that enables developers to build, train, and deploy machine learning models quickly.
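For example, here is a minimal sketch with the SageMaker Python SDK that deploys a Neuron-compiled PyTorch model to an Inf1-based endpoint. The S3 path, IAM role, entry point script, and dummy payload are hypothetical placeholders, not real resources.

# A minimal sketch, assuming the SageMaker Python SDK; the S3 path, role ARN,
# entry point, and payload are hypothetical placeholders.
import numpy as np
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data="s3://my-bucket/resnet50_neuron.tar.gz",   # Neuron-compiled model artifact
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    entry_point="inference.py",                           # your inference handler
    framework_version="1.5.1",
    py_version="py3",
)

# Deploy on an Inf1 instance type to serve predictions on AWS Inferentia
predictor = model.deploy(initial_instance_count=1, instance_type="ml.inf1.xlarge")

payload = np.zeros((1, 3, 224, 224), dtype="float32")     # dummy input, same shape as the traced model
result = predictor.predict(payload)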

Get started with Inf1 on Amazon SageMaker today.

— seb

PS: The team just released this video, check it out!

New – Amazon RDS on Graviton2 Processors

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-rds-on-graviton2-processors/

I recently wrote a post to announce the availability of the M6g, R6g and C6g families of instances on Amazon Elastic Compute Cloud (EC2). These instances offer a better cost-performance ratio than their x86 counterparts. They are based on AWS-designed AWS Graviton2 processors, utilizing 64-bit Arm Neoverse N1 cores.

Starting today, you can also benefit from better cost-performance for your Amazon Relational Database Service (RDS) databases, compared to the previous M5 and R5 generation of database instance types, with the availability of AWS Graviton2 processors for RDS. You can choose between M6g and R6g instance families and three database engines (MySQL 8.0.17 and higher, MariaDB 10.4.13 and higher, and PostgreSQL 12.3 and higher).

M6g instances are ideal for general purpose workloads. R6g instances offer 50% more memory than their M6g counterparts and are ideal for memory intensive workloads, such as Big Data analytics.

Graviton2 instances provide up to 35% performance improvement and up to 52% price-performance improvement for RDS open source databases, based on internal testing of workloads with varying characteristics of compute and memory requirements.

The Graviton2 instance family includes several new performance optimizations, such as larger L1 and L2 caches per core, higher Amazon Elastic Block Store (EBS) throughput than comparable x86 instances, fully encrypted RAM, and many others, as detailed on this page. You can benefit from these optimizations with minimal effort, by provisioning or migrating your RDS instances today.

RDS instances are available in multiple configurations, starting with 2 vCPUs, 8 GiB of memory for M6g and 16 GiB of memory for R6g, and up to 10 Gbps of network bandwidth, giving you new entry-level general-purpose and memory-optimized instances. The table below lists the instance sizes available to you:

Instance Size | vCPU | Memory M6g (GiB) | Memory R6g (GiB) | Dedicated EBS Bandwidth (Mbps) | Network Bandwidth (Gbps)
large | 2 | 8 | 16 | Up to 4,750 | Up to 10
xlarge | 4 | 16 | 32 | Up to 4,750 | Up to 10
2xlarge | 8 | 32 | 64 | Up to 4,750 | Up to 10
4xlarge | 16 | 64 | 128 | 4,750 | Up to 10
8xlarge | 32 | 128 | 256 | 9,000 | 12
12xlarge | 48 | 192 | 384 | 13,500 | 20
16xlarge | 64 | 256 | 512 | 19,000 | 25

Let’s Start Your First Graviton2 Based Instance
To start a new RDS instance, I use the AWS Management Console or the AWS Command Line Interface (CLI), just like usual, and select one of the db.m6g or db.r6g instance types (this page in the documentation has all the details).

RDS Launch Graviton2 instance

Using the CLI, it would be:

aws rds create-db-instance \
    --region us-west-2 \
    --db-instance-identifier $DB_INSTANCE_NAME \
    --db-instance-class db.m6g.large \
    --engine postgres \
    --engine-version 12.3 \
    --allocated-storage 20 \
    --master-username $MASTER_USER \
    --master-user-password $MASTER_PASSWORD

The CLI confirms with:

{
    "DBInstance": {
        "DBInstanceIdentifier": "newsblog",
        "DBInstanceClass": "db.m6g.large",
        "Engine": "postgres",
        "DBInstanceStatus": "creating",
...
}

Migrating to Graviton2 instances is easy, in the AWS Management Console, I select my database and I click Modify.

Modify RDS database

Then I select the new DB instance class:

modify db instance

Or, using the CLI, I can use the modify-db-instance API call.

There is a short service interruption when you switch the instance type. By default, the modification happens during your next maintenance window, unless you enable the ApplyImmediately option.
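If you prefer an SDK, here is a minimal sketch of the same modification with boto3 (the Python SDK). The instance identifier matches the example above, and ApplyImmediately skips the maintenance window.

# A minimal sketch with boto3; "newsblog" is the instance created earlier.
import boto3

rds = boto3.client("rds", region_name="us-west-2")

rds.modify_db_instance(
    DBInstanceIdentifier="newsblog",
    DBInstanceClass="db.m6g.large",     # target Graviton2 instance class
    ApplyImmediately=True,              # apply now instead of waiting for the maintenance window
)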

You can provision new Graviton2 Amazon Relational Database Service (RDS) instances, or migrate to them, in all Regions where EC2 M6g and R6g are available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Frankfurt).

As usual, let us know your feedback on the AWS Forum or through your usual AWS contact.

— seb

New – Amazon EC2 Instances based on AWS Graviton2 with local NVMe-based SSD storage

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-graviton2-instance-types-c6g-r6g-and-their-d-variant/

A few weeks ago, I wrote a post to announce the new AWS Graviton2 Amazon Elastic Compute Cloud (EC2) instance type, the M6g. Since then, hundreds of customers have observed significant cost-performance benefits. These include Honeycomb.io, SmugMug, Redbox, and Valnet Inc.

On June 11, we announced two new families of instances based on AWS Graviton2 processors: the C6g and R6g, in addition to the existing M6g. The M family instances are workhorses intended to address a broad array of general-purpose workloads such as application servers, gaming servers, midsize databases, caching fleets, and web tiers. The C family of instances is well suited for compute-intensive workloads, such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modeling, distributed analytics, and CPU-based machine learning inference. The R family of instances is well suited for memory-intensive workloads, such as open-source databases, in-memory caches, or real-time big data analysis. To learn more about these instance types and why they are relevant for your workloads, you can hear more about them in this video from James Hamilton, VP & Distinguished Engineer at AWS.

Today, I have additional news to share with you: we are adding a “d” variant to all three families. The M6gd, C6gd, and R6gd instance types have NVM Express (NVMe) locally attached SSD drives, up to 2 x 1.9 TB. They offer 50% more storage GB/vCPU compared to M5d, C5d, and R5d instances. These are a great fit for applications that need access to high-speed, low latency local storage including those that need temporary storage of data for scratch space, temporary files, and caches. The data on an instance store volume persists only during the life of the associated EC2 instance.

Instance types based on Graviton2 processors deliver up to 40% better price performance than their equivalent x86-based M5, C5, and R5 families.

All AWS Graviton2 based instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software that allows the delivery of isolated multi-tenancy, private networking, and fast local storage. These instances provide up to 19 Gbps Amazon Elastic Block Store (EBS) bandwidth and up to 25 Gbps network bandwidth.

Partner ecosystem support for these Arm-based AWS Graviton2 instances is robust, from Linux distributions (Amazon Linux 2, CentOS, Debian, Fedora, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, or FreeBSD), to language runtimes (Java with Amazon Corretto, Node.js, Python, Go,…), container services (Docker, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, Amazon Elastic Container Registry), agents (Amazon CloudWatch, AWS Systems Manager, Amazon Inspector), developer tools (AWS Code Suite, Jenkins, GitLab, Chef, Drone.io, Travis CI), and security & monitoring solutions (such as Datadog, Dynatrace, Crowdstrike, Qualys, Rapid7, Tenable, or Honeycomb.io).

Here is a table to summarize the technical characteristics of each instance type in these families.

Instance Size | vCPU | Memory M6g / R6g / C6g (GiB) | Local Storage, "d" variants M6gd / R6gd / C6gd | Network Bandwidth (Gbps) | EBS Bandwidth (Mbps)
medium | 1 | 4 / 8 / 2 | 1 x 59 GB NVMe | Up to 10 | Up to 4,750
large | 2 | 8 / 16 / 4 | 1 x 118 GB NVMe | Up to 10 | Up to 4,750
xlarge | 4 | 16 / 32 / 8 | 1 x 237 GB NVMe | Up to 10 | Up to 4,750
2xlarge | 8 | 32 / 64 / 16 | 1 x 474 GB NVMe | Up to 10 | Up to 4,750
4xlarge | 16 | 64 / 128 / 32 | 1 x 950 GB NVMe | Up to 10 | 4,750
8xlarge | 32 | 128 / 256 / 64 | 1 x 1900 GB NVMe | 12 | 9,000
12xlarge | 48 | 192 / 384 / 96 | 2 x 1425 GB NVMe | 20 | 13,500
16xlarge | 64 | 256 / 512 / 128 | 2 x 1900 GB NVMe | 25 | 19,000
metal | 64 | 256 / 512 / 128 | 2 x 1900 GB NVMe | 25 | 19,000

These M6g, C6g, and R6g families of instances are available for you today in the following AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). Their disk-based variants M6gd, C6gd, and R6gd are available in US East (N. Virginia), US West (Oregon), US East (Ohio) and Europe (Ireland) AWS Regions.

If you are optimizing applications for Arm architecture, be sure to have a look at our Getting Started collection of resources or learn more about AWS Graviton2 based EC2 instances.

Let us know your feedback!

— seb

Amazon EBS Fast Snapshot Restore for Shared EBS Snapshots

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-ebs-fast-snapshot-restore-for-shared-ebs-snapshots/

Snapshots are an integral part of Amazon Elastic Block Store (EBS). Snapshots allow you to create a block-level, point-in-time copy of your volumes for backup or disaster-recovery purposes. Snapshots are incremental: only the data modified since the last snapshot is copied again. You can copy snapshots between AWS Regions or share them with other AWS accounts. Once you have a snapshot, you can create a new Amazon Elastic Block Store (EBS) volume from it. The new volume begins as an exact replica of the original volume that was used to create the snapshot.

When you restore volumes from snapshots, they are available for use almost instantaneously. In the background, EBS lazily loads the data from the snapshot as the operating system accesses the blocks, which reduces the I/O performance of the volume until it is fully initialized. Some I/O-demanding workloads, however, need the volume to operate at full capacity as soon as it is available. This is why we introduced Fast Snapshot Restore (FSR). Once enabled, FSR lets you create volumes that deliver their maximum performance immediately and do not need to be initialized.

Many AWS customers share their snapshots with other AWS accounts, and there are many reasons to do this. You might want to centrally prepare and manage golden AMIs, with your applications, monitoring, or management tools pre-baked. In the context of Disaster Recovery (DR), your company policies might require you to store all backups in one dedicated account. Until today, only the AWS account owning the snapshot could enable FSR.

Today, you can enable Fast Snapshot Restore (FSR) on snapshots shared with you.

To enable FSR on a shared snapshot, I first create a snapshot on the source AWS Account. Once the snapshot is created, I share it with another account of mine. To do so, I click Actions, and Modify Permissions. I enter the destination AWS Account Number, click Add Permission and Save.

EBS Share Snapshots

I connect to the destination account and navigate to the EC2 console. If the snapshot is not visible, I check that the Private Snapshots option is selected.

EBS Private Snapshots

I select the snapshot I want to be available for FSR and select Actions, then Manage Fast Snapshot Restore.

EBS Enable fast snapshot restore

I select the Availability Zones where I want to be able to fast restore my snapshot and click Save.

Enable EBS Fast Snapshot Restore

After the settings are saved, I receive a confirmation:

FSR Restore Confirmation

The snapshot stays in enabling mode for a couple of minutes and then becomes enabled. Once done, you can create Amazon Elastic Block Store (EBS) volumes from it. The volumes are fully initialized.
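For example, here is a minimal sketch with boto3 (the Python SDK) that creates a fully initialized volume from the shared snapshot, in one of the Availability Zones where FSR was enabled; the identifiers match the CLI examples below.

# A minimal sketch with boto3; identifiers match the CLI examples below.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

volume = ec2.create_volume(
    SnapshotId="snap-0b00000000d9",     # the shared snapshot with FSR enabled
    AvailabilityZone="us-west-1a",      # an AZ where FSR was enabled
    VolumeType="gp2",
)
print(volume["VolumeId"])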

You can also enable and manage FSR from the API or the AWS Command Line Interface (CLI).

aws ec2 enable-fast-snapshot-restores         \
    --source-snapshot-ids snap-0b00000000d9   \
    --availability-zones us-west-1a           \
    --region us-west-1

{
    "Successful": [
        {
            "SnapshotId": "snap-0b00000000d9",
            "AvailabilityZone": "us-west-1a",
            "State": "enabling",
            "StateTransitionReason": "Client.UserInitiated",
            "OwnerId": "00123456789",
            "EnablingTime": "2020-06-26T16:40:19.720000+00:00"
        }
    ],
    "Unsuccessful": []
}

At any moment, I can check which volumes I restored from an FSR-enabled snapshot.

aws ec2 describe-volumes --filters Name=fast-restored,Values=true

{
    "Volumes": [
        {
            "Attachments": [],
            "AvailabilityZone": "us-west-1a",
            "CreateTime": "2020-01-26T00:34:11.093Z",
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/8c5b2c63-0000-0000-0000-5513e232e843",
            "Size": 20,
            "SnapshotId": "snap-0b00000000d9",
            "State": "available",
            "VolumeId": "vol-0d000000000000b0",
            "Iops": 100,
            "VolumeType": "gp2",
            "FastRestored": true
        }
    ]
}

The AWS Account where you enable Fast Snapshot Restore is charged an hourly price. The owner of the snapshot is not charged for enabling FSR in another AWS Account. When the owner of your shared snapshot deletes the snapshot or stops sharing the snapshot with you, the FSR for your shared snapshot is automatically disabled and FSR billing for the snapshot is terminated.

You can enable Fast Snapshot Restore in all commercial AWS Regions today.

As usual, let us know your feedback by posting messages on the AWS Forum, or leave a comment on this post.

— seb

Create Snapshots From Any Block Storage Using EBS Direct APIs

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-create-snapshots-from-any-block-storage/

I am excited to announce you can now create Amazon Elastic Block Store (EBS) snapshots from any block storage data, such as on-premises volumes, volumes from another cloud provider, existing block data stored on Amazon Simple Storage Service (S3), or even your own laptop 🙂

AWS customers using the cloud for disaster recovery of on-premises infrastructure all have the same question: how can I transfer my on-premises volume data to the cloud efficiently and at low cost? You usually create temporary Amazon Elastic Compute Cloud (EC2) instances, attach Amazon Elastic Block Store (EBS) volumes, transfer the data at block level from on-premises to these new Amazon Elastic Block Store (EBS) volumes, take a snapshot of every EBS volume created, and tear down the temporary infrastructure. Some of you choose to use CloudEndure to simplify this process. Or maybe you just gave up and did not copy your on-premises volumes to the cloud because of the complexity.

To simplify this, today we are announcing three new APIs that are part of the EBS direct APIs, a set of APIs we announced at re:Invent 2019. We initially launched read and diff APIs; we extend them today with write capabilities. These three new APIs allow you to create Amazon Elastic Block Store (EBS) snapshots from your on-premises volumes, or from any block storage data that you want to be able to store and recover in AWS.

With the addition of write capability in EBS direct API, you can now create new snapshots from your on-premises volumes, or create incremental snapshots, and delete them. Once a snapshot is created, it has all the benefits of snapshots created from Amazon Elastic Block Store (EBS) volumes. You can copy them, share them between AWS Accounts, keep them available for a Fast Snapshot Restore, or create Amazon Elastic Block Store (EBS) volumes from them.

Having Amazon Elastic Block Store (EBS) snapshots created from any volumes, without the need to spin up Amazon Elastic Compute Cloud (EC2) instances and Amazon Elastic Block Store (EBS) volumes, allows you to simplify and to lower the cost of the creation and management of your disaster recovery copy in the cloud.

Let’s have a closer look at the API
You first call StartSnapshot to create a new snapshot. When the snapshot is incremental, you pass the ID of the parent snapshot. You can also pass additional tags to apply to the snapshot, or encrypt these snapshots and manage the key, just like usual. If you choose to encrypt snapshots, be sure to check our technical documentation to understand the nuances and options.

Then, for each block of data, you call PutSnapshotBlock. This API has 6 mandatory parameters: snapshot-id, block-index, block-data, block-length, checksum, and checksum-algorithm. The API supports blocks of 512 KiB. You can send your blocks in any order, and in parallel; block-index keeps the order correct.

After you send all the blocks, you call CompleteSnapshot with the changed-blocks-count parameter set to the number of blocks you sent.

Let’s put all these together
Here is the pseudo code you must write to create a snapshot.

// 1. Create the EBS direct API client
AmazonEBS amazonEBS = AmazonEBSClientBuilder.standard()
   .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpointName, awsRegion))
   .withCredentials(credentialsProvider)
   .build();

// 2. Start the snapshot and keep its ID
response = amazonEBS.startSnapshot(startSnapshotRequest)
snapshotId = response.getSnapshotId();

// 3. Send every changed block (blocks can be sent in any order, and in parallel)
for each (block in changeset) {
    putResponse = amazonEBS.putSnapshotBlock(putSnapshotBlockRequest);
}

// 4. Finalize the snapshot with the number of blocks sent
amazonEBS.completeSnapshot(completeSnapshotRequest);
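For reference, here is a minimal sketch of the same flow with boto3 (the Python SDK). The volume size and the read_changed_blocks() helper, which yields (block index, 512 KiB of data) pairs, are hypothetical placeholders for your own block source.

# A minimal sketch with boto3; VOLUME_SIZE_GIB and read_changed_blocks() are
# hypothetical placeholders for your own block source.
import base64
import hashlib
import boto3

BLOCK_SIZE = 512 * 1024       # EBS direct APIs work with 512 KiB blocks
VOLUME_SIZE_GIB = 20          # size of the volume being captured

ebs = boto3.client("ebs")

# 1. Start the snapshot (pass ParentSnapshotId here for an incremental snapshot)
snapshot_id = ebs.start_snapshot(
    VolumeSize=VOLUME_SIZE_GIB,
    Description="snapshot of my on-premises volume",
)["SnapshotId"]

# 2. Upload each changed block; blocks can be sent in any order, and in parallel
changed_blocks = 0
for block_index, data in read_changed_blocks(BLOCK_SIZE):
    ebs.put_snapshot_block(
        SnapshotId=snapshot_id,
        BlockIndex=block_index,
        BlockData=data,
        DataLength=len(data),
        Checksum=base64.b64encode(hashlib.sha256(data).digest()).decode(),
        ChecksumAlgorithm="SHA256",
    )
    changed_blocks += 1

# 3. Finalize the snapshot with the number of blocks sent
ebs.complete_snapshot(SnapshotId=snapshot_id, ChangedBlocksCount=changed_blocks)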

As usual, when using this code, you must have appropriate IAM policies allowing calls to the new APIs. For example:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ebs:StartSnapshot",
            "ebs:PutSnapshotBlock",
            "ebs:CompleteSnapshot"
        ],
        "Resource": "arn:aws:ec2:<Region>::snapshot/*"
    }]
}

Also include the relevant KMS permissions when creating encrypted snapshots.

In addition to the storage cost for snapshots, there is a charge per API call when you call PutSnapshotBlock.

These new snapshot APIs are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).

You can start to use them today.

— seb

Single Sign-On between Okta Universal Directory and AWS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/single-sign-on-between-okta-universal-directory-and-aws/

Enterprises adopting the AWS Cloud want to effectively manage identities. Having one central place to manage identities makes it easier to enforce policies, to manage access permissions, and to reduce the overhead by removing the need to duplicate users and user permissions across multiple identity silos. Having a unique identity also simplifies access for all of us, the users. We all have access to multiple systems, and we all have trouble remembering multiple distinct passwords. Being able to connect to multiple systems using a single combination of user name and password is a daily security and productivity gain. Being able to link an identity from one system with an identity managed on another trusted system is known as “Identity Federation“, of which single sign-on is a subset. Identity Federation is made possible thanks to industry standards such as Security Assertion Markup Language (SAML), OAuth, OpenID and others.

Recently, we announced a new evolution of AWS Single Sign-On, allowing you to link AWS identities with Azure Active Directory identities. We did not stop there. Today, we are announcing the integration of AWS Single Sign-On with Okta Universal Directory.

Let me show you the experience for System Administrators, then I will demonstrate the single sign-on experience for the users.

First, let’s imagine that I am an administrator for an enterprise that already uses Okta Universal Directory to manage my workforce identities. Now I want to enable simple and easy-to-use access to our AWS environments for my users, using their existing identities. Like most enterprises, I manage multiple AWS accounts. I want more than just a single sign-on solution; I want to manage access to my AWS accounts centrally. I do not want to duplicate my Okta groups and user memberships by hand, nor maintain multiple identity systems (Okta Universal Directory and one for each AWS account I manage). I want to enable automatic user synchronization between Okta and AWS. My users will sign in to the AWS environments using the experience they are already familiar with in Okta.

Connecting Okta as an identity source for AWS Single Sign-On
The first step is to add AWS Single Sign-On as an “application” Okta users can connect to. I navigate to the Okta administration console and login with my Okta administrator credentials, then I navigate to the Applications tab.

Okta admin console

I click the green Add Application button and I search for the AWS SSO application. I click Add.

Okta add application

I enter a name for the app (you can choose whatever name you like) and click Done.

On the next screen, I configure the mutual agreement between AWS Single Sign-On and Okta. I first download the SAML metadata file generated by Okta by clicking the blue Identity Provider Metadata link. I keep this file; I need it later to configure the AWS side of the single sign-on.

Okta Identity Provider metadata

Now that I have the metadata file, I open the AWS Management Console in a new tab. I keep the Okta tab open, as the procedure is not finished there yet. I navigate to AWS Single Sign-On and click Enable AWS SSO.

I click Settings in the navigation panel. I first set the Identity source by clicking the Change link and selecting External identity provider from the list of options. Secondly, I browse to and select the XML file I downloaded from Okta in the Identity provider metadata section.

SSO configure metadata

I click Next: Review, enter CONFIRM in the provided field, and finally click Change identity source to complete the AWS Single Sign-On side of the process. I take note of the two values AWS SSO ACS URL and AWS SSO Issuer URL as I must enter these in the Okta console.

AWS SSO Save URLs

I return to the tab I left open on my Okta console, and copy the values for AWS SSO ACS URL and AWS SSO Issuer URL.

OKTA ACS URLs

I click Save to complete the configuration.

Configuring Automatic Provisioning
Now that Okta is configured for single sign-on for my users to connect using AWS Single Sign-On, I’m going to enable automatic provisioning of user accounts. As new accounts are added to Okta and assigned to the AWS SSO application, a corresponding AWS Single Sign-On user is created automatically. As an administrator, I do not need to do any work to configure a corresponding account in AWS to map to the Okta user.

From the AWS Single Sign-On Console, I navigate to Settings and then click the Enable identity synchronization link. This opens a dialog containing the values for the SCIM endpoint and an OAuth bearer access token (hidden by default). I need both of these values to use in the Okta application settings.

AWS SSO SCIM

I switch back to the tab open on the Okta console, and click the Provisioning tab under the AWS SSO application. I select Enable API Integration. Then I copy and paste the two values: in Base URL, I paste the SCIM endpoint from the AWS Single Sign-On console, and in API Token, I paste the Access token.

Okta API Integration

I click Test API Credentials to verify everything works as expected. Then I click To App to enable user creation, update, and deactivation.

Okta Provisioning To App

With provisioning enabled, my final task is to assign the users and groups that I want to synchronize from Okta to AWS Single Sign-On. I click the Assignments tab and add Okta users and groups. I click Assign, and I select the Okta users and groups I want to have access to AWS.

OKTA Assignments

These users are synchronized to AWS Single Sign-On, and the users now see the AWS Single Sign-On application appear in their Okta portal.

Okta Portal User View

To verify that user synchronization is working, I switch back to the AWS Single Sign-On console and select the Users tab. The users I assigned in the Okta console are present.

AWS SSO User View

I Configured Single Sign-On, Now What?
Okta is now my single source of truth for my user identities and their assignment into groups, and periodic synchronization automatically creates corresponding identities in AWS Single Sign-On. My users sign in to their AWS accounts and applications with their Okta credentials and experience, and don’t have to remember an additional user name or password. However, as things stand, my users only have access to sign in. To manage what they can access once signed in to AWS, I must set up permissions in AWS Single Sign-On.

Back in the AWS SSO console, I click AWS Accounts in the left navigation bar and select the account from my AWS Organizations hierarchy that I am giving access to. For enterprises with multiple accounts for multiple applications or environments, this gives you the granularity to grant access to a subset of your AWS accounts.

AWS SSO Select AWS Account

I click Assign users to assign SSO users or groups to a set of IAM permissions. For this example, I assign just one user, the one with the @example.com email address.

Assign SSO Users

I click Next: Permission sets and Create new permission set to create a set of IAM policies describing the permissions I am granting to these Okta users. For this example, I grant a read-only permission on all AWS services.

SSO Permission set

And voilà, I am ready to test this setup.

SSO User Experience for the console
Now that I showed you the steps System Administrators take to configure the integration, let me show you what is the user experience.

As an AWS account user, I can sign in with Okta and get access to my AWS Management Console. I can start either from the AWS Single Sign-On user portal (the URL is on the AWS Single Sign-On settings page) or from the Okta user portal page, where I select the AWS SSO app.

I choose to start from the AWS SSO User Portal. I am redirected to the Okta login page. I enter my Okta credentials and I land on the AWS Account and Role selection page. I click on AWS Account, select the account I want to log into, and click Management console. After a few additional redirections, I land on the AWS Console page.

SSO User experience

SSO User Experience for the CLI
System administrators, DevOps engineers, Developers, and your automation scripts are not using the AWS console. They use the AWS Command Line Interface (CLI) instead. To configure SSO for the command line, I open a terminal and type aws configure sso. I enter the AWS SSO User Portal URL and the Region.

$ aws configure sso
SSO start URL [None]: https://d-0123456789.awsapps.com/start
SSO Region [None]: eu-west-1
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:

https://device.sso.eu-west-1.amazonaws.com/

Then enter the code:

AAAA-BBBB

At this stage, my default browser pops up and I enter my Okta credentials on the Okta login page. I confirm I want to enable SSO for the CLI.

SSO for the CLI

I close the browser when I receive this message:

AWS SSO CLI Close Browser Message

The CLI automatically resumes the configuration. I enter the default Region, the default output format, and the name of the CLI profile I want to use.

The only AWS account available to you is: 012345678901
Using the account ID 012345678901
The only role available to you is: ViewOnlyAccess
Using the role name "ViewOnlyAccess"
CLI default client Region [eu-west-1]:
CLI default output format [None]:
CLI profile name [okta]:

To use this profile, specify the profile name using --profile, as shown:

aws s3 ls --profile okta

I am now ready to use the CLI with SSO. In my terminal, I type:

aws --profile okta s3 ls
2020-05-04 23:14:49 do-not-delete-gatedgarden-audit-012345678901
2015-09-24 16:46:30 elasticbeanstalk-eu-west-1-012345678901
2015-06-11 08:23:17 elasticbeanstalk-us-west-2-012345678901
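The same named profile also works from the AWS SDKs, which is handy for automation scripts. Here is a minimal sketch, assuming a recent version of boto3 (the Python SDK) with AWS SSO support:

# A minimal sketch using the "okta" profile configured above; requires a
# recent boto3 version with AWS SSO support.
import boto3

session = boto3.Session(profile_name="okta")
s3 = session.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])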

If the machine on which you want to configure CLI SSO has no graphical user interface, you can configure SSO in headless mode, using the URL and the code provided by the CLI (https://device.sso.eu-west-1.amazonaws.com/ and AAAA-BBBB in the example above).

In this post, I showed how you can take advantage of the new AWS Single Sign-On capabilities to link Okta identities to AWS accounts for user single sign-on. I also make use of the automatic provisioning support to reduce complexity when managing and using identities. Administrators can now use a single source of truth for managing their users, and users no longer need to manage an additional identity and password to sign into their AWS accounts and applications.

AWS Single Sign-On with Okta is free to use, and is available in all Regions where AWS Single Sign-On is available. The full list is here.

To see all this in motion, you can check out the following demo video for more details on getting started.

— seb

New – AWS Amplify Libraries for Android and iOS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-aws-amplify-libraries-for-android-and-ios/

When you develop mobile applications, you must develop a set of cloud-powered functionalities for each project. For example, most applications require user authentication or detailed in-app analytics. Your application most probably calls REST or GraphQL APIs and is required to support offline scenarios and data synchronization. AWS Amplify makes it easy to integrate such functionalities in your mobile and web applications.

AWS Amplify is a set of tools and services for building secure, scalable mobile and web applications. It is made of three components: an open-source set of libraries and UI components for adding cloud-powered functionalities, a command-line interactive toolchain to create and manage a cloud backend, and the AWS Amplify Console, an AWS service to deploy and host full-stack serverless web applications.

Today, I am happy to announce the availability of the Amplify iOS and Amplify Android libraries and tools, to help mobile application developers easily build secure and scalable cloud-powered applications.

Until today, when you developed a cloud-powered mobile application, you were using a combination of tools and SDKs: the Amplify CLI to create and manage your backend, and one or several AWS Mobile SDKs to access the backend. In general, AWS Mobile SDKs are low-level wrappers around the AWS Services APIs. They require you to understand the API details and, most of the time, to write many lines of undifferentiated code, such as object (de)serialization, error handling, etc.

Amplify iOS and Amplify Android simplify this. First, they provide native libraries oriented around use cases, such as authentication, data storage and access, machine learning predictions, etc. They provide a declarative interface that enables you to programmatically apply best practices with abstractions. Thinking in terms of use cases instead of AWS services results in higher-level abstractions, faster development cycles, and fewer lines of code. Second, they provide tools that integrate with your native IDE toolchain: Xcode for iOS and Gradle for Android.

Using Amplify iOS or Amplify Android is our recommended way to integrate a cloud-based backend in your mobile application.

How to get started?
I’ve built two simple mobile applications (one on iOS and one on Android) to show you how to get started. The sources for these examples are available on my GitHub. As you see, I am not a graphic designer. The applications have a list of UI buttons to trigger different flows and the results are only visible in the console.

Amplify iOS & Android Demo

Amplify libraries for mobile are organized around categories for Auth, API (REST and GraphQL), Analytics, File Storage, DataStore, and Predictions. In this example, I use three categories: Auth, to implement the sign-in, sign-up, and Login with Facebook flows; DataStore, to use a query-able, on-device persistent storage engine that seamlessly synchronizes data between the app and the cloud with built-in versioning, conflict detection, and resolution capabilities; and Predictions, to add automatic translation between English and French.

Let’s review the four main steps and lines of code to get started on each platform. For a detailed step-by-step tutorial, have a look at the Amplify iOS or Amplify Android documentation.

The first step is to set up your project, to add required dependencies and build steps.

On iOS, you add a couple of lines to your Podfile and add the AWS Amplify build script to the build phase of your project.
On Android, you do the same in your Gradle file for the module and for the app.

// iOS Podfile
target 'amplify-lib-ios-demo' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for amplify-lib-ios-demo
    pod 'Amplify'
    pod 'Amplify/Tools'

    pod 'AmplifyPlugins/AWSAPIPlugin'
    pod 'AmplifyPlugins/AWSDataStorePlugin'
    pod 'AmplifyPlugins/AWSCognitoAuthPlugin'
    pod 'AWSPredictionsPlugin'
end

// Android build.gradle fragment (Module: app) 
...
compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}
dependencies {
    implementation 'com.amplifyframework:core:1.0.0'
    implementation 'com.amplifyframework:aws-datastore:1.0.0'
    implementation 'com.amplifyframework:aws-api:1.0.0'
    implementation 'com.amplifyframework:aws-predictions:1.0.0'
    implementation 'com.amplifyframework:aws-auth-cognito:1.0.0'
}
...
// Android build.gradle fragment (Project: My Application)
...
repositories {
    mavenCentral()
    google()
    jcenter()
}
dependencies {
        classpath 'com.amplifyframework:amplify-tools-gradle-plugin:1.0.0'
}
apply plugin: 'com.amplifyframework.amplifytools'
...

On iOS, you also must manually add an amplify-tools.sh to your build steps.

When this is done, you type pod install for iOS or you sync the project with Gradle.

The second step is to add the plugins for each category to Amplify at application initialization time. On iOS, I am using didFinishLaunchingWithOptions from the AppDelegate. On Android, I am using onCreate from MainActivity. You’re free to initialize Amplify at any stage in your app; it does not have to be at app startup time.

    // iOS AppDelegate class
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        
        do {
            try Amplify.add(plugin: AWSAPIPlugin())
            try Amplify.add(plugin: AWSDataStorePlugin(modelRegistration: AmplifyModels()))
            try Amplify.add(plugin: AWSCognitoAuthPlugin())
            try Amplify.add(plugin: AWSPredictionsPlugin())
            
            try Amplify.configure()
            print("Amplify initialized")
        } catch {
            print("Failed to configure Amplify \(error)")
        }
        return true
    }
   // Android MainActivity class (Kotlin version)
   override fun onCreate(savedInstanceState: Bundle?) {
        // ...

        try {
            Amplify.addPlugin(AWSDataStorePlugin())
            Amplify.addPlugin(AWSApiPlugin())
            Amplify.addPlugin(AWSCognitoAuthPlugin())
            Amplify.addPlugin(AWSPredictionsPlugin())
            Amplify.configure(applicationContext)
            Log.i(TAG, "Initialized Amplify")
        } catch (error: AmplifyException) {
            Log.e(TAG, "Could not initialize Amplify", error)
        }
    }

The third step varies from one category to the other. Usually, it involves using the AWS Amplify command line to provision and configure your backend. Type commands like amplify add auth or amplify add predictions to configure a category.

For example, to configure the user authentication with Amazon Cognito and social identity providers, such as Login With Facebook, you type something like the below. This step is identical for iOS and Android as we are creating and configuring the cloud backend.

To learn how to configure single sign-on with social identity providers such as Facebook, Google or Amazon, you can refer to the step-by-step instructions I wrote in this Amplify iOS Workshop (I will update the workshop soon to take advantage of these new AWS Amplify libraries).

Configuring the DataStore involves creating a GraphQL schema for your data. Amplify generates native (Swift or Java) code to represent your data in your app. It transparently handles an offline datastore to store your data and sync them with the backend when network connectivity is available.

The fourth and last step is to actually invoke Amplify’s library code at runtime.

For example, to trigger an authentication using Amazon Cognito hosted web user interface, you use the following code:

// iOS (swift) in AppDelegate object
    func signIn() {
        _ = Amplify.Auth.signInWithWebUI(presentationAnchor: UIApplication.shared.windows.first!) { (result) in
            switch(result) {
                case .success(let result):
                    print(result)
                case .failure(let error):
                    print("Can not signin \(error)")
            }
        }
    }
// Android (Kotlin) in MainActivity 
    fun signIn(view: View?) {
        Amplify.Auth.signInWithWebUI(
            this,
            { result: AuthSignInResult -> Log.i(TAG, result.toString()) },
            { error: AuthException -> Log.e(TAG, error.toString()) }
        )
    }

The above triggers the following web view:

Hosted UI for Cognito

Similarly, to create an item in the Datastore (and persisting it to Amazon DynamoDB over GraphQL), you need the following code:

    // iOS 
    func create() {
        let note = Note(content: "Build iOS application")
        Amplify.DataStore.save(note) {
            switch $0 {
            case .success:
                print("Added note")
            case .failure(let error):
                print("Error adding note - \(error.localizedDescription)")
            }
        }
    }
   // Android 
    fun create(view: View?) {
        val note: Note = Note.builder()
            .content("Build Android application")
            .build()

        Amplify.DataStore.save(
            note,
            { success -> Log.i(TAG, "Saved item: " + success.item.content) },
            { error -> Log.e(TAG, "Could not save item to DataStore", error) }
        )
    }

And to trigger a text translation with the Predictions category, you just need the following code:

    // iOS 
    func translate(text: String) {
        _ = Amplify.Predictions.convert(textToTranslate: text, language: LanguageType.english, targetLanguage: LanguageType.french) {
            switch $0 {
            case .success(let result):
                // update UI on main thread 
                DispatchQueue.main.async() {
                    self.data.translatedText = result.text
                }
            case .failure(let error):
                print("Error adding note - \(error.localizedDescription)")
            }
        }
    }
   // Android
    fun translate(view: View?) {
        Log.i(TAG, "Translating")

        val et : EditText = findViewById(R.id.toBeTranslated)
        val tv : TextView = findViewById(R.id.translated)

        Amplify.Predictions.translateText(
            et.text.toString(),
            LanguageType.ENGLISH,
            LanguageType.FRENCH,
            { success -> tv.setText(success.translatedText) },
            { failure -> Log.e(TAG, failure.localizedMessage) }
        )
    }

Short and slick, isn’t it?

Amplify Mobile demo translation

Price and Availability
AWS Amplify is available free of charge; you only pay for the backend services your application uses, beyond the free tier.

Amplify iOS and Amplify Android are available today from the CocoaPods and Maven Central code repositories. The source code is available on GitHub (iOS or Android). Do not hesitate to send us your feedback (Doc, iOS, and Android) or to send us a Pull Request 🙂

I am also curious to learn about the amazing mobile apps you are building with AWS Amplify. Do not hesitate to share your screenshots or App Store links with me.

Happy building!

— seb

New – EC2 M6g Instances, powered by AWS Graviton2

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-m6g-ec2-instances-powered-by-arm-based-aws-graviton2/

Starting today, you can use our first 6th generation Amazon Elastic Compute Cloud (EC2) General Purpose instance: the M6g. The “g” stands for “Graviton2“, our next generation Arm-based chip designed by AWS (and Annapurna Labs, an Amazon company), utilizing 64-bit Arm Neoverse N1 cores.

Graviton 2 chipset

These processors support 256-bit, always-on, DRAM encryption. They also include dual SIMD units to double the floating point performance versus the first generation Graviton, and they support int8/fp16 instructions to accelerate machine learning inference workloads. You can read this full review published by AnandTech for in-depth details.

The M6g instances are available in 8 sizes with 1, 2, 4, 8, 16, 32, 48, and 64 vCPUs, or as bare metal instances. They support configurations with up to 256 GiB of memory, 25 Gbps of network performance, and 19 Gbps of EBS bandwidth. These instances are powered by AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor.

For those of you running typical open-source application stacks, generally deployed on x86-64 architectures, migrating to Graviton will give you up to a 40% improvement in cost-performance, compared to similar-sized M5 instances. M6g instances are well suited for workloads such as application servers, gaming servers, mid-size databases, caching fleets, and web tiers.

We ran an extensive preview program to collect customer feedback on this 6th generation instance type. For example, Honeycomb uses 30% fewer instances vs C5, KeyDB observes 65% better performance and a 20% cost reduction vs M5, InterSystems reported a 28% performance improvement and a 20% cost reduction compared to equivalent M5 instances, and Treasure Data benchmarked a 30% increase in performance for a 20% cost reduction compared to similarly sized M5 instances. You can read more customer stories, including Hotelbeds, Redbox, Nielsen, Mobiuspace, and RayGun, on the M6g web page.

Several AWS service teams are evaluating these instances too. For example, during their testing, the Amazon ElastiCache service team found that M6g instances deliver up to 50% throughput improvement over M5 instances on Redis.

Major Linux distributions are available on Arm architecture, just select the Amazon Machine Image (AMI) corresponding to the Arm version of your favorite distribution when launching an instance in the AWS Management Console. Be sure to select the 64-bit (Arm) button on the right part of the screen.

Launch ARM AMI in the console

If you choose the AWS Command Line Interface (CLI) instead, use the corresponding image-id for your region, architecture, and distribution. For example, to start an Amazon Linux 2 instance:

AMI_ID=$(aws ssm get-parameters-by-path --path /aws/service/ami-amazon-linux-latest --output text --query "Parameters[?contains(Name, 'ami-hvm-arm64')].Value")
aws ec2 run-instances --image-id $AMI_ID --instance-type m6g.large --key-name my-ssh-key-name --security-group-ids sg-1234567

(you need to adjust the ssh key name and the security group ID in the above command)

Once the instance is started, it behaves like any Amazon Elastic Compute Cloud (EC2) instance:

~ % ssh ec2-user@ec2-01-01-01-01.compute-1.amazonaws.com
Warning: Permanently added 'ec2-01-01-01-01.compute-1.amazonaws.com,01.01.01.01' (ECDSA) to the list of known hosts.
Last login: Wed Apr 22 12:26:44 2020 from 01-01-01-01.amazon.com

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-172-31-16-155 ~]$ uname -a
Linux ip-172-31-16-155.ec2.internal 4.14.171-136.231.amzn2.aarch64 #1 SMP Thu Feb 27 20:25:45 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

The Arm software ecosystem is broad and deep, from Linux distributions (Amazon Linux 2, Ubuntu, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, FreeBSD), to language runtimes (Java with Amazon Corretto , NodeJS, Python, Go,…), container services (Docker, Amazon ECS, Amazon Elastic Kubernetes Service, Amazon Elastic Container Registry), agents (Amazon CloudWatch, AWS Systems Manager, Amazon Inspector), developer tools (AWS Code Suite, Jenkins, GitLab, Chef, Drone.io, Travis CI), and security & monitoring solutions such as Datadog, Crowdstrike, Qualys, Rapid7, Tenable, or Honeycomb.io.

You will find Arm versions of commonly used software packages available for installation through the same mechanisms that you currently use (yum, apt-get, pip, npm …). While some applications may require re-compilation, the vast majority of applications that are based on interpreted languages (such as Java, NodeJS, Python, Go) should run unmodified on M6g instances. In the rare cases where you will need to recompile or debug code, we have assembled some resources to help you to get started.

We are not going to stop at the general-purpose M6g instances; compute-optimized C6g instances and memory-optimized R6g instances are coming soon. Stay tuned.

Now it’s your turn to give it a try in one of the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo).

As usual, let us know your feedback.

— seb

Now Open – AWS Africa (Cape Town) Region

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/now-open-aws-africa-cape-town-region/

The AWS Region in Africa that Jeff promised you in 2018 is now open. The official name is Africa (Cape Town) and the API name is af-south-1. You can start using this new Region today to deploy workloads and store your data in South Africa.

The addition of this new Region enables all organizations to bring lower latency services to their end-users across Africa, and allows more African organisations to benefit from the performance, security, flexibility, scalability, reliability, and ease of use of the AWS cloud. It enables organisations of all sizes to experiment and innovate faster.

AWS Regions meet the highest levels of security, compliance, and data protection. With the new Region, local customers with data residency requirements, and those looking to comply with the Protection of Personal Information Act (POPIA), will be able to store their content in South Africa with the assurance that they retain complete ownership of their data and it will not move unless they choose to move it.

Africa (Cape Town) is the 23rd AWS Region, and the first one in Africa. It comprises three Availability Zones, bringing the global AWS infrastructure to a total of 73 Availability Zones (AZs).

Instances and Services
Applications running on this 3-AZs Region can use C5d, D2, I3, M5, M5d, R5, R5d, and T3 instances, and can use a long list of AWS services including Amazon API Gateway, Amazon Aurora (both MySQL and PostgreSQL), Amazon CloudWatch, Amazon CloudWatch Logs, CloudWatch Events, Amazon DynamoDB, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Registry, Amazon ECS, Elastic Load Balancing (Classic, Network, and Application), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Data Streams, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS Auto Scaling, AWS Artifact, AWS Certificate Manager, AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Personal Health Dashboard, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Marketplace, AWS Mobile SDK, AWS Shield (regional), AWS Site-to-Site VPN, AWS Step Functions, AWS Support, AWS Systems Manager, AWS Trusted Advisor, AWS X-Ray, AWS Import/Export

A Growing Presence in Africa
This new Region is a continuation of the AWS investment in Africa. In 2004, Amazon opened a Development Center in Cape Town that focuses on building pioneering networking technologies, next generation software for customer support, and the technology behind Amazon Elastic Compute Cloud (EC2). AWS has also added a number of teams including account managers, customer service reps, partner managers, solutions architects, developer advocates, and more, helping customers of all sizes as they move to the cloud.

In 2015, we continued our expansion, opening an office in Johannesburg, and in 2017 we brought the Amazon Global Network to Africa through AWS Direct Connect. In 2018 we launched infrastructure on the African continent introducing Amazon CloudFront to South Africa, with two edge locations in Cape Town and Johannesburg, and recently in Nairobi, Kenya. We also support the growth of technology education with AWS Academy and AWS Educate, and continue to support the growth of new businesses through AWS Activate. The addition of the AWS Region in South Africa helps builders in organisations of all sizes, from startups to enterprises, as well as educational institutions, NGOs, and the public sector across Africa, to innovate and grow.

The new Region is open to all: existing AWS customers, partners, and new African customers working with local partners across the region. If you are planning to deploy workloads in the Africa (Cape Town) Region, don’t hesitate to contact us. I am also taking advantage of this post to remind you that we have dozens of open positions in the region, for many different roles, such as Account Management, Solution Architects, Customer Support, Product Management, Software Development, and more. Visit amazon.jobs and send us your resume.

More to Come
We are continuously expanding our global infrastructure to allow you to deploy workloads close to your end-users. We already announced two future AWS Regions in APAC: Indonesia and Japan, and two in Europe: Italy and Spain. Stay tuned for more posts like this one.

— seb

(Photo via Good Free Photos)

Amazon Redshift update – ra3.4xlarge instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-redshift-update-ra3-4xlarge-instances/

Since we launched Amazon Redshift as a cloud data warehouse service more than seven years ago, tens of thousands of customers have built their workloads using it. We are always listening to your feedback and, in December last year, we announced our 3rd-generation RA3 node type, providing you the ability to scale compute and storage separately. Previous-generation DS2 and DC2 nodes had a fixed amount of storage and required adding more nodes to your cluster to increase storage capacity. The new RA3 nodes let you determine how much compute capacity you need to support your workload and then scale the amount of storage based on your needs. The first member of the RA3 family was the ra3.16xlarge, which many customers told us was fantastic, but more than they needed for their workloads.

Today we are adding a new smaller member to the RA3 family: the ra3.4xlarge.

The RA3 node type is based on AWS Nitro and includes support for Redshift managed storage. Redshift managed storage automatically manages data placement across tiers of storage and caches the hottest data in high-performance SSD storage while automatically offloading colder data to Amazon Simple Storage Service (S3). Redshift managed storage uses advanced techniques such as block temperature, data block age, and workload patterns to optimize performance.

RA3 nodes with managed storage are a great fit for analytics workloads that require massive storage capacity and can be a great fit for workloads such as operational analytics, where the subset of data that is most important evolves constantly over time. In the past, there was pressure to offload or archive old data to other storage because of fixed storage limits. This made maintaining the operational analytics data set and the larger historical dataset difficult to query when needed.

The new ra3.4xlarge node provides 12 vCPUs, 96 GiB of RAM, and addresses up to 64 TB of managed storage. A cluster can contain up to 32 of these instances, for a total storage of 2048 TB (that’s 2 petabytes!).

The differences between ra3.16xlarge and ra3.4xlarge nodes are summarized in the table below.

Instance         vCPU    Memory     Addressable Storage    I/O         Price (US East (N. Virginia))
ra3.4xlarge      12      96 GiB     64 TB RMS              2 GB/sec    $3.26 per hour
ra3.16xlarge     48      384 GiB    64 TB RMS              8 GB/sec    $13.04 per hour

To create a new cluster, I am using the Redshift AWS Management Console or the AWS Command Line Interface (CLI). In the console, I click Create Cluster and choose ra3.4xlarge instances.
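
If you prefer to script cluster creation, a minimal AWS CLI sketch could look like the following; the cluster identifier, user name, and password are placeholder values to replace with your own:

# create a two-node ra3.4xlarge cluster (identifier and credentials are placeholders)
aws redshift create-cluster \
    --cluster-identifier my-ra3-cluster \
    --node-type ra3.4xlarge \
    --number-of-nodes 2 \
    --cluster-type multi-node \
    --master-username awsuser \
    --master-user-password 'Ch4ngeMe-N0w'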

If you have a DS2 or DC2 instance-based cluster, you can create a new RA3 cluster to evaluate the new instances with managed storage. Use a recent snapshot of your Redshift DS2 or DC2 cluster to create a new cluster based on ra3.4xlarge instances, and keep the two clusters running in parallel to evaluate the compute needs of your application.
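
A hedged AWS CLI sketch of that evaluation path, with placeholder snapshot and cluster identifiers, might look like this:

# restore an existing DS2/DC2 snapshot into a new ra3.4xlarge cluster for evaluation
aws redshift restore-from-cluster-snapshot \
    --cluster-identifier ra3-evaluation \
    --snapshot-identifier my-dc2-cluster-snapshot \
    --node-type ra3.4xlarge \
    --number-of-nodes 2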

You can resize your RA3 cluster at any time by using elastic resize to add or remove compute capacity. If elastic resize is not available for your chosen configuration, you can perform a classic resize instead.
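
As an example, an elastic resize from the AWS CLI might look like the sketch below (the cluster identifier is a placeholder):

# elastic resize of the cluster to four nodes; use --classic to force a classic resize instead
aws redshift resize-cluster \
    --cluster-identifier my-ra3-cluster \
    --number-of-nodes 4 \
    --no-classic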

RA3 instances are now available in 14 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Canada (Central), and South America (São Paulo).

Prices vary from one Region to another, starting at $3.26/hr/node in US East (N. Virginia). Check the Amazon Redshift pricing page for details.

— seb

Amazon Detective – Rapid Security Investigation and Analysis

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-detective-rapid-security-investigation-and-analysis/

Almost five years ago, I blogged about a solution that automatically analyzes AWS CloudTrail data to generate alerts upon sensitive API usage. It was a simple and basic solution for security analysis and automation. But demanding AWS customers have multiple AWS accounts and collect data from multiple sources, and simple searches based on regular expressions are not enough to conduct in-depth analysis of suspected security-related events. Today, when a security issue is detected, such as compromised credentials or unauthorized access to a resource, security analysts cross-analyze several data logs to understand the root cause of the issue and its impact on the environment. In-depth analysis often requires scripting and ETL to connect the dots between data generated by multiple siloed systems. It requires skilled data engineers to answer basic questions such as “is this normal?”. Analysts use Security Information and Event Management (SIEM) tools, third-party libraries, and data visualization tools to validate, compare, and correlate data to reach their conclusions. To further complicate matters, new AWS accounts and new applications are constantly introduced, forcing analysts to constantly reestablish baselines of normal behavior and to understand new patterns of activity every time they evaluate a new security issue.

Amazon Detective is a fully managed service that empowers users to automate the heavy lifting involved in processing large quantities of AWS log data to determine the cause and impact of a security issue. Once enabled, Detective automatically begins distilling and organizing data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud (VPC) Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment.

At re:Invent 2019, we announced a preview of Amazon Detective. Today, it is our pleasure to announce its availability to all AWS customers.

Amazon Detective uses machine learning models to produce graphical representations of your account behavior and helps you to answer questions such as “is this an unusual API call for this role?” or “is this spike in traffic from this instance expected?”. You do not need to write code, or to configure or tune your own queries.

To get started with Amazon Detective, I open the AWS Management Console, I type “detective” in the search bar and I select Amazon Detective from the provided results to launch the service. I enable the service and I let the console guide me to configure “member” accounts to monitor and the “master” account in which to aggregate the data. After this one-time setup, Amazon Detective immediately starts analyzing AWS telemetry data and, within a few minutes, I have access to a set of visual interfaces that summarize my AWS resources and their associated behaviors such as logins, API calls, and network traffic. I search for a finding or resource from the Amazon Detective Search bar and, after a short while, I am able to visualize the baseline and current value for a set of metrics.

I select the resource type and ID and start to browse the various graphs.

I can also investigate an Amazon GuardDuty finding by using the native integrations within the GuardDuty and AWS Security Hub consoles. I click the “Investigate” link from any finding from GuardDuty and jump directly into the Amazon Detective console, which provides related details, context, and guidance to investigate and to respond to the issue. In the example below, GuardDuty reports unauthorized access that I decide to investigate:

The Amazon Detective console opens:

I scroll down the page to check the graph of failed API calls. I click a bar in the graph to get the details, such as the IP addresses where the calls originated:

Once I know the source IP addresses, I click New behavior: AWS role and observe where these calls originated from to compare with the automatically discovered baseline.

Amazon Detective works across your AWS accounts: it is a multi-account solution that aggregates data and findings from up to 1,000 AWS accounts into a single security-owned “master” account, making it easy to view behavioral patterns and connections across your entire AWS environment.

There are no agents, sensors, or additional software to deploy in order to use the service. Amazon Detective retrieves, aggregates, and analyzes data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud (VPC) Flow Logs. Amazon Detective collects existing logs directly from AWS without touching your infrastructure, thereby not causing any impact to cost or performance.

Amazon Detective can be administered via the AWS Management Console or via the Amazon Detective management APIs. The management APIs enable you to build Amazon Detective into your standard account registration, enablement, and deployment processes.
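
For example, here is a minimal sketch using the AWS CLI to create a behavior graph and invite a member account; the graph ARN, account ID, and email address are placeholder values:

# create the behavior graph in the account that will own the investigation data
aws detective create-graph

# invite a member account to contribute its data to the graph (values are placeholders)
aws detective create-members \
    --graph-arn arn:aws:detective:us-east-1:111122223333:graph:abcd1234abcd1234 \
    --accounts AccountId=444455556666,EmailAddress=security@example.com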

Amazon Detective is a regional service. I activate the service in every AWS Region in which I want to analyze findings. All data are processed in the AWS Region where they are generated. Amazon Detective maintains data analytics and log summaries in the behavior graph for a 1-year rolling period from the date of log ingestion. This allows for visual analysis and deep dives over a large data set for a long period of time. When I disable the service, all data is expunged to ensure no data remains.

There are no additional charges or upfront commitments required to use Amazon Detective. We charge per GB of data ingested from AWS CloudTrail, Amazon Virtual Private Cloud (VPC) Flow Logs, and Amazon GuardDuty findings. Amazon Detective offers a 30-day free trial. As usual, check the pricing page for the details.

Amazon Detective is available in all commercial AWS Regions, except China. You can start to use it today.

— seb

Materialize your Amazon Redshift Views to Speed Up Query Execution

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/materialize-your-amazon-redshift-views-to-speed-up-query-execution/

At AWS, we take pride in building state-of-the-art virtualization technologies to simplify the management of and access to cloud services such as networks, computing resources, or object storage.

In a Relational Database Management System (RDBMS), a view is virtualization applied to tables: it is a virtual table representing the result of a database query. Views are frequently used when designing a schema, to present a subset of the data, summarized data (such as aggregated or transformed data), or to simplify data access across multiple tables. When using data warehouses, such as Amazon Redshift, a view simplifies access to aggregated data from multiple tables for Business Intelligence (BI) tools such as Amazon QuickSight or Tableau.

Views provide ease of use and flexibility, but they do not speed up data access. The database system must evaluate the underlying query representing the view each time your application accesses the view. When performance is key, data engineers use CREATE TABLE AS (CTAS) as an alternative. A CTAS is a table defined by a query. The query is executed at table creation time and your applications can use it like a normal table, with the downside that the CTAS data set is not refreshed when the underlying data are updated. Furthermore, the CTAS definition is not stored in the database system. It is not possible to know whether a table was created by a CTAS or not, making it difficult to track which CTAS needs to be refreshed and which is current.

Today, we are introducing materialized views for Amazon Redshift. A materialized view (MV) is a database object containing the data of a query. A materialized view is like a cache for your view. Instead of building and computing the data set at run-time, the materialized view pre-computes, stores and optimizes data access at the time you create it. Data are ready and available to your queries just like regular table data.

Using materialized views in your analytics queries can speed up the query execution time by orders of magnitude because the query defining the materialized view is already executed and the data is already available to the database system.

Materialized views are especially useful for queries that are predictable and repeated over and over. Instead of performing resource-intensive queries on large tables, applications can query the pre-computed data stored in the materialized view.

When the data in the base tables change, you refresh the materialized view by issuing the Redshift SQL statement REFRESH MATERIALIZED VIEW. After issuing a refresh statement, your materialized view contains the same data as would have been returned by a regular view. Refreshes can be incremental or full (recompute). When possible, Redshift incrementally refreshes data that changed in the base tables since the materialized view was last refreshed.

Let’s see how it works. I create a sample schema to store sales information: each sales transaction and details about the store where the sale took place.

To view the total amount of sales per city, I create a materialized view with the CREATE MATERIALIZED VIEW SQL statement. I connect to the Redshift console, select the Query Editor, and type the following statement to create a materialized view (city_sales) joining records from two tables and aggregating the sales amount (sum(sales.amount)) per city (group by city):

CREATE MATERIALIZED VIEW city_sales AS (
  SELECT st.city, SUM(sa.amount) as total_sales
  FROM sales sa, store st
  WHERE sa.store_id = st.id
  GROUP BY st.city
);

The resulting schema is below:

Now I can query the materialized view just like a regular view or table and issue statements like “SELECT city, total_sales FROM city_sales” to get the results below. The join between the two tables and the aggregate (sum and group by) are already computed, resulting in significantly less data to scan.

When the data in the underlying base tables change, the materialized view does not automatically reflect those changes. The data stored in the materialized view can be refreshed on demand with the latest changes from the base tables using the SQL REFRESH MATERIALIZED VIEW command. Let’s see a practical example:

-- let's add a row in the sales base table
INSERT INTO sales (id, item, store_id, customer_id, amount)
VALUES(8, 'Gaming PC Super ProXXL', 1, 1, 3000);

SELECT city, total_sales FROM city_sales WHERE city = 'Paris'

city |total_sales|
-----|-----------|
Paris|        690|

-- the new sale is not taken into account!

-- let's refresh the materialized view
REFRESH MATERIALIZED VIEW city_sales;

SELECT city, total_sales FROM city_sales WHERE city = 'Paris'

city |total_sales|
-----|-----------|
Paris|       3690|

-- now the view has the latest sales data

The full code for this very simple demo is available as a gist.

You can start to use materialized views today in all AWS Regions.

There is nothing to change in your existing clusters to start using materialized views; you can start to create them today at no additional cost.

Happy building!

New: Use AWS CloudFormation StackSets for Multiple Accounts in an AWS Organization

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/

Infrastructure-as-code is the process of managing and creating IT infrastructure through machine-readable text files, such as JSON or YAML definitions, or using familiar programming languages, such as Java, Python, or TypeScript. AWS customers typically use AWS CloudFormation or the AWS Cloud Development Kit to automate the creation and management of their cloud infrastructure.

CloudFormation StackSets allow you to roll out CloudFormation stacks over multiple AWS accounts and in multiple Regions with just a couple of clicks. When we launched StackSets, grouping accounts was primarily for billing purposes. Since the launch of AWS Organizations, you can centrally manage multiple AWS accounts across diverse business needs including billing, access control, compliance, security and resource sharing.

Use CloudFormation StackSets with Organizations
Today, we are simplifying the use of CloudFormation StackSets for customers managing multiple accounts with AWS Organizations.

You can now centrally orchestrate any AWS CloudFormation-enabled service across multiple AWS accounts and Regions. For example, you can deploy your centralized AWS Identity and Access Management (IAM) roles, provision Amazon Elastic Compute Cloud (EC2) instances, or deploy AWS Lambda functions across AWS Regions and accounts in your organization. CloudFormation StackSets simplify the configuration of cross-account permissions and allow for the automatic creation and deletion of resources when accounts join or are removed from your Organization.

You can get started by enabling data sharing between CloudFormation and Organizations from the StackSets console. Once done, you will be able to use StackSets in the Organizations master account to deploy stacks to all accounts in your organization or to specific organizational units (OUs). A new service-managed permission model is available with these StackSets. Choosing Service managed permissions allows StackSets to automatically configure the IAM permissions required to deploy your stack to the accounts in your organization.

In addition to setting permissions, CloudFormation StackSets now offers the option of automatically creating or removing your CloudFormation stacks when a new AWS account joins or leaves your Organization. You do not need to remember to manually connect to the new account to deploy your common infrastructure, or to delete infrastructure when an account is removed from your Organization. When an account leaves the organization, the stack is removed from the management of StackSets. However, you can choose to either delete or retain the resources managed by the stack.

Lastly, you choose whether to deploy a stack to your entire Organization or just to one or more organizational units (OUs). You also choose a couple of deployment options: how many accounts are prepared in parallel, and how many failures you tolerate before stopping the entire deployment.
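
As an illustration, here is a hedged AWS CLI sketch of creating a service-managed stack set and deploying it to an organizational unit; the stack set name, template file, OU ID, Regions, and operation preferences are all placeholder values:

# create a stack set using the service-managed permission model,
# with automatic deployment to accounts that join the target OUs
aws cloudformation create-stack-set \
    --stack-set-name common-iam-roles \
    --template-body file://roles.yaml \
    --capabilities CAPABILITY_NAMED_IAM \
    --permission-model SERVICE_MANAGED \
    --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false

# deploy stack instances to one organizational unit, in two Regions,
# preparing 5 accounts in parallel and tolerating 1 failure
aws cloudformation create-stack-instances \
    --stack-set-name common-iam-roles \
    --deployment-targets OrganizationalUnitIds=ou-examplerootid111-exampleouid111 \
    --regions us-east-1 eu-west-1 \
    --operation-preferences MaxConcurrentCount=5,FailureToleranceCount=1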

For a full description of how StackSets works, you can read the initial blog article from Jeff.

There is no extra cost for using AWS CloudFormation StackSets with AWS Organizations. The integration is available in all AWS Regions where StackSets is available.

— seb

AWS Backup: EC2 Instances, EFS Single File Restore, and Cross-Region Backup

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/

Since we launched AWS Backup last year, more than 20,000 AWS customers have been protecting petabytes of data every day. AWS Backup is a fully managed, centralized backup service that simplifies the management of your backups for your Amazon Elastic Block Store (EBS) volumes, your databases (Amazon Relational Database Service (RDS) or Amazon DynamoDB), AWS Storage Gateway, and your Amazon Elastic File System (EFS) filesystems.

We continuously listen to your feedback and today, we are bringing three additional enterprise data protection capabilities to AWS Backup: backing up whole Amazon EC2 instances, restoring single files or directories from Amazon EFS backups, and copying backups across AWS Regions.

Here are the details.

EC2 Instance Backup
Backing up and restoring an EC2 instance requires more than just protecting the instance’s individual EBS volumes. To restore an instance, you need to restore not only all of its EBS volumes, but also to recreate an identical instance: instance type, VPC, security group, IAM role, and so on.

Today, we are adding the ability to perform backup and recovery tasks on whole EC2 instances. When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance, and it will attach them to an AMI that stores all parameters from the original EC2 instance except for two (Elastic Inference Accelerator and user data script).

Once the backup is complete, you can easily restore the full instance using the console, API, or AWS Command Line Interface (CLI). You will be able to restore and edit all parameters using the API or AWS Command Line Interface (CLI), and in the console, you will be able to restore and edit 16 parameters from your original EC2 instance.

To get started, open the Backup console and select either a backup plan or an on-demand backup. For this example, I choose On-Demand backup. I select EC2 from the list of services and select the ID of the instance I want to back up.

Note that you need to stop write activity and flush filesystem caches in case you’re using RAID volumes or any other type of technique to group your volumes.
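
The same backup can be started from the AWS CLI; here is a minimal sketch, where the vault name, instance ARN, and role ARN are placeholders:

# start an on-demand backup of a whole EC2 instance (ARNs and vault name are placeholders)
aws backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234efgh5678 \
    --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole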

After a while, I see the backup available in my vault. To restore the backup, I select the backup and click Restore.

Before actually starting the restore, I can see the EC2 configuration options that have been backed up and I have the opportunity to modify any value listed before re-creating the instance.

After a few seconds, my restored instance starts and is available in the EC2 console.

Single File Restore for EFS
Often, AWS Backup customers would like to restore an accidentally deleted or corrupted file or folder. Before today, you would need to perform a full restore of the entire filesystem, which made it difficult to meet strict recovery time objectives (RTOs).

Starting today, you can restore a single file or directory from your Elastic File System filesystem. You select the backup, type the relative path of the file or directory to restore, and AWS Backup will create a new Elastic File System recovery directory at the root of your filesystem, preserving the original path hierarchy. You can restore your files to an existing filesystem or to a new filesystem.

To restore a single file from an Elastic File System backup, I choose the backup from the vault and I click Restore. On the Restore backup window, I choose between restoring the full filesystem or individual items. I enter the path relative to the root of the filesystem (not including the mount point) for the files and directories I want to restore. I also choose if I want to restore the items in the existing filesystem or in a new filesystem. Finally, I click Restore backup to start the restore job.

Cross-region Backup
Many enterprise AWS customers have strict business continuity policies requiring a minimum distance between two copies of their backups. To help enterprises meet this requirement, we’re adding the capability to copy a backup to another Region, either on-demand when you need it or automatically, as part of a backup plan.

To initiate an on-demand copy of my backup to another Region, I use the console to browse my vaults, select the backup I want to copy, and click Copy. I choose the destination Region and the destination vault, and keep the default values for the other options. I click Copy at the bottom of the page.

The time to make the copy depends on the size of the backup. I monitor the status on the new Copy Jobs tab of the Job section:

Once the copy is finished, I switch my console to the target Region, I see the backup in the target vault and I can initiate a restore operation, just like usual.

I can also use the AWS Command Line Interface (CLI) or one of the AWS SDKs to automate these processes or to integrate them into other applications.
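
For example, starting a cross-Region copy from the CLI might look like the sketch below; the recovery point ARN, vault names, and role ARN are placeholder values:

# copy a recovery point from the source vault to a vault in another Region
aws backup start-copy-job \
    --recovery-point-arn arn:aws:ec2:us-east-1::snapshot/snap-0abcd1234efgh5678 \
    --source-backup-vault-name Default \
    --destination-backup-vault-arn arn:aws:backup:eu-west-1:111122223333:backup-vault:Default \
    --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole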

Pricing
Pricing depends on the type of backup:

  • there is no additional charge for EC2 instance backup; you are charged for the storage used by all EBS volumes attached to your instance,
  • for Elastic File System single file restore, you will be charged a fixed fee per restore and for the number of bytes you restore,
  • and for cross-region backup, you will be charged for the cross-region data transfer bandwidth and for the new warm storage space in the target Region.

These three new features are available today in all commercial AWS Regions where AWS Backup is available (you can verify services availability per Region on this web page).

As usual with any backup system, it is a best practice to regularly perform backups and to test your restores. Restorable backups are the best kind of backups.

— seb

Amplify DataStore – Simplify Development of Offline Apps with GraphQL

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amplify-datastore-simplify-development-of-offline-apps-with-graphql/

The open source Amplify Framework is a command line tool and a library allowing web and mobile developers to easily provision and access cloud-based services. For example, if I want to create a GraphQL API for my mobile application, I use amplify add api on my development machine to configure the backend API. After answering a few questions, I type amplify push to create an AWS AppSync API backend in the cloud. Amplify generates code allowing my app to easily access the newly created API. Amplify supports popular web frameworks, such as Angular, React, and Vue. It also supports mobile applications developed with React Native, Swift for iOS, or Java for Android. If you want to learn more about how to use Amplify for your mobile applications, feel free to attend one of the workshops (iOS or React Native) we prepared for the re:Invent 2019 conference.

AWS customers told us that one of the most difficult tasks when developing web and mobile applications is to synchronize data across devices and to handle offline operations. Ideally, when a device is offline, your customers should be able to continue to use your application, not only to access data but also to create and modify it. When the device comes back online, the application must reconnect to the backend, synchronize the data, and resolve conflicts, if any. It requires a lot of undifferentiated code to correctly handle all edge cases, even when using the AWS AppSync SDK’s on-device cache with offline mutations and delta sync.

Today, we are introducing Amplify DataStore, a persistent on-device storage repository for developers to write, read, and observe changes to data. Amplify DataStore allows developers to write apps leveraging distributed data without writing additional code for offline or online scenarios. Amplify DataStore can be used as a stand-alone local datastore in web and mobile applications, with no connection to the cloud or the need to have an AWS account. However, when used with a cloud backend, Amplify DataStore transparently synchronizes data with an AWS AppSync API when network connectivity is available. Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for my programming language based on the GraphQL schema developers provide.

Let’s see how it works.

I first install the Amplify CLI and create a React app. This is standard React; you can find the script on my git repo. I add Amplify DataStore to the app with npx amplify-app. npx is specific to Node.js; Amplify DataStore also integrates with native mobile toolchains, such as the Gradle plugin for Android Studio and CocoaPods, which creates custom Xcode build phases for iOS.
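
The script itself is not reproduced here, but a minimal sketch of these first steps could look like the following (the app name is a placeholder):

# install the Amplify CLI and create a standard React app
npm install -g @aws-amplify/cli
npx create-react-app amplify-datastore-demo
cd amplify-datastore-demo

# add the Amplify DataStore scaffolding to the project
npx amplify-app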

Now that the scaffolding of my app is done, I add a GraphQL schema representing two entities: Posts and Comments on these posts. I install the dependencies and use AWS Amplify CLI to generate the source code for the objects defined in the GraphQL schema.

# add a graphql schema to amplify/backend/api/amplifyDatasource/schema.graphql
echo "enum PostStatus {
  ACTIVE
  INACTIVE
}

type Post @model {
  id: ID!
  title: String!
  comments: [Comment] @connection(name: \"PostComments\")
  rating: Int!
  status: PostStatus!
}
type Comment @model {
  id: ID!
  content: String
  post: Post @connection(name: \"PostComments\")
}" > amplify/backend/api/amplifyDatasource/schema.graphql

# install dependencies 
npm i @aws-amplify/core @aws-amplify/datastore @aws-amplify/pubsub

# generate the source code representing the model 
npm run amplify-modelgen

# create the API in the cloud 
npm run amplify-push

@model and @connection are directives that the Amplify GraphQL Transformer uses to generate code. Objects annotated with @model are top-level objects in your API; they are stored in DynamoDB, and you can make them searchable, version them, or restrict their access to authorized users only. @connection allows you to express 1-n relationships between objects, similar to what you would define when using a relational database (you can use the @key directive to model n-n relationships).

The last step is to create the React app itself. I propose to download a very simple sample app to get started quickly:

# download a simple react app
curl -o src/App.js https://raw.githubusercontent.com/sebsto/amplify-datastore-js-e2e/master/src/App.js

# start the app 
npm run start

I connect my browser to the app at http://localhost:8080 and start to test the app.

The demo app provides a basic UI (as you can guess, I am not a graphic designer!) to create, query, and delete items. Amplify DataStore provides developers with an easy-to-use API to store, query, and delete data. Reads and writes are propagated in the background to your AppSync endpoint in the cloud. Amplify DataStore uses a local data store via a storage adapter; we ship IndexedDB for the web and SQLite for mobile. Amplify DataStore is open source, so you can add support for other databases if needed.

From a code perspective, interacting with data is as easy as invoking the save(), delete(), or query() operations on the DataStore object (this is a JavaScript example; you would write similar code for Swift or Java). Notice that the query() operation accepts filters based on Predicates expressions, such as item.rating("gt", 4) or Predicates.ALL.

function onCreate() {
  DataStore.save(
    new Post({
      title: `New title ${Date.now()}`,
      rating: 1,
      status: PostStatus.ACTIVE
    })
  );
}

function onDeleteAll() {
  DataStore.delete(Post, Predicates.ALL);
}

async function onQuery(setPosts) {
  const posts = await DataStore.query(Post, c => c.rating("gt", 4));
  setPosts(posts)
}

async function listPosts(setPosts) {
  const posts = await DataStore.query(Post, Predicates.ALL);
  setPosts(posts);
}

I connect to the Amazon DynamoDB console and observe that the items are stored in my backend:

There is nothing to change in my code to support offline mode. To simulate offline mode, I turn off my wifi. I add two items in the app and turn the wifi on again. The app continues to operate as usual while offline. The only noticeable change is that the _version field is not updated while offline, as it is populated by the backend.

When the network is back, Amplify DataStore transparently synchronizes with the backend. I verify there are 5 items now in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below):

aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \
                  --filter-expression "#deleted <> :value"            \
                  --expression-attribute-names '{"#deleted" : "_deleted"}' \
                  --expression-attribute-values '{":value" : { "BOOL": true} }' \
                  --query "Count"

5 // <= there are now 5 non deleted items in the table !

Amplify DataStore leverages GraphQL subscriptions to keep track of changes that happen on the backend. Your customers can modify the data from another device, and Amplify DataStore takes care of synchronizing the local data store transparently. No GraphQL knowledge is required; Amplify DataStore takes care of the low-level GraphQL API calls for you automatically. Real-time data, connections, scalability, fan-out, and broadcasting are all handled by the Amplify client and AppSync, using the WebSocket protocol under the covers.

We are effectively using GraphQL as a network protocol to dynamically transform model instances to GraphQL documents over HTTPS.

To refresh the UI when a change happens on the backend, I add the following code in the useEffect() React hook. It uses the DataStore.observe() method to register a callback function ( msg => { ... } ). Amplify DataStore calls this function when an instance of Post changes on the backend.

const subscription = DataStore.observe(Post).subscribe(msg => {
  console.log(msg.model, msg.opType, msg.element);
  listPosts(setPosts);
});

Now, I open the AppSync console. I query existing Posts to retrieve a Post ID.

query ListPost {
  listPosts(limit: 10) {
    items {
      id
      title
      status
      rating
      _version
    }
  }
}

I choose the first post in my app, the one starting with 7d8… and I send the following GraphQL mutation:

mutation UpdatePost {
  updatePost(input: {
    id: "7d80688f-898d-4fb6-a632-8cbe060b9691"
    title: "updated title 13:56"
    status: ACTIVE
    rating: 7
    _version: 1
  }) {
    id
    title
    status
    rating
    _lastChangedAt
    _version
    _deleted    
  }
}

Immediately, I see the app receiving the notification and refreshing its user interface.

Finally, I test with multiple devices. I first create a hosting environment for my app using amplify add hosting and amplify publish. Once the app is published, I open the iOS Simulator and Chrome side by side. Both apps initially display the same list of items. I create new items in both apps and observe the apps refreshing their UI in near real time. At the end of my test, I delete all items.

I verify there are no more items in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below):

aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \
                  --filter-expression "#deleted <> :value"            \
                  --expression-attribute-names '{"#deleted" : "_deleted"}' \
                  --expression-attribute-values '{":value" : { "BOOL": true} }' \
                  --query "Count"

0 // <= all the items have been deleted !

When syncing local data with the backend, AWS AppSync keeps track of version numbers to detect conflicts. When there is a conflict, the default resolution strategy is to automerge the changes on the backend. Automerge is an easy strategy to resolve conflicts without writing client-side code. For example, let’s pretend I have an initial Post, and Bob and Alice update the post at the same time:

The original item:

{
   "_version": 1,
   "id": "25",
   "rating": 6,
   "status": "ACTIVE",
   "title": "DataStore is Available"
}

Alice updates the rating:

{
   "_version": 2,
   "id": "25",
   "rating": 10,
   "status": "ACTIVE",
   "title": "DataStore is Available"
}

At the same time, Bob updates the title:

{
   "_version": 2,
   "id": "25",
   "rating": 6,
   "status": "ACTIVE",
   "title": "DataStore is great !"
}

The final item after auto-merge is:

{
   "_version": 3,
   "id": "25",
   "rating": 10,
   "status": "ACTIVE",
   "title": "DataStore is great !"
}

Automerge strictly defines merging rules at the field level, based on the type information defined in the GraphQL schema. For example, List and Map are merged, and conflicting updates on scalars (such as numbers and strings) preserve the value existing on the server. Developers can choose other conflict resolution strategies: optimistic concurrency (conflicting updates are rejected) or custom (an AWS Lambda function is called to decide which version is the correct one). You can choose the conflict resolution strategy with amplify update api. You can read more about these different strategies in the AppSync documentation.

The full source code for this demo is available on my git repository. The app has less than 100 lines of code, 20% being just UI related. Notice that I did not write a single line of GraphQL code, everything happens in the Amplify DataStore.

Your Amplify DataStore cloud backend is available in all AWS Regions where AppSync is available, which, at the time I write this post are: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London).

There are no additional charges to use Amplify DataStore in your application; you only pay for the backend resources you use, such as AppSync and DynamoDB (see here and here for the pricing details). Both services have a free tier allowing you to discover and to experiment for free.

Amplify DataStore allows you to focus on the business value of your apps, instead of writing undifferentiated code. I can’t wait to discover the great applications you’re going to build with it.

— seb