All posts by Sébastien Stormacq

New – Amazon EC2 M1 Mac Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ec2-m1-mac-instances/

Last year, during the re:Invent 2021 conference, I wrote a blog post to announce the preview of EC2 M1 Mac instances. I know many of you requested access to the preview, and we did our best but could not satisfy everybody. However, the wait is over. I have the pleasure of announcing the general availability of EC2 M1 Mac instances.

EC2 Mac instances are dedicated Mac mini computers attached through Thunderbolt to the AWS Nitro System, which lets the Mac mini appear and behave like another EC2 instance. It connects to your Amazon Virtual Private Cloud (Amazon VPC), boots from Amazon Elastic Block Store (EBS) volumes, and uses EBS snapshots, Amazon Machine Images (AMIs), security groups, and other AWS services such as Amazon CloudWatch and AWS Systems Manager.

The availability of EC2 M1 Mac instances lets you access machines built around the Apple-designed M1 System on Chip (SoC). If you are a Mac developer re-architecting your apps to natively support Macs with Apple silicon, you can now build and test your apps on AWS and take advantage of all its benefits. Developers building for iPhone, iPad, Apple Watch, and Apple TV will also benefit from faster builds. EC2 M1 Mac instances deliver up to 60 percent better price performance than the x86-based EC2 Mac instances for iPhone and Mac app build workloads.

For example, I tested the time it takes to clean, build, archive, and run the unit tests on a sample project I wrote. The new EC2 M1 Mac instances complete this set of tasks in 49 seconds on average. This is 47.8 percent faster than the same set of tasks running on the previous generation of EC2 Mac instances.
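
For context, this set of tasks is conceptually equivalent to the following xcodebuild invocations; the project and scheme names are placeholders, not the actual sample project:

# clean, build, archive, and run the unit tests (names are placeholders)
xcodebuild clean archive -project MyApp.xcodeproj -scheme MyApp
xcodebuild test -project MyApp.xcodeproj -scheme MyApp -destination 'platform=macOS'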

To see how to launch an EC2 M1 Mac instance from the AWS Management Console or the AWS Command Line Interface (CLI), I invite you to read my last blog post on the subject.
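
As a quick reminder from that post, EC2 Mac instances run on Dedicated Hosts: you first allocate a host and then launch the instance on it. A minimal sketch with the CLI, using a placeholder AMI ID and host ID, looks like this:

# allocate a Dedicated Host for an M1 Mac instance (the AZ is an example)
aws ec2 allocate-hosts                 \
       --instance-type mac2.metal      \
       --availability-zone us-east-1a  \
       --quantity 1

# launch the instance on the allocated host (IDs are placeholders)
aws ec2 run-instances                  \
       --instance-type mac2.metal      \
       --image-id ami-0123456789       \
       --placement HostId=h-0123456789example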

EC2 Mac M1 Instance

During the six months of the preview, we collected your feedback and fine-tuned the service to your needs.

We’ve added a new FAQ section to our documentation to help you get started with EC2 M1 Mac instances. Agents for management and observability, such as Systems Manager and CloudWatch, are preinstalled on all our macOS AMIs, along with tools such as the AWS Command Line Interface (CLI) and our AWS SDKs. EC2 M1 Mac instances integrate with other AWS services, such as Amazon Elastic File System (Amazon EFS) for file storage, AWS Auto Scaling, and AWS Secrets Manager.

For example, I am using Secrets Manager to securely store my build secrets, such as the signing keys and certificates used to sign my binaries before distributing them on the App Store. From my laptop, I first export the certificate from the macOS keychain. I then upload my certificate to Secrets Manager with this command:

aws secretsmanager create-secret            \
       --name apple-signing-dev-certificate \
       --secret-binary fileb://./secrets/apple_dev_seb.p12 

On the EC2 M1 Mac instance, to prepare my instance before the build phase, I download the certificate, decode it (it is base64-encoded), and store it in the EC2 M1 Mac instance keychain, where the codesign tool will find it during the build.

# download the certificate from Secrets Manager
SIGNING_DEV_KEY=$(aws secretsmanager get-secret-value   \
      --secret-id apple-signing-dev-certificate         \
      --query SecretBinary --output text)

# save the certificate as a file
echo "$SIGNING_DEV_KEY" | base64 -d > seb_dev_certificate.p12

# import the certificate in the keychain 
security import seb_dev_certificate.p12 \
                -P "my_cert_password"   \
                -k my.dev.keychain      \
                -T /usr/bin/security -T /usr/bin/codesign -T /usr/bin/xcodebuild

# delete the certificate from disk
rm seb_dev_certificate.p12

There are a few more configuration steps to get code signing to work from the macOS command line. You can check out this presentation I made or my code repository for the details.
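
For reference, a minimal sketch of those extra steps might look like the following; the keychain name and password are placeholders:

# create and unlock a dedicated keychain
security create-keychain -p "my_keychain_password" my.dev.keychain
security unlock-keychain -p "my_keychain_password" my.dev.keychain

# add the keychain to the user search list so the build tools can find it
security list-keychains -d user -s my.dev.keychain login.keychain

# allow Apple tools to use the imported key without a UI prompt
security set-key-partition-list -S apple-tool:,apple:,codesign: \
                                -s -k "my_keychain_password" my.dev.keychain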

We are preparing a couple of events to help you learn more about EC2 M1 Mac instance use cases and configuration. First, we recently hosted an online webinar on how to take advantage of EC2 Mac instances for iOS development; the content is available on demand after a free registration step. Second, we are preparing a one-day, in-person developer conference for later this year. The conference agenda will be packed with technical content and workshops. Stay tuned on social media to learn more about it.

Last but not least, and although not related to EC2 Mac instances, the Apple WWDC 2022 conference took place last month, from June 6–8, 2022, and the content is available online. It is a great occasion to learn more about development for Apple systems in general.

And now, go build 😉

— seb

New – High Volume Outbound Communication with Amazon Connect Outbound Campaigns

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-high-volume-outbound-communication-with-amazon-connect-outbound-campaigns/

The new high-volume outbound communication capability in Amazon Connect, which was announced at Enterprise Connect last year, is now generally available. It is named Amazon Connect outbound campaigns.

If you haven’t heard about Amazon Connect, it is an easy-to-use cloud contact center service that helps companies of any size deliver superior customer service at lower cost. You can read the original blog post Jeff wrote at launch in 2017, with amazing Lego art 🙂

Contact centers not only receive calls and communications, but they also send outbound communications to customers. There are a variety of reasons to send outbound communication: appointment reminders, telemarketing, subscription renewals, and billing reminders. The vast majority of these communications are phone calls, and in many contact centers, agents make the calls manually using customer contact lists in external systems. Since customers only answer about ten percent of calls, these agents can spend nearly half of their time dialing and waiting. This can result in millions of dollars in lost productivity each year for a contact center with as few as 200 agents.

To help you address this challenge, today we are adding Amazon Connect outbound campaigns, a set of high-volume outbound communication capabilities that allows you to proactively reach more of your customers across voice, SMS, and email. With this capability, you have a scalable way to proactively reach hundreds to millions of your customers, increase your agents’ productivity, and lower your operational costs.

Amazon Connect outbound campaigns delivers a predictive phone dialer. The dialer includes an answering machine detection system powered by machine learning: it automatically detects answering machines on voice calls and passes calls to agents only when a call is answered by a human. The dialer also adjusts the call rate depending on factors such as the percentage of calls answered by a human, call duration, and agent availability. No integration work is required to benefit from existing Amazon Connect features, such as automated workflows, routing, and machine learning capabilities like Contact Lens. You now have a single system for inbound and outbound communications.

To further refine the customer experience or use multiple channels in your campaigns, for example, to send an SMS or email message to your customers when they do not answer calls, you have the option to use Amazon Pinpoint. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. It allows you to define customer segments, define the customer journey, define the contact strategy, and more. Amazon Pinpoint is the system handling high-volume SMS and email campaigns.

To better understand how Amazon Connect, Amazon Pinpoint, and other AWS services work together, you can refer to this very detailed blog post.

Let me show you how it works
Imagine I am a contact center manager, and I want to create an outbound call campaign to target a selected list of customers.

I first import my customer contact list from a spreadsheet on Amazon S3. I may also import it from popular customer relationship management (CRM) and marketing automation applications, such as Marketo, Salesforce, Twilio’s Segment, ServiceNow, Shopify, Zendesk, and Amazon Pinpoint itself.

Amazon Connect outbound campaigns - import contact 2

Then I create a campaign and define some journey parameters: the communication channel, the start time, and the corresponding content, such as a call script, email template, or SMS message. At the scheduled start time, the journey is executed using Amazon Connect for calls or Amazon Pinpoint for SMS or emails, as specified.

Amazon Connect outbound campaigns - create campaign

When I configure the campaign to run in Predictive dial mode, as I mentioned before, the dialer automatically adjusts the dial rate based on the duration of calls and the real-time availability of agents. Once a call is answered, Amazon Connect distinguishes whether it is a live voice or a recorded message and routes the live customer to an available agent in the Amazon Connect agent application, where the agent can see the call script that I specified during setup, along with relevant customer information.

As explained earlier, I may use Amazon Pinpoint to define the customer journey. By doing so, I can combine voice, email, and SMS channels in the same outbound communication campaign to improve the efficiency of my agents and my customer’s experience. For example, a financial institution can use Amazon Connect to send an SMS notification to remind a customer of a missed payment and include a link to request a call back from an agent. When a call is requested, Amazon Connect automatically queues the call, dials the customer’s number, detects their voice, and connects an available agent to the customer.

Amazon Connect outbound campaigns - journey workflow

Amazon Pinpoint allows you to define the details of the customer journey.

Amazon Connect outbound campaigns - setup quiet times

As usual with AWS services, I can analyze contact events sent via Amazon EventBridge. EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated software-as-a-service (SaaS) applications, and AWS services. By filtering or analyzing events posted to EventBridge, I can create metrics such as time to connect to an agent, duration of the contact, and call abandonment rate.

These metrics help me understand the status of my campaign and ensure compliance with applicable regulations, such as maximum call abandonment rates. I also can use historical reports of these metrics to understand the effectiveness of all my communications campaigns over time.
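
As an illustration, here is a minimal sketch of an EventBridge rule matching Amazon Connect contact events; the rule name is mine, and you would still add a target (an AWS Lambda function, for example) to process the events:

# create a rule that matches events emitted by Amazon Connect
aws events put-rule                                  \
       --name connect-contact-events                 \
       --event-pattern '{"source":["aws.connect"]}'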

Amazon Connect outbound campaigns - journey metrics

Speaking of compliance, we do not want anyone to abuse the system, intentionally or not, or to break any local compliance rules.

Access and Compliance
Using automated services to drive outbound communication campaigns is strictly regulated in several countries and territories. For example, the US adopted the Telephone Consumer Protection Act (TCPA) in 1991, and the United Kingdom’s Office of Communications has similar rules.

Amazon Connect outbound campaigns gives you the tools to stay compliant with these regulations and many others. However, just like with traditional IT security, it is a shared responsibility. It is your responsibility to use the service in a compliant manner. We are happy to assist you in addressing specific use cases.

Let me share two examples to illustrate how Amazon Connect outbound campaigns can help you stay compliant: respecting quiet times and monitoring the call abandonment rate.

The use of quiet times allows contact center managers to configure a schedule for channel communications based on the day of the week and the hours of the day. More precise delivery times mean your customers are more likely to engage with the communication, increasing metrics such as open rates for SMS and email, as well as pick-up rates for voice calls. It also allows contact center managers to follow country- and state-level voice dialing legislation. The following screenshot shows how you can configure quiet times using Amazon Pinpoint.

Amazon Connect outbound campaigns - quiet times

According to the TCPA, the call abandonment rate is the percentage of calls picked up by a live customer but not connected to a live agent within two seconds after the customer greeting. I found it interesting that in the UK, the time is measured from the start of the customer’s greeting, while in the US, it is measured from the end of the greeting. Amazon Connect outbound campaigns provides you with metrics, such as customerGreetingStart, customerGreetingStop, and connectedToAgent, for each outbound communication. Contact center managers can use these to compute the abandonment rate and dial the outgoing communication channel up or down accordingly.

Other metrics, configuration parameters, and AWS Lambda API integrations allow contact center managers to consult a Do-Not-Call (DNC) registry, perform list scrubbing, and verify your customers’ local time zones or bank holiday calendars, just to name a few.

Pricing and Availability
Amazon Connect outbound campaigns is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London) AWS Regions. This allows you to start your outbound campaigns for customers in the USA, UK, Australia, and New Zealand.

As usual, pricing is based on your usage; you only pay for what you use, with no upfront fees or minimum engagement. The key pricing metric is the number of minutes of outbound calls. The pricing page has all the details.

And now, go build your contact centers.

— seb

Modernize Your Mainframe Applications & Deploy Them In The Cloud

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/modernize-your-mainframe-applications-deploy-them-in-the-cloud/

Today, we are launching the AWS Mainframe Modernization service to help you modernize your mainframe applications and deploy them to fully managed runtime environments on AWS. This new service also provides tools and resources to help you plan and implement your migration and modernization.

Since the introduction of System/360 on April 7, 1964, mainframe computers have enabled many industries to transform themselves. The mainframe has revolutionized the way people buy things, how people book and purchase travel, and how governments manage taxes or deliver social services. Two-thirds of the Fortune 100 companies have their core businesses located on a mainframe. And according to a 2018 estimate, $3 trillion ($3 x 10^12) in daily commerce flows through mainframes.

Mainframes use their very own set of technologies: programming languages such as COBOL, PL/1, and Natural, to name a few, and databases and data files such as VSAM, DB2, IMS DB, or Adabas. They also run “application servers” (or transaction managers, as we call them) such as CICS or IMS TM. Recent IBM mainframes also run applications developed in the Java programming language and deployed on WebSphere Application Server.

Many of our customers running mainframes told us they want to modernize their mainframe-based applications to take advantage of the AWS cloud. They want to increase their agility and their capacity to innovate, gain access to a growing pool of talents with experience running workloads on AWS, and benefit from the continual AWS trend of improving cost/performance ratio.

Application modernization is a journey composed of four phases:

  • First, you assess the situation. Are you ready to migrate? You define the business case and educate the migration team.
  • Second, you mobilize. You kick off the project, identify applications for a proof of concept, and refine your migration plan and business cases.
  • Third, you migrate and modernize. For each application, you run in-depth discovery, decide on the right application architecture and migration journey, replatform or refactor the code base, and test and deploy to production.
  • Last, you operate and optimize. You monitor deployed applications, manage resources, and ensure that security and compliance are up to date.

AWS Mainframe Modernization helps you during each phase of your journey.

Assess and Mobilize
During the assessment and mobilization phase, you have access to analysis and development tools to discover the scope of your application portfolio and to transform source code as needed. Typically, the service helps you discover the assets of your mainframe applications and identify all the data and other dependencies. We provide you with integrated development environments where you can adapt or refactor your source code, depending on whether you are replatforming or refactoring your applications.

Application Automated Refactoring
You may choose to use the automated refactoring pattern, where mainframe application assets are automatically converted into a modern language and ecosystem. With automated refactoring, AWS Mainframe Modernization uses Blu Age tools to convert your COBOL, PL/1, or JCL code to Java services and scripts. It generates modern code, data access, and data format by implementing patterns and rules to transform screens, indexed files, and batch applications to a modern application stack.

AWS Mainframe Modernization Refactoring

Application Replatforming
You may also choose to replatform your applications, meaning move them to AWS with minimal changes to the source code. When replatforming, the fully-managed runtime comes preinstalled with the Micro Focus mainframe-compatible components, such as transaction managers, data mapping tools, screen and maps readers, and batch execution environments, allowing you to run your application with minimum changes.

AWS Mainframe Modernization Replatforming

This blog post can help you learn more about nuances between replatforming and refactoring.

DevOps For Your Mainframe Applications
AWS Mainframe Modernization service provides you with AWS CloudFormation templates to easily create continuous integration and continuous deployment pipelines. It also deploys and configures monitoring services to monitor the managed runtime. This allows you to maintain or continue to evolve your applications once migrated, using best practices from Agile and DevOps methodologies.

Managed Services
AWS Mainframe Modernization takes care of the undifferentiated heavy lifting and provides you with fully managed runtime environments based on 15 years of cloud architecture best practices in terms of security, high availability, scalability, system management, and using infrastructure as code. These are all important for the business-critical applications running on mainframes.

The analysis tools, development tools, and the replatforming or refactoring runtimes come preinstalled and ready to use. But there is much more than preinstalled environments. The service deploys and manages the whole infrastructure for you. It deploys the required network and load balancers and configures log collection with Amazon CloudWatch, among other things. It manages application versioning, deployments, and high-availability dependencies. This saves you days of designing, testing, automating, and deploying your own infrastructure.

The fully managed runtime includes extensive automation and managed infrastructure resources that you can operate via the AWS console, the AWS Command Line Interface (CLI), and application programming interfaces (APIs). This removes the burden and undifferentiated heavy lifting of managing a complex infrastructure. It allows you to spend time and focus on innovating and building new capabilities.

Let’s Deploy an App
As usual, I like to show you how it works. I am using a demo banking application. The application has been replatformed and is available as two .zip files: the first one contains the application binaries, and the second one the data files. I uploaded the content of these zipped files to an Amazon Simple Storage Service (Amazon S3) bucket. As part of the prerequisites, I also created an Amazon Aurora PostgreSQL database, stored its username and password in AWS Secrets Manager, and created an encryption key in AWS Key Management Service (KMS).

Sample Banking Application files
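
For these prerequisites, the uploads and the secret creation are plain CLI calls. Here is a sketch, with bucket, path, and secret names of my choosing:

# upload the content of the two extracted .zip files
aws s3 sync ./bankdemo/binaries s3://awsnewsblog-samplebank/binaries/
aws s3 sync ./bankdemo/data     s3://awsnewsblog-samplebank/data/

# store the Aurora database credentials in Secrets Manager
aws secretsmanager create-secret                \
       --name samplebank-db-credentials         \
       --secret-string '{"username":"admin","password":"CHANGE_ME"}'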

Create an Environment
Let’s deploy and run the BankDemo sample application in an AWS Mainframe Modernization managed runtime environment with the Micro Focus runtime engine. For brevity, I highlight only the main steps. The full tutorial is available as part of the service documentation.

I open the AWS Management Console and navigate to AWS Mainframe Modernization. I navigate to Environments and select Create environment.

AWS Mainframe Modernization - Create Environment

I give the environment a name and select Micro Focus runtime since we are deploying a replatformed application. Then I select Next.

AWS Mainframe Modernization - Create Environment 2

In the Specify Configurations section, I leave all the default values: a Standalone runtime environment, the M2.m5.large EC2 instance type, and the default VPC and subnets. Then I select Next.

AWS Mainframe Modernization - Create Environment 3

On the Attach Storage section, I mount an EFS endpoint as /m2/mount/demo. Then I select Next.

AWS Mainframe Modernization - Create Environment 4

In the Review and create section, I review my configuration and select Create environment. After a while, the environment status switches to Available.

AWS Mainframe Modernization - environment available

Create an Application
Now that I have an environment, let’s deploy the sample banking application on it. I select the Applications section and select Create application.

AWS Mainframe Modernization - Create Application

I give my application a name, and under Engine type, I select Micro Focus.

AWS Mainframe Modernization - Create Application 2

In the Specify resources and configurations section, I enter a JSON definition of my application. The JSON tells the runtime environment where my application’s various files are located and how to access Secrets Manager. You can find a sample JSON file in the tutorial section of the documentation.

AWS Mainframe Modernization - Create Application 3

In the last section, I Review and create the application. I select Create application. After a moment, the application becomes available.

AWS Mainframe Modernization - application is available

Once available, I deploy the application to the environment. I select the AWSNewsBlog-SampleBank app, then I select the Actions dropdown menu, and I select Deploy application.

AWS Mainframe Modernization - deploy the app

After a while, the application status changes to Ready.

Import Data sets
The last step before starting the application is to import its data sets. In the navigation pane, I select Applications, then choose AWSNewsBlog-SampleBank. I then select the Data sets tab and select Import. I may either specify the data set configuration values individually using the console or provide the location of an S3 bucket that contains a data set configuration JSON file.

AWS Mainframe Modernization - import data sets

I use the JSON file provided by the tutorial in the documentation. Before uploading the JSON file to S3, I replace the $S3_DATASET_PREFIX variable with the actual value of my S3 bucket and prefix. For this example, I use awsnewsblog-samplebank/catalog.
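
A one-line substitution does the trick; the file names here are mine:

# replace the placeholder with the actual bucket and prefix, then upload
sed 's|$S3_DATASET_PREFIX|awsnewsblog-samplebank/catalog|g' \
    datasets-template.json > datasets.json
aws s3 cp datasets.json s3://awsnewsblog-samplebank/datasets.json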

AWS Mainframe Modernization - import data sets 2

After a while, the data set status changes to Completed.

My application and its data set are now deployed into the cloud.

Start the Application
The last step is to start the application. I navigate to the Applications section. I then select AWSNewsBlog-SampleBank. In the Actions dropdown menu, I select Start application. After a moment, the application status changes to Running.

AWS Mainframe Modernization - application running

Access the Application
To access the application, I need a 3270 terminal emulator. Depending on your platform, a couple of options are available. I choose to use a web-based TN3270 client provided by Micro Focus and available on AWS Marketplace. I configure the terminal emulator to point to the AWS Mainframe Modernization environment endpoint, using port 6000.

TN3270 Configuration

Once the session starts, I receive the CICS welcome prompt. I type BANK and press ENTER to start the app. I authenticate with user BA0001 and password A. The main application menu is displayed. I select the first option of the menu and press ENTER.

TN3270 SampleBank demo

Congrats, your replatformed application has been deployed in the cloud and is available through a standard IBM 3270 terminal emulator.

Pricing and Availability
AWS Mainframe Modernization service is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Ireland), and South America (São Paulo).

You only pay for what you use. There are no upfront costs. Third-party license costs are included in the hourly price. Runtime environments for refactored applications, based on Blu Age, start at $2.50/hour. Runtime environments for replatformed applications, based on Micro Focus, start at $5.55/hour. This includes the software licenses (Blu Age or Micro Focus). As usual, AWS Support plans are available. They also cover Blu Age and Micro Focus software.

Committed plans are available for pricing discounts. The pricing details are available on the service pricing page.

And now, go build 😉

— seb

New – Amazon EC2 C7g Instances, Powered by AWS Graviton3 Processors

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ec2-c7g-instances-powered-by-aws-graviton3-processors/

I am excited to announce that Amazon Elastic Compute Cloud (Amazon EC2) C7g instances, powered by the latest AWS Graviton3 processors and available in preview since re:Invent last year, are now generally available.

Let’s decompose the name C7g: the “C” instance family is designed for compute-intensive workloads. This is the 7th generation of this instance family. And the “g” means it is based on AWS Graviton, the silicon designed by AWS. These instances are the first instances to be powered by the latest generation of AWS Graviton, the Graviton3 processors.

As you bring more diverse workloads to the cloud, and as your compute, storage, and networking demands increase at a rapid pace, you are asking us to push the price performance boundary even further so that you can accelerate your migration to the cloud and optimize your costs. Additionally, you are looking for more energy-efficient compute options to help you reduce your carbon footprint and achieve your sustainability goals. We do this by working backward from your requests and innovating at a rapid pace across all levels of the AWS infrastructure. Our Graviton chips offer better performance at lower cost, along with enhanced capabilities. For example, AWS Graviton3 processors offer you enhanced security with always-on memory encryption, dedicated caches for every vCPU, and support for pointer authentication.

Let’s illustrate this with numbers. When we launched Graviton2-based instances, they provided up to 40 percent better price/performance for a wide variety of workloads over comparable fifth-generation x86-based instances. We now have 12 instance families (M6g, M6gd, C6g, C6gd, C6gn, R6g, R6gd, T4g, X2gd, Im4gn, Is4gen, and G5g) that are powered by AWS Graviton2 processors that provide significant price performance benefits for a wide range of workloads. In 2021, we saw tens of thousands of AWS customers take advantage of this innovation by using Graviton2-based EC2 instances.

Our next generation, Graviton3 processors, deliver up to 25 percent higher performance, up to 2x higher floating-point performance, and 50 percent faster memory access based on leading-edge DDR5 memory technology compared with Graviton2 processors.

Graviton3 also uses up to 60 percent less energy for the same performance as comparable EC2 instances, which helps you reduce your carbon footprint.

Snap Inc., known for its popular social media services such as Snapchat and Bitmoji, adopted AWS Graviton2-based instances to optimize their price performance on Amazon EC2. Aaron Sheldon, software engineer at Snap, told us: “We trialed the new AWS Graviton3-based Amazon EC2 C7g instances and found that they provide significant performance improvements on real workloads compared to previous generation C6g instances. We are excited to migrate our Graviton2-based workloads to Graviton3, including messaging, storage, and friend graph workloads.”

The C7g instances are available in eight sizes with 1, 2, 4, 8, 16, 32, 48, and 64 vCPUs. C7g instances support configurations up to 128 GiB of memory, 30 Gbps of network performance, and 20 Gbps of Amazon Elastic Block Store (EBS) performance. These instances are powered by the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor.

The following table summarizes the key characteristics of each instance type in this family.

Instance Name    vCPUs    Memory     Network Bandwidth     EBS Bandwidth
c7g.medium       1        2 GiB      up to 12.5 Gbps       up to 10 Gbps
c7g.large        2        4 GiB      up to 12.5 Gbps       up to 10 Gbps
c7g.xlarge       4        8 GiB      up to 12.5 Gbps       up to 10 Gbps
c7g.2xlarge      8        16 GiB     up to 15 Gbps         up to 10 Gbps
c7g.4xlarge      16       32 GiB     up to 15 Gbps         up to 10 Gbps
c7g.8xlarge      32       64 GiB     15 Gbps               10 Gbps
c7g.12xlarge     48       96 GiB     22.5 Gbps             15 Gbps
c7g.16xlarge     64       128 GiB    30 Gbps               20 Gbps

C7g instances are initially available in US East (N. Virginia) and US West (Oregon) AWS Regions; other Regions will be added shortly after launch.

As usual, you can purchase C7g capacity On-Demand, as Reserved Instances, or as Spot Instances, and use your Savings Plans. The pricing details are available on the EC2 pricing page.
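
Launching a C7g instance works like launching any other instance type, as long as the AMI is built for the 64-bit Arm architecture. A minimal sketch, with placeholder AMI and key pair names:

# the AMI must be built for the arm64 architecture
aws ec2 run-instances             \
       --instance-type c7g.xlarge \
       --image-id ami-0123456789  \
       --count 1                  \
       --key-name my-key-pair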

I have the chance to talk with AWS customers on a daily basis, and many of my discussions are around price performance and the sustainability of their workloads. With more than 500 instance types to choose from, one question I often receive is: what are the workloads that would benefit from C7g?

You will find that C7g instances provide the best price performance within their instance families for a broad spectrum of compute-intensive workloads, including application servers, microservices, high-performance computing, electronic design automation, gaming, media encoding, and CPU-based ML inference. These instances are ideal for all Linux-based workloads, including containerized and microservice-based applications built using Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Container Registry, Kubernetes, and Docker, and written in popular programming languages such as C/C++, Rust, Go, Java, Python, .NET Core, Node.js, Ruby, and PHP.

The next question I receive is: given that Graviton instances are based on Arm architecture, how difficult is it to migrate from x86?

Graviton3 instances are supported by a broad choice of operating systems, independent software vendors, container services, agents, and developer tools, enabling you to migrate your workloads with minimal effort.

Applications and scripts written in high-level programming languages such as Python, Node.js, Ruby, Java, or PHP will typically just require a redeployment. Applications written in lower-level programming languages such as C/C++, Rust, or Go will require a re-compilation.
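
As an example of that recompilation step, here is how it might look for a Go application, either compiled natively or packaged as a multi-architecture container image; the application and image names are placeholders:

# cross-compile a Go application for 64-bit Arm (Graviton)
GOOS=linux GOARCH=arm64 go build -o myapp-arm64 .

# or build and push a multi-architecture container image with Docker Buildx
docker buildx build --platform linux/amd64,linux/arm64 \
       -t myrepo/myapp:latest --push .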

But you don’t always need to migrate your applications. Several managed services are already based on Graviton, such as Amazon ElastiCache, Amazon EKS, Amazon ECS, Amazon Relational Database Service (RDS), Amazon EMR, Amazon Aurora, and Amazon OpenSearch Service, and your applications can benefit from Graviton with minimal effort. A French customer told me recently that they migrated a significant portion of their Amazon EMR clusters to Graviton with just a one-line change in their Terraform scripts; all the rest worked as-is.

For those of you building with serverless, we have also released Graviton support for AWS Fargate and AWS Lambda, extending the price performance and efficiency benefits of Graviton to serverless workloads. Lambda functions using Graviton2 can see up to 34 percent better price/performance.
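
Targeting Graviton from Lambda is a single parameter at function creation (or update) time. A sketch, with placeholder names and role ARN:

aws lambda create-function                                   \
       --function-name my-function                           \
       --runtime python3.9                                   \
       --architectures arm64                                 \
       --handler app.handler                                 \
       --role arn:aws:iam::123456789012:role/my-lambda-role  \
       --zip-file fileb://function.zip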

Reducing the carbon footprint of your organization is also of paramount importance. Reducing the carbon footprint of cloud-based workloads is a shared responsibility between you and us. We do our part by innovating at all levels: from the materials used to build our facilities, the usage of water for cooling, and the production of renewable energy, down to inventing new silicon that is more energy efficient. To help you meet your own sustainability goals, we added a sustainability pillar to the AWS Well-Architected Framework, and we released the Customer Carbon Footprint tool. Graviton3 fits into that context: it uses up to 60 percent less energy for the same performance as comparable EC2 instances.

We do our part in this shared responsibility model, and now it is your turn. You can use our innovations and tools to help you optimize your workloads and use only the resources you need. Take the opportunity to write clever code that uses fewer CPU cycles, less storage, or less network bandwidth. And be sure to select energy-efficient options, such as Graviton3-based instance types or managed services, when deploying your code.

To help you get started migrating your applications to Graviton instance types today, we curated this list of technical resources. Have a look at it. To learn more about Graviton-based instances, visit the Graviton page or the C7g page and check out this video:

If you’d like to get started with Graviton-based instances for free, we also just reintroduced the free trial on T4g.small instances for up to 750 hours/month until the end of this year (December 31, 2022).

And now, go build 😉

— seb

AWS Week In Review – May 23, 2022

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-27-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

This is the right place to quickly learn about recent AWS news from last week, in just about five minutes or less. This week, I have collected a couple of news items that might be of interest to you, the IT professionals, developers, system administrators, or any type of builder who has their hands on the AWS console or the CLI, or who is writing code.

Last Week’s Launches
The launches that caught my attention last week are the following:

EC2 now supports NitroTPM and UEFI Secure Boot – A Trusted Platform Module is often a discrete chip in a computer where you can store secrets and release them to the operating system only when the system is in a known good state. You typically use a TPM to store operating-system-level volume encryption keys, such as the ones used by BitLocker on Windows or LUKS on Linux. NitroTPM is a virtual TPM available on selected instance families that allows you to deploy workloads that depend on TPM functionality on EC2 instances.

Amazon EC2 Auto Scaling now backfills predictive scaling forecasts so you can quickly validate forecast accuracy – Predictive scaling is a capability of Auto Scaling that allows you to scale your fleet in and out based on observed usage patterns. It uses AI/ML to predict when your fleet needs more or less capacity. It allows you to scale a fleet in advance of the scaling event and have the fleet prepared at peak times. The new backfill shows you how predictive scaling would have scaled your fleet during the last 14 days. This allows you to quickly decide if the predictive scaling policy is accurate for your applications by comparing the demand and capacity forecasts against actual demand immediately after you create a predictive scaling policy.

AWS Backup adds support for two new managed file systems, Amazon FSx for OpenZFS and Amazon FSx for NetApp ONTAP – These additions help you meet your centralized data protection and regulatory compliance needs. You can now use AWS Backup’s policy-based capabilities to centrally protect Amazon FSx for NetApp ONTAP or Amazon FSx for OpenZFS, along with the other AWS services for storage, database, and compute that AWS Backup supports.

AWS App Mesh now supports IPv6 – AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. The new support for IPv6 allows you to support workloads running in IPv6 networks and to invoke App Mesh APIs over IPv6. This helps you meet IPv6 compliance requirements and removes the need for complex networking configuration to handle address translation between IPv4 and IPv6.

Amazon Chime SDK now supports video background replacement and blur on iOS and Android – When you want to integrate audio and video call capabilities in your mobile applications, the Chime SDK is the easiest way to get started. It provides an easy-to-use API that uses the scalable and robust Amazon Chime backend to power your communications. For example, Slack uses Chime as the backend for the calls in its apps. The Chime SDK client libraries for iOS and Android now include video background replacement and blur, which developers can use to reduce visual distractions and help increase visual privacy for mobile users.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

Amazon Redshift: Ten years of continuous reinvention. This is an Amazon Redshift research paper that will be presented at a leading international forum for database researchers. The authors reflect on how far the first petabyte-scale cloud data warehouse has advanced since it was announced ten years ago.

Improve Your Security at the Edge with AWS IoT Services is a new blog post on the IoT channel. We understand the risks associated with operating at the edge and that you need additional capabilities to ensure that your data is protected. AWS IoT services can help you with end-to-end data protection, device security, and device identification to create the foundation of an expanded information security model and confidently operate at the edge.

AWS Open Source News and Updates – Ricardo Sueiras, my colleague from the AWS Developer Relation team, runs this newsletter. It brings you all the latest open-source projects, posts, and more. Read edition #113 here.

Upcoming AWS Events
CDK Day, on May 26, is a one-day, fully virtual event dedicated to the AWS Cloud Development Kit. With four versions of the CDK released (AWS, Terraform, CDK8s, and Projen), we thought the CDK deserves its own full-fledged conference. We will take one day and showcase the brightest and best of CDK from across the whole product family. Let’s talk serverless, Kubernetes, and multi-cloud all on the same day! CDK Day will be live-streamed to our YouTube channel. Book your ticket now; it’s free.

The AWS Summit season is mostly over in Europe, but there are upcoming Summits in North America and the Asia Pacific Regions. Here are some virtual and in-person Summits that might be close to you:

More to come in July, August, and September.

You can register for re:MARS to get fresh ideas on topics such as machine learning, automation, robotics, and space. The conference will be in person in Las Vegas, June 21–24.

That’s all for this week. Check back next Monday for another Week in Review!

— seb

Amazon EC2 Now Supports NitroTPM and UEFI Secure Boot

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-ec2-now-supports-nitrotpm-and-uefi-secure-boot/

In computing, Trusted Platform Module (TPM) technology is designed to provide hardware-based, security-related functions. A TPM chip is a secure crypto-processor that is designed to carry out cryptographic operations. There are three key advantages of using TPM technology. First, you can generate, store, and control access to encryption keys outside of the operating system. Second, you can use a TPM module to perform platform device authentication by using the TPM’s unique RSA key, which is burned into it. And third, it may help to ensure platform integrity by taking and storing security measurements.

During re:Invent 2021, we announced the future availability of NitroTPM, a virtual TPM 2.0-compliant TPM module for your Amazon Elastic Compute Cloud (Amazon EC2) instances, based on AWS Nitro System. We also announced Unified Extensible Firmware Interface (UEFI) Secure Boot availability for EC2.

I am happy to announce that you can start using both NitroTPM and UEFI Secure Boot today in all AWS Regions outside of China, including the AWS GovCloud (US) Regions.

You can use NitroTPM to store secrets, such as disk encryption keys or SSH keys, outside of the EC2 instance memory, protecting them from applications running on the instance. NitroTPM leverages the isolation and security properties of the Nitro System to ensure only the instance can access these secrets. It provides the same functions as a physical or discrete TPM. NitroTPM follows the ISO TPM 2.0 specification, allowing you to migrate existing on-premises workloads that leverage TPMs to EC2.

The availability of NitroTPM unlocks a couple of use cases to strengthen the security posture of your EC2 instances, such as secured key storage and access for OS-level volume encryption or platform attestation for measured boot or identity access.

Secured Key Storage and Access
NitroTPM can create and store keys that are wrapped and tied to certain platform measurements (known as Platform Configuration Registers – PCR). NitroTPM unwraps the key only when those platform measurements have the same value as they had at the moment the key was created. This process is referred to as “sealing the key to the TPM.” Decrypting the key is called unsealing. NitroTPM only unseals keys when the instance and the OS are in a known good state. Operating systems compliant with TPM 2.0 specifications use this mechanism to securely unseal volume encryption keys. You can use NitroTPM to store encryption keys for BitLocker on Microsoft Windows. Linux Unified Key Setup (LUKS) or dm-verity on Linux are examples of OS-level applications that can leverage NitroTPM too.

Platform Attestation
Another key feature that NitroTPM provides is “measured boot,” a process where the bootloader and operating system extend PCRs with measurements of the software or configuration that they load during the boot process. This improves security in the event that, for example, a malicious program overwrites part of your kernel with malware. With measured boot, you can also obtain signed PCR values from the TPM and use them to prove to remote servers that the boot state is valid, enabling remote attestation support.

How to Use NitroTPM
There are three prerequisites to start using NitroTPM:

  • You must use an operating system that has Command Response Buffer (CRB) drivers for TPM 2.0, such as recent versions of Windows or Linux. We tested the following OSes: Red Hat Enterprise Linux 8, SUSE Linux Enterprise Server 15, Ubuntu 18.04, Ubuntu 20.04, and Windows Server 2016, 2019, and 2022.
  • You must deploy it on a Nitro-based EC2 instance. At the moment, we support all Intel and AMD instance types that support UEFI boot mode. Graviton1, Graviton2, Xen-based, Mac, and bare-metal instances are not supported.
  • Note that NitroTPM does not work today with some additional instance types, but support for these instance types will come soon after the launch. The list is: C6a, C6i, G4ad, G4dn, G5, Hpc6a, I4i, M6a, M6i, P3dn, R6i, T3, T3a, U-12tb1, U-3tb1, U-6tb1, U-9tb1, X2idn, X2iedn, and X2iezn.
  • When you create your own AMI, it must be flagged to use UEFI as boot mode and NitroTPM. Windows AMIs provided by AWS are flagged by default. Linux-based AMI are not flagged by default; you must create your own.

How to Create an AMI with TPM Enabled
AWS provides AMIs for multiple versions of Windows with TPM enabled. I can verify if an AMI supports NitroTPM using the DescribeImages API call. For example:

aws ec2 describe-images --image-ids ami-0123456789

When NitroTPM is enabled for the AMI, "TpmSupport": "v2.0" appears in the output, as in the following example.

{
   "Images": [
      {
         ...
         "BootMode": "uefi",
         "TpmSupport": "v2.0"
      }
   ]
}

I may also query for tpmSupport using the DescribeImageAttribute API call.
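
For example, reusing the AMI ID from the previous command:

aws ec2 describe-image-attribute  \
       --image-id ami-0123456789  \
       --attribute tpmSupport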

When creating my own AMI, I may enable TPM support using the RegisterImage API call, by setting boot-mode to uefi and tpm-support to v2.0.

aws ec2 register-image             \
       --region us-east-1           \
       --name my-image              \
       --boot-mode uefi             \
       --architecture x86_64        \
       --root-device-name /dev/xvda \
       --block-device-mappings DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789example} DeviceName=/dev/xvdf,Ebs={VolumeSize=10} \
       --tpm-support v2.0

Now that you know how to create an AMI with TPM enabled, let’s create a Windows instance and configure BitLocker to encrypt the root volume.

A Walk Through: Using NitroTPM with BitLocker
BitLocker automatically detects and uses NitroTPM when available. There is no extra configuration step beyond what you do today to install and configure BitLocker. Upon installation, BitLocker recognizes the TPM module and starts to use it automatically.

Let’s go through the installation steps. I start the instance as usual, using an AMI that has both uefi and TPM v2.0 enabled. I make sure I use a supported version of Windows. Here I am using Windows Server 2022 04.13.

Once connected to the instance, I verify that Windows recognizes the TPM module. To do so, I launch the tpm.msc application, and the Trusted Platform Module (TPM) Management window opens. When everything goes well, it shows Manufacturer Name: AMZN under TPM Manufacturer Information.

Trusted Platform Module Management

Next, I install BitLocker.

I open the servermanager.exe application and select Manage at the top right of the screen. In the dropdown menu, I select Add Roles and Features.

Add roles and features

I select Role-based or feature-based installation from the wizard.

Install BitLocker - Step 1

I select Next multiple times until I reach the Features section. I select BitLocker Drive Encryption, and I select Install.

Install BitLocker - Step 2

I wait for the installation to complete and then restart the server.

After reboot, I reconnect to the server and open the control panel. I select BitLocker Drive Encryption under the System and Security section.

Turn on Bitlocker - part 1

I select Turn on BitLocker, and then I select Next and wait for the verification of the system and the time it takes to encrypt my volume’s data.

Just for extra safety, I decide to reboot at the end of the encryption. It is not strictly necessary, but since I encrypted the root volume of the machine (C:), I want to verify that the machine can still boot.

After the reboot, I reconnect to the instance, and I verify the encryption status.

Turn on Bitlocker - part 2

I also verify BitLocker’s status and the key protection method enabled on the volume. To do so, I open PowerShell and type:

manage-bde -protectors -get C:

Bitlocker status

I can see on the resulting screen that the C: volume encryption key is coming from the NitroTPM module and the instance used Secure Boot for integrity validation. I can also view the recovery key.

I left the recovery key in plain text in the previous screenshot because the instance and volume I used for this demo will no longer exist by the time you read this. Do not share your recovery keys publicly otherwise.

Important Considerations
Now that I have shown how to use NitroTPM to protect BitLocker’s volume encryption key, I’ll go through a couple of additional considerations:

  • You can only enable an AMI for NitroTPM support by using the RegisterImage API via the AWS CLI and not via the Amazon EC2 console.
  • NitroTPM support is enabled by setting a flag on an AMI. After you launch an instance with the AMI, you can’t modify the attributes on the instance. The ModifyInstanceAttribute API is not supported on running or stopped instances.
  • Importing or exporting EC2 instances with NitroTPM, such as with the ImportImage API, will omit NitroTPM data.
  • The NitroTPM state is not included in EBS snapshots. You can only restore an EBS snapshot to the same EC2 instance.
  • BitLocker volumes that are encrypted with TPM-based keys cannot be restored on a different instance. It is possible to change the instance type (stop, change instance type, and restart it).

At the moment, we support all Intel and AMD instance types that support UEFI boot mode. Graviton1, Graviton2, Xen-based, Mac, and bare-metal instances are not supported. Some additional instance types are not supported at launch (I shared the exact list previously). We will add support for these soon after launch.

There is no additional cost for using NitroTPM. It is available today in all AWS Regions, including the AWS GovCloud (US) Regions, except in China.

And now, go build 😉

— seb

AWS Week in Review – April 4, 2022

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-week-in-review-april-4-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Welcome to the April 4 edition of the AWS Week in Review. This week, alongside the main launches, I also captured a couple of new capabilities, such as a new API to manage your AWS accounts within AWS Organizations, an easier process to update your AWS Lambda layers, and a new behavior of Amazon Elastic Compute Cloud (Amazon EC2).

Last Week’s Launches
Here are some launches that caught my attention last week:

Sustainability Pillar is now available in the Well-Architected Tool – The AWS Well-Architected Tool is a central place for cloud architecture best practices and guidance. The Sustainability Pillar was announced at the re:Invent 2021 conference. It helps you learn about, measure, and improve your workloads using environmental best practices for cloud computing.

Close an AWS Member Account with an API Call – This feature was launched with little fanfare, but it is a big deal for those of you managing large numbers of AWS accounts through Organizations.  The Twitter community first spotted the change, noticing a commit in the AWS SDK for Go. See the official blog post announcement for more information!

The Lambda Console Now Allows You to Update a Lambda Layer in All or a Subset of Functions – Lambda layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions. Using layers reduces the size of uploaded deployment archives and makes it faster to deploy your code. Previously, it was challenging to identify and update all the functions that used a specific layer version. With this release, the Lambda console displays a list of all the functions using a given layer and allows you to select multiple functions to update with a newer layer version. It eliminates the need to update one function at a time or to use an external script to perform the update on multiple functions.

Amazon EC2 Launched Automatic Recovery on Hardware Failure by Default – This new feature makes it easier to recover your instance when it becomes unreachable. Automatic recovery improves instance availability by recovering the instance if it becomes impaired due to an underlying hardware issue. Automatic recovery migrates the instance to different underlying hardware during an instance reboot while retaining its instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. You can choose to disable automatic recovery for your instance if you wish.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Besides launches, here are other newsworthy items and a blog post that caught my attention:

New AWS podcast for Sub-Saharan AWS communities – There are AWS podcasts in many different languages: English, French, Italian, German, three in Spanish, and Russian, just to name a few. This week, my colleague Veliswa launched an English-language podcast aimed at highlighting the Sub-Saharan AWS communities and customers. You can listen to it using any good podcast application (Spotify and Apple Podcasts, among others).

100th episode of Le Podcast AWS en Français – This week also marked the publication of the 100th episode of the AWS French Podcast. Since its start in 2019, the podcast has seen 250k downloads. Thank you for listening.

AWS Open Source News and Updates – My colleague Ricardo writes this weekly open-source newsletter. In the 106th edition, I noticed two pieces of information important for the Java community:

First, we released Amazon Corretto 18. This version supports the latest Java feature release OpenJDK 18, and is available on Linux, Windows, and macOS. OpenJDK 18 offers a new internet-address resolution capability, a Simple Web Server, an updated Vector API, a new @snippet Tag for JavaDoc, a new implementation of Core Reflection, a change to UTF-8 as the default character set (charset) of the standard Java APIs, a second iteration of the foreign memory API, advancements in pattern matching for switch statements, and the deprecation of finalization.

Second, we published a blog post showing how to reduce Lambda cold start time by deploying your Java-based Lambda function on Quarkus. Quarkus was created by Java Champion Emmanuel Bernard. It is an open-source, native Java stack tailored for GraalVM and OpenJDK HotSpot, crafted from best-of-breed Java libraries and standards. It is designed to have an extremely low memory footprint and fast startup time. And yes, Quarkus runs on Corretto too.

A Cloud Guru Answers a Common Question – Nearly every week, people ask me what AWS certification they should take. A Cloud Guru walks through the decision in Which AWS certification is right for me?

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

The AWS Summit season has started – The Brussels Summit was last week, and the next ones are Paris, San Francisco, and London, in that order. I will be delivering the closing keynote at the Paris Summit and will be around the Formula1 GameDay area in London. Be sure to stop by and say “Hi!” if you’re around. You can sign up to receive a notification when registration opens for a Summit in your area. If you can’t attend a Summit in person this year, we will have an online Summit for EMEA in June (at European time, but all sessions will stay available on-demand until September).

.NET Enterprise Developer Day EMEA registrations are open – .NET Enterprise Developer Day EMEA 2022 is a free, one-day virtual conference providing enterprise developers with the most relevant information to swiftly and efficiently migrate and modernize their .NET applications and workloads on AWS. It will happen online on April 26, 2022.

re:Mars conference registrations are open – Mars stands for Machine learning, Automation, Robotics, and Space. You will learn from recognized thought leaders and technical experts who are building the future of AI/ML. It will happen in Las Vegas, Nevada, between June 21 and 24, 2022.

re:Inforce conference registrations are open – Security is our first priority at AWS, and it deserves its own two-day conference to reinforce your AWS security posture. You’ll hear the latest from industry-leading speakers in security, compliance, identity, and privacy. It will happen in Boston, Massachusetts, on July 26 and 27, 2022.

That’s all for this week. Come back next Monday for another Week in Review!

— seb

New Amazon RDS for MySQL & PostgreSQL Multi-AZ Deployment Option: Improved Write Performance & Faster Failover

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-rds-multi-az-db-cluster/

Today, we are announcing a new Amazon Relational Database Service (RDS) Multi-AZ deployment option with up to 2x faster transaction commit latency, automated failovers typically under 35 seconds, and readable standby instances.

Amazon RDS offers two replication options to enhance availability and performance:

  • Multi-AZ deployments give high availability and automatic failover. Amazon RDS creates a storage-level replica of the database in a second Availability Zone. It then synchronously replicates data from the primary to the standby DB instance for high availability. The primary DB instance serves application requests, while the standby DB instance remains ready to take over in case of a failure. Amazon RDS manages all aspects of failure detection, failover, and repair actions, so the applications using the database can be highly available.
  • Read replicas allow applications to scale their read operations across multiple database instances. The database engine replicates data asynchronously to the read replicas. The application sends the write requests (INSERT, UPDATE, and DELETE) to the primary database, and read requests (SELECT) can be load balanced across read replicas. In case of failure of the primary node, you can manually promote a read replica to become the new primary database.

Multi-AZ deployments and read replicas serve different purposes. Multi-AZ deployments give your application high availability, durability, and automatic failover. Read replicas give your applications read scalability.

But what about applications that require both high availability with automatic failover and read scalability?

Introducing the New Amazon RDS Multi-AZ Deployment Option With Two Readable Standby Instances
Starting today, we’re adding a new option to deploy RDS databases. This option combines automatic failover and read replicas: Amazon RDS Multi-AZ with two readable standby instances. This deployment option is available for MySQL and PostgreSQL databases. This is a database cluster with one primary and two readable standby instances. It provides up to 2x faster transaction commit latency and automated failovers, typically under 35 seconds.

The following diagram illustrates such a deployment:

Three AZ RDS databases

When the new Multi-AZ DB cluster deployment option is enabled, RDS configures a primary DB instance and two readable standby instances, each in a distinct Availability Zone. It then monitors the cluster and automatically fails over in case of failure of the primary node.

Just like with traditional read replicas, the database engine replicates data between the primary node and the read replicas. And just like with the Multi-AZ one standby deployment option, RDS automatically detects and manages failover for high availability.

You no longer have to choose between high availability and read scalability; a Multi-AZ DB cluster with two readable standbys provides both.

What Are the Benefits?
This new deployment option offers four benefits over traditional Multi-AZ deployments: improved commit latency, faster failover, readable standby instances, and optimized replication.

First, write operations are faster when using a Multi-AZ DB cluster. The new Multi-AZ DB cluster instances leverage M6gd and R6gd instance types. These instances are powered by AWS Graviton2 processors and are equipped with fast NVMe SSDs for local storage, ideal for high-speed, low-latency workloads. They deliver up to 40 percent better price performance and 50 percent more local storage GB per vCPU over comparable x86-based instances.

Multi-AZ DB instances use Amazon Elastic Block Store (EBS) to store the data and the transaction log. The new Multi-AZ DB cluster instances use local storage provided by the instances to store the transaction log. Local storage is optimized to deliver low-latency, high I/O operations per second (IOPS) to applications. Write operations are first written to the local storage transaction log, then flushed to permanent storage on database storage volumes.

Second, failover operations are typically faster than in the Multi-AZ DB instance scenario. The readable standby instances created by the new Multi-AZ DB cluster are full-fledged database instances. The system is designed to fail over in as little as 35 seconds, plus the time to apply any pending transaction log. In case of failover, the system automatically promotes a new primary and reconfigures the old primary as a new reader instance.

Third, the two standby instances are hot standbys. Your applications may use the cluster reader endpoint to send their read requests (SELECT) to these standby instances. It allows your application to spread the database read load equally between the instances of the database cluster.

And finally, leveraging local storage for the transaction log optimizes replication. The existing Multi-AZ DB instance option replicates all changes at the storage level. The new Multi-AZ DB cluster replicates only the transaction log and uses a quorum mechanism to confirm that at least one standby has acknowledged the change. Database transactions are committed synchronously when one of the standby instances confirms that the transaction log is written to its local disk.

Migrating Existing Databases
If you have an existing RDS database and want to take advantage of this new Multi-AZ DB cluster deployment option, you can take a snapshot of your database to create a storage-level backup of your existing database instance. Once the snapshot is ready, you create a new database cluster, with the Multi-AZ DB cluster deployment option, based on this snapshot. Your new Multi-AZ DB cluster will be an exact copy of your existing database.
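
For example, using the CLI, the migration might look like the following sketch (the identifiers and the instance class are hypothetical):

# take a snapshot of the existing single-instance database
aws rds create-db-snapshot                     \
    --db-instance-identifier my-database       \
    --db-snapshot-identifier my-database-snapshot

# restore the snapshot as a Multi-AZ DB cluster
aws rds restore-db-cluster-from-snapshot        \
    --db-cluster-identifier my-database-cluster \
    --snapshot-identifier my-database-snapshot  \
    --engine postgres                           \
    --db-cluster-instance-class db.r6gd.large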

Let’s See It in Action
To get started, I point my browser to the AWS Management Console and navigate to RDS. The Multi-AZ DB cluster deployment option is available for MySQL version 8.0.28 or later and PostgreSQL version 13.4 R1 and 13.5 R1. I select either database engine, and I ensure the version matches the minimum requirements. The rest of the procedure is the same as a standard Amazon RDS database launch.

Under Deployment options, I select PostgreSQL, version 13.4 R1, and under Availability and Durability, I select Multi-AZ DB cluster.

Three AZ RDS launch console

If required, I may choose the set of Availability Zones RDS uses for the cluster. To do so, I create a DB subnet group and assign the cluster to this subnet group.

Once launched, I verify that three DB instances have been created. I also take note of the two endpoints provided by Amazon RDS: the primary endpoint and one load-balanced endpoint for the two readable standby instances.

RDS Three AZ list of instances
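
If I prefer the CLI, I can retrieve both endpoints with a query along these lines (a sketch, assuming the cluster identifier used in this demo):

aws rds describe-db-clusters                              \
    --db-cluster-identifier awsnewsblog                   \
    --query 'DBClusters[0].[Endpoint,ReaderEndpoint]'     \
    --output text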

To test the new cluster, I create an Amazon Linux 2 EC2 instance in the same VPC, within the same security group as the database, and I make sure I attach an IAM role containing the AmazonSSMManagedInstanceCore managed policy. This allows me to connect to the instance using SSM instead of SSH.

Once the instance is started, I use SSM to connect to it.
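
For example, assuming the Session Manager plugin for the AWS CLI is installed on my laptop, I can open a shell on the instance with a command like this (the instance ID is hypothetical):

aws ssm start-session --target i-0123456789abcdef0

Once connected, I install the PostgreSQL client tools.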

sudo amazon-linux-extras enable postgresql13
sudo yum clean metadata
sudo yum install postgresql

I connect to the primary DB. I create a table and INSERT a record.

psql -h awsnewsblog.cluster-c1234567890r.us-east-1.rds.amazonaws.com -U postgres

postgres=> create table awsnewsblogdemo (id int primary key, name varchar);
CREATE TABLE

postgres=> insert into awsnewsblogdemo (id,name) values (1, 'seb');
INSERT 0 1

postgres=> exit

To verify that replication works as expected, I connect to the read-only replica. Notice the -ro- in the endpoint name. I check the table structure and enter a SELECT statement to confirm the data has been replicated.

psql -h awsnewsblog.cluster-ro-c1234567890r.us-east-1.rds.amazonaws.com -U postgres

postgres=> \dt

              List of relations
 Schema |      Name       | Type  |  Owner
--------+-----------------+-------+----------
 public | awsnewsblogdemo | table | postgres
(1 row)

postgres=> select * from awsnewsblogdemo;
 id | name
----+------
  1 | seb
(1 row)

postgres=> exit

In the scenario of a failover, the application is disconnected from the primary database instance. In that case, it is important that your application-level code tries to reestablish the network connection. After a short period of time, the DNS name of the endpoint will point to the standby instance, and your application will be able to reconnect.
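
As a minimal sketch of such retry logic, a shell loop can probe the cluster endpoint until DNS resolves to the new primary (endpoint and credentials as in the demo above):

# retry until the cluster endpoint accepts connections again
until psql -h awsnewsblog.cluster-c1234567890r.us-east-1.rds.amazonaws.com \
           -U postgres -c 'select 1' >/dev/null 2>&1; do
    echo "primary not reachable yet, retrying in 5 seconds..."
    sleep 5
done
echo "reconnected"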

To learn more about Multi-AZ DB clusters, you can refer to our documentation.

Pricing and Availability
The Amazon RDS Multi-AZ deployment option with two readable standbys is generally available in the following Regions: US East (N. Virginia), US West (Oregon), and Europe (Ireland). We will add more Regions to this list.

You can use it with MySQL version 8.0.28 or later, or PostgreSQL version 13.4 R1 or 13.5 R1.

Pricing depends on the instance type. In US Regions, on-demand pricing starts at $0.522 per hour for M6gd instances and $0.722 per hour for R6gd instances. As usual, the Amazon RDS pricing page has the details for MySQL and PostgreSQL.

You can start to use it today.

Let Your IPv6-only Workloads Connect to IPv4 Services

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/let-your-ipv6-only-workloads-connect-to-ipv4-services/

Today we are announcing two new capabilities for Amazon Virtual Private Cloud (VPC) NAT gateway and Amazon Route 53, allowing your IPv6-only workloads to transparently communicate with IPv4-only services. Curious? Read on; I have details for you.

Some of you are running very large workloads involving tens of thousands of virtual machines, containers, or micro-services. To do so, you configured these workloads to work in the IPv6 address space. This avoids the problem of running out of available IPv4 addresses (a single VPC has a maximum size of 65,536 IPv4 addresses, while a /56 IPv6 range provides 2^72 addresses), and it saves you from additional headaches caused by managing complex IPv4-based networks (think about non-overlapping subnets between VPCs belonging to multiple AWS accounts, AWS Regions, or on-premises networks).

But can you really run an IPv6 workload in isolation from the rest of the IPv4 world? Most of you told us it is important to let such workloads continue to communicate with IPv4 services, either to call older APIs or as a transient design while you migrate multiple dependent workloads from IPv4 to IPv6. Not having the ability to call an IPv4 service from IPv6 hosts makes migrations slower and more difficult than they need to be, and it has obliged some of you to build custom solutions that are hard to maintain.

This is why we are launching two new capabilities allowing your IPv6 workloads to transparently communicate with IPv4 services: NAT64 (read “six to four”) for the VPC NAT gateway and DNS64 (also “six to four”) for the Amazon Route 53 resolver.

How Does It Work?
As illustrated by the following diagram, let’s imagine I have an Amazon Elastic Compute Cloud (Amazon EC2) instance with an IPv6-only address that has to make an API call to an IPv4 service running on another EC2 instance. In the diagram, I chose to have the IPv4-only host in a separate VPC in the same AWS account, but these capabilities work to connect to any IPv4 service, whether in the same VPC or in another AWS account’s VPC, your on-premises network, or even on the public internet. My IPv6-only host only knows the DNS name of the service.

NAT64 DNS64 before

Here is the sequence of events when the IPv6-only host initiates a connection to the IPv4 service:

1. The IPv6 host makes a DNS call to resolve the service name to an IP address. Without DNS64, Route 53 would have returned an IPv4 address, and the IPv6-only host would not have been able to connect to it. But starting today, you can turn on DNS64 for your subnet. The DNS resolver first checks if the record contains an IPv6 address (AAAA record). If it does, the IPv6 address is returned, and the IPv6 host can connect to the service using just IPv6. When the record only contains an IPv4 address, the Route 53 resolver synthesizes an IPv6 address by prepending the well-known 64:ff9b::/96 prefix to the IPv4 address.

For example, when the IPv4 service has the address 34.207.250.62, Route 53 returns 64:ff9b::22cf:fa3e.

IPv6 (hexadecimal) : 64:ff9b::   22   cf   fa   3e
IPv4 (decimal)     :             34  207  250   62

64:ff9b::/96 is a well-known prefix defined in the RFC 6052 proposed standard to the IETF. Reading the text of the standard is a great way to learn all the details about IPv6-to-IPv4 translation (and to fall asleep rapidly).
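
As a quick sanity check, you can compute the synthesized address yourself by formatting the four decimal octets as the last 32 bits after the prefix:

# 34.207.250.62 -> 64:ff9b::22cf:fa3e
printf '64:ff9b::%02x%02x:%02x%02x\n' 34 207 250 62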

2. The IPv6 host initiates a connection to 64:ff9b::ffff:22cf:fa3e. You may configure subnet routing to send all packets starting with 64:ff9b::/96 to the NAT gateway. The NAT gateway recognizes the IPv6 address prefix, extracts the IPv4 address from it, and initiates an IPv4 connection to the destination. As usual, the source IPv4 address is the IPv4 address of the NAT gateway itself.

3. When the response packet arrives, the NAT gateway restores the destination host’s IPv6 address and prepends the well-known prefix 64:ff9b::/96 to the source IPv4 address of the response packet.

Now that you understand how it works, how can you configure your VPC to take advantage of these two new capabilities?

How to Get Started
To enable these two capabilities, I have to adjust two configurations: first, I flag the subnets that require DNS64 translation, and second, I add a route to the IPv6 subnet routing table to send part of the IPv6 traffic to the NAT gateway.

To enable DNS64, I have to use the new --enable-dns64 option to modify my existing subnets. In this demo, I use the modify-subnet-attribute command. This is a one-time operation. I can do it using the VPC API, the AWS Command Line Interface (CLI), or the AWS Management Console. Notice this is a subnet-level configuration that must be turned on explicitly. By default, the existing behavior is maintained.

aws ec2 modify-subnet-attribute --subnet-id subnet-123 --enable-dns64

Next, I add a route to the subnet’s routing table so that the VPC forwards DNS64-prefixed IPv6 packets to the NAT gateway. This route sends all packets with destination 64:ff9b::/96 to the NAT gateway.

aws ec2 create-route --route-table-id rtb-123 --destination-ipv6-cidr-block 64:ff9b::/96 --nat-gateway-id nat-123

The following diagram illustrates these two simple configuration changes.

NAT64 DNS64 after

With these two simple changes, my IPv6-only workloads in the subnet may now communicate with IPv4 services. The IPv4 service might live in the same VPC, in another VPC, or anywhere on the internet.

You can continue to use your existing NAT gateway, and no change is required on the gateway itself or on the routing table attached to the NAT gateway subnet.
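
To verify the setup from the IPv6-only host, I can check that DNS64 synthesizes an address and that the route through the NAT gateway works. Here is a sketch with a hypothetical service name:

# resolve an IPv4-only service; DNS64 returns a 64:ff9b::/96 address
dig +short AAAA ipv4-only-service.example.com

# connect to the synthesized address through the NAT gateway
curl -g -6 'http://[64:ff9b::22cf:fa3e]/'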

Pricing and Availability
These two new capabilities of the VPC NAT gateway and Route 53 are available today in all AWS Regions at no additional cost. Regular NAT gateway charges may apply.

Go and build your IPv6-only networks!

— seb

Happy 10th Birthday, DynamoDB! 🎉🎂🎁

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/happy-birthday-dynamodb/

On January 18th 2012, Jeff and Werner announced the general availability of Amazon DynamoDB, a fully managed flexible NoSQL database service for single-digit millisecond performance at any scale.

During the last 10 years, hundreds of thousands of customers have adopted DynamoDB. It regularly reaches new peaks of performance and scalability. For example, during the last Prime Day sales in June 2021, it handled trillions of requests over 66 hours while maintaining single-digit millisecond performance and peaked at 89.2 million requests per second. Disney+ uses DynamoDB to ingest content, metadata, and billions of viewer actions each day. Even during the unprecedented demand caused by the pandemic, when many across the world had to change their way of working and meet and conduct business virtually, DynamoDB helped customers keep up. For example, Zoom was able to scale from 10 million to 300 million daily meeting participants when we all started to make video calls in early 2020.

A decade of innovation with Amazon DynamoDB

On this special anniversary, join us for a unique online event on Twitch on March 1st. I’ll tell you more about this at the end of this post. But before talking about the event, let’s take this opportunity to reflect on the genesis of this service and the main capabilities we have added since the original launch 10 years ago.

The History Behind DynamoDB
The story of DynamoDB started long before the launch 10 years ago. It started with a series of outages on Amazon’s e-commerce platform during the holiday shopping season in 2004. At that time, Amazon was transitioning from a monolithic architecture to microservices. The design principle was (and still is) that each stateful microservice uses its own data store, and other services are required to access a microservice’s data through a publicly exposed API. Direct database access was not an option anymore. At that time, most microservices were using a relational database provided by a third-party vendor. Given the volume of traffic during the holiday season in 2004, the database system experienced some hard-to-debug and hard-to-reproduce deadlocks. The e-commerce platform was pushing the relational databases to their limits, despite the fact that we were using simple usage patterns, such as query by primary keys only. These usage patterns do not require the complexity of a relational database.

At Amazon and AWS, after an outage happens, we start a process called Correction of Error (COE) to document the root cause of the issue, to describe how we fixed it, and to detail the changes we’re making to avoid recurrence. During the COE for this database issue, a young, naïve, 20-year-old intern named Swaminathan (Swami) Sivasubramanian (now VP of the database, analytics, and ML organization at AWS) asked the question, “Why are we using a relational database for this? These workloads don’t need the SQL level of complexity and transactional guarantees.”

This led Amazon to rethink the architecture of its data stores and to build the original Dynamo database. The objective was to address the demanding scalability and reliability requirements of the Amazon e-commerce platform. This non-relational, key-value database was initially targeted at use cases that were the core of the Amazon e-commerce operations, such as the shopping basket and the session service.

AWS published the Dynamo paper in 2007, three years later, to describe our design principles and provide the lessons learned from running this database to support Amazon’s core e-commerce operations. Over the years, we saw several Dynamo clones appear, proving other companies were searching for scalable solutions, just like Amazon.

After a couple of years, Dynamo was adopted by several core service teams at Amazon. Their engineers were very satisfied with the performance and scalability. However, we started to interview engineers to understand why it was not more broadly adopted within Amazon. We learned that Dynamo was giving teams the reliability, performance, and scalability they needed, but it did not simplify the operational complexity of running the system. Teams still had to install, configure, and operate the system in Amazon’s data centers.

At the time, AWS was offering Amazon SimpleDB as a NoSQL service. Many teams preferred the operational simplicity of SimpleDB despite the difficulty of scaling a domain beyond 10 GB, its unpredictable latency (which was affected by the size of the database and its indexes), and its eventual consistency model.

We concluded the ideal solution would combine the strengths of Dynamo—the scalability and the predictable low latency to retrieve data—with the operational simplicity of SimpleDB—just having a table to declare and let the system handle the low-level complexity transparently.

DynamoDB was born.

DynamoDB frees developers from the complexity of managing hardware and software. It handles all the complexity of scaling partitions and re-partitions your data to meet your throughput requirements. It scales seamlessly without the need to manually re-partition tables, and it provides predictable low latency access to your data (single-digit milliseconds).

At AWS, the moment we launch a new service is not the end of the project. It is actually the beginning. Over the last 10 years, we have continuously listened to your feedback, and we have brought new capabilities to DynamoDB. In addition to hundreds of incremental improvements, we added:

… and many more.

Lastly, during the last AWS re:Invent conference, we announced Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). This new DynamoDB table class allows you to lower the cost of data storage for infrequently accessed data by 60%. The ideal use case is for data that you need to keep for the long term and that your application needs to occasionally access, without compromising on access latency. In the past, to lower storage costs for such data, you were writing code to move infrequently accessed data to lower-cost storage alternatives, such as Amazon Simple Storage Service (Amazon S3). Now you can switch to the DynamoDB Standard-IA table class to store infrequently accessed data while preserving the high availability and performance of DynamoDB.

How To Get Started
To get started with DynamoDB, as a developer, you can refer to the Getting Started Guide in our documentation or read the excellent DynamoDB, Explained, written by Alex DeBrie, one of our AWS Heroes, and author of The DynamoDB Book. To dive deep into DynamoDB data modeling, AWS Hero Jeremy Daly is preparing a video course “DynamoDB Modeling for the rest of us”.

Customers now leverage DynamoDB across virtually every industry vertical, geographic area, and company size. You continually surprise us with how you innovate on DynamoDB, and you continually push us to evolve it to make it easier to build the next generation of applications. We are going to continue to work backwards from your feedback to meet your ever-evolving needs and to enable you to innovate and scale for decades to come.

A Decade of Innovation with DynamoDB – A Virtual Event
As I mentioned at the beginning, we would love to celebrate this anniversary with you. We prepared a live Twitch event for you to learn best practices, see technical demos, and attend a live Q&A. You will hear stories from two of our long-time customers: SmugMug CEO Don MacAskill and engineering leaders from Dropbox. In addition, you’ll get a chance to ask questions and chat with AWS blog legend and Chief Evangelist Jeff Barr, as well as DynamoDB’s product managers and engineers. Finally, AWS Heroes Alex DeBrie and Jeremy Daly will host two deep-dive technical sessions. Have a look at the full agenda here.

The event will be live on Twitch on March 1st; you can register today. The first 1,000 registrants from the US will receive a free digital copy of The DynamoDB Book (a $79 retail value).

To DynamoDB’s next 10 years. Cheers 🥂.

— seb

Amazon GuardDuty Enhances Detection of EC2 Instance Credential Exfiltration

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-guardduty-enhances-detection-of-ec2-instance-credential-exfiltration/

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon Simple Storage Service (Amazon S3). Informed by a multitude of public and AWS-generated data feeds and powered by machine learning, GuardDuty analyzes billions of events in pursuit of trends, patterns, and anomalies that are recognizable signs that something is amiss. You can enable it with a click and see the first findings within minutes.

Today, we are adding to GuardDuty the ability to detect when your Amazon Elastic Compute Cloud (Amazon EC2) instance credentials are being used from another AWS Account. EC2 instance credentials are the temporary credentials made available through the EC2 metadata service to any applications running on an instance, when an AWS Identity and Access Management (IAM) role is attached to it.

What Are the Risks?
When your workloads deployed on EC2 instances access AWS services, they use an access key, a secret access key, and a session token. The secure mechanism to pass access key credentials to your workloads is to define the permissions required by your workload, create one or several IAM policies with the permissions, attach the policies to an IAM role and, finally, attach the role to the instance.

Any process running on an EC2 instance with a role attached can retrieve the security credentials by calling the EC2 metadata service:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/role_name
{
  "Code" : "Success",
  "LastUpdated" : "2021-09-05T18:24:45Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AS...J5",
  "SecretAccessKey" : "r1...9m",
  "Token" : "IQ...z5Q==",
  "Expiration" : "2021-09-06T00:44:06Z"
}

These credentials are limited in time and in scope. They are valid for a maximum of six hours. They are limited to the scope of the permissions attached to the IAM role associated with the EC2 instance.

All AWS SDKs retrieve and renew such credentials automatically. No additional code is necessary in your application.

Now imagine that the application running on the EC2 instance is compromised and a malicious actor gains access to the instance’s metadata service. The malicious actor would extract the credentials, which carry the permissions you defined in the IAM role attached to the instance. Depending on your application, attackers might be able to exfiltrate data from S3 or DynamoDB, start or terminate EC2 instances, or even create new IAM users or roles.

Since the launch of GuardDuty, it has detected when such credentials are used from IP addresses outside of AWS. Smart attackers might therefore operate from another AWS account to hide their activity from GuardDuty. Starting today, GuardDuty also detects when the credentials are used from other AWS accounts, inside the AWS network.

What Alerts Are Generated?
There are legitimate reasons why the source IP address communicating with AWS service APIs might be different from the EC2 instance IP address. Think about complex network topologies that route traffic through one or multiple VPCs, using AWS Transit Gateway or AWS Direct Connect, for example. In addition, multi-Region configurations, or not using AWS Organizations, make it non-trivial to detect whether the AWS account using the credentials belongs to you. Large companies have implemented their own solutions to detect such security compromises, but these types of solutions are not easy to build and maintain. Only a handful of organizations have the resources required to tackle this challenge, and when they do so, they distract their engineering efforts from their core business. This is why we decided to address it.

Starting today, GuardDuty generates alerts when it detects a misuse of EC2 instance credentials. When the credentials are used from an affiliated account, the alert is labeled as medium-severity. Otherwise, a high-severity alert is generated. Affiliated accounts are accounts monitored by the same GuardDuty administrator account, also known as GuardDuty member accounts. They might be part of your organization or not.

In Practice
To see how it works, let’s capture and exfiltrate a set of EC2 credentials from one of my EC2 instances. I use SSH to connect to one of my instances, and I use curl to retrieve the credentials, as shown earlier:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/role_name
{
  "Code" : "Success",
  "LastUpdated" : "2021-09-05T18:24:45Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AS...J5",
  "SecretAccessKey" : "r1...9m",
  "Token" : "IQ...z5Q==",
  "Expiration" : "2021-09-06T00:44:06Z"
}

The instance has an IAM role with permissions allowing it to read S3 buckets in this AWS account. I copy and paste the credentials. Then I connect to another EC2 instance running in a different AWS account, not affiliated with the same GuardDuty administrator account. I use SSH to connect to that other instance, and then I configure the AWS CLI with the compromised credentials and attempt to access a private S3 bucket.


# first verify I do not have access 
[ec2-user@ip-1-1-0-79 ~]$ aws s3 ls s3://my-private-bucket

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

# then I configure the CLI using the compromised credentials
[ec2-user@ip-1-1-0-79 ~]$ aws configure
AWS Access Key ID [None]: AS...J5
AWS Secret Access Key [None]: r1...9m
Default region name [None]: us-east-1
Default output format [None]:

[ec2-user@ip-1-1-0-79 ~]$ aws configure set aws_session_token IQ...z5Q==

# Finally, I attempt to access S3 again
[ec2-user@ip-1-1-0-79 ~]$ aws s3 ls s3://my-private-bucket
                     PRE folder1/
                     PRE folder2/
                     PRE folder3/
2021-01-22 16:37:48 6148 .DS_Store

Shortly after, I use the AWS Management Console to access GuardDuty in the AWS account where I stole the credentials. I can verify a high-severity alert was generated.

GuardDuty EC2 credentials exfiltration alarm

And So What?
Attackers may extract credentials when they have remote code execution (RCE) or local presence on the instance, or by exploiting application-level vulnerabilities like Server-Side Request Forgery (SSRF) and XML External Entity (XXE) injection. There are multiple methods to mitigate RCE or local access, including rebuilding the instances from a secured and patched AMI to eliminate remote access, rotating access credentials, and so on. When the vulnerability is at the application level, you or the application vendor are required to patch the application code to eliminate it.
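
Although not covered by this announcement, one additional hardening step worth considering is to require IMDSv2 on your instances, which makes simple SSRF-based credential theft harder because every metadata request must carry a session token. A sketch, with a hypothetical instance ID:

# require session tokens (IMDSv2) for the instance metadata service
aws ec2 modify-instance-metadata-options      \
    --instance-id i-0123456789abcdef0         \
    --http-tokens required                    \
    --http-endpoint enabled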

When you receive an alert indicating a risk of compromised credentials, the first thing to do is to verify the account ID. Is it one of your company accounts or not? During the analysis, when the business case allows, you may terminate the compromised instances or shut down the application. This prevents the attacker from extracting renewed instance credentials upon expiration. When in doubt, contact the AWS Trust & Safety team using the Report Amazon AWS abuse form or by contacting [email protected]. Provide all the necessary information, including the suspicious AWS account ID, logs in plaintext, and so on, when you submit your request.

Availability
This new ability is available in all AWS Regions at no additional cost. It is enabled by default when GuardDuty is already enabled on your AWS account.

Otherwise, enable GuardDuty now, and start the 30-day trial period.

— seb

A New AWS Console Home Experience

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/a-new-aws-console-home-experience/

If you are reading this blog, there is a high chance you frequently use the AWS Management Console. I taught AWS classes for years. During classes, students’ first hands-on experience with the AWS Cloud happened on the console, and I bet yours did too.

Until today, the home page of the console showed your most recently used services and a set of static links organized in sections, such as Getting Started with AWS, Build a Solution, or Explore AWS with links to training courses. However, we learned from our data that the way these are used varies widely depending on your profile. You also told us it is cumbersome and time-consuming to navigate to different parts of the console to get an overview of the information that matters to you.

We listened to your feedback, and I’m happy to announce a redesigned home page for the AWS Management Console. This new home page experience includes dynamic content, can be customized, and includes data from multiple AWS Regions.

The screenshot below shows the default view of this new console home page:

New console default layout

New console homepage action

The new Console Home is made of widgets. I may choose which widgets to display on the page and where to place them, using the actions in the Actions drop-down to customize my home page.

I may move and arrange widgets on the home page to organize the content as I want. When I click on the three little dots on the widget title bar, I may choose to remove the widget or resize it. I have the choice between Regular view and Extended view.

New console resize widget

At launch, the console provides eight widgets, and we will add more over time. Three widgets provide me with static links to learn how to build a solution or to explore AWS (Welcome to AWS, Build a Solution and Explore AWS). The other five are dynamic; their content depends on the usage of AWS by my applications and infrastructure:

  • AWS Health: this widget provides information on important events and changes.
  • Cost and usage: this widget provides an overview of service costs, with a breakdown per AWS service.
  • Favorites: this widget shows a list of the services I have bookmarked.
  • Recently visited: this widget provides the list of my most recently visited services.
  • Trusted Advisor: this widget provides recommendations to follow AWS best practices.

AWS News Console List of widgets

As usual, we pay attention to the importance of not disturbing existing workflows and habits. You can use the new Console Home after opting in, and you can revert to the old Console Home with a simple click.

This new Console Home is the first step to bring you more relevant content on this very first page you see every day. Stay tuned for more.

The new Console Home is available today in all AWS Regions at no additional cost. Go and customize your console homepage today.

— seb

Amazon Elastic Kubernetes Service Adds IPv6 Networking

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-elastic-kubernetes-service-adds-ipv6-networking/

Starting today, you can deploy applications that use IPv6 address space on Amazon Elastic Kubernetes Service (EKS).

Many of our customers are standardizing Kubernetes as their compute infrastructure platform for cloud and on-premises applications. Amazon EKS makes it easy to deploy containerized workloads. It provides highly available clusters and automates tasks such as patching, node provisioning, and updates.

Kubernetes uses a flat networking model that requires each pod to receive an IP address. This simplified approach enables low-friction porting of applications from virtual machines to containers, but it requires a significant number of IP addresses that many private VPC IPv4 networks are not equipped to handle. Some cluster administrators work around this IPv4 space limitation by installing Container Network Interface (CNI) plugins that virtualize IP addresses a layer above the VPC, but this architecture limits an administrator’s ability to effectively observe and troubleshoot applications and has a negative impact on network performance at scale. Further, to communicate with internet services outside the VPC, traffic from IPv4 pods is routed through multiple network hops before reaching its destination, which adds latency and puts a strain on network engineering teams who need to maintain complex routing setups.

To avoid IP address exhaustion, minimize latency at scale, and simplify routing configuration, the solution is to use IPv6 address space.

IPv6 is not new. In 1996, I bought my first book on “IPng, Internet Protocol Next Generation”, as it was called 25 years ago. It provides a 128-bit address space, allowing 3.4 x 10^38 possible IP addresses for our devices, servers, or containers. We could assign an IPv6 address to every atom on the surface of the planet and still have enough addresses left to do another 100-plus Earths.

IPng Internet Protocol Next Generation book

There are a few advantages to using Amazon EKS clusters with an IPv6 network. First, you can run more pods on a single host or subnet without the risk of exhausting the IPv4 addresses available in your VPC. Second, it allows for lower-latency communications with other IPv6 services, running on-premises, on AWS, or on the internet, by avoiding an extra NAT hop. Third, it relieves network engineers of the burden of maintaining complex routing configurations.

Kubernetes cluster administrators can focus on migrating and scaling applications without spending effort working around IPv4 limits. Finally, pod networking is configured so that the pods can communicate with IPv4-based applications outside the cluster, allowing you to adopt the benefits of IPv6 on Amazon EKS without requiring that all dependent services deployed across your organization be migrated to IPv6 first.

As usual, I built a short demo to show you how it works.

How It Works
Before I get started, I create an IPv6 VPC. I use this CDK script to create an IPv6-enabled VPC in a few minutes (thank you Angus Lees for the code). Just install CDK v2 (npm install -g aws-cdk@next) and deploy the stack (cdk bootstrap && cdk deploy).

When the VPC with IPv6 is created, I use the console to configure auto-assignment of IPv6 addresses to resources deployed in the public subnets (I do this for each public subnet).

auto assign IPv6 addresses in subnet

I take note of the subnet IDs created by the CDK script above (they are listed in the output of the script) and define a couple of variables I’ll use throughout the demo. I also create a cluster IAM role and a node IAM role, as described in the Amazon EKS documentation. When you already have clusters deployed, these two roles exist already.

I open a Terminal and type:


CLUSTER_ROLE_ARN="arn:aws:iam::0123456789:role/EKSClusterRole"
NODE_ROLE_ARN="arn:aws:iam::0123456789:role/EKSNodeRole"
SUBNET1="subnet-06000a8"
SUBNET2="subnet-03000cc"
CLUSTER_NAME="AWSNewsBlog"
KEYPAIR_NAME="my-key-pair-name"

Next, I create an Amazon EKS IPv6 cluster. In a terminal, I type:


aws eks create-cluster --cli-input-json "{
\"name\": \"${CLUSTER_NAME}\",
\"version\": \"1.21\",
\"roleArn\": \"${CLUSTER_ROLE_ARN}\",
\"resourcesVpcConfig\": {
\"subnetIds\": [
    \"${SUBNET1}\", \"${SUBNET2}\"
],
\"endpointPublicAccess\": true,
\"endpointPrivateAccess\": true
},
\"kubernetesNetworkConfig\": {
    \"ipFamily\": \"ipv6\"
}
}"

{
    "cluster": {
        "name": "AWSNewsBlog",
        "arn": "arn:aws:eks:us-west-2:486652066693:cluster/AWSNewsBlog",
        "createdAt": "2021-11-02T17:29:32.989000+01:00",
        "version": "1.21",

...redacted for brevity...

        "status": "CREATING",
        "certificateAuthority": {},
        "platformVersion": "eks.4",
        "tags": {}
    }
}

I use the describe-cluster command while waiting for the cluster to be created. When the cluster is ready, it reports "status" : "ACTIVE".

aws eks describe-cluster --name "${CLUSTER_NAME}"

Then I create a node group:

aws eks create-nodegroup                       \
        --cluster-name ${CLUSTER_NAME}         \
        --nodegroup-name AWSNewsBlog-nodegroup \
        --node-role ${NODE_ROLE_ARN}           \
        --subnets "${SUBNET1}" "${SUBNET2}"    \
        --remote-access ec2SshKey=${KEYPAIR_NAME}
		
{
    "nodegroup": {
        "nodegroupName": "AWSNewsBlog-nodegroup",
        "nodegroupArn": "arn:aws:eks:us-west-2:0123456789:nodegroup/AWSNewsBlog/AWSNewsBlog-nodegroup/3ebe70c7-6c45-d498-6d42-4001f70e7833",
        "clusterName": "AWSNewsBlog",
        "version": "1.21",
        "releaseVersion": "1.21.4-20211101",

        "status": "CREATING",
        "capacityType": "ON_DEMAND",

... redacted for brevity ...

}		

Once the node group is created, I see two EC2 instances in the console. I use the AWS Command Line Interface (CLI) to verify that the instances received an IPv6 address:

aws ec2 describe-instances --query "Reservations[].Instances[? State.Name == 'running' ][].NetworkInterfaces[].Ipv6Addresses" --output text 

2600:1f13:812:0000:0000:0000:0000:71eb
2600:1f13:812:0000:0000:0000:0000:3c07

I use the kubectl command to verify the cluster from a Kubernetes point of view.

kubectl get nodes -o wide

NAME                                       STATUS   ROLES    AGE     VERSION               INTERNAL-IP                              EXTERNAL-IP    OS-IMAGE         KERNEL-VERSION                CONTAINER-RUNTIME
ip-10-0-0-108.us-west-2.compute.internal   Ready    <none>   2d13h   v1.21.4-eks-033ce7e   2600:1f13:812:0000:0000:0000:0000:2263   18.0.0.205   Amazon Linux 2   5.4.149-73.259.amzn2.x86_64   docker://20.10.7
ip-10-0-1-217.us-west-2.compute.internal   Ready    <none>   2d13h   v1.21.4-eks-033ce7e   2600:1f13:812:0000:0000:0000:0000:7f3e   52.0.0.122   Amazon Linux 2   5.4.149-73.259.amzn2.x86_64   docker://20.10.7

Then I deploy a Pod. I follow these steps in the EKS documentation. It deploys a sample nginx web server.

kubectl create namespace aws-news-blog
namespace/aws-news-blog created

# sample-service.yml is available at https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html
kubectl apply -f  sample-service.yml 
service/my-service created
deployment.apps/my-deployment created

kubectl get pods -n aws-news-blog -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP                           NODE                                       NOMINATED NODE   READINESS GATES
my-deployment-5dd5dfd6b9-7rllg   1/1     Running   0          17m   2600:0000:0000:0000:405b::2   ip-10-0-1-217.us-west-2.compute.internal   <none>           <none>
my-deployment-5dd5dfd6b9-h6mrt   1/1     Running   0          17m   2600:0000:0000:0000:46f9::    ip-10-0-0-108.us-west-2.compute.internal   <none>           <none>
my-deployment-5dd5dfd6b9-mrkfv   1/1     Running   0          17m   2600:0000:0000:0000:46f9::1   ip-10-0-0-108.us-west-2.compute.internal   <none>           <none>

I take note of the IPv6 address of my pods and try to connect to one from my laptop. As my awesome service provider doesn’t provide me with IPv6 at home yet, the connection fails. This is expected, as the pods do not have an IPv4 address at all. Notice the -g option telling curl not to treat : in the IP address as the separator for the port number, and -6 telling curl to connect through IPv6 only (required when you provide curl with a DNS hostname).

curl -g -6 http://\[2600:0000:0000:0000:46f9::1\]
curl: (7) Couldn't connect to server

To test IPv6 connectivity, I start a dual-stack (IPv4 and IPv6) EC2 instance in the same VPC as the cluster. I connect to the instance using SSH and try the curl command again. This time, I receive the default HTML page served by nginx. IPv6 connectivity to the pod works!

curl -g -6 http://\[2600:0000:0000:0000:46f9::1\]
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

... redacted for brevity ...

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

If it does not work for you, verify the security group for the cluster EC2 nodes and be sure it has a rule allowing incoming connections on port TCP 80 from ::/0.

A Few Things to Remember
Before I wrap up, I’d like to answer some frequent questions received from customers who have already experimented with this new capability:

Pricing and Availability
IPv6 support for your Amazon Elastic Kubernetes Service (EKS) cluster is available today in all AWS Regions where Amazon EKS is available, at no additional cost.

Go try it out and build your first IPv6 cluster today.

— seb

Use New Amazon EC2 M1 Mac Instances to Build & Test Apps for iPhone, iPad, Mac, Apple Watch, and Apple TV

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/use-amazon-ec2-m1-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/

Last year at AWS re:Invent, Jeff Barr wrote about the exciting availability of Amazon Elastic Compute Cloud (Amazon EC2) Mac instances. Today, we’re announcing the preview of a new EC2 M1 Mac instance.

The introduction of EC2 Mac instances brought the flexibility, scalability, and cost benefits of AWS to all Apple developers. EC2 Mac instances are dedicated Mac mini computers attached through Thunderbolt to the AWS Nitro System, which lets the Mac mini appear and behave like another EC2 instance. It connects to your Amazon Virtual Private Cloud (VPC), boots from Amazon Elastic Block Store (EBS) volumes, and leverages EBS snapshots, security groups, and other AWS services. EC2 Mac instances let you scale your build and test fleets of Macs, paying as you go. There is no hypervisor involved, and you get the full bare metal performance of the underlying Mac mini. An EC2 Dedicated Host reserves a Mac mini for your usage.

The availability (in preview) of EC2 M1 Mac instances lets you access machines built around the Apple-designed M1 System on Chip (SoC). If you are a Mac developer and re-architecting your apps to natively support Macs with Apple silicon, you may now build and test your apps and take advantage of all the benefits of AWS. Developers building for iPhone, iPad, Apple Watch, and Apple TV will also benefit from faster builds. EC2 M1 Mac instances deliver up to 60% better price performance over the x86-based EC2 Mac instances for iPhone and Mac app build workloads.

EC2 M1 Mac instances are powered by a combination of two hardware components:

  • The Mac mini, featuring the M1 SoC with 8 CPU cores, 8 GPU cores, 16 GiB of memory, and a 16-core Apple Neural Engine.
  • The AWS Nitro System, providing up to 10 Gbps of VPC network bandwidth and 8 Gbps of EBS storage bandwidth through a high-speed Thunderbolt connection.

How to Get Started
As I explained previously, when using EC2 Mac instances, there is no virtual machine involved. These instances run on bare metal servers, each hosting a Mac mini. The first step, therefore, involves grabbing a dedicated server. I open the AWS Management Console, navigate to the Amazon EC2 section, and then select Dedicated Hosts. I select Allocate Dedicated Host to allocate a server to my AWS account.

EC2 Mac2 Instances - Dedicated Hosts

Alternatively, I may use the AWS Command Line Interface (CLI).

➜  ~ aws ec2 allocate-hosts                  \
         --instance-type mac2.metal          \
         --availability-zone us-east-2b      \
         --quantity 1 
{
    "HostIds": [
        "h-0fxxxxxxx90"
    ]
}

Once the host is allocated, I start an EC2 instance on it. The procedure is no different from starting any EC2 instance type. I just have to ensure I select a macOS AMI version that suits my requirements. I choose the mac2.metal instance type, the host Tenancy, and the Dedicated Host I just created.

EC2 Dedicated Tenancy

Alternatively, I may use the CLI.

➜ ~ aws ec2 run-instances                                     \
	    --instance-type mac2.metal                             \
        --key-name my_key                                      \
        --placement HostId=h-0fxxxxxxx90                       \
        --security-group-ids sg-01000000000000032              \
        --image-id AWS_OR_YOUR_AMI_ID
{
    "Groups": [],
    "Instances": [
        {
            "AmiLaunchIndex": 0,
            "ImageId": "ami-01xxxxbd",
            "InstanceId": "i-08xxxxx5c",
            "InstanceType": "mac2.metal",
            "KeyName": "my_key",
            "LaunchTime": "2021-11-08T16:47:39+00:00",
            "Monitoring": {
                "State": "disabled"
            },
... redacted for brevity ....

When you use EC2 Mac instances for the first time, you’re likely to ask questions such as, “How do I connect through Apple Remote Desktop?” or “How do I increase the size of the APFS file system on the EBS volume?” The EC2 Mac documentation covers the answers for you and provides examples of commands to run on macOS to perform these common tasks.
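
For instance, growing the file system after resizing the EBS volume boils down to repairing the disk and growing the APFS container. The following is a sketch based on the documented approach; the disk identifiers it discovers vary per instance:

# on the Mac instance, after increasing the size of the EBS volume
PDISK=$(diskutil list physical external | head -n1 | cut -d' ' -f1)
APFSCONT=$(diskutil list physical external | grep Apple_APFS | tr -s ' ' | cut -d' ' -f8)
# repair the disk, then grow the APFS container to fill the volume
yes | sudo diskutil repairDisk $PDISK
sudo diskutil apfs resizeContainer $APFSCONT 0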

I use SSH to connect to the newly launched instance as usual.

EC2 Mac M1 Instance uname -a

I may enable Apple Remote Desktop and start a VNC session to the EC2 instance. The EC2 Mac instance documentation page has the details.

mac2 GUI VNC

Availability and Pricing
EC2 M1 Mac instances are now available in preview in US East (N. Virginia) and US West (Oregon), with additional AWS Regions planned for the general availability launch.

Pricing metrics are similar to the previous generation of EC2 Mac instances. You are charged per hour of reservation of the dedicated host, not for the time the instance is running, and there is a minimum charge of 24 hours for reserving a dedicated host.

In the two preview Regions, the on-demand price is $0.6498 per hour. You can save up to 42 percent over the on-demand price with Savings Plans. Check our Dedicated Host on-demand pricing page, as well as the Savings Plans page to learn the details.

You can sign up for the preview of EC2 Mac M1 instances today!

— seb

New – Site-to-Site Connectivity with AWS Direct Connect SiteLink

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-site-to-site-connectivity-with-aws-direct-connect-sitelink/

We are launching AWS Direct Connect SiteLink, a new capability of AWS Direct Connect that lets you create connections between your on-premises networks through the AWS global network backbone.

Until today, when you needed direct connectivity between your data centers or branch offices, you had to rely on the public internet or expensive, hard-to-deploy fixed networks. These are geographically constrained and can be tied to long-term contracts. This rigidity becomes a pain point as you expand your business globally, and in turn, you’re required to create custom workarounds to interconnect networks from different providers, which increases your operating costs.

Starting today, you may connect your sites through Direct Connect locations, without sending your traffic through an AWS Region. We have 108 Direct Connect locations available in 32 countries as I am writing this post, located across Africa, Americas, Asia-Pacific, Europe, and the Middle East. Traffic flows from one Direct Connect location to another following the shortest possible path. You no longer need to connect through the closest AWS Region and manage and configure an AWS Transit Gateway for site-to-site network connectivity.

You can take advantage of Direct Connect’s reliability and global footprint to build a network that grows with your business, with no long-term contracts, flexible pay-as-you-go pricing, and a wide range of port-speeds, from 50 Mbps to 100 Gbps. SiteLink also integrates with other AWS services, letting you reach your VPCs, other AWS services, and your on-premises networks from your Direct Connect connections.

When talking about network topology, a small diagram is always more descriptive than long phrases.

The following diagram shows the way that you use Direct Connect today. Direct Connect is currently optimized to let you reach your AWS Resources running in any Region as quickly as possible. Sending data from one Direct Connect location to another is not possible.

Once you connect your locations (NY1, AM3, Paris, and TY2 in the diagram) to a Direct Connect gateway, those connections can reach any AWS Region (except the two AWS China Regions). No peering between Regions is necessary, because Direct Connect gateways are global resources.

Site-to-site connectivity without SiteLink

The following diagram shows how you connect multiple sites using SiteLink. The data flows between Direct Connect locations without going through an AWS Region.

Site-to-site connectivity with SiteLink

How to Get Started?
Configuring these connections is very similar to what you do today. The first step is to connect my network to Direct Connect locations. After that, SiteLink can be enabled or disabled in minutes.

Using the AWS Management Console, I navigate to the Direct Connect section and select Create virtual interface. Under the Additional settings section, I make sure the SiteLink switch is turned on. I then repeat this for each site to connect, one virtual interface per site.

SiteLink - enable sitelink for VIF
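
Alternatively, SiteLink can be toggled on an existing virtual interface from the CLI. Here is a sketch, with a hypothetical virtual interface ID:

aws directconnect update-virtual-interface-attributes \
    --virtual-interface-id dxvif-fg1234567            \
    --enable-site-link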

I have access to similar monitoring dashboards and metrics published to CloudWatch. I select my virtual interface, and then navigate to the Monitoring tab (hopefully your VIF will have more data available than mine, which was created just for this post).

SiteLink VIF Monitoring

Availability and Pricing
You can connect your on-premises networks or branch offices to any of our Direct Connect locations available today, except in China.

Pricing is pay-as-you-go, with no commitment or recurring fees. In addition to existing Direct Connect charges, your monthly bill will include a price-per-hour for SiteLink virtual interfaces, as well as the cost of SiteLink data transfer. Check the pricing page to get the details.

Go ahead and start connecting your on-premises locations together with Direct Connect SiteLink!

— seb

Enhanced Amazon S3 Integration for Amazon FSx for Lustre

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/enhanced-amazon-s3-integration-for-amazon-fsx-for-lustre/

Today, we are announcing two additional capabilities of Amazon FSx for Lustre. First, a full bi-directional synchronization of your file systems with Amazon Simple Storage Service (Amazon S3), including deleted files and objects. Second, the ability to synchronize your file systems with multiple S3 buckets or prefixes.

Lustre is a large-scale, distributed, parallel file system powering the workloads of most of the largest supercomputers. It is popular among AWS customers for high-performance computing workloads, such as meteorology, life sciences, and engineering simulations. It is also used in media and entertainment, as well as the financial services industry.

I had my first hands-on experience with Lustre file systems when I was working for Sun Microsystems. I was a pre-sales engineer and worked on deals to sell multimillion-dollar compute and storage infrastructure to financial services companies. Back then, having access to a Lustre file system was a luxury. It required expensive compute, storage, and network hardware. We had to wait weeks for delivery. Furthermore, it took days to install and configure a cluster.

Fast forward to 2021, I may create a petabyte-scale Lustre cluster and attach the file system to compute resources running in the AWS cloud, on-demand, and only pay for what I use. There is no need to know about Storage Area Networks (SAN), Fiber Channel (FC) fabric, and other underlying technologies.

Modern applications use different storage options for different workloads. It is common to use S3 object storage for data transformation, preparation, or import/export tasks. Other workloads may require POSIX file-systems to access the data. FSx for Lustre lets you synchronize objects stored on S3 with the Lustre file system to meet these requirements.

When you link your S3 bucket to your file system, FSx for Lustre transparently presents S3 objects as files and lets you write results back to S3.

Full Bi-Directional Synchronization with Multiple S3 Buckets
If your workloads require fast, POSIX-compliant file system access to your S3 buckets, then you can use FSx for Lustre to link your S3 buckets to a file system and keep data synchronized between the file system and S3 in both directions. However, until today, there were a few limitations. First, you had to manually configure a task to export data back from FSx for Lustre to S3. Second, deleted files on S3 were not automatically deleted from the file system. And third, an FSx for Lustre file system could be synchronized with only one S3 bucket. We are addressing these three challenges with this launch.

Starting today, when you configure an automatic export policy for your data repository association, files on your FSx for Lustre file system are automatically exported to your data repository on S3. Next, deleted objects on S3 are now deleted from the FSx for Lustre file system. The opposite is also available: deleting files on FSx for Lustre triggers the deletion of corresponding objects on S3. Finally, you may now synchronize your FSx for Lustre file system with multiple S3 buckets. Each bucket has a different path at the root of your Lustre file system. For example your S3 bucket logs may be mapped to /fsx/logs and your other financial_data bucket may be mapped to /fsx/finance.

These new capabilities are useful when you must concurrently process data in S3 buckets using both a file-based and an object-based workflow, as well as share results in near real time between these workflows. For example, an application that accesses file data can do so by using an FSx for Lustre file system linked to your S3 bucket, while another application running on Amazon EMR may process the same files from S3.

Moreover, you may now link multiple S3 data repositories (S3 buckets or prefixes) to a single FSx for Lustre file system, enabling a unified view across multiple datasets. This is convenient when you use multiple S3 buckets or prefixes to organize and manage access to your data lake, when you access files from a public S3 bucket (such as these hundreds of public datasets) and write job outputs to a different S3 bucket, or when you want to use a larger FSx for Lustre file system linked to multiple S3 datasets to achieve greater scale-out performance.

How It Works
Let’s create an FSx for Lustre file system and attach it to an Amazon Elastic Compute Cloud (Amazon EC2) instance. I make sure that the file system and instance are in the same VPC subnet to minimize data transfer costs. The file system security group must authorize access from the instance.

I open the AWS Management Console, navigate to FSx, and select Create file system. Then, I select Amazon FSx for Lustre. I am not going through all of the options to create a file system here; you can refer to the documentation to learn how to create one. I make sure that Import data from and export data to S3 is selected.

Lustre - enable S3 synchronization

It takes a few minutes to create the file system. Once the status is ✅ Available, I navigate to the Data repository tab, and then select Create data repository association.

I choose a Data Repository path (my source S3 bucket) and a file system path (where in the file system that bucket will be imported).

FsX Lustre Data repository

Then, I choose the Import policy and Export policy. I may synchronize the creation of files/objects, their updates, and their deletions. I select Create.

FsX Lustre Data repository import policies
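For those who prefer the command line, here is a sketch of the equivalent operation with the AWS CLI. The file system ID, file system path, and bucket name are placeholders standing in for my demo setup:

# link a bucket to a path in the file system, with automatic import
# and export of new, changed, and deleted files/objects
aws fsx create-data-repository-association        \
    --file-system-id fs-0123456789abcdef0         \
    --file-system-path /logs                      \
    --data-repository-path s3://my-logs-bucket    \
    --s3 "AutoImportPolicy={Events=[NEW,CHANGED,DELETED]},AutoExportPolicy={Events=[NEW,CHANGED,DELETED]}"

With the file system mounted under /fsx, objects from my-logs-bucket would then appear under /fsx/logs.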

When I use automatic import, I also make sure to provide an S3 bucket in the same AWS Region as the FSx for Lustre cluster. FSx for Lustre supports linking to an S3 bucket in a different AWS Region for automatic export and all other capabilities.

Using the console, I see the list of Data repository associations. I wait for the import task status to become ✅ Succeeded. If I link the file system to an S3 bucket with a large number of objects, then I may choose to skip Importing metadata from repository while creating the data repository association, and then load metadata from selected prefixes in my S3 buckets that are required for my workload using an Import task.

FsX for lustre - meta data repository tasks
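If I skipped the initial metadata import, I can load metadata later with an import task. Here is a sketch of the CLI call; the file system ID and prefix are placeholders, and --paths limits the task to the prefixes my workload needs:

# import metadata for selected prefixes only
aws fsx create-data-repository-task           \
    --file-system-id fs-0123456789abcdef0     \
    --type IMPORT_METADATA_FROM_REPOSITORY    \
    --paths logs/january                      \
    --report Enabled=false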

I create an EC2 instance in the same VPC subnet. Furthermore, I make sure that the FSx for Lustre cluster security group authorizes ingress traffic from the EC2 instance. I use SSH to connect to the instance, and then type the following commands (commands are prefixed with the $ sign that is part of my shell prompt).

# check kernel version, minimum version 4.14.104-95.84 is required 
$ uname -r
4.14.252-195.483.amzn2.aarch64

# install lustre client 
$ sudo amazon-linux-extras install -y lustre2.10
Installing lustre-client
...
Installed:
  lustre-client.aarch64 0:2.10.8-5.amzn2                                                                                                                        

Complete!

# create a mount point 
$ sudo mkdir /fsx

# mount the file system 
$ sudo mount -t lustre -o noatime,flock fs-00...9d.fsx.us-east-1.amazonaws.com@tcp:/ny345bmv /fsx

# verify mount succeeded
$ mount 
...
172.0.0.0@tcp:/ny345bmv on /fsx type lustre (rw,noatime,flock,lazystatfs)

Then, I verify that the file system contains the S3 objects, and I create a new file using the touch command.

Fsx Lustre - check file system
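Concretely, the verification looks roughly like this in my shell (file and directory names stand in for my demo bucket's content):

# list the files imported from S3
$ ls /fsx/logs
app.log  ingest.log

# create a new file; automatic export synchronizes it back to S3
$ touch /fsx/logs/new-file.txt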

I switch to the AWS Console, under S3 and then my bucket name, and I verify that the file has been synchronized.

Fsx Lustre - check s3

Using the console, I delete the file from S3. And, unsurprisingly, after a few seconds, the file is also deleted from the FSx file system.

Fsx Lustre - check file systems - deleted

Pricing and Availability
These new capabilities are available at no additional cost on Amazon FSx for Lustre file systems. Automatic export and multiple repositories are only available on Persistent 2 file systems in US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). Automatic import with support for deleted and moved objects in S3 is available on file systems created after July 23, 2020, in all Regions where FSx for Lustre is available.

You can configure your file system to automatically import S3 updates by using the AWS Management Console, the AWS Command Line Interface (CLI), and AWS SDKs.

Learn more about using S3 data repositories with Amazon FSx for Lustre file systems.

One More Thing
One more thing while you are reading. Today, we also launched the next generation of FSx for Lustre file systems. FSx for Lustre next-gen file systems are built on AWS Graviton processors. They are designed to provide you with up to 5x higher throughput per terabyte (up to 1 GB/s per terabyte) and reduce your cost of throughput by up to 60% as compared to previous generation file systems. Give it a try today!

— seb

PS: my colleague Michael recorded a demo video to show you the enhanced S3 integration for FSx for Lustre in action. Check it out today.

Preview – AWS Backup Adds Support for Amazon S3

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/preview-aws-backup-adds-support-for-amazon-s3/

Starting today, you can preview AWS Backup for Amazon Simple Storage Service (Amazon S3).

AWS Backup is a fully managed, policy-based service that lets you centralize and automate the backup and restore of your applications spanning across 12 AWS services: Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Block Store (EBS) volumes, Amazon Relational Database Service (RDS) databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Neptune databases, Amazon DocumentDB (with MongoDB compatibility) databases, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Lustre file systems, Amazon FSx for Windows File Server file systems, AWS Storage Gateway volumes, and now Amazon S3 (in preview).

Modern workloads and systems are leveraging different storage options for different functionalities. In the 21st century, it is normal to build applications relying on non-relational and relational databases, shared file storage, and object storage, just to name a few. When operating and managing these applications, you told us that you wanted centralized protection and provable compliance for application data stored in S3 alongside other AWS services for storage, compute, and databases.

I can see three benefits when integrating Amazon Simple Storage Service (Amazon S3) with your data protection policies in AWS Backup.

First, it lets you centrally manage your application backups: AWS Backup provides an automated solution to centrally configure backup policies, thereby helping you simplify backup lifecycle management. This also makes it easy to ensure that your application data across AWS services (including S3) is centrally backed up.

Second, it lets you easily restore your data: AWS Backup provides a single-click-restore experience for your S3 data. This lets you perform point-in-time restores of your S3 buckets and objects to a new or existing S3 bucket.

Finally, it improves backup compliance: AWS Backup provides built-in dashboards that let you track backup and restore operations for S3.

AWS Backup for S3 (Preview) lets you create continuous point-in-time backups along with periodic backups of S3 buckets, including object data, object tags, access control lists (ACLs), and user-defined metadata. The first backup is a full snapshot, while subsequent backups are incremental. If there is a data disruption event, then you choose a backup from the backup vault, and restore an S3 bucket (or individual S3 objects) to a new or existing S3 bucket. AWS Backup is integrated with AWS Organizations, which lets you use a single policy across AWS accounts (within your Organizations) to automate backup creation and backup access management.

Furthermore, you can turn on AWS Backup Vault Lock to enable delete protection of the data that you protect with AWS Backup, thereby improving the protection of your immutable backups from accidental deletion or malicious re-encryption.

How to Get Started
AWS Backup works with versioned S3 buckets. Before you get started, turn on S3 Versioning on the buckets you want to back up.
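If versioning is not yet enabled, one way to turn it on is with a single CLI call (the bucket name is a placeholder):

aws s3api put-bucket-versioning          \
    --bucket my-application-bucket       \
    --versioning-configuration Status=Enabled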

I must enable S3 in AWS Backup Settings when I use this feature for the first time. Using the AWS Management Console, I navigate to AWS Backup, then select Settings and Configure resources. I enable S3, and select Confirm. This is a one-time operation.

AWS Backup - optin S3

For this demo, I already have an existing backup plan, and I want to add an S3 bucket to this plan. If you want to create a new backup plan, then you can refer to AWS Backup‘s technical documentation.

To start including my S3 objects in my backup plan, I open the AWS Management Console, navigate to Backup plans, and select Assign resources.

AWS Backup Add Resources

I give a name to my Resource assignment. I select Include specific resources types, then I select S3 as Resource type and one or several S3 Bucket names. When I am done, I select Assign resources.

Alternatively, I may use tags or resource IDs to assign S3 resources.

If you have thousands of S3 buckets, I recommend using tags to assign the S3 buckets to a backup plan. AWS Backup matches the tags in S3 buckets to the ones assigned to the backup plan, and it centrally backs up the S3 resources along with other AWS services that your application uses.

The other options are the same as the ones you already know.

AWS Backup - backup plan for S3

The Bucket names list in the previous screenshot only shows the S3 buckets in the same Region.

Alternatively, I may also create on-demand backups. I navigate to the Protected resources section, and select Create on-demand backup.

I select S3 as the Resource type, and select the Bucket name. As per usual, I choose a Backup Window, a Retention period, a Backup vault, and an IAM role. Then, I select Create on-demand backup.
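The same on-demand backup can also be started from the CLI. Here is a minimal sketch, assuming the Default vault and the default AWS Backup service role; the account ID and bucket name are placeholders:

aws backup start-backup-job                            \
    --backup-vault-name Default                        \
    --resource-arn arn:aws:s3:::my-application-bucket  \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole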

AWS Backup - on-demand backup for S3

After a while, depending on the size of my bucket, the backup is ✅ Completed.

AWS Backup for S3 - Backup completed

All of the backups are encrypted and stored securely in a backup vault that I selected in the backup plan.

A backup vault (or backup storage vault) is an encrypted logical construct in my AWS account that stores and organizes my backups (recovery points). I may create new backup vaults in every AWS Region where AWS Backup is available. I may enable AWS Backup Vault Lock (delete-protection capability) on the backup vault to avoid accidental deletions and prevent malicious actors from re-encrypting my data. AWS Backup stores my continuous backups and periodic snapshots in the backup vault of my preference, and it lets me browse and restore as per my requirements.

How to Restore Objects
Let’s try to restore this backup.

The restore operation is very flexible. I may restore entire S3 buckets or individual S3 objects. I may restore the backups to the source S3 bucket, or to another existing bucket. Furthermore, I may create a new S3 bucket during restore. The S3 buckets must have Versioning enabled. Also, I may change the encryption key during restore.

I navigate to Backup vaults to restore the S3 bucket I just backed up. In the Backups section, I select the Recovery point ID that I want to restore, and I select Restore from the Actions menu.

AWS Backup for S3 - restore

Before starting the restore, I may select a few options:

  • The Restore time: I may restore my continuous backup to any point in time in the last 35 days, while I can restore my periodic backups to their original state.
  • The Restore type: I may choose to restore the entire bucket or a subset of objects within it.
  • The Restore destination: I may choose to restore on the same bucket, on another one, or create a new bucket during restore.
  • The Restored object encryption: this lets me select the key I want to use to encrypt the restored objects in the bucket.

I select Restore backup to start the restore.

AWS Backup for S3 - restore options

I can monitor the progress in the Jobs section, under the Restore jobs tab.

AWS Backup S3 - restore Jobs

When the status turns to ✅ Completed, my objects are ready to use!
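I may also check restore jobs from the CLI instead of the console; a small sketch:

# list recent restore jobs with their status and progress
aws backup list-restore-jobs                                   \
    --query 'RestoreJobs[].[RestoreJobId,Status,PercentDone]'  \
    --output table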

Generally, the most comprehensive data-protection strategies include regular testing and validation of your restore procedures before you need them. Testing your restores also helps to prepare and maintain recovery runbooks. In turn, that ensures operational readiness during a disaster recovery exercise, or an actual data loss scenario.

Availability and Pricing
The preview is available in the US West (Oregon) Region only.

During the preview, there are no charges for creating and storing backups. You will pay the AWS charges for underlying resources, such as S3 storage, API usage, and versioning.

Send us an email at [email protected] including your AWS account ID to register for the preview.

Go ahead and apply to the preview program today.

— seb

Machine Learning-Powered Amazon Connect, Now With Call Summarization

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/machine-learning-powered-amazon-connect-now-with-call-summarization/

At AWS, our mission is to make machine learning (ML) accessible to data scientists, developers, and business users. To help businesses easily leverage the power of ML, we create purpose-built solutions that embed ML and deep learning technologies directly into a business process to address real customer needs, rather than leaving companies to sort it out on their own.

One place where we have seen ML have an impact is within the contact center—the place you receive and respond to customer inquiries and issues. Because of the growing role of customer experience (CX) and the increase in contactless commerce via phone or email, contact centers are essential to maintaining the human connections that businesses depend on. However, analog or outdated methods make it difficult to address every customer need in an effective way that delivers timely resolutions, creates great experiences, and fosters customer loyalty.

Embedding AWS ML technologies into a cloud contact center solution helps decrease the friction of calls, chats, and other engagements. It also makes it possible to automate outdated processes.

Amazon Connect is an easy-to-use, cloud-based, ML-powered contact center service that helps companies of any size deliver superior customer service at a lower cost.

Let me take three examples with Voice ID, Wisdom, and Contact Lens.

Amazon Connect Voice ID
ML capabilities can help streamline the customer authentication experience. Instead of asking customers to repeat their email address and their mother's maiden name several times, ML-powered voice identification can establish a digital voice print associated with each customer's unique voice. Then, it can recognize it at the beginning of each subsequent call. Voice identification provides a confidence score that may be used to automate authentication workflows.

Amazon Connect Wisdom
ML can also help search vast documentation and knowledge bases to find the most relevant answers to the questions raised by the customer, helping resolve customer issues faster and better.

Contact Lens for Amazon Connect
ML technologies also shine at analyzing the tone and content of a conversation, capturing customer sentiment in the moment, and learning from it. ML can help transcribe calls, track customer sentiment, detect common issues and customer trends, or even pinpoint discrepancies.

At just about the same time last year, I announced the addition of real-time capabilities for Contact Lens. This lets supervisors identify when to assist an agent on live calls so that they can provide guidance via chat or have the agent transfer the call. Last September, we added support for eight new languages, ending up with a total of 21 languages for post-call analytics and 12 languages for both post-call and real-time analytics.

Contact Lens Adds Call Summarization
But we didn’t stop there. Today, I am pleased to announce the addition of a new capability that helps you improve customer experience and agent and supervisor productivity by automatically summarizing the important aspects of each customer call.

You told us that keeping notes of customer conversations is time consuming, especially for agents who must take notes during the call and manually import them into your CRM tool afterward. In the end, this means more time for us, the customers, waiting in the queue for an agent to become available. Likewise, automatically generated call transcripts don't save much time for supervisors: reading full transcripts to understand what happened during a customer conversation is time consuming too.

How it Works
Starting today, Contact Lens provides a summary of the key moments in a conversation. It is enabled by default, and there is no additional configuration step. You may toggle the Show transcript summary button to show or hide the summary when you don't need it.

Contact Lens - Show Transcript Summary - Toggle button

Once a call is analyzed, the summary is available on the contact detail page.

Contact Lens identifies and summarizes the sections corresponding to Issue (e.g., lost package), Outcome (e.g., customer refund), and Action item (e.g., send a follow-up mail confirming the refund was processed). A manager can quickly see where there’s an action to send a customer a follow-up email and take action to ensure it happens.

Contact Lens Call Summary Example

The call summary is also available in JSON format. Contact Lens uploads these files to the S3 bucket of your choice. Having access to the JSON file lets you import the summaries programmatically into your CRM or other tools.

... redacted for brevity ...

"IssuesDetected": [
{
   "CharacterOffsets": {
      "BeginOffsetChar": 31,
      "EndOffsetChar": 73
   },
   "Text": "I would like to cancel my subscription"
}
]
...
"ActionItemsDetected": [
 {
   "CharacterOffsets": {
      "BeginOffsetChar": 32,
      "EndOffsetChar": 116
   },
   "Text": "I will send you an email with details"
 }
 ]
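As a sketch of this programmatic access, the following command downloads one analysis file and extracts the text of all detected issues with jq. The bucket name and key are placeholders for whatever output location you configured for Contact Lens, and the jq filter deliberately avoids assuming where the IssuesDetected arrays sit in the document:

aws s3 cp s3://my-connect-analysis-bucket/Analysis/Voice/my-contact-id_analysis.json - \
    | jq '[.. | objects | select(has("IssuesDetected")) | .IssuesDetected[].Text]'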

Availability and Pricing
Call summarization by Contact Lens is available in all AWS Regions where Contact Lens is available today. We support post-call analytics in the US West (Oregon), US East (N. Virginia), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions. We support real-time analytics in the US West (Oregon), US East (N. Virginia), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Seoul), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions.

Call summary comes at no additional cost on top of the usual charges for Contact Lens. This is why we chose to enable it by default. Contact Lens is charged $0.015 per minute of voice conversation analyzed. Most of our Contact Lens customers analyze millions of conversation minutes per month. The price is $0.0125 per minute when you analyze more than 5 million minutes per month.

If you have not enabled Contact Lens in your contact center yet, go ahead and start using it today.

— seb

New – Amazon EBS Snapshots Archive

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshots-archive/

I am pleased to announce the availability of Amazon EBS Snapshots Archive, a new storage tier for the long-term retention of Amazon Elastic Block Store (EBS) snapshots of your EBS volumes.

In a nutshell, EBS is an easy-to-use high-performance block storage service for your Amazon Elastic Compute Cloud (Amazon EC2) instances. An EBS volume mounted to your EC2 instances lets you boot an operating system and store data for your most performance-demanding workloads. You may use EBS snapshots to create point-in-time copies of your volume data. The first snapshot of a volume contains all of the data written into that volume. Subsequent snapshots are incremental. Snapshots are stored on Amazon Simple Storage Service (Amazon S3), and they may be shared between AWS accounts and AWS Regions.

The ability to take frequent snapshots and easily restore volumes makes EBS snapshots an obvious choice for your data management strategy, alongside other backup options. The incremental nature of snapshots makes them cost-effective for daily and weekly backups that need immediate restores. However, you told us that business compliance and regulatory needs mean that you must retain EBS snapshots for longer periods of time (months or years): for example, snapshots taken at the end of a project, or snapshots for test and development preserved for future project releases. The vast majority of these snapshots are taken and never read. For these snapshots, you are looking to lower your storage costs. Until now, to benefit from lower storage costs, you may have written complex scripts involving temporary EC2 instances to restore snapshots, mount the corresponding volumes, and transfer the data to lower-cost storage tiers, such as Amazon Glacier.

EBS Snapshots Archive provides a low-cost storage tier to archive full, point-in-time copies of EBS Snapshots that you must retain for 90 days or more for regulatory and compliance reasons, or for future project releases. Now, you can easily archive and manage EBS Snapshots, thereby eliminating the need for custom scripts and third-party tools to manage these snapshots. This lets you move your rarely accessed snapshots to EBS Snapshots Archive to achieve up to 75% lower storage costs, and avoid licensing costs for third-party tools. Furthermore, you can retrieve an archived snapshot within 24-72 hours, and, once restored, use the snapshot to recover an EBS volume.

As per usual, let me show you how it works.

How to Get Started
I have a snapshot available in the US East (N. Virginia) Region, and I want to archive this snapshot for compliance reasons. I open the AWS Management Console, navigate to EC2, then to Snapshots. I select the snapshot I want to archive, and select the Actions menu. I select the Archive snapshot menu option.

EBS Snapshot Archive - create snapshot

I carefully read the confirmation message :-), and I select Archive snapshot.

EBS Snapshot Archive - create snapshot - confirmation

I may monitor the progress of the archive operation with the new Storage Tier tab at the bottom of the screen. After some time, depending on the size of the snapshot, the Tiering status becomes ✅ Archival completed.

EBS Snapshot Archive - create snapshot - archival completed

Archived snapshots stay visible in the console. The new Storage tier column indicates the tier used for storage (Standard or Archive).
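The archive operation is also available programmatically with the ModifySnapshotTier API; here is a sketch from the CLI, with a placeholder snapshot ID:

# move the snapshot to the archive tier
aws ec2 modify-snapshot-tier                 \
    --snapshot-id snap-0123456789abcdef0     \
    --storage-tier archive

# monitor the tiering progress
aws ec2 describe-snapshot-tier-status        \
    --filters Name=snapshot-id,Values=snap-0123456789abcdef0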

How do I Restore a Volume?
Restoring a volume from EBS Snapshots Archive is a two-step process. First, I retrieve the snapshot from EBS Snapshots Archive to its original snapshot ID, using the RestoreSnapshotTier API call or the management console. It takes between 24 and 72 hours to retrieve the snapshot from the archive, depending on the snapshot size. Once retrieved, the snapshot appears as a regular snapshot in my account. At this stage, I hydrate the retrieved snapshot into an EBS volume using the default snapshot restore or Fast Snapshot Restore (FSR) for expedited restores, just like usual.

A CloudWatch event is generated when the snapshot is restored. You may listen to this event to avoid polling the status with the API.

A CreateVolume API call on an archived snapshot will fail. You must restore a snapshot from archive before you use it to create a volume.

Using the AWS Management Console, I select the snapshot that I want to restore, I select the Actions menu, and then I select the Restore snapshot from archive menu option.

EBS Snapshot Archive - create snapshot - restore archive

I have the choice to restore the snapshot permanently, or just temporarily. At the end of the temporary duration, the standard tier snapshot is deleted, and only the archive is preserved.

EBS Snapshot Archive - create snapshot - restore archive - confirmation

After a while, depending on the snapshot size, the archive is restored to standard storage and may be used to recreate a volume, just like usual. I may monitor the progress of the retrieval and the lifetime of temporarily restored archives in the new Storage tier tab in the bottom half of the screen. Temporarily restored snapshots may be kept for up to 180 days.
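From the CLI, the RestoreSnapshotTier API exposes the same choice; a sketch with a placeholder snapshot ID:

# temporary restore: keep the standard tier copy for 10 days
aws ec2 restore-snapshot-tier                \
    --snapshot-id snap-0123456789abcdef0     \
    --temporary-restore-days 10

# or make the restore permanent
aws ec2 restore-snapshot-tier                \
    --snapshot-id snap-0123456789abcdef0     \
    --permanent-restore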

Pricing and Availability
EBS Snapshots Archive is available for you today in 17 AWS Regions. At the time of launch, it is not available in the two Regions in China, Asia Pacific (Seoul), Asia Pacific (Osaka), Canada (Central), and South America (São Paulo).

As per usual, you pay as you go, with no minimum or fixed fees. There are two metrics that influence EBS Snapshots Archive billing: data storage and data retrieval. We charge you $0.0125 per GB-month of stored data and $0.03 per GB retrieved. You are charged for a 90-day period at minimum. This means that if you delete a snapshot archive or permanently restore it less than 90 days after creation, then we charge for the full 90-day period. The EBS pricing page has the details.

Go ahead and start configuring long-term storage for your EBS snapshots today.

— seb

New – Amazon CloudWatch Evidently – Experiments and Feature Management

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/cloudwatch-evidently/

As a developer, I am excited to announce the availability of Amazon CloudWatch Evidently. This is a new Amazon CloudWatch capability that makes it easy for developers to introduce experiments and feature management in their application code. CloudWatch Evidently may be used for two similar but distinct use cases: implementing dark launches, also known as feature flags, and A/B testing.

Feature flags are a software development technique that lets you enable or disable features without needing to deploy your code. It decouples feature deployment from the release. Features in your code are deployed in advance of the actual release. They stay hidden behind if-then-else statements. At runtime, your application code queries a remote service. The service decides the percentage of users who are exposed to the new feature. You can also configure the application behavior for some specific customers, your beta testers for example.

When you use feature flags, you can deploy new code in advance of your launch. Then, you can progressively introduce a new feature to a fraction of your customers. During the launch, you monitor your technical and business metrics. As long as all goes well, you may increase traffic to expose the new feature to additional users. If something goes wrong, you may modify the server-side routing with just one click or API call to present only the old (and working) experience to your customers. This lets you revert the user experience without rolling back the deployment.

A/B Testing shares many similarities with feature flags while still serving a different purpose. A/B tests consist of a randomized experiment with multiple variations. A/B testing lets you compare multiple versions of a single feature, typically by testing the response of a subject to variation A against variation B, and determining which of the two is more effective. For example, let’s imagine an e-commerce website (a scenario we know quite well at Amazon). You might want to experiment with different shapes, sizes, or colors for the checkout button, and then measure which variation has the most impact on revenue.

The infrastructure required to conduct A/B testing is similar to the one required by feature flags. You deploy multiple scenarios in your app, and you control how to route part of the customer traffic to one scenario or the other. Then, you perform deep dive statistical analysis to compare the impacts of variations. CloudWatch Evidently assists in interpreting and acting on experimental results without the need for advanced statistical knowledge. You can use the insights provided by Evidently’s statistical engine, such as anytime p-value and confidence intervals for decision-making while an experiment is in progress.

At Amazon, we use feature flags extensively to control our launches, and A/B testing to experiment with new ideas. We've acquired years of experience building developer tools and libraries, and maintaining and operating experimentation services at scale. Now you can benefit from our experience.

CloudWatch Evidently uses the terms “launches” for feature flags and “experiments” for A/B testing, and so do I in the rest of this article.

Let’s see how it works from an application developer point of view.

Launches in Action
For this demo, I use a simple Guestbook web application. So far, the guest book page is read-only, and comments are entered from our back-end only. I developed a new feature to let customers enter their comments on the guestbook page. I want to launch this new feature progressively over a week and keep the ability to revert the change if it impacts important technical or business metrics (such as p95 latency, customer engagement, page views, etc.). Users are authenticated, and I will segment users based on their user ID.

Before launch:
Evidently - experiment off
After launch:
Evidently - experiment on

Create a Project
Let’s start by configuring Evidently. I open the AWS Management Console and navigate to CloudWatch Evidently. Then, I select Create a project.

Evidently - create project

I enter a Project name and Description.

Evidently lets you optionally store events to CloudWatch Logs or S3, so that you can move them to systems such as Amazon Redshift to perform analytical operations. For this demo, I choose not to store events. When done, I select Create project.

Evidently - create project second part

Add a Feature
Next, I create a feature for this project by selecting Add feature. I enter a Feature name and Feature description. Next, I define my Feature variations. In this example, there are two variations, and I use a Boolean type: true indicates the guestbook is editable, and false indicates it is read-only. Variation types may be boolean, double, long, or string.

Evidently - create feature

I may define overrides. Overrides let me pre-define the variation for selected users. I want the user “seb”, my beta tester, to always receive the editable variation.

Evidently - Create feature - overrides

The console shares the JavaScript and Java code snippets to add into my application.

Evidently - code snippet

Talking about code snippets, let’s look at the changes at the code level.

Instrument my Application Code
I use a simple web application for this demo. I coded this application using JavaScript. I use the AWS SDK for JavaScript and Webpack to package my code. I also use JQuery to manipulate the DOM to hide or show elements. I designed this application to use standard JavaScript and a minimum number of frameworks to make this example inclusive to all. Feel free to use higher level tools and frameworks, such as React or Angular for real-life projects.

I first initialize the Evidently client. Just like other AWS Services, I have to provide an access key and secret access key for authentication. Let’s leave the authentication part out for the moment. I added a note at the end of this article to discuss the options that you have. In this example, I use Amazon Cognito Identity Pools to receive temporary credentials.

// Initialize the Amazon CloudWatch Evidently client.
// EVIDENTLY_ENDPOINT and IDENTITY_POOL_ID are constants defined elsewhere in my project.
const evidently = new AWS.Evidently({
    endpoint: EVIDENTLY_ENDPOINT,
    region: 'us-east-1',
    // exchange the Cognito identity for temporary AWS credentials;
    // the region here is the one where my identity pool was created
    credentials: fromCognitoIdentityPool({
        client: new CognitoIdentityClient({ region: 'us-west-2' }),
        identityPoolId: IDENTITY_POOL_ID
    }),
});

Armed with this client, my code may invoke the EvaluateFeature API to make decisions about the variation to display to customers. The entityId is any string-based attribute to segment my customers. It might be a session ID, a customer ID, or even better, a hash of these. The featureName parameter contains the name of the feature to evaluate. In this example, I pass the value EditableGuestBook.

const evaluateFeature = async (entityId, featureName) => {

    // API request structure
    const evaluateFeatureRequest = {
        // entityId for calling evaluate feature API
        entityId: entityId,
        // Name of my feature
        feature: featureName,
        // Name of my project
        project: "AWSNewsBlog",
    };

    // Evaluate feature
    const response = await evidently.evaluateFeature(evaluateFeatureRequest).promise();
    console.log(response);
    return response;
}

The response contains the assignment decision from Evidently, as based on traffic rules defined on the server-side.

{
  "details": {
    "launch": "EditableGuestBook",
    "group": "V2"
  },
  "reason": "LAUNCH_RULE_MATCH",
  "value": { "boolValue": false },
  "variation": "readonly"
}

The last part consists of hiding or displaying part of the user interface based on the value received above. Using basic JQuery DOM manipulation, it would be something like the following:

window.aws.evaluateFeature(entityId, 'EditableGuestBook').then((response) => {
    if (response.value.boolValue) {
        console.log('Feature Flag is on, showing guest book');
        $('div#guestbook-add').show();
    } else {
        console.log('Feature Flag is off, hiding guest book');
        $('div#guestbook-add').hide();
    }
});
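Before wiring this into the application, I may also check the server-side assignment from the CLI; a sketch using my demo project and feature names, with a placeholder entity ID:

aws evidently evaluate-feature     \
    --project AWSNewsBlog          \
    --feature EditableGuestBook    \
    --entity-id user-1234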

Create a Launch
Now that the feature is defined on the server-side, and the client code is instrumented, I deploy the code and expose it to my customers. At a later stage, I may decide to launch the feature. I navigate back to the console, select my project, and select Create Launch. I choose a Launch name and a Launch description for my launch. Then, I select the feature I want to launch.

Evidently - create launch

In the Launch Configuration section, I configure how much traffic is sent to each variation. I may also schedule the launch with multiple steps. This lets me plan different steps of routing based on a schedule. For example, on the first day, I may choose to send 10% of the traffic to the new feature, and on the second day 20%, etc. In this example, I decide to split the traffic 50/50.

Evidently - launch configuration

Finally, I may define up to three metrics to measure the performance of my variations. Metrics are defined by applying rules to data events.

Evidently - Custom Metrics

Again, I have to instrument my code to send these metrics with the PutProjectEvents API from Evidently. Once my launch is created, the EvaluateFeature API returns different values for different values of entityId (users in this demo).

At any moment, I may change the routing configuration. Moreover, I also have access to a monitoring dashboard to observe the distribution of my variations and the metrics for each variation.

Evidently - launch monitoring

I am confident that your real-life launch graph will get more data than mine did, as I just created it to write this post.

A/B Testing
Doing an A/B test is similar. I create a feature to test, and I create an Experiment. I configure the experiment to route part of the traffic to variation 1, and then the other part to variation 2. When I am ready to launch the experiment, I explicitly select Start experiment.

Evidently - start experiment

In this experiment, I am interested in sending custom metrics. For example:

// pageLoadTime custom metric
const timeSpendOnHomePageData = `{
   "details": {
      "timeSpendOnHomePage": ${timeSpendOnHomePageValue}
   },
   "userDetails": { "userId": "${randomizedID}", "sessionId": "${randomizedID}" }
}`;

// build the PutProjectEvents request; the event data is the JSON document built above
const putProjectEventsRequest = {
   project: 'AWSNewsBlog',
   events: [
    {
        timestamp: new Date(),
        type: 'aws.evidently.custom',
        data: JSON.parse(timeSpendOnHomePageData)
    },
   ],
};

evidently.putProjectEvents(putProjectEventsRequest).promise().then(res => {});

Switching to the Results page, I see raw values and graph data for Event Count, Total Value, Average, Improvement (with 95% confidence interval), and Statistical significance. The statistical significance describes how certain we are that the variation has an effect on the metric as compared to the baseline.

These results are generated throughout the experiment and the confidence intervals and the statistical significance are guaranteed to be valid anytime you want to view them. Additionally, at the end of the experiment, Evidently also generates a Bayesian perspective of the experiment that provides information about how likely it is that a difference between the variations exists.

The following two screenshots show graphs for the average value of two metrics over time, and the improvement for a metric within a 95% confidence interval.

Evidently - experiment monitoring - average values
Evidently - experiment monitoring - improvement

Additional Thoughts
Before we wrap up, I’d like to share some additional considerations.

First, it is important to understand that I choose to demo Evidently in the context of front-end application development. However, you may use Evidently with any application type: front-end web or mobile, back-end API, or even machine learning (ML). For example, you may use Evidently to deploy two different ML models and conduct experiments just like I showed above.

Second, just like with other AWS services, the Evidently API is available in all of our AWS SDKs. This lets you use EvaluateFeature and other APIs from nine programming languages: C++, Go, Java, JavaScript (and TypeScript), .NET, NodeJS, PHP, Python, and Ruby. The AWS SDKs for Rust and Swift are in the making.

Third, for a front-end application as I demoed here, it is important to consider how to authenticate calls to the Evidently API. Hard coding access keys and secret access keys is not an option. For the front-end scenario, I suggest that you use Amazon Cognito Identity Pools to exchange user identity tokens for temporary access and secret keys. User identity tokens may be obtained from Cognito User Pools, or third-party authentication systems, such as Active Directory, Login with Amazon, Login with Facebook, Login with Google, Sign in with Apple, or any system compliant with OpenID Connect or SAML. Cognito Identity Pools also allow for anonymous access; no identity token is required. Cognito Identity Pools vend temporary tokens associated with IAM roles. You must allow calls to the evidently:EvaluateFeature API in your IAM policies.

Finally, when using feature flags, plan for code cleanup time during your sprints. Once a feature is launched, you might consider removing calls to EvaluateFeature API and the if-then-else logic used to initially hide the feature.

Pricing and Availability
Amazon CloudWatch Evidently is generally available in nine AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), and Europe (Stockholm). As usual, we will gradually extend to other Regions in the coming months.

Pricing is pay-as-you-go with no minimum or recurring fees. CloudWatch Evidently charges your account based on Evidently events and Evidently analysis units. Evidently analysis units are generated from Evidently events, based on rules you have created in Evidently. For example, a user checkout event may produce two Evidently analysis units: checkout value and the number of items in cart. For more information about pricing, see Amazon CloudWatch Pricing.

Start experimenting with CloudWatch Evidently today!

— seb