Tag Archives: launch

AWS Week in Review – March 20, 2023

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-20-2023/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

A new week starts, and Spring is almost here! If you’re curious about AWS news from the previous seven days, I’ve got you covered.

Last Week’s Launches
Here are the launches that got my attention last week:

Picture of an S3 bucket and AWS CEO Adam Selipsky.

Amazon S3 – Last week was AWS Pi Day 2023, celebrating 17 years of innovation since Amazon S3 was introduced on March 14, 2006. For the occasion, the team released many new capabilities.

Amazon Linux 2023 – Our new Linux-based operating system is now generally available. Sébastien’s post is full of tips and info.

Application Auto Scaling – You can now use arithmetic operations and mathematical functions to customize the metrics used with Target Tracking policies. You can use this to scale based on your own application-specific metrics. Read how it works with Amazon ECS services.

AWS Data Exchange for Amazon S3 is now generally available – You can now share and find data files directly from S3 buckets, without the need to create or manage copies of the data.

Amazon Neptune – Now offers a graph summary API to help you understand important metadata about property graphs (PG) and resource description framework (RDF) graphs. Neptune also added support for slow query logs to help identify queries that need performance tuning.

Amazon OpenSearch Service – The team introduced security analytics that provides new threat monitoring, detection, and alerting features. The service now supports OpenSearch version 2.5, which adds several new features such as support for Point in Time Search and improvements to observability and geospatial functionality.

AWS Lake Formation and Apache Hive on Amazon EMR – Introduced fine-grained access controls that allow data administrators to define and enforce table- and column-level security for customers accessing data via Apache Hive running on Amazon EMR.

Amazon EC2 M1 Mac Instances – You can now update guest environments to a specific or the latest macOS version without having to tear down and recreate the existing macOS environments.

AWS Chatbot – Now Integrates With Microsoft Teams to simplify the way you troubleshoot and operate your AWS resources.

Amazon GuardDuty RDS Protection for Amazon Aurora – Now generally available to help profile and monitor access activity to Aurora databases in your AWS account without impacting database performance.

AWS Database Migration Service – Now supports validation to ensure that data is migrated accurately to S3 and can now generate an AWS Glue Data Catalog when migrating to S3.

AWS Backup – You can now back up and restore virtual machines running on VMware vSphere 8 and with multiple vNICs.

Amazon Kendra – There are new connectors to index documents and search for information across these new content sources: Confluence Server, Confluence Cloud, Microsoft SharePoint OnPrem, and Microsoft SharePoint Cloud. This post shows how to use the Amazon Kendra connector for Microsoft Teams.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
A few more blog posts you might have missed:

Example of a geospatial query.

Women founders Q&A – We’re talking to six women founders and leaders about how they’re making impacts in their communities, industries, and beyond.

What you missed at the 2023 IMAGINE: Nonprofit conference – Hundreds of nonprofit leaders, technologists, and innovators gathered to learn and share how AWS can drive a positive impact for people and the planet.

Monitoring load balancers using Amazon CloudWatch anomaly detection alarms – The metrics emitted by load balancers provide crucial and unique insight into service health, service performance, and end-to-end network performance.

Extend geospatial queries in Amazon Athena with user-defined functions (UDFs) and AWS Lambda – Using a solution based on Uber’s Hexagonal Hierarchical Spatial Index (H3) to divide the globe into equally-sized hexagons.

How cities can use transport data to reduce pollution and increase safety – A guest post by Rikesh Shah, outgoing head of open innovation at Transport for London.

For AWS open-source news and updates, here’s the latest newsletter curated by Ricardo to bring you the most recent updates on open-source projects, posts, events, and more.

Upcoming AWS Events
Here are some opportunities to meet:

AWS Public Sector Day 2023 (March 21, London, UK) – An event dedicated to helping public sector organizations use technology to achieve more with less through the current challenging conditions.

Women in Tech at Skills Center Arlington (March 23, VA, USA) – Let’s celebrate the history and legacy of women in tech.

The AWS Summits season is warming up! You can sign up here to be notified when registration opens in your area.

That’s all from me for this week. Come back next Monday for another Week in Review!

Danilo

AWS Chatbot Now Integrates With Microsoft Teams

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-chatbot-now-integrates-with-microsoft-teams/

I am pleased to announce that, starting today, you can use AWS Chatbot to troubleshoot and operate your AWS resources from Microsoft Teams.

Communicating and collaborating on IT operation tasks through chat channels is known as ChatOps. It allows you to centralize the management of infrastructure and applications, as well as to automate and streamline your workflows. It helps to provide a more interactive and collaborative experience, as you can communicate and work with your colleagues in real time through a familiar chat interface to get the job done.

We launched AWS Chatbot in 2020 with Amazon Chime and Slack integrations. Since then, the landscape of chat platforms has evolved rapidly, and many of you are now using Microsoft Teams.

AWS Chatbot Benefits
When using AWS Chatbot for Microsoft Teams or other chat platforms, you receive notifications from AWS services directly in your chat channels, and you can take action on your infrastructure by typing commands without having to switch to another tool.

Typically you want to receive alerts about your system health, your budget, any new security threat or risk, or the status of your CI/CD pipelines. Sending a message to the chat channel is as simple as sending a message on an Amazon Simple Notification Service (Amazon SNS) topic. Thanks to the native integration between Amazon CloudWatch alarms and SNS, alarms are automatically delivered to your chat channels with no additional configuration step required. Similarly, thanks to the integration between Amazon EventBridge and SNS, any system or service that emits events to EventBridge can send information to your chat channels.

But ChatOps is more than the ability to spot problems as they arise. AWS Chatbot allows you to receive predefined CloudWatch dashboards interactively and retrieve Logs Insights logs to troubleshoot issues directly from the chat thread. You can also directly type in the chat channel most AWS Command Line Interface (AWS CLI) commands to retrieve additional telemetry data or resource information or to run runbooks to remediate the issues.

Typing and remembering long commands is difficult. With AWS Chatbot, you can define your own aliases to reference frequently used commands and their parameters. It reduces the number of steps to complete a task. Aliases are flexible and can contain one or more custom parameters injected at the time of the query.

And because chat channels are designed for conversation, you can also ask questions in natural language and have AWS Chatbot answer you with relevant extracts from the AWS documentation or support articles. Natural language understanding also allows you to make queries such as “show me my ec2 instances in eu-west-3.”

Let’s Configure the Integration Between AWS Chatbot and Microsoft Teams
Getting started is a two-step process. First, I configure my team in Microsoft Teams. As a Teams administrator, I add the AWS Chatbot application to the team, and I take note of the URL of the channel I want to use for receiving notifications and operating AWS resources from Microsoft Teams channels.

Second, I register Microsoft Teams channels in AWS Chatbot. I also assign IAM permissions that define what channel members can do in the channel and associate SNS topics to receive notifications. I can configure AWS Chatbot with the AWS Management Console, an AWS CloudFormation template, or the AWS Cloud Development Kit (AWS CDK). For this demo, I choose to use the console.

I open the Management Console and navigate to the AWS Chatbot section. On the top right side of the screen, in the Configure a chat client box, I select Microsoft Teams and then Configure client.

I enter the Microsoft Teams channel URL I noted in the Teams app.

Add the team channel URL to Chatbot

At this stage, Chatbot redirects my browser to Microsoft Teams for authentication. If I am already authenticated, I will be redirected back to the AWS console immediately. Otherwise, I enter my Microsoft Teams credentials and one-time password and wait to be redirected.

At this stage, my Microsoft Teams team is registered with AWS Chatbot and ready to add Microsoft Teams channels. I select Configure new channel.

Chatbot is now linked to your Microsoft Teams

There are four sections to enter the details of the configuration. In the first section, I enter a Configuration name for my channel. Optionally, I also define the Logging details. In the second section, I paste the Microsoft Teams Channel URL again.

Configure chatbot section one and two

In the third section, I configure the Permissions. I can choose between the same set of permissions for all Microsoft Teams users in my team, or I can set User-level roles permission to enable user-specific permissions in the channel. In this demo, I select Channel role, and I assign an IAM role to the channel. The role defines the permissions shared by all users in the channel. For example, I can assign a role that allows users to access configuration data from Amazon EC2 but not from Amazon S3. Under Channel role, I select Use an existing IAM role. Under Existing role, I select a role I created for my 2019 re:Invent talk about ChatOps: chatbot-demo. This role gives read-only access to all AWS services, but I could also assign other roles that would allow Chatbot users to take actions on their AWS resources.

To mitigate the risk that another person in your team accidentally grants more than the necessary privileges to the channel or user-level roles, you might also include Channel guardrail policies. These are the maximum permissions your users might have when using the channel. At runtime, the actual permissions are the intersection of the channel or user-level policies and the guardrail policies. Guardrail policies act like a boundary that channel users will never escape. The concept is similar to permission boundaries for IAM entities or service control policies (SCP) for AWS Organizations. In this example, I attach the ReadOnlyAccess managed policy.

Configure chatbot section three

The fourth and last section allows you to specify the SNS topic that will be the source for notifications sent to your team’s channel. Your applications or AWS services, such as CloudWatch alarms, can send messages to this topic, and AWS Chatbot will relay all messages to the configured Microsoft Teams channel. Thanks to the integration between Amazon EventBridge and SNS, any application able to send a message to EventBridge is able to send a message to Microsoft Teams.

For this demo, I select an existing SNS topic: alarmme in the us-east-1 Region. You can configure multiple SNS topics to receive alarms from various Regions. I then select Configure.

Configure chatbot section four

Let’s Test the Integration
That’s it. Now I am ready to test my setup.

On the AWS Chatbot configuration page, I first select Send test message. I also have an alarm defined for when my estimated billing goes over $500. In the CloudWatch section of the Management Console, I configure that alarm to post a message on the SNS topic shared with Microsoft Teams.
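As a rough CLI equivalent of that console setup, the billing alarm could be created as follows. This is a sketch only: the topic ARN and account ID are placeholders, billing metrics are published in us-east-1, and billing alerts must already be enabled for the account.

# Billing alarm that notifies the SNS topic mapped to the Teams channel
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name estimated-charges-over-500 \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 500 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:alarmme

When the alarm fires, CloudWatch publishes to the SNS topic, and AWS Chatbot relays the notification to the mapped channel.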

Within seconds, I receive the test message and the alarm message on the Microsoft Teams channel.

AWS Chatbot with Microsoft Teams, first messages received on the channel

Then I type a command to understand where the billing alarm comes from. I want to know how many EC2 instances are running.

On the chat client channel, I type @aws to select Chatbot as the destination, then the rest of the CLI command, as I would do in a terminal: ec2 describe-instances --region us-east-1 --filters "Name=architecture,Values=arm64_mac" --query "Reservations[].Instances[].InstanceId"

Chatbot answers within seconds.

AWS chatbot describe instances

I can create aliases for commands I frequently use. Aliases may have placeholder parameters that I can give at runtime, such as the Region name for example.

I create an alias to get the list of my macOS instance IDs with the command: aws alias create mac ec2 describe-instances --region $region --filters "Name=architecture,Values=arm64_mac" --query "Reservations[].Instances[].InstanceId"

Now, I can type @aws alias run mac us-east-1 as a shortcut to get the same result as above. I can also manage my aliases with the @aws alias list, @aws alias get, and @aws alias delete commands.

I don’t know about you, but for me it is hard to remember commands. When I use the terminal, I rely on auto-complete to remind me of various commands and their options. AWS Chatbot offers similar command completion and guides me to collect missing parameters.

AWS Chatbot command completion

When using AWS Chatbot, I can also ask questions using natural English language. It can help find answers in the AWS documentation and support articles when I type questions such as @aws how can I tag my EC2 instances? or @aws how do I configure Lambda concurrency setting?

It can also find resources in my account when AWS Resource Explorer is activated. For example, I asked the bot: @aws what are the tags for my ec2 resources? and @aws what Regions do I have Lambda service?

And I received these responses.

AWS Chatbot NLP Response 1

AWS Chatbot NLP Response 2

Thanks to AWS Chatbot, I realized that I had a rogue Lambda function left in ca-central-1. I used the AWS console to delete it.

Available Now
You can start using AWS Chatbot with Microsoft Teams today. AWS Chatbot for Microsoft Teams is available to download from the Microsoft Teams app store at no additional cost. AWS Chatbot is available in all public AWS Regions, at no additional charge. You pay for the underlying resources that you use. You might incur charges from your chat client.

Get started today and configure your first integration with Microsoft Teams.

— seb

Amazon Linux 2023, a Cloud-Optimized Linux Distribution with Long-Term Support

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-linux-2023-a-cloud-optimized-linux-distribution-with-long-term-support/

I am excited to announce the general availability of Amazon Linux 2023 (AL2023). AWS has provided you with a cloud-optimized Linux distribution since 2010. This is the third generation of our Amazon Linux distributions.

Every generation of Amazon Linux distribution is secured, optimized for the cloud, and receives long-term AWS support. We built Amazon Linux 2023 on these principles, and we go even further. Deploying your workloads on Amazon Linux 2023 gives you three major benefits: a high-security standard, a predictable lifecycle, and a consistent update experience.

Let’s look at security first. Amazon Linux 2023 includes preconfigured security policies that make it easy for you to implement common industry guidelines. You can configure these policies at launch time or run time.

For example, you can configure the system crypto policy to enforce system-wide usage of a specific set of cipher suites, TLS versions, or acceptable parameters in certificates and key exchanges. Also, the Linux kernel has many hardening features enabled by default.
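For example, assuming the standard crypto-policies tooling that Amazon Linux 2023 inherits from its Fedora base, switching the system-wide policy on a running instance might look like this:

# Show the currently active system-wide crypto policy
update-crypto-policies --show

# Switch to a stricter policy; a reboot lets all services pick it up
sudo update-crypto-policies --set FUTURE
sudo reboot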

Amazon Linux 2023 makes it easier to plan and manage the operating system lifecycle. New Amazon Linux major versions will be available every two years. Major releases include new features and improvements in security and performance across the stack. The improvements might include major changes to the kernel, toolchain, glibc, OpenSSL, and any other system libraries and utilities.

During those two years, a major release will receive an update every three months. These updates include security updates, bug fixes, and new features and packages. Each minor version is a cumulative list of updates that includes security and bug fixes in addition to new features and packages. These releases might include the latest language runtimes such as Python or Java. They might also include other popular software packages such as Ansible and Docker. In addition to these quarterly updates, security updates will be provided as soon as they are available.

Each major version, including 2023, will come with five years of long-term support. After the initial two-year period, each major version enters a three-year maintenance period. During the maintenance period, it will continue to receive security bug fixes and patches as soon as they are available. This support commitment gives you the stability you need to manage long project lifecycles.

The following diagram illustrates the lifecycle of Amazon Linux distributions:

Last—and this policy is by far my favorite—Amazon Linux provides you with deterministic updates through versioned repositories, a flexible and consistent update mechanism. The distribution locks to a specific version of the Amazon Linux package repository, giving you control over how and when you absorb updates. By default, and in contrast with Amazon Linux 2, a dnf update command will not update your installed packages (dnf is the successor to yum). This helps to ensure that you are using the same package versions across your fleet. All Amazon Elastic Compute Cloud (Amazon EC2) instances launched from an Amazon Machine Image (AMI) will have the same version of packages. Deterministic updates also promote usage of immutable infrastructure, where no infrastructure is updated after deployment. When an update is required, you update your infrastructure as code scripts and redeploy a new infrastructure. Of course, if you really want to update your distribution in place, you can point dnf to an updated package repository and update your machine as you do today. But did I tell you this is not a good practice for production workloads? I’ll share more technical details later in this blog post.

How to Get Started
Getting started with Amazon Linux 2023 is no different than with other Linux distributions. You can use the EC2 run-instances API, the AWS Command Line Interface (AWS CLI), or the AWS Management Console, and one of the four Amazon Linux 2023 AMIs that we provide. We support two machine architectures (x86_64 and Arm) and two sizes (standard and minimal). Minimal AMIs contain the most basic tools and utilities to start the OS. The standard version comes with the most commonly used applications and tools installed.

To retrieve the latest AMI ID for a specific Region, you can use the AWS Systems Manager get-parameter API and query the /aws/service/ami-amazon-linux-latest/<alias> parameter.

Be sure to replace <alias> with one of the four aliases available:

  • For arm64 architecture (standard AMI): al2023-ami-kernel-default-arm64
  • For arm64 architecture (minimal AMI): al2023-ami-minimal-kernel-default-arm64
  • For x86_64 architecture (standard AMI): al2023-ami-kernel-default-x86_64
  • For x86_64 architecture (minimal AMI): al2023-ami-minimal-kernel-default-x86_64

For example, to search for the latest Arm64 full distribution AMI ID, I open a terminal and enter:

~ aws ssm get-parameters --region us-east-2 --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64
{
    "Parameters": [
        {
            "Name": "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64",
            "Type": "String",
            "Value": "ami-02f9b41a7af31dded",
            "Version": 1,
            "LastModifiedDate": "2023-02-24T22:54:56.940000+01:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64",
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}

To launch an instance, I use the run-instances API. Notice how I use Systems Manager resolution to dynamically look up the AMI ID from the CLI.

➜ aws ec2 run-instances                                                                            \
       --image-id resolve:ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64  \
       --key-name my_ssh_key_name                                                                   \
       --instance-type c6g.medium                                                                   \
       --region us-east-2 
{
    "Groups": [],
    "Instances": [
        {
          "AmiLaunchIndex": 0,
          "ImageId": "ami-02f9b41a7af31dded",
          "InstanceId": "i-0740fe8e23f903bd2",
          "InstanceType": "c6g.medium",
          "KeyName": "my_ssh_key_name",
          "LaunchTime": "2023-02-28T14:12:34+00:00",

...(redacted for brevity)
}

When the instance is launched, and if the associated security group allows SSH (TCP 22) connections, I can connect to the machine:

~ ssh ec2-user@3.145.19.213
Warning: Permanently added '3.145.19.213' (ED25519) to the list of known hosts.
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\       Preview
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Tue Feb 28 14:14:44 2023 from 81.49.148.9
[ec2-user@ip-172-31-9-76 ~]$ uname -a
Linux ip-172-31-9-76.us-east-2.compute.internal 6.1.12-19.43.amzn2023.aarch64 #1 SMP Thu Feb 23 23:37:18 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

We also distribute Amazon Linux 2023 as Docker images. The Amazon Linux 2023 container image is built from the same software components that are included in the Amazon Linux 2023 AMI. The container image is available for use in any environment as a base image for Docker workloads. If you’re using Amazon Linux for applications in EC2, you can containerize your applications with the Amazon Linux container image.

These images are available from Amazon Elastic Container Registry (Amazon ECR) and from Docker Hub. Here is a quick demo to start a Docker container using Amazon Linux 2023 from Elastic Container Registry.

$ aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
Login Succeeded
~ docker run --rm -it public.ecr.aws/amazonlinux/amazonlinux:2023 /bin/bash
Unable to find image 'public.ecr.aws/amazonlinux/amazonlinux:2023' locally
2023: Pulling from amazonlinux/amazonlinux
b4265814d5cf: Pull complete 
Digest: sha256:bbd7a578cff9d2aeaaedf75eb66d99176311b8e3930c0430a22e0a2d6c47d823
Status: Downloaded newer image for public.ecr.aws/amazonlinux/amazonlinux:2023
bash-5.2# uname -a 
Linux 9d5b45e9f895 5.15.49-linuxkit #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
bash-5.2# exit 

When pulling from Docker Hub, you can use this command to pull the image: docker pull amazonlinux:2023.

What Are the Main Differences Compared to Amazon Linux 2?
Amazon Linux 2023 has some differences compared to Amazon Linux 2. The documentation explains these differences in detail. The two differences I would like to focus on are dnf and the package management policies.

AL2023 comes with Fedora’s dnf, the successor to yum. But don’t worry, dnf provides similar commands as yum to search, install, or remove packages. Where you used to run the commands yum list or yum install httpd, you may now run dnf list or dnf install httpd. For convenience, we create a symlink for /usr/bin/yum, so you can run your scripts unmodified.

$ which yum
/usr/bin/yum
$ ls -al /usr/bin/yum
lrwxrwxrwx. 1 root root 5 Jun 19 18:06 /usr/bin/yum -> dnf-3

The biggest difference, in my opinion, is the deterministic updates through versioned repositories. By default, the software repository is locked to the AMI version. This means that a dnf update command will not return any new packages to install. Versioned repositories give you the assurance that all machines started from the same AMI ID are identical. Your infrastructure will not deviate from the baseline.

$ sudo dnf update 
Last metadata expiration check: 0:14:10 ago on Tue Feb 28 14:12:50 2023.
Dependencies resolved.
Nothing to do.
Complete!

Yes, but what if you want to update a machine? You have two options to update an existing machine. The cleanest one for your production environment is to create duplicate infrastructure based on new AMIs. As I mentioned earlier, we publish updates for every security fix and a consolidated update every three months for two years after the initial release. Each update is provided as a set of AMIs and their corresponding software repository.

For smaller infrastructure, such as test or development machines, you might choose to update the operating system or individual packages in place as well. This is a three-step process:

  • first, list the available updated software repositories;
  • second, point dnf to a specific software repository;
  • and third, update your packages.

To show you how it works, I purposely launched an EC2 instance with an “old” version of Amazon Linux 2023 from February 2023. I first run dnf check-release-update to list the available updated software repositories.

$ dnf check-release-update
WARNING:
  A newer release of "Amazon Linux" is available.

  Available Versions:

  Version 2023.0.20230308:
    Run the following command to upgrade to 2023.0.20230308:

      dnf upgrade --releasever=2023.0.20230308

    Release notes:
     https://docs.aws.amazon.com/linux/al2023/release-notes/relnotes.html

Then, I might either update the full distribution using dnf upgrade --releasever=2023.0.20230308 or point dnf to the updated repository to select individual packages.

$ dnf check-update --releasever=2023.0.20230308

Amazon Linux 2023 repository                                                    28 MB/s |  11 MB     00:00
Amazon Linux 2023 Kernel Livepatch repository                                  1.2 kB/s | 243  B     00:00

amazon-linux-repo-s3.noarch                          2023.0.20230308-0.amzn2023                amazonlinux
binutils.aarch64                                     2.39-6.amzn2023.0.5                       amazonlinux
ca-certificates.noarch                               2023.2.60-1.0.amzn2023.0.1                amazonlinux
(redacted for brevity)
util-linux-core.aarch64                              2.37.4-1.amzn2022.0.1                     amazonlinux

Finally, I might run a dnf update <package_name> command to update a specific package.

This might look like overkill for a simple machine, but when managing enterprise infrastructure or large-scale fleets of instances, this facilitates the management of your fleet by ensuring that all instances run the same version of software packages. It also means that the AMI ID is now something that you can fully run through your CI/CD pipelines for deployment and that you have a way to roll AMI versions forward and backward according to your schedule.

Where is Fedora?
When looking for a base to serve as a starting point for Amazon Linux 2023, Fedora was the best choice. We found that Fedora’s core tenets (Freedom, Friends, Features, First) resonate well with our vision for Amazon Linux. However, Amazon Linux focuses on a long-term, stable OS for the cloud, which is a notably different release cycle and lifecycle than Fedora’s. Amazon Linux 2023 provides updated versions of open-source software, a larger variety of packages, and frequent releases.

Amazon Linux 2023 isn’t directly comparable to any specific Fedora release. The Amazon Linux 2023 GA version includes components from Fedora 34, 35, and 36. Some of the components are the same as the components in Fedora, and some are modified. Other components more closely resemble the components in CentOS Stream 9 or were developed independently. The Amazon Linux kernel, for its part, is sourced from the long-term support options on kernel.org, chosen independently from the kernel provided by Fedora.

Like every good citizen in the open-source community, we give back and contribute our changes to upstream distributions and sources for the benefit of the entire community. Amazon Linux 2023 itself is open source. The source code for all RPM packages that are used to build the binaries that we ship is available through the SRPM yum repository (for example: sudo dnf install -y 'dnf-command(download)' && dnf download --source bash).

One More Thing: Amazon EBS Gp3 Volumes
Amazon Linux 2023 AMIs use gp3 volumes by default.

Gp3 is the latest generation general-purpose solid-state drive (SSD) volume for Amazon Elastic Block Store (Amazon EBS). Gp3 provides 20 percent lower storage costs compared to gp2. Gp3 volumes deliver a baseline performance of 3,000 IOPS and 125 MB/s at any volume size. What I particularly like about gp3 volumes is that I can now provision performance independently of capacity. When using gp3 volumes, I can increase IOPS and throughput without incurring charges for extra capacity that I don’t actually need.
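To illustrate provisioning performance independently of capacity, an existing gp3 volume can be tuned in place without changing its size; a minimal sketch with the AWS CLI (the volume ID and values are placeholders):

# Raise IOPS and throughput on a gp3 volume while keeping the same capacity
aws ec2 modify-volume \
    --volume-id vol-0123456789abcdef0 \
    --iops 6000 \
    --throughput 250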

This is the first time a gp3-backed Amazon Linux AMI is available. Gp3-backed AMIs have been a common customer request since gp3 launched in 2020, and they are now the default.

Price and Availability
Amazon Linux 2023 is provided at no additional charge. Standard Amazon EC2 and AWS charges apply for running EC2 instances and other services. This distribution includes full support for five years. When deploying on AWS, our support engineers will provide technical support according to the terms and conditions of your AWS Support plan. AMIs are available in all AWS Regions.

Amazon Linux is the most used Linux distribution on AWS, with hundreds of thousands of customers using Amazon Linux 2. Dozens of Independent Software Vendors (ISVs) and hardware partners are supporting Amazon Linux 2023 today. You can adopt this new version with the confidence that the partner tools you rely on are likely to be supported. We are excited about this release, which brings you an even higher level of security, a predictable release lifecycle, and a consistent update experience.

Now go build and deploy your workload on Amazon Linux 2023 today.

— seb

New – Use Amazon S3 Object Lambda with Amazon CloudFront to Tailor Content for End Users

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-use-amazon-s3-object-lambda-with-amazon-cloudfront-to-tailor-content-for-end-users/

With S3 Object Lambda, you can use your own code to process data retrieved from Amazon S3 as it is returned to an application. Over time, we added new capabilities to S3 Object Lambda, like the ability to add your own code to S3 HEAD and LIST API requests, in addition to the support for S3 GET requests that was available at launch.

Today, we are launching aliases for S3 Object Lambda Access Points. Aliases are now automatically generated when S3 Object Lambda Access Points are created and are interchangeable with bucket names anywhere you use a bucket name to access data stored in Amazon S3. Therefore, your applications don’t need to know about S3 Object Lambda and can consider the alias to be a bucket name.

Architecture diagram.
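For example, because the alias behaves like a bucket name, an existing application or a quick CLI test can read a transformed object through it; a minimal sketch (ALIAS and the object key are placeholders):

# The response body is the output of the Lambda function, not the raw object
aws s3api get-object --bucket ALIAS --key s3.txt transformed-s3.txt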

You can now use an S3 Object Lambda Access Point alias as an origin for your Amazon CloudFront distribution to tailor or customize data for end users. You can use this to implement automatic image resizing or to tag or annotate content as it is downloaded. Many images still use older formats like JPEG or PNG, and you can use a transcoding function to deliver images in more efficient formats like WebP, BPG, or HEIC. Digital images contain metadata, and you can implement a function that strips metadata to help satisfy data privacy requirements.

Architecture diagram.

Let’s see how this works in practice. First, I’ll show a simple example using text that you can follow along with using just the AWS Management Console. After that, I’ll implement a more advanced use case processing images.

Using an S3 Object Lambda Access Point as the Origin of a CloudFront Distribution
For simplicity, I am using the same application as in the launch post that changes all text in the original file to uppercase. This time, I use the S3 Object Lambda Access Point alias to set up a public distribution with CloudFront.

I follow the same steps as in the launch post to create the S3 Object Lambda Access Point and the Lambda function. Because the Lambda runtimes for Python 3.8 and later do not include the requests module, I update the function code to use urlopen from the Python Standard Library:

import boto3
from urllib.request import urlopen

s3 = boto3.client('s3')

def lambda_handler(event, context):
  print(event)

  object_get_context = event['getObjectContext']
  request_route = object_get_context['outputRoute']
  request_token = object_get_context['outputToken']
  s3_url = object_get_context['inputS3Url']

  # Get object from S3
  response = urlopen(s3_url)
  original_object = response.read().decode('utf-8')

  # Transform object
  transformed_object = original_object.upper()

  # Write object back to S3 Object Lambda
  s3.write_get_object_response(
    Body=transformed_object,
    RequestRoute=request_route,
    RequestToken=request_token)

  return

To test that this is working, I open the same file from the bucket and through the S3 Object Lambda Access Point. In the S3 console, I select the bucket and a sample file (called s3.txt) that I uploaded earlier and choose Open.

Console screenshot.

A new browser tab is opened (you might need to disable the pop-up blocker in your browser), and its content is the original file with mixed-case text:

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers...

I choose Object Lambda Access Points from the navigation pane and select the AWS Region I used before from the dropdown. Then, I search for the S3 Object Lambda Access Point that I just created. I select the same file as before and choose Open.

Console screenshot.

In the new tab, the text has been processed by the Lambda function and is now all in uppercase:

AMAZON SIMPLE STORAGE SERVICE (AMAZON S3) IS AN OBJECT STORAGE SERVICE THAT OFFERS...

Now that the S3 Object Lambda Access Point is correctly configured, I can create the CloudFront distribution. Before I do that, in the list of S3 Object Lambda Access Points in the S3 console, I copy the Object Lambda Access Point alias that has been automatically created:

Console screenshot.

In the CloudFront console, I choose Distributions in the navigation pane and then Create distribution. In the Origin domain, I use the S3 Object Lambda Access Point alias and the Region. The full syntax of the domain is:

ALIAS.s3.REGION.amazonaws.com

Console screenshot.

S3 Object Lambda Access Points cannot be public, and I use CloudFront origin access control (OAC) to authenticate requests to the origin. For Origin access, I select Origin access control settings and choose Create control setting. I write a name for the control setting and select Sign requests and S3 in the Origin type dropdown.

Console screenshot.

Now, my Origin access control settings use the configuration I just created.

Console screenshot.

To reduce the number of requests going through S3 Object Lambda, I enable Origin Shield and choose the closest Origin Shield Region to the Region I am using. Then, I select the CachingOptimized cache policy and create the distribution. As the distribution is being deployed, I update permissions for the resources used by the distribution.

Setting Up Permissions to Use an S3 Object Lambda Access Point as the Origin of a CloudFront Distribution
First, the S3 Object Lambda Access Point needs to give access to the CloudFront distribution. In the S3 console, I select the S3 Object Lambda Access Point and, in the Permissions tab, I update the policy with the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3-object-lambda:Get*",
            "Resource": "arn:aws:s3-object-lambda:REGION:ACCOUNT:accesspoint/NAME",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": "arn:aws:cloudfront::ACCOUNT:distribution/DISTRIBUTION-ID"
                }
            }
        }
    ]
}

The supporting access point also needs to allow access to CloudFront when called via S3 Object Lambda. I select the access point and update the policy in the Permissions tab:

{
    "Version": "2012-10-17",
    "Id": "default",
    "Statement": [
        {
            "Sid": "s3objlambda",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:REGION:ACCOUNT:accesspoint/NAME",
                "arn:aws:s3:REGION:ACCOUNT:accesspoint/NAME/object/*"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "s3-object-lambda.amazonaws.com"
                }
            }
        }
    ]
}

The S3 bucket needs to allow access to the supporting access point. I select the bucket and update the policy in the Permissions tab:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::BUCKET",
                "arn:aws:s3:::BUCKET/*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:DataAccessPointAccount": "ACCOUNT"
                }
            }
        }
    ]
}

Finally, CloudFront needs to be able to invoke the Lambda function. In the Lambda console, I choose the Lambda function used by S3 Object Lambda, and then, in the Configuration tab, I choose Permissions. In the Resource-based policy statements section, I choose Add permissions and select AWS Account. I enter a unique Statement ID. Then, I enter cloudfront.amazonaws.com as Principal and select lambda:InvokeFunction from the Action dropdown and Save. We are working to simplify this step in the future. I’ll update this post when that’s available.
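The same permission can also be granted from the command line; a rough equivalent of these console steps (the function name and statement ID are placeholders):

# Allow the CloudFront service principal to invoke the S3 Object Lambda function
aws lambda add-permission \
    --function-name my-object-lambda-function \
    --statement-id AllowCloudFrontInvoke \
    --action lambda:InvokeFunction \
    --principal cloudfront.amazonaws.com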

Testing the CloudFront Distribution
When the distribution has been deployed, I test that the setup is working with the same sample file I used before. In the CloudFront console, I select the distribution and copy the Distribution domain name. I can use the browser and enter https://DISTRIBUTION_DOMAIN_NAME/s3.txt in the navigation bar to send a request to CloudFront and get the file processed by S3 Object Lambda. To quickly get all the info, I use curl with the -i option to see the HTTP status and the headers in the response:

curl -i https://DISTRIBUTION_DOMAIN_NAME/s3.txt

HTTP/2 200 
content-type: text/plain
content-length: 427
x-amzn-requestid: a85fe537-3502-4592-b2a9-a09261c8c00c
date: Mon, 06 Mar 2023 10:23:02 GMT
x-cache: Miss from cloudfront
via: 1.1 a2df4ad642d78d6dac65038e06ad10d2.cloudfront.net (CloudFront)
x-amz-cf-pop: DUB56-P1
x-amz-cf-id: KIiljCzYJBUVVxmNkl3EP2PMh96OBVoTyFSMYDupMd4muLGNm2AmgA==

AMAZON SIMPLE STORAGE SERVICE (AMAZON S3) IS AN OBJECT STORAGE SERVICE THAT OFFERS...

It works! As expected, the content processed by the Lambda function is all uppercase. Because this is the first invocation for the distribution, it has not been returned from the cache (x-cache: Miss from cloudfront). The request went through S3 Object Lambda to process the file using the Lambda function I provided.

Let’s try the same request again:

curl -i https://DISTRIBUTION_DOMAIN_NAME/s3.txt

HTTP/2 200 
content-type: text/plain
content-length: 427
x-amzn-requestid: a85fe537-3502-4592-b2a9-a09261c8c00c
date: Mon, 06 Mar 2023 10:23:02 GMT
x-cache: Hit from cloudfront
via: 1.1 145b7e87a6273078e52d178985ceaa5e.cloudfront.net (CloudFront)
x-amz-cf-pop: DUB56-P1
x-amz-cf-id: HEx9Fodp184mnxLQZuW62U11Fr1bA-W1aIkWjeqpC9yHbd0Rg4eM3A==
age: 3

AMAZON SIMPLE STORAGE SERVICE (AMAZON S3) IS AN OBJECT STORAGE SERVICE THAT OFFERS...

This time the content is returned from the CloudFront cache (x-cache: Hit from cloudfront), and there was no further processing by S3 Object Lambda. By using S3 Object Lambda as the origin, the CloudFront distribution serves content that has been processed by a Lambda function and can be cached to reduce latency and optimize costs.
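Keep in mind that cached responses are served until they expire. If I update the object or the Lambda function and want the change visible right away, I can invalidate the cached path; for example (the distribution ID is a placeholder):

# Remove the cached copy of /s3.txt so the next request goes back through S3 Object Lambda
aws cloudfront create-invalidation \
    --distribution-id E1234567890ABC \
    --paths "/s3.txt"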

Resizing Images Using S3 Object Lambda and CloudFront
As I mentioned at the beginning of this post, one of the use cases that can be implemented using S3 Object Lambda and CloudFront is image transformation. Let’s create a CloudFront distribution that can dynamically resize an image by passing the desired width and height as query parameters (w and h respectively). For example:

https://DISTRIBUTION_DOMAIN_NAME/image.jpg?w=200&h=150

For this setup to work, I need to make two changes to the CloudFront distribution. First, I create a new cache policy to include query parameters in the cache key. In the CloudFront console, I choose Policies in the navigation pane. In the Cache tab, I choose Create cache policy. Then, I enter a name for the cache policy.

Console screenshot.

In the Query settings of the Cache key settings, I select the option to Include the following query parameters and add w (for the width) and h (for the height).

Console screenshot.

Then, in the Behaviors tab of the distribution, I select the default behavior and choose Edit.

There, I update the Cache key and origin requests section:

  • In the Cache policy, I use the new cache policy to include the w and h query parameters in the cache key.
  • In the Origin request policy, I use the AllViewerExceptHostHeader managed policy to forward query parameters to the origin.

Console screenshot.

Now I can update the Lambda function code. To resize images, this function uses the Pillow module that needs to be packaged with the function when it is uploaded to Lambda. You can deploy the function using a tool like the AWS SAM CLI or the AWS CDK. Compared to the previous example, this function also handles and returns HTTP errors, such as when content is not found in the bucket.

import io
import boto3
from urllib.request import urlopen, HTTPError
from PIL import Image

from urllib.parse import urlparse, parse_qs

s3 = boto3.client('s3')

def lambda_handler(event, context):
    print(event)

    object_get_context = event['getObjectContext']
    request_route = object_get_context['outputRoute']
    request_token = object_get_context['outputToken']
    s3_url = object_get_context['inputS3Url']

    # Get object from S3
    try:
        original_image = Image.open(urlopen(s3_url))
    except HTTPError as err:
        s3.write_get_object_response(
            StatusCode=err.code,
            ErrorCode='HTTPError',
            ErrorMessage=err.reason,
            RequestRoute=request_route,
            RequestToken=request_token)
        return

    # Get width and height from query parameters
    user_request = event['userRequest']
    url = user_request['url']
    parsed_url = urlparse(url)
    query_parameters = parse_qs(parsed_url.query)

    try:
        width, height = int(query_parameters['w'][0]), int(query_parameters['h'][0])
    except (KeyError, ValueError):
        width, height = 0, 0

    # Transform object
    if width > 0 and height > 0:
        transformed_image = original_image.resize((width, height), Image.LANCZOS)  # LANCZOS replaces ANTIALIAS, which was removed in Pillow 10
    else:
        transformed_image = original_image

    transformed_bytes = io.BytesIO()
    transformed_image.save(transformed_bytes, format='JPEG')

    # Write object back to S3 Object Lambda
    s3.write_get_object_response(
        Body=transformed_bytes.getvalue(),
        RequestRoute=request_route,
        RequestToken=request_token)

    return
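One way to package the Pillow dependency, assuming an AWS SAM project where the function code sits next to a requirements.txt that lists Pillow, is to build inside a Lambda-like container so that Pillow’s native code is compiled for the Lambda environment:

# requirements.txt next to the function code contains a single line: Pillow
sam build --use-container
sam deploy --guided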

I upload a picture I took of the Trevi Fountain in the source bucket. To start, I generate a small thumbnail (200 by 150 pixels).

https://DISTRIBUTION_DOMAIN_NAME/trevi-fountain.jpeg?w=200&h=150

Picture of the Trevi Fountain with size 200x150 pixels.

Now, I ask for a slightly larger version (400 by 300 pixels):

https://DISTRIBUTION_DOMAIN_NAME/trevi-fountain.jpeg?w=400&h=300

Picture of the Trevi Fountain with size 400x300 pixels.

It works as expected. The first invocation with a specific size is processed by the Lambda function. Further requests with the same width and height are served from the CloudFront cache.

Availability and Pricing
Aliases for S3 Object Lambda Access Points are available today in all commercial AWS Regions. There is no additional cost for aliases. With S3 Object Lambda, you pay for the Lambda compute and request charges required to process the data, and for the data S3 Object Lambda returns to your application. You also pay for the S3 requests that are invoked by your Lambda function. For more information, see Amazon S3 Pricing.

Aliases are now automatically generated when an S3 Object Lambda Access Point is created. For existing S3 Object Lambda Access Points, aliases are automatically assigned and ready for use.

It’s now easier to use S3 Object Lambda with existing applications, and aliases open many new possibilities. For example, you can use aliases with CloudFront to create a website that converts content in Markdown to HTML, resizes and watermarks images, or masks personally identifiable information (PII) from text, images, and documents.

Customize content for your end users using S3 Object Lambda with CloudFront.

Danilo

Meet the Newest AWS Heroes – March 2023

Post Syndicated from Taylor Jacobsen original https://aws.amazon.com/blogs/aws/meet-the-newest-aws-heroes-march-2023/

The AWS Heroes are passionate AWS experts who are dedicated to sharing their in-depth knowledge within the community. They inspire, uplift, and motivate the global AWS community, and today, we’re excited to announce and recognize the newest Heroes in 2023!

Aidan Steele – Melbourne, Australia

Serverless Hero Aidan Steele is a Senior Engineer at Nightvision. He is an avid AWS user and has been using the platform, starting with EC2, since 2008. Fifteen years later, EC2 still has a special place in his heart, but his interests are in containers and serverless functions, and blurring the distinction between them wherever possible. He enjoys finding novel uses for AWS services, especially when they have a security or network focus. This is best demonstrated through his open source contributions on GitHub, where he shares interesting use cases via hands-on projects.

Ananda Dwi Rahmawati – Yogyakarta, Indonesia

Container Hero Ananda Dwi Rahmawati is a Sr. Cloud Infrastructure Engineer, specializing in system integration between cloud infrastructure, CI/CD workflows, and application modernization. She implements solutions using powerful services provided by AWS, such as Amazon Elastic Kubernetes Service (EKS), combined with open source tools to achieve the goal of creating reliable, highly available, and scalable systems. She is a regular technical speaker who delivers presentations using real-world case studies at several local community meetups and conferences, such as Kubernetes and OpenInfra Days Indonesia, AWS Community Day Indonesia, AWS Summit ASEAN, and many more.

Wendy Wong – Sydney, Australia

Data Hero Wendy Wong is a Business Performance Analyst at Service NSW, building data pipelines with AWS Analytics and agile projects in AI. As a teacher at heart, she enjoys sharing her passion as a Data Analytics Lead Instructor for General Assembly Sydney, writing technical blogs on dev.to. She is both an active speaker for AWS analytics and an advocate of diversity and inclusion, presenting at a number of events: AWS User Group Malaysia, Women Who Code, AWS Summit Australia 2022, AWS BuildHers, AWS Innovate Modern Applications, and many more.

Learn More

If you’d like to learn more about the new Heroes or connect with a Hero near you, please visit the AWS Heroes website or browse the AWS Heroes Content Library.

Taylor

AWS Application Composer Now Generally Available – Visually Build Serverless Applications Quickly

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-application-composer-now-generally-available-visually-build-serverless-applications-quickly/

At AWS re:Invent 2022, we previewed AWS Application Composer, a visual builder for you to compose and configure serverless applications from AWS services backed by deployment-ready infrastructure as code (IaC).

In the keynote, Dr. Werner Vogels, CTO of Amazon.com said:

Developers that never used serverless before. How do they know where to start? Which services do they need? How do they work together? We really wanted to make this easier. AWS Application Composer simplifies and accelerates the architecting, configuring, and building of serverless applications.

During the preview, we had lots of interest and great feedback from customers. Today, I am happy to announce the general availability of AWS Application Composer with new improvements based on customer feedback. I want to quickly review its features and introduce some improvements.

Introduction to AWS Application Composer
To get started with AWS Application Composer, choose Open demo in the AWS Management Console. This demo shows a simple cart application with Amazon API Gateway, AWS Lambda, and Amazon DynamoDB resources.

You can easily browse and search for AWS services in the left Resources panel and drag and drop them onto the canvas to expand your architecture.

In the middle Canvas panel, you can connect resources together by clicking and dragging from one resource port to another. Permissions are automatically composed for these resources to interact with each other using policy templates, environment variables, and event subscriptions. Grouping resources is very useful for organizing them visually. In the example above, the API Compute group is composed of Lambda functions. When you double-click on a specific resource, you can name it and configure its properties in the right Resource properties panel.

In addition to the featured resources available in the visual resource palette, hidden and read-only resources will populate on the canvas when you load an existing template that includes them.

In this example, the MyHttpApi resource is a hidden resource. It is not available from the resource palette but does appear on the canvas in color. The resource named MyHttpApiRole (in this case, an AWS::IAM::Role resource) is read-only. It appears grayed out on the canvas. To learn more about all supported resources, see AWS Application Composer featured resources in the AWS documentation.

When you select the Template menu, you can view, edit or manually download your IaC, such as AWS Serverless Application Model (AWS SAM). Your changes are automatically synced with your canvas.

When you start Connected mode, you can use Application Composer with local tools such as an integrated development environment (IDE). Any changes activate the automatic synchronization of your project template and files between Application Composer and your local project directory.

This is useful for incorporating Application Composer into your existing team processes, such as local testing with the AWS SAM Command Line Interface (CLI), peer review through version control, or deployment through AWS CloudFormation and continuous integration and delivery (CI/CD) pipelines.
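For example, after syncing a project locally in Connected mode, a typical iteration with the AWS SAM CLI might look like this (the function logical ID is a placeholder):

# Validate and build the template that Application Composer keeps in sync
sam validate
sam build

# Invoke a single function locally, or emulate the API in front of it
sam local invoke MyFunction
sam local start-api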

This mode is supported on Chrome and Edge browsers and requires you to grant temporary local file system access to your browser.

AWS Application Composer can be used in real-world scenarios such as:

  • Building a prototype of serverless applications
  • Reviewing and collaboratively evolving existing serverless projects
  • Generating diagrams for documentation or Wikis
  • Onboarding new team members to a project
  • Reducing the first steps to deploy something in an AWS account

To learn more real-world examples, see Visualize and create your serverless workloads with AWS Application Composer in the AWS Compute Blog, How I Used AWS Application Composer to Make Analyzing My Meetup Data Easy in BuildOn.AWS, or watch a breakout session video (SVS211) from AWS re:Invent 2022.

Improvements Since Preview Launch
Here is a new feature to improve how you work with Amazon Simple Queue Service (Amazon SQS) queues.

You can now directly connect Amazon API Gateway resources to Amazon SQS without routing requests through an AWS Lambda function. This removes the complexity of running a Lambda function in between and increases reliability while reducing lines of code.

For example, you can drag API Gateway and Amazon SQS onto the canvas and connect the two resources. When you drag the connector from the API route to SQS, Send message appears as the integration target, and you can use it to connect the API route to the SQS queue.

The new Change Inspector provides a visual diff of template changes made when you connect two resources on the canvas. This information is available as a notification when you make the connection, which helps you understand how Composer manages integration configuration in your IaC template as you build.

Here are some more improvements to your experience in the user interface!

First, we reduced the size of resource cards. The larger cards made it difficult to read and view a template on the canvas. Now, you can arrange more resource cards easily and save space on the canvas.

Also, we added zoom in, zoom out, and zoom to fit buttons so that you can quickly view the entire canvas or zoom to the desired level. When you load a large template onto the canvas, you can easily see all the resource cards at any size.

Now Available
AWS Application Composer is now generally available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions, adding three more Regions to the six Regions available during preview. There is no additional cost, and you can start using it today.

To learn more, see the AWS Application Composer Developer Guide and send feedback to AWS re:Post for AWS Application Composer or through your usual AWS support contacts.

Channy

Subscribe to AWS Daily Feature Updates via Amazon SNS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/subscribe-to-aws-daily-feature-updates-via-amazon-sns/

Way back in 2015 I showed you how to Subscribe to AWS Public IP Address Changes via Amazon SNS. Today I am happy to tell you that you can now receive timely, detailed information about releases and updates to AWS via the same, simple mechanism.

Daily Feature Updates
Simply subscribe to topic arn:aws:sns:us-east-1:692768080016:aws-new-feature-updates using the email protocol and confirm the subscription in the usual way:
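If you prefer the AWS CLI, an equivalent subscription can be created like this (replace the email address with your own, then confirm the link in the email you receive):

aws sns subscribe \
    --region us-east-1 \
    --topic-arn arn:aws:sns:us-east-1:692768080016:aws-new-feature-updates \
    --protocol email \
    --notification-endpoint you@example.com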

You will receive daily emails that start off like this, with an introduction and a summary of the update:

After the introduction, the email contains a JSON representation of the daily feature updates:

As noted in the message, the JSON content is also available online at URLs that look like https://aws-new-features.s3.us-east-1.amazonaws.com/update/2023-02-27.json. You can also edit the date in the URL to access historical data going back up to six months.
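Because the files follow a predictable date-based naming scheme, they are also easy to fetch from a script; for example:

# Download a given day's update file (edit the date as needed)
curl -s https://aws-new-features.s3.us-east-1.amazonaws.com/update/2023-02-27.json -o 2023-02-27.json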

The email message also includes detailed information about changes and additions to managed policies that will be of particular interest to AWS customers who currently manually track and then verify the impact that these changes may have on their security profile. Here’s a sample list of changes (additional permissions) to existing managed policies:

And here’s a new managed policy:

Even More Information
The header of the email contains a link to a treasure trove of additional information. Here are some examples:

AWS Regions and AWS Services – A pair of tables. The first one includes a row for each AWS Region and a column for each service, and the second one contains the transposed version:

AWS Regions and EC2 Instance Types – Again, a pair of tables. The first one includes a row for each AWS Region and a column for each EC2 instance type, and the second one contains the transposed version:

The EC2 Instance Types Configuration link leads to detailed information about each instance type:

Each page also includes a link to the same information in JSON form. For example (EC2 Instance Types Configuration), starts like this:

{
    "a1.2xlarge": {
        "af-south-1": "-",
        "ap-east-1": "-",
        "ap-northeast-1": "a1.2xlarge",
        "ap-northeast-2": "-",
        "ap-northeast-3": "-",
        "ap-south-1": "a1.2xlarge",
        "ap-south-2": "-",
        "ap-southeast-1": "a1.2xlarge",
        "ap-southeast-2": "a1.2xlarge",
        "ap-southeast-3": "-",
        "ap-southeast-4": "-",
        "ca-central-1": "-",
        "eu-central-1": "a1.2xlarge",
        "eu-central-2": "-",
        "eu-north-1": "-",
        "eu-south-1": "-",
        "eu-south-2": "-",
        "eu-west-1": "a1.2xlarge",
        "eu-west-2": "-",
        "eu-west-3": "-",
        "me-central-1": "-",
        "me-south-1": "-",
        "sa-east-1": "-",
        "us-east-1": "a1.2xlarge",
        "us-east-2": "a1.2xlarge",
        "us-gov-east-1": "-",
        "us-gov-west-1": "-",
        "us-west-1": "-",
        "us-west-2": "a1.2xlarge"
    },
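
If you download the full JSON file (the file name below is a placeholder), a short jq filter lists the Regions where a given instance type is offered:

# List the Regions where a1.2xlarge is available, based on the downloaded JSON.
$ jq -r '."a1.2xlarge" | to_entries[] | select(.value != "-") | .key' ec2-instance-types.json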

Other information includes:

  • VPC Endpoints
  • AWS Services Integrated with Service Quotas
  • Amazon SageMaker Instance Types
  • RDS DB Engine Versions
  • Amazon Nimble Instance Types
  • Amazon MSK Apache Kafka Versions

Information Sources
The information is pulled from multiple public sources, cross-checked, and then issued. Here are some of the things that we look for:

Things to Know
Here are a couple of things that you should keep in mind about the AWS Daily Feature Updates:

Content – The content provided in the Daily Feature Updates and in the treasure trove of additional information will continue to grow as new features are added to AWS.

Region Coverage – The Daily Feature Updates cover all AWS Regions in the public partition. Where possible, it also provides information about GovCloud regions; this currently includes EC2 Instance Types, SageMaker Instance Types, and Amazon Nimble Instance Types.

Region Mappings – The internal data that drives all of the information related to AWS Regions is updated once a day if there are applicable new features, and also when new AWS Regions are enabled.

Updates – On days when there are no updates, there will not be an email notification.

Usage – Similar to the updates on the What’s New page and the associated RSS feed, the updates are provided for informational purposes, and you still need to do your own evaluation and testing before deploying to production.

Jeff;

In the Works – AWS Region in Malaysia

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-malaysia/

We launched an AWS Region in Australia earlier this year, four more (Switzerland, Spain, the United Arab Emirates, and India) in 2022, and are working on regions in Canada, Israel, New Zealand, and Thailand. All told, we now have 99 Availability Zones spread across 31 geographic regions.

Malaysia in the Works
Today I am happy to announce that we are working on an AWS region in Malaysia. This region will give AWS customers the ability to run workloads and store data that must remain in-country.

The region will include three Availability Zones (AZs), each one physically independent of the others in the region yet far enough apart to minimize the risk that an AZ-level event will have on business continuity. The AZs will be connected to each other by high-bandwidth, low-latency network connections over dedicated, fully-redundant fiber.

AWS in Malaysia
We are planning to invest at least $6 billion (25.5 billion Malaysian ringgit) in Malaysia by 2037.

Many organizations in Malaysia are already making use of the existing AWS Regions. This includes enterprise and public sector organizations such as Axiata Group, Baba Products, Bank Islam Malaysia, Celcom Digi, PayNet, PETRONAS, Tenaga Nasional Berhad (TNB), Asia Pacific University of Technology & Innovation, Cybersecurity Malaysia, Department of Statistics Malaysia, Ministry of Higher Education Malaysia, and Pos Malaysia, and startups like Baba’s, BeEDucation Adventures, CARSOME, and StoreHub.

Here’s a small sample of some of the exciting and innovative work that our customers are doing in Malaysia:

Johor Corporation (JCorp) is the principal development institution that drives the growth of the state of Johor’s economy through its operations in the agribusiness, wellness, food and restaurants, and real estate and infrastructure sectors. To power JCorp’s digital transformation and achieve the JCorp 3.0 reinvention plan goals, the company is leveraging the AWS cloud to manage its data and applications, serving as a single source of truth for its business and operational knowledge, and paving the way for the company to tap on artificial intelligence, machine learning and blockchain technologies in the future.

Radio Televisyen Malaysia (RTM), established in 1946, is the national public broadcaster of Malaysia, bringing news, information, and entertainment programs through its six free-to-air channels and 34 radio stations to millions of Malaysians daily. Bringing cutting-edge AWS technologies closer to RTM in Malaysia will accelerate the time it takes to develop new media services, while delivering a better viewer experience with lower latency.

Bank Islam, Malaysia’s first listed Islamic banking institution, provides end-to-end financial solutions that meet the diverse needs of their customers. The bank taps AWS’ expertise to power its digital transformation and the development of Be U digital bank through its Centre of Digital Experience, a stand-alone division that creates cutting-edge financial services on AWS to enhance customer experiences.

Malaysian Administrative Modernization Management Planning Unit (MAMPU) encourages public sector agencies to adopt cloud in all ICT projects in order to accelerate emerging technologies application and increase the efficiency of public service. MAMPU believes the establishment of the AWS Region in Malaysia will further accelerate digitalization of the public sector, and bolster efforts for public sector agencies to deliver advanced citizen services seamlessly.

Malaysia is also home to both Independent Software Vendors (ISVs) and Systems Integrators that are members of the AWS Partner Network (APN). The ISV partners build innovative solutions on AWS and the SIs provide business, technical, marketing, and go-to-market support to customers. AWS Partners based in Malaysia include Axrail, eCloudvalley, Exabytes, G-AsiaPacific, GHL, Maxis, Radmik Solutions Sdn Bhd, Silverlake, Tapway, Fourtitude, and Wavelet.

New Explainer Video
To learn more about our global infrastructure, be sure to watch our new AWS Global Infrastructure Explainer video:

Stay Tuned
As usual, subscribe to this blog so that you will be among the first to know when the new region is open!

Jeff;

New – Amazon Lightsail for Research with All-in-One Research Environments

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-lightsail-for-research-with-all-in-one-research-environments/

Today we are announcing the general availability of Amazon Lightsail for Research, a new offering that makes it easy for researchers and students to create and manage a high-performance CPU or GPU research computer in the cloud in just a few clicks. You can use your preferred integrated development environment (IDE), such as preinstalled Jupyter, RStudio, Scilab, or VSCodium, or work directly with the native Ubuntu operating system on your research computer.

You no longer need to use your own research laptop or shared school computers for analyzing larger datasets or running complex simulations. You can create your own research environments and access the applications running on the research computer remotely via a web browser. Also, you can easily upload data to and download data from your research computer via a simple web interface.

You pay only for the duration the computers are in use and can delete them at any time. You can also use budgeting controls that can automatically stop your computer when it’s not in use. Lightsail for Research also offers all-inclusive pricing that covers compute, storage, and data transfer, so you know exactly how much you will pay for the time you use the research computer.

Get Started with Amazon Lightsail for Research
To get started, navigate to the Lightsail for Research console and choose Virtual computers in the left menu. You can see my research computers named “channy-jupyter” and “channy-rstudio” that I already created.

Choose Create virtual computer to create a new research computer, and select which software you’d like preinstalled on your computer and what type of research computer you’d like to create.

In the first step, choose the application you want installed on your computer and the AWS Region where it will be located. We support Jupyter, RStudio, Scilab, and VSCodium. You can install additional packages and extensions through the interface of these IDE applications.

Next, choose the desired virtual hardware type, including a fixed amount of compute (vCPUs or GPUs), memory (RAM), SSD-based storage volume (disk) space, and a monthly data transfer allowance. Bundles are charged on an hourly and on-demand basis.

Standard types are compute-optimized and ideal for compute-bound applications that benefit from high-performance processors.

Name vCPUs Memory Storage Monthly data transfer allowance*
Standard XL 4 8 GB 50 GB 0.5 TB
Standard 2XL 8 16 GB 50 GB 0.5 TB
Standard 4XL 16 32 GB 50 GB 0.5 TB

GPU types provide a high-performance platform for general-purpose GPU computing. You can use these bundles to accelerate scientific, engineering, and rendering applications and workloads.

Name GPU vCPUs Memory Storage Monthly data transfer allowance*
GPU XL 1 4 16 GB 50 GB 1 TB
GPU 2XL 1 8 32 GB 50 GB 1 TB
GPU 4XL 1 16 64 GB 50 GB 1 TB

* AWS created the Global Data Egress Waiver (GDEW) program to help eligible researchers and academic institutions use AWS services by waiving data egress fees. To learn more, see the blog post.

After making your selections, name your computer and choose Create virtual computer to create your research computer. Once your computer is created and running, choose the Launch application button to open a new window that will display the preinstalled application you selected.

Lightsail for Research Features
As with existing Lightsail instances, you can create additional block-level storage volumes (disks) that you can attach to a running Lightsail for Research virtual computer. You can use a disk as a primary storage device for data that requires frequent and granular updates. To create your own storage, choose Storage and Create disk.

You can also create Snapshots, point-in-time copies of your data. You can create a snapshot of your Lightsail for Research virtual computers and use it as a baseline to create new computers or for data backup. A snapshot contains all of the data that is needed to restore your computer from the moment when the snapshot was taken.

When you create a computer from a snapshot, you can easily restore a previous state, create a new computer, or upgrade your computer to a larger size. Create snapshots frequently to protect your data from corrupt applications or user errors.
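
Since Lightsail for Research is part of Amazon Lightsail, you should also be able to script snapshots. Here’s a minimal AWS CLI sketch, assuming research computers can be addressed with the standard Lightsail snapshot commands; the computer name comes from the walkthrough above and the snapshot name is a placeholder:

# Create a point-in-time snapshot of a research computer.
$ aws lightsail create-instance-snapshot \
    --instance-name channy-jupyter \
    --instance-snapshot-name channy-jupyter-backup-2023-03-01

# List existing snapshots to confirm it was created.
$ aws lightsail get-instance-snapshots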

You can define Cost control rules to help manage the usage and cost of your Lightsail for Research virtual computers. These rules stop running computers when average CPU utilization over a selected time period falls below a prescribed level.

For example, you can configure a rule that automatically stops a specific computer when its CPU utilization is equal to or less than 1 percent for a 30-minute period. Lightsail for Research will then automatically stop the computer so that you don’t incur charges for running computers.

In the Usage menu, you can view the cost estimate and usage hours for your resources during a specified time period.

Now Available
Amazon Lightsail for Research is now available in the US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm) Regions.

Now you can start using it today. To learn more, see the Amazon Lightsail for Research User Guide, and please send feedback to AWS re:Post for Amazon Lightsail or through your usual AWS support contacts.

Channy

New: AWS Telco Network Builder – Deploy and Manage Telco Networks

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-telco-network-builder-deploy-and-manage-telco-networks/

Over the course of more than one hundred years, the telecom industry has become standardized and regulated, and has developed methods, technologies, and an entire vocabulary (chock full of interesting acronyms) along the way. As an industry, they need to honor this tremendous legacy while also taking advantage of new technology, all in the name of delivering the best possible voice and data services to their customers.

Today I would like to tell you about AWS Telco Network Builder (TNB). This new service is designed to help Communications Service Providers (CSPs) deploy and manage public and private telco networks on AWS. It uses existing standards, practices, and data formats, and makes it easier for CSPs to take advantage of the power, scale, and flexibility of AWS.

Today, CSPs often deploy their code to virtual machines. However, as they look to the future, they want additional flexibility and are increasingly making use of containers. AWS TNB is intended to be a part of this transition, and makes use of Kubernetes and Amazon Elastic Kubernetes Service (EKS) for packaging and deployment.

Concepts and Vocabulary
Before we dive into the service, let’s take a look at some concepts and vocabulary that are unique to this industry and relevant to AWS TNB:

European Telecommunications Standards Institute (ETSI) – A European organization that defines specifications suitable for global use. AWS TNB supports multiple ETSI specifications including ETSI SOL001 through ETSI SOL005, and ETSI SOL007.

Communications Service Provider (CSP) – An organization that offers telecommunications services.

Topology and Orchestration Specification for Cloud Applications (TOSCA) – A standardized grammar that is used to describe service templates for telecommunications applications.

Network Function (NF) – A software component that performs a specific core or value-added function within a telco network.

Virtual Network Function Descriptor (VNFD) – A specification of the metadata needed to onboard and manage a Network Function.

Cloud Service Archive (CSAR) – A ZIP file that contains a VNFD, references to container images that hold Network Functions, and any additional files needed to support and manage the Network Function.

Network Service Descriptor (NSD) – A specification of the compute, storage, networking, and location requirements for a set of Network Functions along with the information needed to assemble them to form a telco network.

Network Core – The heart of a network. It uses control plane and data plane operations to manage authentication, authorization, data, and policies.

Service Orchestrator (SO) – An external, high-level network management tool.

Radio Access Network (RAN) – The components (base stations, antennas, and so forth) that provide wireless coverage over a specific geographic area.

Using AWS Telco Network Builder (TNB)
I don’t happen to be a CSP, but I will do my best to walk you through the getting-started experience anyway! The primary steps are:

  1. Creating a function package for each Network Function by uploading a CSAR.
  2. Creating a network package for the network by uploading a Network Service Descriptor (NSD).
  3. Creating a network by selecting and instantiating an NSD.

To begin, I open the AWS TNB Console and click Get started:

Initially, I have no networks, no function packages, and no network packages:

My colleagues supplied me with sample CSARs and an NSD for use in this blog post (the network functions are from Free 5G Core):

Each CSAR is a fairly simple ZIP file with a VNFD and other items inside. For example, the VNFD for the Free 5G Core Session Management Function (smf) looks like this:

tosca_definitions_version: tnb_simple_yaml_1_0

topology_template:

  node_templates:

    Free5gcSMF:
      type: tosca.nodes.AWS.VNF
      properties:
        descriptor_id: "4b2abab6-c82a-479d-ab87-4ccd516bf141"
        descriptor_version: "1.0.0"
        descriptor_name: "Free5gc SMF 1.0.0"
        provider: "Free5gc"
      requirements:
        helm: HelmImage

    HelmImage:
      type: tosca.nodes.AWS.Artifacts.Helm
      properties:
        implementation: "./free5gc-smf"

The final section (HelmImage) of the VNFD points to the Kubernetes Helm Chart that defines the implementation.

I click Function packages in the console, then click Create function package. Then I upload the first CSAR and click Next:

I review the details and click Create function package (each VNFD can include a set of parameters that have default values which can be overwritten with values that are specific to a particular deployment):

I repeat this process for the nine remaining CSARs, and all ten function packages are ready to use:

Now I am ready to create a Network Package. The Network Service Descriptor is also fairly simple, and I will show you several excerpts. First, the NSD establishes a mapping from descriptor_id to namespace for each Network Function so that the functions can be referenced by name:

vnfds:
  - descriptor_id: "aa97cf70-59db-4b13-ae1e-0942081cc9ce"
    namespace: "amf"
  - descriptor_id: "86bd1730-427f-480a-a718-8ae9dcf3f531"
    namespace: "ausf"
...

Then it defines the input variables, including default values (this reminds me of an AWS CloudFormation template):

  inputs:
    vpc_cidr_block:
      type: String
      description: "CIDR Block for Free5GCVPC"
      default: "10.100.0.0/16"

    eni_subnet_01_cidr_block:
      type: String
      description: "CIDR Block for Free5GCENISubnet01"
      default: "10.100.50.0/24"
...

Next, it uses the variables to create a mapping to the desired AWS resources (a VPC and a subnet in this case):

    Free5GCVPC:
      type: tosca.nodes.AWS.Networking.VPC
      properties:
        cidr_block: { get_input: vpc_cidr_block }
        dns_support: true

    Free5GCENISubnet01:
      type: tosca.nodes.AWS.Networking.Subnet
      properties:
        type: "PUBLIC"
        availability_zone: { get_input: subnet_01_az }
        cidr_block: { get_input: eni_subnet_01_cidr_block }
      requirements:
        route_table: Free5GCRouteTable
        vpc: Free5GCVPC

Then it defines an AWS Internet Gateway within the VPC:

    Free5GCIGW:
      type: tosca.nodes.AWS.Networking.InternetGateway
      capabilities:
        routing:
          properties:
            dest_cidr: { get_input: igw_dest_cidr }
      requirements:
        route_table: Free5GCRouteTable
        vpc: Free5GCVPC

Finally, it specifies deployment of the Network Functions to an EKS cluster; the functions are deployed in the specified order:

    Free5GCHelmDeploy:
      type: tosca.nodes.AWS.Deployment.VNFDeployment
      requirements:
        cluster: Free5GCEKS
        deployment: Free5GCNRFHelmDeploy
        vnfs:
          - amf.Free5gcAMF
          - ausf.Free5gcAUSF
          - nssf.Free5gcNSSF
          - pcf.Free5gcPCF
          - smf.Free5gcSMF
          - udm.Free5gcUDM
          - udr.Free5gcUDR
          - upf.Free5gcUPF
          - webui.Free5gcWEBUI
      interfaces:
        Hook:
          pre_create: Free5gcSimpleHook

I click Create network package, select the NSD, and click Next to proceed. AWS TNB asks me to review the list of function packages and the NSD parameters. I do so, and click Create network package:

My network package is created and ready to use within seconds:

Now I am ready to create my network instance! I select the network package and choose Create network instance from the Actions menu:

I give my network a name and a description, then click Next:

I make sure that I have selected the desired network package, review the list of functions packages that will be deployed, and click Next:

Then I do one final review, and click Create network instance:

I select the new network instance and choose Instantiate from the Actions menu:

I review the parameters, and enter any desired overrides, then click Instantiate network:

AWS Telco Network Builder (TNB) begins to instantiate my network (behind the scenes, the service creates an AWS CloudFormation template, uses the template to create a stack, and runs other tasks, including Helm charts and custom scripts). When the instantiation step is complete, my network is ready to go. Instantiating a network creates a deployment, and the same network (perhaps with some parameters overridden) can be deployed more than once. I can see all of the deployments at a glance:

I can return to the dashboard to see my networks, function packages, network packages, and recent deployments:

Inside an AWS TNB Deployment
Let’s take a quick look inside my deployment. Here’s what AWS TNB set up for me:

Network – An Amazon Virtual Private Cloud (Amazon VPC) with three subnets, a route table, a route, and an Internet Gateway.

Compute – An Amazon Elastic Kubernetes Service (EKS) cluster.

CI/CD – An AWS CodeBuild project that is triggered every time a node is added to the cluster.

Things to Know
Here are a couple of things to know about AWS Telco Network Builder (TNB):

Access – In addition to the console access that I showed you above, you can access AWS TNB from the AWS Command Line Interface (AWS CLI) and the AWS SDKs.
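
For example, the console workflow above maps to a handful of API calls. Here’s a rough CLI sketch; the command and parameter names follow the service’s ETSI SOL-style API, but treat the exact names, the placeholder IDs, and the file names as assumptions to verify against the CLI reference:

# Onboard a function package from a CSAR, then a network package from an NSD,
# and finally create and instantiate a network instance.
# IDs returned by each call feed into the next one; file names are placeholders.
$ aws tnb create-sol-function-package
$ aws tnb put-sol-function-package-content --vnf-pkg-id <function-package-id> --file fileb://free5gc-smf.zip

$ aws tnb create-sol-network-package
$ aws tnb put-sol-network-package-content --nsd-info-id <network-package-id> --file fileb://free5gc-nsd.zip

$ aws tnb create-sol-network-instance --nsd-info-id <network-package-id> --ns-name my-5g-network
$ aws tnb instantiate-sol-network-instance --ns-instance-id <network-instance-id>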

Deployment Options – We are launching with the ability to create a network that spans multiple Availability Zones in a single AWS Region. Over time we expect to add additional deployment options such as Local Zones and Outposts.

Pricing – Pricing is based on the number of Network Functions that are managed by AWS TNB and on calls to the AWS TNB APIs, but the first 45,000 API requests per month in each AWS Region are not charged. There are also additional charges for the AWS resources that are created as part of the deployment. To learn more, read the TNB Pricing page.

Getting Started
To learn more and to get started, visit the AWS Telco Network Builder (TNB) home page.

Jeff;

Behind the Scenes at AWS – DynamoDB UpdateTable Speedup

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/behind-the-scenes-at-aws-dynamodb-updatetable-speedup/

We often talk about the Pace of Innovation at AWS, and share the results in this blog, in the AWS What’s New page, and in our weekly AWS on Air streams. Today I would like to talk about a slightly different kind of innovation, the kind that happens behind the scenes.

Each AWS customer uses a different mix of services, and uses those services in unique ways. Every service is instrumented and monitored, and the team responsible for designing, building, running, scaling, and evolving the service pays continuous attention to all of the resulting metrics. The metrics provide insights into how the service is being used, how it performs under load, and in many cases highlights areas for optimization in pursuit of higher availability, better performance, and lower costs.

Once an area for improvement has been identified, a plan is put into place, changes are made and tested in pre-production environments, and the changes are then deployed to multiple AWS Regions. This happens routinely, and (to date) without fanfare. Each part of AWS gets better and better, with no action on your part.

DynamoDB UpdateTable
In late 2021 we announced the Standard-Infrequent Access table class for Amazon DynamoDB. As Marcia noted in her post, using this class can reduce your storage costs by 60% compared to the existing (Standard) class. She also showed you how you could modify a table to use the new class. The modification operation calls the UpdateTable function, and that function is the topic of this post!
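
For reference, switching a table’s class is a single UpdateTable call. Here’s a sketch with the AWS CLI; the table name is a placeholder:

# Move an existing table to the Standard-Infrequent Access table class.
$ aws dynamodb update-table \
    --table-name my-table \
    --table-class STANDARD_INFREQUENT_ACCESS

# Check progress; the table status returns to ACTIVE when the update completes.
$ aws dynamodb describe-table --table-name my-table --query "Table.TableStatus"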

As is the case with just about every AWS launch, customers began to make use of the new table class right away. They created new tables and modified existing ones, benefiting from the lower pricing as soon as the modification was complete.

DynamoDB uses a highly distributed storage architecture. Each table is split into multiple partitions; operations such as changing the storage class are done in parallel across the partitions. After looking at a lot of metrics, the DynamoDB team found ways to increase parallelism and to reduce the amount of time spent managing the parallel operations.

This change had a dramatic effect for Amazon DynamoDB tables over 500 GB in size, reducing the time to update the table class by up to 97%.

Each time we make a change like this, we capture the “before” and “after” metrics, and share the results internally so that other teams can learn from the experience while they are in the process of making similar improvements of their own. Even better, each change that we make opens the door to other ones, creating a positive feedback loop that (once again) benefits everyone that uses a particular service or feature.

Every DynamoDB user can take advantage of this increased performance right away without the need for a version upgrade or downtime for maintenance (DynamoDB does not even have maintenance windows).

Incremental performance and operational improvements like this one are done routinely and without much fanfare. However it is always good to hear back from our customers when their own measurements indicate that some part of AWS became better or faster.

Leadership Principles
As I was thinking about this change while getting ready to write this post, several Amazon Leadership Principles came to mind. The DynamoDB team showed Customer Obsession by implementing a change that would benefit any DynamoDB user with tables over 500 GB in size. To do this they had to Invent and Simplify, coming up with a better way to implement the UpdateTable function.

While you, as an AWS customer, get the benefits with no action needed on your part, this does not mean that you have to wait until we decide to pay special attention to your particular use case. If you are pushing any aspect of AWS to the limit (or want to), I recommend that you make contact with the appropriate service team and let them know what’s going on. You might be running into a quota or other limit, or pushing bandwidth, memory, or other resources to extremes. Whatever the case, the team would love to hear from you!

Stay Tuned
I have a long list of other internal improvements that we have made, and will be working with the teams to share more of them throughout the year.

Jeff;

New Graviton3-Based General Purpose (m7g) and Memory-Optimized (r7g) Amazon EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-graviton3-based-general-purpose-m7g-and-memory-optimized-r7g-amazon-ec2-instances/

We’ve come a long way since the launch of the m1.small instance in 2006, adding instances with additional memory, compute power, and your choice of Intel, AMD, or Graviton processors. The original general-purpose “one size fits all” instance has evolved into six families, each one optimized for specific use cases, with over 600 generally available instances in all.

New M7g and R7g
Today I am happy to tell you about the newest Amazon EC2 instance types, the M7g and the R7g. Both types are powered by the latest generation AWS Graviton3 processors, and are designed to deliver up to 25% better performance than the equivalent sixth-generation (M6g and R6g) instances, making them the best performers in EC2.

The M7g instances are for general purpose workloads such as application servers, microservices, gaming servers, mid-sized data stores, and caching fleets. The R7g instances are a great fit for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.

Here are the specs for the M7g instances:

Instance Name vCPUs Memory Network Bandwidth EBS Bandwidth
m7g.medium 1 4 GiB up to 12.5 Gbps up to 10 Gbps
m7g.large 2 8 GiB up to 12.5 Gbps up to 10 Gbps
m7g.xlarge 4 16 GiB up to 12.5 Gbps up to 10 Gbps
m7g.2xlarge 8 32 GiB up to 15 Gbps up to 10 Gbps
m7g.4xlarge 16 64 GiB up to 15 Gbps up to 10 Gbps
m7g.8xlarge 32 128 GiB 15 Gbps 10 Gbps
m7g.12xlarge 48 192 GiB 22.5 Gbps 15 Gbps
m7g.16xlarge 64 256 GiB 30 Gbps 20 Gbps
m7g.metal 64 256 GiB 30 Gbps 20 Gbps

And here are the specs for the R7g instances:

Instance Name vCPUs Memory Network Bandwidth EBS Bandwidth
r7g.medium 1 8 GiB up to 12.5 Gbps up to 10 Gbps
r7g.large 2 16 GiB up to 12.5 Gbps up to 10 Gbps
r7g.xlarge 4 32 GiB up to 12.5 Gbps up to 10 Gbps
r7g.2xlarge 8 64 GiB up to 15 Gbps up to 10 Gbps
r7g.4xlarge 16 128 GiB up to 15 Gbps up to 10 Gbps
r7g.8xlarge 32 256 GiB 15 Gbps 10 Gbps
r7g.12xlarge 48 384 GiB 22.5 Gbps 15 Gbps
r7g.16xlarge 64 512 GiB 30 Gbps 20 Gbps
r7g.metal 64 512 GiB 30 Gbps 20 Gbps

Both types of instances are equipped with DDR5 memory, which provides up to 50% higher memory bandwidth than the DDR4 memory used in previous generations. Here’s an infographic that I created to highlight the principal performance and capacity improvements that we have made available with the new instances:

If you are not yet running your application on Graviton instances, be sure to take advantage of the AWS Graviton Ready Program. The partners in this program provide services and solutions that will help you to migrate your application and to take full advantage of all that the Graviton instances have to offer. Other helpful resources include the Porting Advisor for Graviton and the Graviton Fast Start program.

The instances are built on the AWS Nitro System, and benefit from multiple features that enhance security: always-on memory encryption, a dedicated cache for each vCPU, and support for pointer authentication. They also support encrypted EBS volumes, which protect data at rest on the volume, data moving between the instance and the volume, snapshots created from the volume, and volumes created from those snapshots. To learn more about these and other Nitro-powered security features, be sure to read The Security Design of the AWS Nitro System.

On the network side the instances are EBS-Optimized with dedicated networking between the instances and the EBS volumes, and also support Enhanced Networking (read How do I enable and configure enhanced networking on my EC2 instances? for more info). The 16xlarge and metal instances also support Elastic Fabric Adapter (EFA) for applications that need a high level of inter-node communication.
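
If you want to check these specifications from the command line, describe-instance-types returns them; here’s a quick sketch (run it in one of the Regions listed below):

# Show vCPU count and memory for a couple of the new instance sizes.
$ aws ec2 describe-instance-types \
    --instance-types m7g.large r7g.large \
    --query "InstanceTypes[].[InstanceType, VCpuInfo.DefaultVCpus, MemoryInfo.SizeInMiB]" \
    --output table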

Pricing and Regions
M7g and R7g instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Spot, Reserved Instance, and Savings Plan form.

Jeff;

PS – Launch one today and let me know what you think!

New – Visualize Your VPC Resources from Amazon VPC Creation Experience

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-visualize-your-vpc-resources-from-amazon-vpc-creation-experience/

Today we are announcing Amazon Virtual Private Cloud (Amazon VPC) resource map, a new feature that simplifies the VPC creation experience in the AWS Management Console. This feature displays your existing VPC resources and their routing visually on a single page, allowing you to quickly understand the architectural layout of the VPC.

A year ago, in March 2022, we launched a new VPC creation experience that streamlines the process of creating and connecting VPC resources. With just one click, even across multiple Availability Zones (AZs), you can create and connect VPC resources, eliminating more than 90 percent of the manual steps required in the past. The new creation experience is centered around an interactive diagram that displays a preview of the VPC architecture and updates as options are selected, providing a visual representation of the resources and their relationships within the VPC that you are about to create.

However, after the creation of the VPC, the diagram that was available during the creation experience that many of our customers loved was no longer available. Today we are changing that! With VPC resource map, you can quickly understand the architectural layout of the VPC, including the number of subnets, which subnets are associated with the public route table, and which route tables have routes to the NAT Gateway.

You can also get to the specific resource details by clicking on the resource. This eliminates the need for you to map out resource relationships mentally and hold the information in your head while working with your VPC, making the process much more efficient and less prone to mistakes.

Getting Started with VPC Resource Map
To get started, choose an existing VPC in the VPC console. In the details section, select the Resource map tab. Here, you can see the resources in your VPC and the relationships between those resources.

As you hover over a resource, you can see the related resources and the connected lines highlighted. If you click to select the resource, you can see a few lines of details and a link to see the details of the selected resource.

Getting Started with VPC Creation Experience
I want to explain how to use the VPC creation experience to improve your workflow and easily create a new high-availability three-tier VPC.

Choose Create VPC and select VPC and more in the VPC console. You can preview the VPC resources that you are about to create all on the same page.

In Name tag auto-generation, you can specify a prefix value for Name tags. This value is used to generate Name tags for all VPC resources in the preview. If I change the default value, project, to channy, the Name tags in the preview change to channy-something, such as channy-vpc. You can customize the Name tag for each resource in the preview by clicking the resource and making changes.

You can easily change the default CIDR value (10.0.0.0/16) when you click the IPv4 CIDR block field to reveal the CIDR joystick. Use the left or right arrow to move to the previous (9.255.0.0/16) or next (10.1.0.0/16) CIDR block within the /16 network mask. You can also change the subnet mask to /17 by using the down arrow, or go back to /16 using the up arrow.

Choose the number of Availability Zones (AZs), up to 3. The number of public and private subnets changes based on the number of AZs, and the preview shows the total number of each subnet type that will be created.

I want a high-availability VPC in three AZs and select 6 for the number of private subnets. In the preview panel, you can see that there are 9 subnets. When I hover over channy-rtb-public, I can visually confirm that this route table is connected to three public subnets and also routed to the internet gateway (channy-igw). The dotted lines indicate routes to a network node, and the solid lines indicate relationships such as implicit or explicit associations.

Adding NAT gateways and VPC endpoints is easy. You can simply choose the number of NAT gateways, either in one Availability Zone (AZ) or one per AZ. Note that there is a charge for each NAT gateway. For high availability and to avoid inter-AZ data charges, we always recommend having one NAT gateway per AZ and routing traffic from subnets in an AZ to the NAT gateway in the same AZ.

To route traffic to Amazon Simple Storage Service (Amazon S3) buckets more securely, you can choose the S3 Gateway endpoint by default. The S3 Gateway endpoint is free of charge and does not use NAT gateways when moving data from private subnets.
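
Outside the console, the same gateway endpoint can be created with one CLI call; here’s a sketch with placeholder resource IDs and a sample Region:

# Create a free S3 Gateway endpoint and associate it with a private route table.
$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0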

You can create additional tags and assign them to all resources in the VPC in no time. I select Add new tag and enter environment for the Key and test for the Value. This key-value pair will be added to every resource here.

Before creating, validate the resources in the preview. Then choose Create VPC at the bottom of the page to see the resources, and the IDs of those resources, as they are created.

Once all the resources are created, choose View VPC at the bottom. The button takes you directly to the VPC resource map, where you can see a visual representation of what you created.

Now Available
Amazon VPC resource map is now available in all AWS Regions where Amazon VPC is available, and you can start using it today.

The VPC resource map and creation experience currently display only the VPC, subnets, route tables, internet gateways, NAT gateways, and Amazon S3 gateway endpoints. The Amazon VPC console and user experience teams will continue to improve the console experience based on customer feedback.

To learn more, see the Amazon VPC User Guide, and please send feedback to AWS re:Post for Amazon VPC or through your usual AWS support contacts.

Channy

New – AWS CloudTrail Lake Supports Ingesting Activity Events From Non-AWS Sources

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-aws-cloudtrail-lake-supports-ingesting-activity-events-from-non-aws-sources/

In November 2013, we announced AWS CloudTrail to track user activity and API usage. AWS CloudTrail enables auditing, security monitoring, and operational troubleshooting. CloudTrail records user activity and API calls across AWS services as events. CloudTrail events help you answer the questions of “who did what, where, and when?”.

Recently we have improved the ability for you to simplify your auditing and security analysis by using AWS CloudTrail Lake. CloudTrail Lake is a managed data lake for capturing, storing, accessing, and analyzing user and API activity on AWS for audit, security, and operational purposes. You can aggregate and immutably store your activity events, and run SQL-based queries for search and analysis.

We have heard your feedback that aggregating activity information from diverse applications across hybrid environments is complex and costly, but important for a comprehensive picture of your organization’s security and compliance posture.

Today we are announcing support of ingestion for activity events from non-AWS sources using CloudTrail Lake, making it a single location of immutable user and API activity events for auditing and security investigations. Now you can consolidate, immutably store, search, and analyze activity events from AWS and non-AWS sources, such as in-house or SaaS applications, in one place.

Using the new PutAuditEvents API, you can centralize user activity information from disparate sources into CloudTrail Lake, enabling you to analyze, troubleshoot, and diagnose issues using this data. CloudTrail Lake records all events in a standardized schema, making it easier to consume this information and to respond comprehensively and quickly to security incidents or audit requests.

CloudTrail Lake is also integrated with selected AWS Partners, such as Cloud Storage Security, Clumio, CrowdStrike, CyberArk, GitHub, Kong Inc, LaunchDarkly, MontyCloud, Netskope, Nordcloud, Okta, One Identity, Shoreline.io, Snyk, and Wiz, allowing you to easily enable audit logging through the CloudTrail console.

Getting Started to Integrate External Sources
You can start to ingest activity events from your own data sources or partner applications by choosing Integrations under the Lake menu in the AWS CloudTrail console.

To create a new integration, choose Add integration and enter your channel name. You can choose the partner application source from which you want to get events. If you’re integrating with events from your own applications hosted on-premises or in the cloud, choose My custom integration.

For Event delivery location, you can choose destinations for the events from this integration. This allows your application or partner to deliver events to your CloudTrail Lake event data store. An event data store can retain your activity events from one week up to seven years. Then you can run queries on the event data store.

Choose either Use existing event data stores or Create new event data store to receive events from integrations. To learn more about event data stores, see Create an event data store in the AWS documentation.

You can also set up the permissions policy for the channel resource created with this integration. The information required for the policy depends on the integration type of each partner application.

There are two types of integrations: direct and solution. With direct integrations, the partner calls the PutAuditEvents API to deliver events to the event data store for your AWS account. In this case, you need to provide External ID, the unique account identifier provided by the partner. You can see a link to partner website for the step-by-step guide. With solution integrations, the application runs in your AWS account and the application calls the PutAuditEvents API to deliver events to the event data store for your AWS account.

To find the Integration type for your partner, choose the Available sources tab from the integrations page.

After creating an integration, you will need to provide this Channel ARN to the source or partner application. Until these steps are finished, the status will remain incomplete. Once CloudTrail Lake starts receiving events from the integrated partner or application, the status field will be updated to reflect the current state.

To ingest your application’s activity events into your integration, call the PutAuditEvents API to add the payload of events. Be sure that there is no sensitive or personally identifying information in the event payload before ingesting it into CloudTrail Lake.

You can make a JSON array of event objects, which includes a required user-generated ID from the event, the required payload of the event as the value of EventData, and an optional checksum to help validate the integrity of the event after ingestion into CloudTrail Lake.

{
  "AuditEvents": [
     {
      "Id": "event_ID",
      "EventData": "{event_payload}",
      "EventDataChecksum": "optional_checksum"
     },
   ... ]
}

The following example shows how to use the put-audit-events AWS CLI command.

$ aws cloudtrail-data put-audit-events \
--channel-arn $ChannelArn \
--external-id $UniqueExternalIDFromPartner \
--audit-events \
'[
  {
    "Id": "87f22433-0f1f-4a85-9664-d50a3545baef",
    "EventData": "{\"eventVersion\":\"0.01\",\"eventSource\":\"MyCustomLog2\", ...}"
  },
  {
    "Id": "7e5966e7-a999-486d-b241-b33a1671aa74",
    "EventData": "{\"eventVersion\":\"0.02\",\"eventSource\":\"MyCustomLog1\", ...}",
    "EventDataChecksum": "848df986e7dd61f3eadb3ae278e61272xxxx"
  }
]'

On the Editor tab in CloudTrail Lake, write your own queries for a new integrated event data store to check delivered events.

You can make your own integration query, like getting all principals across AWS and external resources that have made API calls after a particular date:

SELECT userIdentity.principalId FROM $AWS_EVENT_DATA_STORE_ID 
WHERE eventTime > '2022-09-24 00:00:00'
UNION ALL
SELECT eventData.userIdentity.principalId FROM $PARTNER_EVENT_DATA_STORE_ID
WHERE eventData.eventTime > '2022-09-24 00:00:00'
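
You can also run Lake queries outside the console; here’s a minimal AWS CLI sketch (the event data store ID and query ID are placeholders):

# Start a CloudTrail Lake query against the partner event data store.
$ aws cloudtrail start-query \
    --query-statement "SELECT eventData.userIdentity.principalId FROM <PARTNER_EVENT_DATA_STORE_ID> WHERE eventData.eventTime > '2022-09-24 00:00:00'"

# Retrieve the results once the query finishes.
$ aws cloudtrail get-query-results --query-id <query-id>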

To learn more, see CloudTrail Lake event schema and sample queries to help you get started.

Launch Partners
You can see the list of our launch partners to support a CloudTrail Lake integration option in the Available applications tab. Here are blog posts and announcements from our partners who collaborated on this launch (some will be added in the next few days).

  • Cloud Storage Security
  • Clumio
  • CrowdStrike
  • CyberArk
  • GitHub
  • Kong Inc
  • LaunchDarkly
  • MontyCloud
  • Netskope
  • Nordcloud
  • Okta
  • One Identity
  • Shoreline.io
  • Snyk
  • Wiz

Now Available
AWS CloudTrail Lake now supports ingesting activity events from external sources in all AWS Regions where CloudTrail Lake is available today. To learn more, see the AWS documentation and each partner’s getting started guides.

If you are interested in becoming an AWS CloudTrail Partner, you can contact your usual partner contacts.

Channy

AWS Week in Review – January 30, 2023

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-january-30-2023/

This week’s review post comes to you from the road, having just wrapped up sponsorship of NDC London. While there we got to speak to many .NET developers, both new and experienced with AWS, and all eager to learn more. Thanks to everyone who stopped by our expo booth to chat or ask questions to the team!

.NET on AWS booth, NDC London 2023

Last Week’s Launches
My team will be back on the road to our next events soon, but first, here are just some launches that caught my attention while I was at the expo booth last week:

General availability of Porting Advisor for Graviton: AWS Graviton2 processors are custom-designed Arm64 processors that deliver increased price performance over comparable x86-64 processors. They’re suitable for a wide range of compute workloads on Amazon Elastic Compute Cloud (Amazon EC2), including application servers, microservices, high-performance computing (HPC), CPU-based ML inference, gaming, and many more. They’re also available in other AWS services such as AWS Lambda and AWS Fargate, to name just a few. The new Porting Advisor for Graviton is a freely available, open-source command line tool for analyzing the compatibility of applications you want to run on Graviton-based compute environments. It provides a report that highlights missing or outdated libraries and code that you may need to update in order to port your application to run on Graviton processors.

Runtime management controls for AWS Lambda: Automated feature updates, performance improvements, and security patches to runtime environments for Lambda functions are popular with many customers. However, some customers have asked for increased visibility into when these updates occur, and control over when they’re applied. The new runtime management controls for Lambda provide optional capabilities for those customers that require more control over runtime changes. By default, all your Lambda functions will continue to receive automatic updates. But, if you wish, you can now apply a runtime management configuration to your functions that specifies how you want updates to be applied. You can find full details on the new runtime management controls in this blog post on the AWS Compute Blog.
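
As a sketch of what this looks like in practice, here’s one way it might be applied from the AWS CLI; the function name is a placeholder, and I’m assuming the control is exposed through a put-runtime-management-config command, so check the CLI reference before relying on it:

# Opt a function into the "Function update" runtime update mode instead of
# the default automatic updates.
$ aws lambda put-runtime-management-config \
    --function-name my-function \
    --update-runtime-on FunctionUpdate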

General availability of Amazon OpenSearch Serverless: OpenSearch Serverless was one of the livestream segments in the recent AWS on Air re:Invent Recap of previews that were announced at the conference last December. OpenSearch Serverless is now generally available. As a serverless option for Amazon OpenSearch Service, it removes the need to configure, manage, or scale OpenSearch clusters, offering automatic provisioning and scaling of resources to enable fast ingestion and query responses.

Additional connectors for Amazon AppFlow: At AWS re:Invent 2022, I blogged about the release of new data connectors enabling data transfer from a variety of Software-as-a-Service (SaaS) applications to Amazon AppFlow. An additional set of 10 connectors, enabling connectivity from Asana, Google Calendar, JDBC, PayPal, and more, is now available. Check out the full list of additional connectors launched this past week in this What’s New post.

AWS open-source news and updates: As usual, there’s a new edition of the weekly open-source newsletter highlighting new open-source projects, tools, and demos from the AWS Community. Read edition #143 here – LINK TBD.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Innovate Data and AI/ML edition: AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results.

  • AWS Innovate Data and AI/ML edition for Asia Pacific and Japan is taking place on February 22, 2023. Register here.
  • Registrations for AWS Innovate EMEA (March 9, 2023) and the Americas (March 14, 2023) will open soon. Check the AWS Innovate page for updates.

You can find details on all upcoming events, in-person or virtual, here.

And finally, if you’re a .NET developer, my team will be at Swetugg, in Sweden, February 8-9, and DeveloperWeek, Oakland, California, February 15-17. If you’re in the vicinity at these events, be sure to stop by and say hello!

That’s all for this week. Check back next Monday for another Week in Review!

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Now Open — AWS Asia Pacific (Melbourne) Region in Australia

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/now-open-aws-asia-pacific-melbourne-region-in-australia/

Following up on Jeff’s post on the announcement of the Melbourne Region, today I’m pleased to share the general availability of the AWS Asia Pacific (Melbourne) Region with three Availability Zones and API name ap-southeast-4.

The AWS Asia Pacific (Melbourne) Region is the second infrastructure Region in Australia, in addition to the Asia Pacific (Sydney) Region, and the twelfth Region in Asia Pacific, joining the existing Regions in Singapore, Tokyo, Seoul, Mumbai, Hong Kong, Osaka, Jakarta, Hyderabad, and Sydney, as well as the Mainland China Beijing and Ningxia Regions.

Melbourne city historic building: Flinders Street Station built of yellow sandstone

AWS in Australia: Long-Standing History
In November 2012, AWS established a presence in Australia with the AWS Asia Pacific (Sydney) Region. Since then, AWS has invested continuously in infrastructure and technology to help drive digital transformation in Australia and to support hundreds of thousands of active customers each month.

Amazon CloudFront — Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience that was first launched in Australia alongside Asia Pacific (Sydney) Region in 2012. To further accelerate the delivery of static and dynamic web content to end users in Australia, AWS announced additional CloudFront locations for Sydney and Melbourne in 2014. In addition, AWS also announced a Regional Edge Cache in 2016 and an additional CloudFront point of presence (PoP) in Perth in 2018. CloudFront points of presence ensure popular content can be served quickly to your viewers. Regional Edge Caches are positioned (network-wise) between the CloudFront locations and the origin and further help to improve content performance. AWS currently has seven edge locations and one Regional Edge Cache location in Australia.

AWS Direct Connect — As with CloudFront, the first AWS Direct Connect location was made available with Asia Pacific (Sydney) Region launch in 2012. To continue helping our customers in Australia improve application performance, secure data, and reduce networking costs, AWS announced the opening of additional Direct Connect locations in Sydney (2014), Melbourne (2016), Canberra (2017), Perth (2017), and an additional location in Sydney (2022), totaling six locations.

AWS Local Zones — To help customers run applications that require single-digit millisecond latency or local data processing, customers can use AWS Local Zones. They bring AWS infrastructure (compute, storage, database, and other select AWS services) closer to end users and business centers. AWS customers can run workloads with low latency requirements on the AWS Local Zones location in Perth while seamlessly connecting to the rest of their workloads running in AWS Regions.

Upskilling Local Developers, Students, and Future IT Leaders
Digital transformation will not happen on its own. AWS runs various programs and has trained more than 200,000 people across Australia with cloud skills since 2017. There is an additional goal to train more than 29 million people globally with free cloud skills by 2025. Here’s a brief description of related programs from AWS:

  • AWS re/Start is a digital skills training program that prepares unemployed, underemployed, and transitioning individuals for careers in cloud computing and connects students to potential employers.
  • AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs.
  • AWS Educate provides students with access to AWS services. AWS is also collaborating with governments, educators, and the industry to help individuals, both tech and nontech workers, build and deepen their digital skills to nurture a workforce that can harness the power of cloud computing and advanced technologies.
  • AWS Industry Quest is a game-based training initiative designed to help professionals and teams learn and build vital cloud skills and solutions. At re:Invent 2022, AWS announced the first iteration of the program for the financial services sector. National Australia Bank (NAB) is AWS Industry Quest: Financial Services’ first beta customer globally. Through AWS Industry Quest, NAB has trained thousands of colleagues in cloud skills since 2018, resulting in more than 4,500 industry-recognized certifications.

In addition to the above programs, AWS is also committed to supporting Victoria’s local tech community through digital upskilling, community initiatives, and partnerships. Victorian Digital Skills is a new program from the Victorian Government that helps create a new pipeline of talent to meet the digital skills needs of Victorian employers. AWS has taken steps to help solve the retraining challenge by supporting this program, which enables mid-career Victorians to reskill in technology and gain access to higher-paying jobs.

The Climate Pledge
Amazon is committed to investing and innovating across its businesses to help create a more sustainable future. With The Climate Pledge, Amazon is committed to reaching net-zero carbon across its business by 2040 and is on a path to powering its operations with 100 percent renewable energy by 2025.

As of May 2022, two projects in Australia are operational. Amazon Solar Farm Australia – Gunnedah and Amazon Solar Farm Australia – Suntop will aim to generate 392,000 MWh of renewable energy each year, equal to the annual electricity consumption of 63,000 Australian homes. Once Amazon Wind Farm Australia – Hawkesdale also becomes operational, it will boost the projects’ combined yearly renewable energy generation to 717,000 MWh, or enough for nearly 115,000 Australian homes.

AWS Customers in Australia
We have customers in Australia that are doing incredible things with AWS, for example:

National Australia Bank Limited (NAB)
NAB is one of Australia’s largest banks and Australia’s largest business bank. “We have been exploring the potential use cases with AWS since the announcement of the AWS Asia Pacific (Melbourne) Region,” said Steve Day, Chief Technology Officer at NAB.

Locating key banking applications and critical workloads geographically close to NAB’s compute platform and the bulk of its corporate workforce will provide lower latency. It will also simplify the bank’s disaster recovery plans. The AWS Asia Pacific (Melbourne) Region will also accelerate NAB’s strategy to move 80 percent of its applications to the cloud by 2025.

Littlepay
This Melbourne-based financial technology company works with more than 250 transport and mobility providers to enable contactless payments on local buses, city networks, and national public transport systems.

“Our mission is to create a universal payment experience around the world, which requires world-class global infrastructure that can grow with us,” said Amin Shayan, CEO at Littlepay. “To drive a seamless experience for our customers, we ingest and process over 1 million monthly transactions in real time using AWS, which enables us to generate insights that help us improve our services. We are excited about the launch of a second AWS Region in Australia, as it gives us access to advanced technologies, like machine learning and artificial intelligence, at a lower latency to help make commuting a simpler and more enjoyable experience.”

Royal Melbourne Institute of Technology (RMIT)
RMIT is a global university of technology, design, and enterprise with more than 91,000 students and 11,000 staff around the world.

“Today’s launch of the AWS Region in Melbourne will open up new ways for our researchers to drive computational engineering and maximize the scientific return,” said Professor Calum Drummond, Deputy Vice-Chancellor and Vice-President, Research and Innovation, and Interim DVC, STEM College, at RMIT.

“We recently launched RMIT University’s AWS Cloud Supercomputing facility (RACE) for RMIT researchers, who are now using it to power advances into battery technologies, photonics, and geospatial science. The low latency and high throughput delivered by the new AWS Region in Melbourne, combined with our 400 Gbps-capable private fiber network, will drive new ways of innovation and collaboration yet to be discovered. We fundamentally believe RACE will help truly democratize high-performance computing capabilities for researchers to run their datasets and make faster discoveries.”

Australian Bureau of Statistics (ABS)
ABS holds the Census of Population and Housing every five years. It is the most comprehensive snapshot of Australia, collecting data from around 10 million households and more than 25 million people.

“In this day and age, people expect a fast and simple online experience when using government services,” said Bindi Kindermann, program manager for 2021 Census Field Operations at ABS. “Using AWS, the ABS was able to scale and securely deliver services to people across the country, making it possible for them to quickly and easily participate in this nationwide event.”

With the success of the 2021 Census, the ABS is continuing to expand its use of AWS into broader areas of its business, making use of the security, reliability, and scalability of the cloud.

You can find more inspiring stories from our customers in Australia by visiting the Customer Success Stories page.

Things to Know
AWS User Groups in Australia — Australia is home to 9 AWS Heroes, 43 AWS Community Builders, and 17 AWS User Groups across various cities. Find an AWS User Group near you to meet and collaborate with fellow developers, participate in community activities, and share your AWS knowledge.

AWS Global Footprint — With this launch, AWS now spans 99 Availability Zones within 31 geographic Regions around the world. We have also announced plans for 12 more Availability Zones and 4 more AWS Regions in Canada, Israel, New Zealand, and Thailand.

Available Now — The new Asia Pacific (Melbourne) Region is ready to support your business, and you can find a detailed list of the services available in this Region on the AWS Regional Services List.
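If you want a quick check that the new Region is reachable from your account, you can list its Availability Zones with the AWS CLI; this is a minimal sketch that assumes the opt-in Region has been enabled for your account and your CLI credentials are configured:

# List the Availability Zones in the new Melbourne Region
$ aws ec2 describe-availability-zones --region ap-southeast-4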

To learn more, please visit the Global Infrastructure page, and start building on ap-southeast-4!

Happy building!

Donnie

AWS Week in Review – January 23, 2023

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-january-23-2023/

Welcome to my first AWS Week in Review of 2023. As usual, it has been a busy week, so let’s dive right in:

Last Week’s Launches
Here are some launches that caught my eye last week:

Amazon Connect – You can now deliver long-lasting, persistent chat experiences for your customers, with the ability to resume previous conversations, including context, metadata, and transcripts. Learn more.

Amazon RDS for MariaDB – You can now enforce the use of encrypted (SSL/TLS) connections to your database instances running Amazon RDS for MariaDB. Learn more.
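One way to enforce encrypted connections is the require_secure_transport parameter in a custom DB parameter group; the following is a minimal sketch, and the parameter group name is hypothetical:

# Require SSL/TLS for all connections to instances using this parameter group
$ aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mariadb-params \
    --parameters "ParameterName=require_secure_transport,ParameterValue=1,ApplyMethod=immediate"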

Amazon CloudWatch – You can now use Metric Streams to send metrics across AWS accounts on a continuous, near real-time basis, within a single AWS Region. Learn more.

AWS Serverless Application Model – You can now run the CloudFormation Linter from the SAM CLI to validate your SAM templates. The default rules check template size, Fn::GetAtt parameters, Fn::If syntax, and more. Learn more.
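A minimal sketch of running the linter locally (assuming your template is named template.yaml):

# Validate the SAM template and run the CloudFormation Linter (cfn-lint) rules
$ sam validate --lint --template template.yaml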

EC2 Auto Scaling – You can now see (and take advantage of) recommendations for activating a predictive scaling policy to optimize the capacity of your Auto Scaling groups. Recommendations can make use of up to 8 weeks of past data. Learn more.
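If you want to evaluate a recommendation before acting on it, you can attach a predictive scaling policy in forecast-only mode; this is a minimal sketch with hypothetical group and policy names:

# Create a predictive scaling policy that forecasts capacity without acting on it yet
$ aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name cpu-predictive-scaling \
    --policy-type PredictiveScaling \
    --predictive-scaling-configuration '{
        "MetricSpecifications": [{
            "TargetValue": 40,
            "PredefinedMetricPairSpecification": { "PredefinedMetricType": "ASGCPUUtilization" }
        }],
        "Mode": "ForecastOnly"
    }'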

Service Limit Increases – Service limits for several AWS services were raised, and other services now have additional quotas that can be raised upon request:

X In Y – Existing AWS services became available in additional regions:

Other AWS News
Here are some other news items and blog posts that may be of interest to you:

AWS Open Source News and Updates – My colleague Ricardo Sueiras highlights the latest open source projects, tools, and demos from the open source community around AWS. Read edition #142 here.

AWS Fundamentals – This new book is designed to teach you about AWS in a real-world context. It covers the fundamental AWS services (compute, database, networking, and so forth), and helps you to make use of Infrastructure as Code using AWS CloudFormation, CDK, and Serverless Framework. As an add-on purchase you can also get access to a set of gorgeous, high-resolution infographics.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS on Air – Every Friday at Noon PT we discuss the latest news and go in-depth on several of the most recent launches. Learn more.

#BuildOnLive – Build On AWS Live events are a series of technical streams on twitch.tv/aws that focus on technology topics related to challenges that hands-on practitioners face today:

  • Join the Build On Live Weekly show about the cloud, the community, the code, and everything in between, hosted by AWS Developer Advocates. The show streams every Thursday at 9:00 PT on twitch.tv/aws.
  • Join the new The Big Dev Theory show, co-hosted with AWS partners, discussing various topics such as data and AI, AIOps, integration, and security. The show streams every Tuesday at 8:00 PT on twitch.tv/aws.

Check the AWS Twitch schedule for all shows.

AWS Community Days – AWS Community Day events are community-led conferences that deliver a peer-to-peer learning experience, providing developers with a venue to acquire AWS knowledge in their preferred way: from one another.

AWS Innovate Data and AI/ML edition – AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results.

  • AWS Innovate Data and AI/ML edition for Asia Pacific and Japan is taking place on February 22, 2023. Register here.
  • Registrations for AWS Innovate EMEA (March 9, 2023) and the Americas (March 14, 2023) will open soon. Check the AWS Innovate page for updates.

You can browse all upcoming in-person and virtual events.

And that’s all for this week!

Jeff;

New – Bring ML Models Built Anywhere into Amazon SageMaker Canvas and Generate Predictions

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/new-bring-ml-models-built-anywhere-into-amazon-sagemaker-canvas-and-generate-predictions/

Amazon SageMaker Canvas provides business analysts with a visual interface to solve business problems using machine learning (ML) without writing a single line of code. Since we introduced SageMaker Canvas in 2021, many users have asked us for an enhanced, seamless collaboration experience that enables data scientists to share trained models with their business analysts with a few simple clicks.

Today, I’m excited to announce that you can now bring ML models built anywhere into SageMaker Canvas and generate predictions.

New – Bring Your Own Model into SageMaker Canvas
As a data scientist or ML practitioner, you can now seamlessly share models built anywhere, within or outside Amazon SageMaker, with your business teams. This removes the heavy lifting for your engineering teams to build a separate tool or user interface to share ML models and collaborate between the different parts of your organization. As a business analyst, you can now leverage ML models shared by your data scientists within minutes to generate predictions.

Let me show you how this works in practice!

In this example, I share with my marketing analyst an ML model that has been trained to identify customers who are potentially at risk of churning. First, I register the model in the SageMaker model registry. The SageMaker model registry lets you catalog models and manage model versions. I create a model group called 2022-customer-churn-model-group and then select Create model version to register my model.

Amazon SageMaker Model Registry

To register your model, provide the location of the inference image in Amazon ECR, as well as the location of your model.tar.gz file in Amazon S3. You can also add model endpoint recommendations and additional model information. Once you’ve registered your model, select the model version and select Share.
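If you prefer to script the registration, the same steps can be done with the AWS CLI; this is a minimal sketch in which the ECR image URI and S3 model location are hypothetical placeholders:

# Create the model group (once), then register a model version in it
$ aws sagemaker create-model-package-group \
    --model-package-group-name 2022-customer-churn-model-group

$ aws sagemaker create-model-package \
    --model-package-group-name 2022-customer-churn-model-group \
    --model-approval-status Approved \
    --inference-specification '{
        "Containers": [{
            "Image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/churn-inference:latest",
            "ModelDataUrl": "s3://amzn-s3-demo-bucket/models/churn/model.tar.gz"
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"]
    }'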

Amazon SageMaker Studio - Share models from model registry with SageMaker Canvas users

You can now choose the SageMaker Canvas user profile(s) within the same SageMaker domain you want to share your model with. Then, provide additional model details, such as information about training and validation datasets, the ML problem type, and model output information. You can also add a note for the SageMaker Canvas users you share the model with.

Amazon SageMaker Studio - Share a model from Model Registry with SageMaker Canvas users

Similarly, you can now also share models trained in SageMaker Autopilot and models available in SageMaker JumpStart with SageMaker Canvas users.

The business analysts will receive an in-app notification in SageMaker Canvas that a model has been shared with them, along with any notes you added.

Amazon SageMaker Canvas - Received model from SageMaker Studio

My marketing analyst can now open, analyze, and start using the model to generate ML predictions in SageMaker Canvas.

Amazon SageMaker Canvas - Imported model from SageMaker Studio

Select Batch prediction to generate ML predictions for an entire dataset or Single prediction to create predictions for a single input. You can download the results in a .csv file.

Amazon SageMaker Canvas - Generate Predictions

New – Improved Model Sharing and Collaboration from SageMaker Canvas with SageMaker Studio Users
We also improved the sharing and collaboration capabilities from SageMaker Canvas with data science and ML teams. As a business analyst, you can now select which SageMaker Studio user profile(s) you want to share your standard-build models with.

Your data scientists or ML practitioners will receive a similar in-app notification in SageMaker Studio once a model has been shared with them, along with any notes from you. In addition to just reviewing the model, SageMaker Studio users can now also, if needed, update the data transformations in SageMaker Data Wrangler, retrain the model in SageMaker Autopilot, and share back the updated model. SageMaker Studio users can also recommend an alternate model from the list of models in SageMaker Autopilot.

Once SageMaker Studio users share back a model, you receive another notification in SageMaker Canvas that an updated model has been shared back with you. This collaboration between business analysts and data scientists will help democratize ML across organizations by bringing transparency to automated decisions, building trust, and accelerating ML deployments.

Now Available
The enhanced, seamless collaboration capabilities for Amazon SageMaker Canvas, including the ability to bring your ML models built anywhere, are available today in all AWS Regions where SageMaker Canvas is available with no changes to the existing SageMaker Canvas pricing.

Start collaborating and bring your ML model to Amazon SageMaker Canvas today!

— Antje

Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/

Starting in April of 2023 we will be making two changes to Amazon Simple Storage Service (Amazon S3) to put our latest best practices for bucket security into effect automatically. The changes will begin to go into effect in April and will be rolled out to all AWS Regions within weeks.

Once the changes are in effect for a target Region, all newly created buckets in the Region will by default have S3 Block Public Access enabled and access control lists (ACLs) disabled. Both of these options are already console defaults and have long been recommended as best practices. The options will become the default for buckets that are created using the S3 API, S3 CLI, the AWS SDKs, or AWS CloudFormation templates.

As a bit of history, S3 buckets and objects have always been private by default. We added Block Public Access in 2018 and the ability to disable ACLs in 2021 in order to give you more control, and have long been recommending the use of AWS Identity and Access Management (IAM) policies as a modern and more flexible alternative.

In light of this change, we recommend a deliberate and thoughtful approach to creating new buckets that rely on public access or ACLs, and we believe that most applications do not need either one. If your application turns out to be one that does, then you will need to make the changes that I outline below (be sure to review your code, scripts, AWS CloudFormation templates, and any other automation).

What’s Changing
Let’s take a closer look at the changes that we are making:

S3 Block Public Access – All four of the bucket-level settings described in this post (BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets) will be enabled for newly created buckets.

A subsequent attempt to set a bucket policy or an access point policy that grants public access will be rejected with a 403 Access Denied error. If you need public access for a new bucket you can create it as usual and then delete the public access block by calling DeletePublicAccessBlock (you will need s3:PutBucketPublicAccessBlock permission in order to call this function; read Block Public Access to learn more about the functions and the permissions).
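As a minimal sketch (with a hypothetical bucket name), you can inspect the four settings on a newly created bucket and, only if your application truly needs public access, remove the block:

# Inspect the Block Public Access settings on a bucket
$ aws s3api get-public-access-block --bucket amzn-s3-demo-bucket

# Remove the block (requires s3:PutBucketPublicAccessBlock permission)
$ aws s3api delete-public-access-block --bucket amzn-s3-demo-bucket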

ACLs Disabled – The Bucket owner enforced setting will be enabled for newly created buckets, making bucket ACLs and object ACLs ineffective, and ensuring that the bucket owner is the object owner no matter who uploads the object. If you want to enable ACLs for a bucket, you can set the ObjectOwnership parameter to ObjectWriter in your CreateBucket request or you can call DeleteBucketOwnershipControls after you create the bucket. You will need s3:PutBucketOwnershipControls permission in order to use the parameter or to call the function; read Controlling Ownership of Objects and Creating a Bucket to learn more.
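Here is a minimal sketch of both options, again with a hypothetical bucket name:

# Option 1: create the bucket with ACLs enabled from the start
$ aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket \
    --object-ownership ObjectWriter

# Option 2: remove the ownership controls after the bucket has been created
# (both options require s3:PutBucketOwnershipControls permission)
$ aws s3api delete-bucket-ownership-controls --bucket amzn-s3-demo-bucket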

Stay Tuned
We will publish an initial What’s New post when we start to deploy this change and another one when the deployment has reached all AWS Regions. You can also run your own tests to detect the change in behavior.

Jeff;

Introducing Amazon GameLift Anywhere – Run Your Game Servers on Your Own Infrastructure

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/introducing-amazon-gamelift-anywhere-run-your-game-servers-on-your-own-infrastructure/

In 2016, we launched Amazon GameLift, a dedicated hosting solution that securely deploys and automatically scales fleets of session-based multiplayer game servers to meet worldwide player demand.

With Amazon GameLift, you can create and upload a game server build once, replicate, and then deploy across multiple AWS Regions and AWS Local Zones to reach your players with low-latency experiences across the world. GameLift also includes standalone features for low-cost game fleets with GameLift FleetIQ and player matchmaking with GameLift FlexMatch.

Game developers asked us to reduce the wait time to deploy a candidate server build to the cloud each time they needed to test and iterate their game during the development phase. In addition, our customers told us that they often have ongoing bare-metal contracts or on-premises game servers and want the flexibility to use their existing infrastructure with cloud servers.

Today we are announcing the general availability of Amazon GameLift Anywhere, which decouples game session management from the underlying compute resources. With this new release, you can now register and deploy any hardware, including your own local workstations, under a logical construct called an Anywhere Fleet.

Because your local hardware can now be a GameLift-managed server, you can iterate on the server build in your familiar local desktop environment, and any server error can materialize in seconds. You can also set breakpoints in your environment’s debugger, thereby eliminating trial and error and further speeding up the iteration process.

Here are the major benefits for game developers to use GameLift Anywhere.

  • Faster game development – Instantly test and iterate on your local workstation while still leveraging GameLift FlexMatch and Queue services.
  • Hybrid server management – Deploy, operate, and scale dedicated game servers hosted in the cloud or on-premises, all from a single location.
  • Streamlined server operations – Reduce cost and operational complexity by unifying server infrastructure under a single game server orchestration layer.

During the beta period of GameLift Anywhere, many customers gave us feedback. For example, Nitro Games has been an Amazon GameLift customer since 2020 and has used the service for player matchmaking and managing dedicated game servers in the cloud. Daniel Liljeqvist, Senior DevOps Engineer at Nitro Games, said, “With GameLift Anywhere we can easily debug a game server on our local machine, saving us time and making the feedback loop much shorter when we are developing new games and features.”

GameLift Anywhere resources such as locations, fleets, and compute are managed through the same highly secure AWS API endpoints as all AWS services. This also applies to generating the authentication tokens for game server processes, which are valid only for a limited amount of time, for additional security. You can leverage AWS Identity and Access Management (AWS IAM) roles and policies to fully manage access to all the GameLift Anywhere endpoints.
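For example, an inline policy attached to the role used by your on-premises game servers could be limited to the GameLift Anywhere operations they actually call; this is a minimal sketch, the role and policy names are hypothetical, and you should scope Resource to your own fleet ARNs:

$ aws iam put-role-policy \
    --role-name GameLiftAnywhereServerRole \
    --policy-name GameLiftAnywhereAccess \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "gamelift:RegisterCompute",
                "gamelift:GetComputeAuthToken"
            ],
            "Resource": "*"
        }]
    }'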

Getting Started with GameLift Anywhere
Before creating your GameLift fleet in your local hardware, you can create custom locations to run your game builds or scripts. Choose Locations in the left navigation pane of the GameLift console and select Create location.

You can create a custom location that represents your own hardware and use it with your GameLift Anywhere fleet to test your games.

Choose Fleets from the left navigation pane, then choose Create fleet to add your GameLift Anywhere fleet in the desired location.

Choose Anywhere on the Choose compute type step.

Define your fleet details, such as a fleet name and optional items. For more information on settings, see Create a new GameLift fleet in the AWS documentation.

On the Select locations step, select the custom location that you created. The home AWS Region is automatically selected as the Region you are creating the fleet in. You can use the home Region to access and use your resources.

After completing the fleet creation steps to create your Anywhere fleet, you can see active fleets in both the managed EC2 instances and the Anywhere location. You also can integrate remote on-premises hardware by adding more GameLift Anywhere locations, so you can manage your game sessions from one place. To learn more, see Create a new GameLift fleet in the AWS documentation.
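The same setup can also be scripted with the AWS CLI; this is a minimal sketch with a hypothetical location and fleet name (custom location names must start with custom-):

# Create a custom location that represents your own hardware
$ aws gamelift create-location --location-name custom-home-lab

# Create an Anywhere fleet in that custom location
$ aws gamelift create-fleet \
    --name ChannyAnywhereFleet \
    --compute-type ANYWHERE \
    --locations "Location=custom-home-lab"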

You can register your laptop as a compute resource in the fleet that you created. Use the fleet-id created in the previous step and add a compute-name and your laptop’s ip-address.

$ aws gamelift register-compute \
    --compute-name ChannyDevLaptop \
    --fleet-id fleet-12345678-abcdefghi \
    --ip-address 10.1.2.3

Now, you can start a debug session of your game server by retrieving the authorization token for your laptop in the fleet that you created.

$ aws gamelift get-compute-auth-token \
    --fleet-id fleet-12345678-abcdefghi \
    --compute-name ChannyDevLaptop

To run a debug instance of your game server executable, your game server must call InitSDK(). After the process is ready to host a game session, the game server calls ProcessReady(). To learn more, see Integrating games with custom game servers and Testing your integration in the AWS documentation.

Now Available
Amazon GameLift Anywhere is available in all Regions where Amazon GameLift is available. GameLift offers a step-by-step developer guide, API reference guide, and GameLift SDKs. You can also see for yourself how easy it is to test Amazon GameLift using our sample game to get started.

Give it a try, and please send feedback to AWS re:Post for Amazon GameLift or through your usual AWS support contacts.

Channy