Tag Archives: news

AWS named as a Leader in the 2024 Gartner Magic Quadrant for Desktop as a Service (DaaS)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-the-2024-gartner-magic-quadrant-for-desktop-as-a-service-daas/

The 2024 Gartner Magic Quadrant for DaaS (Desktop as a Service) positions AWS as a Leader for the first time. Last year we were recognized as a Challenger. We believe this is a result of our commitment to meet a wide range of customer needs by delivering a diverse portfolio of virtual desktop services with license portability (including Microsoft 365 Apps for Enterprise), our geographic strategy, and operational capabilities focused on cost optimization and automation. Also, our focus on easy-to-use interfaces for managing each aspect of our virtual desktop services means that our customers rarely need to make use of third-party tools.

You can access the complete 2024 Gartner Magic Quadrant for Desktop as a Service (DaaS) to learn more.

2024 Gartner Magic Quadrant for DaaS graphic

AWS DaaS Offerings
Let’s take a quick look at our lineup of DaaS offerings (part of our End User Computing portfolio):

Amazon WorkSpaces Family – Originally launched in early 2014 and enhanced frequently ever since, Amazon WorkSpaces gives you a desktop computing environment running Microsoft Windows, Ubuntu, Amazon Linux, or Red Hat Enterprise Linux in the cloud. Designed to support remote & hybrid workers, knowledge workers, developer workstations, and learning environments, WorkSpaces is available in sixteen AWS Regions, in your choice of six bundle sizes, including the GPU-equipped Graphics G4dn bundle. WorkSpaces Personal gives each user a persistent desktop — perfect for developers, knowledge workers, and others who need to install apps and save files or data. If your users do not need persistent desktops (often the case for contact centers, training, virtual learning, and back office access) you can use WorkSpaces Pools to simplify management and reduce costs. WorkSpaces Core provides managed virtual desktop infrastructure that is designed to work with third-party VDI solutions such as those from Citrix, Leostream, Omnissa, and Workspot.

Amazon WorkSpaces clients are available for desktops and tablets, with web access (Amazon WorkSpaces Secure Browser) and the Amazon WorkSpaces Thin Client providing even more choices. If you have the appropriate Windows 10 or 11 desktop license from Microsoft, you can bring your own license to the cloud (also known as BYOL), where it will run on hardware that is dedicated to you.

You can read about the Amazon WorkSpaces Family and review the WorkSpaces Features to learn more about what WorkSpaces has to offer.
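
If you prefer to provision WorkSpaces programmatically, the same building blocks are available through the AWS SDKs. Here's a minimal sketch using the AWS SDK for Python (Boto3) to create a WorkSpaces Personal desktop; the directory ID, user name, and bundle ID below are placeholders, not values from this post.

import boto3

workspaces = boto3.client("workspaces")

# Placeholder values for illustration only: use your own directory, user, and bundle
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-1234567890",       # your WorkSpaces directory
            "UserName": "jdoe",                  # the user who gets the desktop
            "BundleId": "wsb-1a2b3c4d5",         # the bundle (OS and size) to use
            "WorkspaceProperties": {
                "RunningMode": "AUTO_STOP"       # or "ALWAYS_ON" for an always-running desktop
            },
        }
    ]
)

# Any requests that could not be fulfilled show up in FailedRequests
print(response["PendingRequests"])
print(response["FailedRequests"])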

Amazon AppStream 2.0 – Launched in late 2016, Amazon AppStream gives you instant, streamed access to SaaS applications and desktop applications without writing code or refactoring the application. You can easily scale applications and make them available to users across the globe without the need to manage any infrastructure. A wide range of compute, memory, storage, GPU, and operating system options let you empower remote workers, while also taking advantage of auto-scaling to avoid overprovisioning. Amazon AppStream offers three fleet types: Always-On (instant connections), On-Demand (2 minutes to launch), and Elastic (for unpredictable demand). Pricing varies by type, with per-second and per-hour granularity for Windows and Linux; read Amazon AppStream 2.0 Pricing to learn more.
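
To make the fleet types above a bit more concrete, here's a hedged sketch using the AWS SDK for Python (Boto3) that creates an On-Demand AppStream 2.0 fleet; the fleet and image names are placeholders, and an Always-On or Elastic fleet would use FleetType="ALWAYS_ON" or FleetType="ELASTIC" instead.

import boto3

appstream = boto3.client("appstream")

# Placeholder names for illustration only: use your own fleet name and image
response = appstream.create_fleet(
    Name="example-fleet",
    ImageName="example-appstream-image",      # an AppStream 2.0 image you own or can access
    InstanceType="stream.standard.medium",
    FleetType="ON_DEMAND",                    # "ALWAYS_ON" and "ELASTIC" are the other options
    ComputeCapacity={"DesiredInstances": 2},  # Elastic fleets size with MaxConcurrentSessions instead
)

print(response["Fleet"]["Arn"])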

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from AWS.

Now available: Graviton4-powered memory-optimized Amazon EC2 X8g instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-graviton4-powered-memory-optimized-amazon-ec2-x8g-instances/

Graviton4-powered, memory-optimized X8g instances are now available in ten virtual sizes and two bare metal sizes, with up to 3 TiB of DDR5 memory and up to 192 vCPUs. The X8g instances are our most energy-efficient to date, offering the best price performance and scale-up capability of any comparable EC2 Graviton instance. With a 16 to 1 ratio of memory to vCPU, these instances are designed for Electronic Design Automation, in-memory databases & caches, relational databases, real-time analytics, and memory-constrained microservices. The instances fully encrypt all high-speed physical hardware interfaces and also include additional AWS Nitro System and Graviton4 security features.

Over 50K AWS customers already make use of the existing roster of over 150 Graviton-powered instances. They run a wide variety of applications including Valkey, Redis, Apache Spark, Apache Hadoop, PostgreSQL, MariaDB, MySQL, and SAP HANA Cloud. Because they are available in twelve sizes, the new X8g instances are an even better host for these applications by allowing you to choose between scaling up (using a bigger instance) and scaling out (using more instances), while also providing additional flexibility for existing memory-bound workloads that are currently running on distinct instances.

The Instances
When compared to the previous generation (X2gd) instances, the X8g instances offer 3x more memory, 3x more vCPUs, more than twice as much EBS bandwidth (40 Gbps vs 19 Gbps), and twice as much network bandwidth (50 Gbps vs 25 Gbps).

The Graviton4 processors inside the X8g instances have twice as much L2 cache per core as the Graviton2 processors in the X2gd instances (2 MiB vs 1 MiB) along with 160% higher memory bandwidth, and can deliver up to 60% better compute performance.

The X8g instances are built on the 5th generation of the AWS Nitro System and Graviton4 processors, which incorporate additional security features, including Branch Target Identification (BTI) to protect against low-level attacks that attempt to disrupt control flow at the instruction level. To learn more about this and Graviton4’s other security features, read How Amazon’s New CPU Fights Cybersecurity Threats and watch the re:Invent 2023 AWS Graviton session.

Here are the specs:

| Instance Name   | vCPUs | Memory (DDR5) | EBS Bandwidth  | Network Bandwidth |
|-----------------|-------|---------------|----------------|-------------------|
| x8g.medium      | 1     | 16 GiB        | Up to 10 Gbps  | Up to 12.5 Gbps   |
| x8g.large       | 2     | 32 GiB        | Up to 10 Gbps  | Up to 12.5 Gbps   |
| x8g.xlarge      | 4     | 64 GiB        | Up to 10 Gbps  | Up to 12.5 Gbps   |
| x8g.2xlarge     | 8     | 128 GiB       | Up to 10 Gbps  | Up to 15 Gbps     |
| x8g.4xlarge     | 16    | 256 GiB       | Up to 10 Gbps  | Up to 15 Gbps     |
| x8g.8xlarge     | 32    | 512 GiB       | 10 Gbps        | 15 Gbps           |
| x8g.12xlarge    | 48    | 768 GiB       | 15 Gbps        | 22.5 Gbps         |
| x8g.16xlarge    | 64    | 1,024 GiB     | 20 Gbps        | 30 Gbps           |
| x8g.24xlarge    | 96    | 1,536 GiB     | 30 Gbps        | 40 Gbps           |
| x8g.48xlarge    | 192   | 3,072 GiB     | 40 Gbps        | 50 Gbps           |
| x8g.metal-24xl  | 96    | 1,536 GiB     | 30 Gbps        | 40 Gbps           |
| x8g.metal-48xl  | 192   | 3,072 GiB     | 40 Gbps        | 50 Gbps           |

The instances support ENA, ENA Express, and EFA Enhanced Networking. As you can see from the table above, they provide a generous amount of EBS bandwidth and support all EBS volume types, including io2 Block Express, EBS General Purpose SSD, and EBS Provisioned IOPS SSD.
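
If you'd like to confirm these numbers programmatically, you can query them in a Region where X8g instances are available. Here's a small sketch using the AWS SDK for Python (Boto3); the fields shown are just a subset of what DescribeInstanceTypes returns.

import boto3

# Use a Region where X8g instances are available, such as US East (N. Virginia)
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instance_types(
    InstanceTypes=["x8g.large", "x8g.16xlarge", "x8g.48xlarge"]
)

# Print vCPU count, memory, and network performance for each size
for it in sorted(response["InstanceTypes"], key=lambda t: t["VCpuInfo"]["DefaultVCpus"]):
    print(
        it["InstanceType"],
        it["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB,",
        it["NetworkInfo"]["NetworkPerformance"],
    )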

X8g Instances in Action
Let’s take a look at some applications and use cases that can make use of 16 GiB of memory per vCPU and/or up to 3 TiB per instance:

Databases – X8g instances allow SAP HANA and SAP Data Analytics Cloud to handle larger and more ambitious workloads than before. Running on Graviton4 powered instances, SAP has measured up to 25% better performance for analytical workloads and up to 40% better performance for transactional workloads in comparison to the same workloads running on Graviton3 instances. X8g instances allow SAP to expand their Graviton-based usage to even larger memory bound solutions.

Electronic Design Automation – EDA workloads are central to the process of designing, testing, verifying, and taping out new generations of chips, including Graviton, Trainium, Inferentia, and those that form the building blocks for the Nitro System. AWS and many other chip makers have adopted the AWS Cloud for these workloads, taking advantage of scale and elasticity to supply each phase of the design process with the appropriate amount of compute power. This allows engineers to innovate faster because they are not waiting for results. Here’s a long-term snapshot from one of the clusters that was used to support development of Graviton4 in late 2022 and early 2023. As you can see this cluster runs at massive scale, with peaks as high as 5x normal usage:

You can see bursts of daily and weekly activity, and then a jump in overall usage during the tape-out phase. The instances in the cluster are on the large end of the size spectrum so the peaks represent several hundred thousand cores running concurrently. This ability to spin up compute when we need it and down when we don’t gives us access to unprecedented scale without a dedicated investment in hardware.

The new X8g instances will allow us and our EDA customers to run even more workloads on Graviton processors, reducing costs and decreasing energy consumption, while also helping to get new products to market faster than ever.

Available Now
X8g instances are available today in the US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) AWS Regions in On Demand, Spot, Reserved Instance, Savings Plan, Dedicated Instance, and Dedicated Host form. To learn more, visit the X8g page.
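
Launching an X8g instance works the same way as any other EC2 instance type. Here's a minimal On-Demand sketch using the AWS SDK for Python (Boto3); the AMI ID, key pair, and subnet ID are placeholders for your own values, and the AMI must be built for the arm64 architecture.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values for illustration only
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # an arm64 AMI, for example Amazon Linux 2023 for Graviton
    InstanceType="x8g.large",
    KeyName="my-key-pair",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])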

Data engineering professional certificate: New hands-on specialization by DeepLearning.AI and AWS

Post Syndicated from Betty Zheng (郑予彬) original https://aws.amazon.com/blogs/aws/data-engineering-professional-certificate-new-hands-on-specialization-by-deeplearning-ai-and-aws/

Data engineers play a crucial role in the modern data-driven landscape, managing essential tasks from data ingestion and processing to transformation and serving. Their expertise is particularly valuable in the era of generative AI, where harnessing the value of vast datasets is paramount.

To empower aspiring and experienced data professionals, DeepLearning.AI and Amazon Web Services (AWS) have partnered to launch the Data Engineering Specialization, an advanced professional certificate on Coursera. This comprehensive program covers a wide range of data engineering concepts, tools, and techniques relevant to modern organizations. It’s designed for learners with some experience working with data who are interested in learning the fundamentals of data engineering. The specialization comprises four hands-on courses, each culminating in a Coursera course certificate upon completion.

Specialization overview

This Data Engineering Specialization is a joint initiative by AWS and DeepLearning.AI, a leading provider of world-class AI education founded by renowned machine learning (ML) pioneer Andrew Ng.

Joe Reis, a prominent figure in data engineering and coauthor of the bestselling book Fundamentals of Data Engineering, leads the program as a primary instructor. By providing a foundational framework, the curriculum ensures learners gain a holistic understanding of the data engineering lifecycle, while covering key aspects such as data architecture, orchestration, DataOps, and data management.

Further enhancing the learning experience, the program features hands-on labs and technical assessments hosted on the AWS Cloud. These practical, cloud-based exercises were designed in partnership with AWS technical experts, including Gal Heyne, Navnit Shukla, and Morgan Willis. Learners will apply theoretical concepts using AWS services and tools, such as Amazon Kinesis, AWS Glue, Amazon Simple Storage Service (Amazon S3), and Amazon Redshift, equipping them with hands-on skills and experience.

Specialization highlights

Participants will be introduced to several key learning opportunities.

Acquisition of core skills and strategies

The specialization equips data engineers with the ability to design data engineering solutions for various use cases, select the right technologies for their data architecture, and circumvent potential pitfalls. The skills gained apply universally across platforms and technologies, making the program versatile.

Unparalleled approach to data engineering education

Unlike conventional courses focused on specific technologies, this specialization provides a comprehensive understanding of data engineering fundamentals. It emphasizes the importance of aligning data engineering strategies with broader business goals, fostering a more integrated and effective approach to building and maintaining data solutions.

Holistic understanding of data engineering

By using the insights from the Fundamentals of Data Engineering book, the curriculum offers a well-rounded education that prepares professionals for success in data-driven industries.

Practical skills through AWS cloud labs

The hands-on labs hosted by AWS Partner Vocareum let learners apply the techniques directly in an AWS environment provided with the course. This practical experience is crucial for mastering the intricacies of data engineering and developing the skills needed to excel in the industry.

Why choose this specialization?

  • Structured learning path–The specialization is thoughtfully structured to provide a step-by-step learning journey, from foundational concepts to advanced applications.
  • Expert insights–Gain insights from the authors of Fundamentals of Data Engineering and other industry experts. Learn how to apply practical knowledge to build modern data architecture on the cloud, using cloud services for data engineering.
  • Hands-on experience–Engage in hands-on labs in the AWS Cloud, where you not only learn but also apply the knowledge in real-world scenarios.
  • Comprehensive curriculum–This program encompasses all aspects of the data engineering lifecycle, including data generation in source systems, ingestion, transformation, storage, and serving. It also addresses key undercurrents of data engineering, such as security, data management, and orchestration.

At the end of this specialization, learners will be well-equipped with the necessary skills and expertise to embark on a career in data engineering, an in-demand role at the core of any organization that is looking to use data to create value. Data-centric ML and analytics would not be possible without the foundation of data engineering.

Course modules

The Data Engineering Specialization comprises four courses:

  • Course 1–Introduction to Data Engineering–This foundational module explores the collaborative nature of data engineering, identifying key stakeholders and understanding their requirements. The course delves into a mental framework for building data engineering solutions, emphasizing holistic ecosystem understanding, critical factors like data quality and scalability, and effective requirements gathering. The course then examines the data engineering lifecycle, illustrating interconnections between stages. By showcasing the AWS data engineering stack, the course teaches how to use the right technologies. By the end of this course, learners will have the skills and mindset to tackle data engineering challenges and make informed decisions.
  • Course 2–Source Systems, Data Ingestion, and Pipelines–In this course, data engineers dive deep into the practical aspects of working with diverse data sources, ingestion patterns, and pipeline construction. Learners explore the characteristics of different data formats and the appropriate source systems for generating each type of data, equipping them with the knowledge to design effective data pipelines. The course covers the fundamentals of relational and NoSQL databases, including ACID compliance and CRUD operations, so that engineers learn to interact with a wide range of data source systems. The course covers the significance of cloud networking, resolving database connection issues, and using message queues and streaming platforms—crucial skills for creating strong and scalable data architectures. By mastering the concepts in this course, data engineers will be able to automate data ingestion processes, optimize connectivity, and establish the foundation for successful data engineering projects.
  • Course 3–Data Storage and Queries–This course equips data engineers with principles and best practices for designing robust, efficient data storage and querying solutions. Learners explore the data lake house concept, implementing a medallion-like architecture and using open table formats to build transactional data lakes. The course enhances SQL proficiency by teaching advanced queries, such as aggregations and joins on streaming data, while also exploring data warehouse and data lake capabilities. Learners compare storage performance and discover optimization strategies, like indexing. Data engineers can achieve high performance and scalability in data services by comprehending query execution and processing.
  • Course 4–Data Modeling, Transformation, and Serving–In this capstone course, data engineers explore advanced data modeling techniques, including data vault and star schemas. Learners differentiate between modeling approaches like Inmon and Kimball, gaining the ability to transform data and structure it for optimal analytical and ML use cases. The course equips data engineers with preprocessing skills for textual, image, and tabular data. Learners understand the distinctions between supervised and unsupervised learning, as well as classification and regression tasks, empowering them to design data solutions supporting a range of predictive applications. By mastering these data modeling, transformation, and serving concepts, data engineers can build robust, scalable, and business-aligned data architectures to deliver maximum value.

Enrollment

Whether you’re new to data engineering or looking to enhance your skills, this specialization provides a balanced mix of theory and hands-on experience through four courses, each culminating in a Coursera course certificate.

Embark on your data engineering journey by enrolling in the Data Engineering Specialization on Coursera.

By enrolling in these courses, you’ll also earn the DeepLearning.AI Data Engineering Professional Certificate upon completing all four courses.

Enroll now and take the first step towards mastering data engineering with this comprehensive and practical program, built on the foundation of Fundamentals of Data Engineering and powered by AWS.

Amazon S3 Express One Zone now supports AWS KMS with customer managed keys

Post Syndicated from Elizabeth Fuentes original https://aws.amazon.com/blogs/aws/amazon-s3-express-one-zone-now-supports-aws-kms-with-customer-managed-keys/

Amazon S3 Express One Zone, a high-performance, single-Availability Zone (AZ) S3 storage class, now supports server-side encryption with AWS Key Management Service (KMS) keys (SSE-KMS). S3 Express One Zone already encrypts all objects stored in S3 directory buckets with Amazon S3 managed keys (SSE-S3) by default. Starting today, you can use AWS KMS customer managed keys to encrypt data at rest, with no impact on performance. This new encryption capability gives you an additional option to meet compliance and regulatory requirements when using S3 Express One Zone, which is designed to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications.

S3 directory buckets allow you to specify only one customer managed key per bucket for SSE-KMS encryption. Once the customer managed key is added, you cannot edit it to use a new key. On the other hand, with S3 general purpose buckets, you can use multiple KMS keys either by changing the default encryption configuration of the bucket or during S3 PUT requests. When using SSE-KMS with S3 Express One Zone, S3 Bucket Keys are always enabled. S3 Bucket Keys are free and reduce the number of requests to AWS KMS by up to 99%, optimizing both performance and costs.

Using SSE-KMS with Amazon S3 Express One Zone
To show you this new capability in action, I first create an S3 directory bucket in the Amazon S3 console following the steps to create an S3 directory bucket and use apne1-az4 as the Availability Zone. In Base name, I enter s3express-kms and a suffix that includes the Availability Zone ID, which is automatically added to create the final name. Then, I select the checkbox to acknowledge that Data is stored in a single Availability Zone.

In the Default encryption section, I choose Server-side encryption with AWS Key Management Service keys (SSE-KMS). Under AWS KMS key, I can Choose from your AWS KMS keys, Enter AWS KMS key ARN, or Create a KMS key. For this example, I select an AWS KMS key that I previously created, and then choose Create bucket.

Now, any new object I upload to this S3 directory bucket will be automatically encrypted using my AWS KMS key.
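
The same setup can be scripted. Here's a hedged sketch using the AWS SDK for Python (Boto3) that creates the directory bucket in apne1-az4 and then sets SSE-KMS with a customer managed key as its default encryption; the account ID and KMS key ARN are placeholders, and it's worth double-checking the exact parameters against the S3 Express One Zone documentation.

import boto3

s3 = boto3.client("s3", region_name="ap-northeast-1")

bucket_name = "s3express-kms--apne1-az4--x-s3"  # directory bucket names embed the Availability Zone ID
kms_key_arn = "arn:aws:kms:ap-northeast-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder ARN

# Create the S3 directory bucket in a single Availability Zone
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "apne1-az4"},
        "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
    },
)

# Set SSE-KMS with the customer managed key as the bucket's default encryption;
# S3 Bucket Keys are always enabled for directory buckets
s3.put_bucket_encryption(
    Bucket=bucket_name,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)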

SSE-KMS with Amazon S3 Express One Zone in action
To use SSE-KMS with S3 Express One Zone via the AWS Command Line Interface (AWS CLI), you need an AWS Identity and Access Management (IAM) user or role with the following policy. This policy allows the CreateSession API operation, which is necessary to successfully upload and download encrypted files to and from your S3 directory bucket.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"s3express:CreateSession"
			],
			"Resource": [
				"arn:aws:s3express:*:<account>:bucket/s3express-kms--apne1-az4--x-s3"
			]
		},
		{
			"Effect": "Allow",
			"Action": [
				"kms:Decrypt",
				"kms:GenerateDataKey"
			],
			"Resource": [
				"arn:aws:kms:*:<account>:key/<keyId>"
			]
		}
	]
}

With the PutObject command, I upload a new file named confidential-doc.txt to my S3 directory bucket.

aws s3api put-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt \
--body confidential-doc.txt

If the command succeeds, I receive the following output:

{
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ChecksumCRC32": "0duteA==",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true
}

Checking the object’s properties with the HeadObject command, I see that it’s encrypted using SSE-KMS with the key that I created before:

aws s3api head-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt

I get the following output:

{
    "AcceptRanges": "bytes",
    "LastModified": "2024-08-21T15:29:22+00:00",
    "ContentLength": 5,
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "aws:kms",
    "Metadata": {},
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true,
    "StorageClass": "EXPRESS_ONEZONE"
}

I download the encrypted object with GetObject:

aws s3api get-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt output-confidential-doc.txt

As my session has the necessary permissions, the object is downloaded and decrypted automatically.

{
    "AcceptRanges": "bytes",
    "LastModified": "2024-08-21T15:29:22+00:00",
    "ContentLength": 5,
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "aws:kms",
    "Metadata": {},
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true,
    "StorageClass": "EXPRESS_ONEZONE"
}

For this second test, I use a different IAM user with a policy that is not granted the necessary KMS key permissions to download the object. This attempt fails with an AccessDenied error, demonstrating that the SSE-KMS encryption is functioning as intended.

An error occurred (AccessDenied) when calling the CreateSession operation: Access Denied

This demonstration shows how SSE-KMS works seamlessly with S3 Express One Zone, providing an additional layer of security while maintaining ease of use for authorized users.

Things to know
Getting started – You can enable SSE-KMS for S3 Express One Zone using the Amazon S3 console, AWS CLI, or AWS SDKs. Set the default encryption configuration of your S3 directory bucket to SSE-KMS and specify your AWS KMS key. Remember, you can only use one customer managed key per S3 directory bucket for its lifetime.

Regions – S3 Express One Zone support for SSE-KMS using customer managed keys is available in all AWS Regions where S3 Express One Zone is currently available.

Performance – Using SSE-KMS with S3 Express One Zone does not impact request latency. You’ll continue to experience the same single-digit millisecond data access.

Pricing – You pay AWS KMS charges to generate and retrieve data keys used for encryption and decryption. Visit the AWS KMS pricing page for more details. In addition, when using SSE-KMS with S3 Express One Zone, S3 Bucket Keys are enabled by default for all data plane operations except for CopyObject and UploadPartCopy, and can’t be disabled. This reduces the number of requests to AWS KMS by up to 99%, optimizing both performance and costs.

AWS CloudTrail integration – You can audit SSE-KMS actions on S3 Express One Zone objects using AWS CloudTrail. Learn more about that in my previous blog post.

– Eli.

AWS Weekly Roundup: Oracle Database@AWS, Amazon RDS, AWS PrivateLink, Amazon MSK, Amazon EventBridge, Amazon SageMaker and more

Post Syndicated from Matheus Guimaraes original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-oracle-databaseaws-amazon-rds-aws-privatelink-amazon-msk-amazon-eventbridge-amazon-sagemaker-and-more/

Hello, everyone!

It’s been an interesting week full of AWS news as usual, but also full of vibrant faces filling up the rooms in a variety of events happening this month.

Let’s start by covering some of the releases that have caught my attention this week.

My Top 3 AWS news of the week

Amazon RDS for MySQL zero-ETL integration with Amazon Redshift is now generally available, and it comes with exciting new features. You can now configure zero-ETL integrations in your AWS CloudFormation templates, and you can also set up multiple integrations from a source Amazon RDS for MySQL database to up to five Amazon Redshift warehouses. Lastly, you can now apply data filters that determine which databases and tables get automatically replicated. Read this blog post, where I review aspects of this release and show you how to get started with data filtering, if you want to know more. Incidentally, this release pairs well with another release this week: Amazon Redshift now allows you to alter the sort keys of tables replicated via zero-ETL integrations.

Oracle Database@AWS has been announced as part of a strategic partnership between Amazon Web Services (AWS) and Oracle. This offering allows customers to access Oracle Autonomous Database and Oracle Exadata Database Service directly within AWS, simplifying cloud migration for enterprise workloads. Key features include zero-ETL integration between Oracle and AWS services for real-time data analysis, enhanced security, and optimized performance for hybrid cloud environments. This collaboration addresses the growing demand for multi-cloud flexibility and efficiency. It will be available in preview later in the year, with broader availability in 2025 as it expands to new Regions.

Amazon OpenSearch Service now supports version 2.15, featuring improvements in search performance, query optimization, and AI-powered application capabilities. Key updates include radial search for vector space queries, optimizations for neural sparse and hybrid search, and the ability to enable vector and hybrid search on existing indexes. Additionally, it also introduces new features like a toxicity detection guardrail and an ML inference processor for enriching ingest pipelines. Read this guide to see how you can upgrade your Amazon OpenSearch Service domain.

So simple yet so good
These releases are simple in nature, but have a big impact.

AWS Resource Access Manager (RAM) now supports AWS PrivateLink – With this release, you can now securely share resources across AWS accounts with private connectivity, without exposing traffic to the public internet. This integration allows for more secure and streamlined access to shared services via VPC endpoints, improving network security and simplifying resource sharing across organizations.

AWS Network Firewall now supports AWS PrivateLink – another security quick-win, you can now securely access and manage Network Firewall resources without exposing traffic to the public internet.

AWS IAM Identity Center now enables users to customize their experience – You can set the language and visual mode preferences, including dark mode for improved readability and reduced eye strain. This update supports 12 different languages and enables users to adjust their settings for a more personalized experience when accessing AWS resources through the portal.

Others
Amazon EventBridge Pipes now supports customer managed KMS keys – Amazon EventBridge Pipes now supports customer-managed keys for server-side encryption. This update allows customers to use their own AWS Key Management Service (KMS) keys to encrypt data when transferring between sources and targets, offering more control and security over sensitive event data. The feature enhances security for point-to-point integrations without the need for custom integration code. See instructions on how to configure this in the updated documentation.

AWS Glue Data Catalog now supports enhanced storage optimization for Apache Iceberg tables – This includes automatic removal of unnecessary data files, orphan file management, and snapshot retention. These optimizations help reduce storage costs and improve query performance by continuously monitoring and compacting tables, making it easier to manage large-scale datasets stored in Amazon S3. See this Big Data blog post for a deep dive into this new feature.

Amazon MSK Replicator now supports the replication of Kafka topics across clusters while preserving identical topic names – This simplifies cross-cluster replication processes, allowing users to replicate data across Regions without needing to reconfigure client applications. This reduces setup complexity and enhances support for more seamless failovers in multi-cluster streaming architectures. See this Amazon MSK Replicator developer guide to learn more about it.

Amazon SageMaker introduces sticky session routing for inference – This allows requests from the same client to be directed to the same model instance for the duration of a session, improving consistency and reducing latency, particularly in real-time inference scenarios like chatbots or recommendation systems, where session-based interactions are crucial. Read about how to configure it in this documentation guide.

Events
The AWS GenAI Lofts continue to pop up around the world! This week, developers in San Francisco had the opportunity to attend two very exciting events at the AWS GenAI Loft in San Francisco, including the “Generative AI on AWS” meetup last Tuesday, featuring discussions about extended reality, future AI tools, and more. Then things got playful on Thursday with the demonstration of an Amazon Bedrock-powered Minecraft bot and AI video game battles! If you’re around San Francisco before October 19th, make sure to check out the schedule to see the list of events that you can join.

AWS GenAI Loft San Francisco talk

Make sure to check out the AWS GenAI Loft in Sao Paulo, Brazil, which opened recently, and the AWS GenAI Loft in London, which opens September 30th. You can already start registering for events before they fill up including one called “The future of development” that offers a whole day of targeted learning for developers to help them accelerate their skills.

Our AWS communities have also been very busy throwing incredible events! I was privileged to be a speaker at AWS Community Day Belfast where I got to finally meet all of the organizers of this amazing thriving community in Northern Ireland. If you haven’t been to a community day, I really recommend you check them out! You are sure to leave energized by the dedication and passion from communities leaders like Matt Coulter, Kristi Perreault, Matthew Wilson, Chloe McAteer, and their community members – not to mention the smiles all around. 🙂

AWS Community Belfast organizers and codingmatheus

Certifications
If you’ve been postponing taking an AWS certification exam, now is the perfect time! Register free for the AWS Certified: Associate Challenge before December 12, 2024 and get a 50% discount voucher to take any of the following exams: AWS Certified Solutions Architect – Associate, AWS Certified Developer – Associate, AWS Certified SysOps Administrator – Associate, or AWS Certified Data Engineer – Associate. My colleague Jenna Seybold has posted a collection of study material for each exam; check it out if you’re interested.

Also, don’t forget that the brand new AWS Certified AI Practitioner exam is now available. It is in beta stage, but you can already take it. If you pass it before February 15, 2025, you get an Early Adopter badge to add to your collection.

Conclusion
I hope you enjoyed the news this week!

Keep learning!

Amazon RDS for MySQL zero-ETL integration with Amazon Redshift, now generally available, enables near real-time analytics

Post Syndicated from Matheus Guimaraes original https://aws.amazon.com/blogs/aws/amazon-rds-for-mysql-zero-etl-integration-with-amazon-redshift-now-generally-available-enables-near-real-time-analytics/

Zero-ETL integrations help unify your data across applications and data sources for holistic insights and to break down data silos. They provide a fully managed, no-code, near real-time solution for making petabytes of transactional data available in Amazon Redshift within seconds of data being written into Amazon Relational Database Service (Amazon RDS) for MySQL. This eliminates the need to create your own ETL jobs, simplifying data ingestion, reducing your operational overhead, and potentially lowering your overall data processing costs. Last year, we announced the general availability of zero-ETL integration with Amazon Redshift for Amazon Aurora MySQL-Compatible Edition as well as the availability in preview of Aurora PostgreSQL-Compatible Edition, Amazon DynamoDB, and RDS for MySQL.

I am happy to announce that Amazon RDS for MySQL zero-ETL with Amazon Redshift is now generally available. This release also includes new features such as data filtering, support for multiple integrations, and the ability to configure zero-ETL integrations in your AWS CloudFormation template.

In this post, I’ll show how you can get started with data filtering and consolidating your data across multiple databases and data warehouses. For a step-by-step walkthrough on how to set up zero-ETL integrations, see this blog post for a description of how to set one up for Aurora MySQL-Compatible, which offers a very similar experience.

Data filtering
Most companies, no matter the size, can benefit from adding filtering to their ETL jobs. A typical use case is to reduce data processing and storage costs by selecting only the subset of data needed to replicate from their production databases. Another is to exclude personally identifiable information (PII) from a report’s dataset. For example, a business in healthcare might want to exclude sensitive patient information when replicating data to build aggregate reports analyzing recent patient cases. Similarly, an e-commerce store may want to make customer spending patterns available to their marketing department, but exclude any identifying information. Conversely, there are certain cases when you might not want to use filtering, such as when making data available to fraud detection teams that need all the data in near real time to make inferences. These are just a few examples, so I encourage you to experiment and discover different use cases that might apply to your organization.

There are two ways to enable filtering in your zero-ETL integrations: when you first create the integration or by modifying an existing integration. Either way, you will find this option on the “Source” step of the zero-ETL creation wizard.

Interface for adding data filtering expressions to include or exclude databases or tables.

You apply filters by entering filter expressions that can be used to either include or exclude databases or tables from the dataset in the format of database*.table*. You can add multiple expressions and they will be evaluated in order from left to right.
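
For reference, the same filters can be supplied when creating an integration programmatically. The following is a rough sketch using the AWS SDK for Python (Boto3); the ARNs and names are placeholders, and the exact data filter syntax is worth verifying against the zero-ETL documentation for your source engine.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder ARNs and names for illustration only
response = rds.create_integration(
    IntegrationName="mysql-to-redshift",
    SourceArn="arn:aws:rds:us-east-1:111122223333:db:my-rds-mysql-db",
    TargetArn="arn:aws:redshift-serverless:us-east-1:111122223333:namespace/EXAMPLE-NAMESPACE-ID",
    # Replicate everything in the sales database except its audit table
    DataFilter="include: sales.*, exclude: sales.audit_log",
)

print(response["Status"])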

If you’re modifying an existing integration, the new filtering rules apply from the moment you confirm your changes, and Amazon Redshift will drop tables that are no longer part of the filter.

If you want to dive deeper, I recommend you read this blog post, which goes in depth into how you can set up data filters for Amazon Aurora zero-ETL integrations since the steps and concepts are very similar.

Create multiple zero-ETL integrations from a single database
You are now also able to configure integrations from a single RDS for MySQL database to up to five Amazon Redshift data warehouses. The only requirement is that you must wait for the first integration to finish setting up successfully before adding others.

This allows you to share transactional data with different teams while providing them ownership over their own data warehouses for their specific use cases. For example, you can also use this in conjunction with data filtering to fan out different sets of data to development, staging, and production Amazon Redshift clusters from the same Amazon RDS production database.

Another interesting scenario where this could be really useful is consolidation of Amazon Redshift clusters by using zero-ETL to replicate to different warehouses. You could also use Amazon Redshift materialized views to explore your data, power your Amazon QuickSight dashboards, share data, run training jobs in Amazon SageMaker, and more.

Conclusion
RDS for MySQL zero-ETL integrations with Amazon Redshift allows you to replicate data for near real-time analytics without needing to build and manage complex data pipelines. It is generally available today with the ability to add filter expressions to include or exclude databases and tables from the replicated data sets. You can now also set up multiple integrations from the same source RDS for MySQL database to different Amazon Redshift warehouses or create integrations from different sources to consolidate data into one data warehouse.

This zero-ETL integration is available for RDS for MySQL versions 8.0.32 and later, Amazon Redshift Serverless, and Amazon Redshift RA3 instance types in supported AWS Regions.

In addition to using the AWS Management Console, you can also set up a zero-ETL integration via the AWS Command Line Interface (AWS CLI) and by using an AWS SDK such as boto3, the official AWS SDK for Python.

See the documentation to learn more about working with zero-ETL integrations.

Matheus Guimaraes

Amazon SageMaker HyperPod introduces Amazon EKS support

Post Syndicated from Elizabeth Fuentes original https://aws.amazon.com/blogs/aws/amazon-sagemaker-hyperpod-introduces-amazon-eks-support/

Today, we are pleased to announce Amazon Elastic Kubernetes Service (EKS) support in Amazon SageMaker HyperPod — purpose-built infrastructure engineered with resilience at its core for foundation model (FM) development. This new capability enables customers to orchestrate HyperPod clusters using EKS, combining the power of Kubernetes with Amazon SageMaker HyperPod‘s resilient environment designed for training large models. Amazon SageMaker HyperPod helps efficiently scale across more than a thousand artificial intelligence (AI) accelerators, reducing training time by up to 40%.

Amazon SageMaker HyperPod now enables customers to manage their clusters using a Kubernetes-based interface. This integration allows seamless switching between Slurm and Amazon EKS for optimizing various workloads, including training, fine-tuning, experimentation, and inference. The CloudWatch Observability EKS add-on provides comprehensive monitoring capabilities, offering insights into CPU, network, disk, and other low-level node metrics on a unified dashboard. This enhanced observability extends to resource utilization across the entire cluster, node-level metrics, pod-level performance, and container-specific utilization data, facilitating efficient troubleshooting and optimization.

Launched at re:Invent 2023, Amazon SageMaker HyperPod has become a go-to solution for AI startups and enterprises looking to efficiently train and deploy large scale models. It is compatible with SageMaker’s distributed training libraries, which offer Model Parallel and Data Parallel software optimizations that help reduce training time by up to 20%. SageMaker HyperPod automatically detects and repairs or replaces faulty instances, enabling data scientists to train models uninterrupted for weeks or months. This allows data scientists to focus on model development, rather than managing infrastructure.

The integration of Amazon EKS with Amazon SageMaker HyperPod uses the advantages of Kubernetes, which has become popular for machine learning (ML) workloads due to its scalability and rich open-source tooling. Organizations often standardize on Kubernetes for building applications, including those required for generative AI use cases, as it allows reuse of capabilities across environments while meeting compliance and governance standards. Today’s announcement enables customers to scale and optimize resource utilization across more than a thousand AI accelerators. This flexibility enhances the developer experience, containerized app management, and dynamic scaling for FM training and inference workloads.

Amazon EKS support in Amazon SageMaker HyperPod strengthens resilience through deep health checks, automated node recovery, and job auto-resume capabilities, ensuring uninterrupted training for large scale and/or long-running jobs. Job management can be streamlined with the optional HyperPod CLI, designed for Kubernetes environments, though customers can also use their own CLI tools. Integration with Amazon CloudWatch Container Insights provides advanced observability, offering deeper insights into cluster performance, health, and utilization. Additionally, data scientists can use tools like Kubeflow for automated ML workflows. The integration also includes Amazon SageMaker managed MLflow, providing a robust solution for experiment tracking and model management.

At a high level, an Amazon SageMaker HyperPod cluster is created by the cloud admin using the HyperPod cluster API and is fully managed by the HyperPod service, removing the undifferentiated heavy lifting involved in building and optimizing ML infrastructure. Amazon EKS is used to orchestrate these HyperPod nodes, similar to how Slurm orchestrates HyperPod nodes, providing customers with a familiar Kubernetes-based administrator experience.

Let’s explore how to get started with Amazon EKS support in Amazon SageMaker HyperPod
I start by preparing the scenario, checking the prerequisites, and creating an Amazon EKS cluster with a single AWS CloudFormation stack following the Amazon SageMaker HyperPod EKS workshop, configured with VPC and storage resources.

To create and manage Amazon SageMaker HyperPod clusters, I can use either the AWS Management Console or AWS Command Line Interface (AWS CLI). Using the AWS CLI, I specify my cluster configuration in a JSON file. I choose the Amazon EKS cluster created previously as the orchestrator of the SageMaker HyperPod cluster. Then, I create the cluster worker nodes, which I call “worker-group-1”, with a private subnet, NodeRecovery set to Automatic to enable automatic node recovery, and OnStartDeepHealthChecks set to InstanceStress and InstanceConnectivity to enable deep health checks.

cat > eli-cluster-config.json << EOL
{
    "ClusterName": "example-hp-cluster",
    "Orchestrator": {
        "Eks": {
            "ClusterArn": "${EKS_CLUSTER_ARN}"
        }
    },
    "InstanceGroups": [
        {
            "InstanceGroupName": "worker-group-1",
            "InstanceType": "ml.p5.48xlarge",
            "InstanceCount": 32,
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://${BUCKET_NAME}",
                "OnCreate": "on_create.sh"
            },
            "ExecutionRole": "${EXECUTION_ROLE}",
            "ThreadsPerCore": 1,
            "OnStartDeepHealthChecks": [
                "InstanceStress",
                "InstanceConnectivity"
            ]
        },
  ....
    ],
    "VpcConfig": {
        "SecurityGroupIds": [
            "$SECURITY_GROUP"
        ],
        "Subnets": [
            "$SUBNET_ID"
        ]
    },
    "ResilienceConfig": {
        "NodeRecovery": "Automatic"
    }
}
EOL

You can add InstanceStorageConfigs to provision and mount additional Amazon EBS volumes on HyperPod nodes.

To create the cluster using the SageMaker HyperPod APIs, I run the following AWS CLI command:

aws sagemaker create-cluster \
--cli-input-json file://eli-cluster-config.json

The AWS command returns the ARN of the new HyperPod cluster.

{
"ClusterArn": "arn:aws:sagemaker:us-east-2:ACCOUNT-ID:cluster/wccy5z4n4m49"
}

I then verify the HyperPod cluster status in the SageMaker Console, waiting until the status changes to InService.

Alternatively, you can check the cluster status using the AWS CLI running the describe-cluster command:

aws sagemaker describe-cluster --cluster-name my-hyperpod-cluster
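
Equivalently, here's a short sketch using the AWS SDK for Python (Boto3) that polls the cluster status until it reaches InService, using the same cluster name as the CLI example above.

import time
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-2")

# Poll the HyperPod cluster status until it is ready (or has failed)
while True:
    status = sagemaker.describe_cluster(ClusterName="my-hyperpod-cluster")["ClusterStatus"]
    print("Cluster status:", status)
    if status in ("InService", "Failed"):
        break
    time.sleep(30)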

Once the cluster is ready, I can access the SageMaker HyperPod cluster nodes. For most operations, I can use kubectl commands to manage resources and jobs from my development environment, using the full power of Kubernetes orchestration while benefiting from SageMaker HyperPod’s managed infrastructure. On this occasion, for advanced troubleshooting or direct node access, I use AWS Systems Manager (SSM) to log into individual nodes, following the instructions in the Access your SageMaker HyperPod cluster nodes page.

To run jobs on the SageMaker HyperPod cluster orchestrated by EKS, I follow the steps outlined in the Run jobs on SageMaker HyperPod cluster through Amazon EKS page. You can use the HyperPod CLI and the native kubectl command to find available HyperPod clusters and submit training jobs (Pods). For managing ML experiments and training runs, you can use Kubeflow Training Operator, Kueue, and Amazon SageMaker-managed MLflow.

Finally, in the SageMaker Console, I can view the Status and Kubernetes version of recently added EKS clusters, providing a comprehensive overview of my SageMaker HyperPod environment.

And I can monitor cluster performance and health insights using Amazon CloudWatch Container Insights.

Things to know
Here are some key things you should know about Amazon EKS support in Amazon SageMaker HyperPod:

Resilient Environment – This integration provides a more resilient training environment with deep health checks, automated node recovery, and job auto-resume. SageMaker HyperPod automatically detects, diagnoses, and recovers from faults, allowing you to continually train foundation models for weeks or months without disruption. This can reduce training time by up to 40%.

Enhanced GPU Observability – Amazon CloudWatch Container Insights provides detailed metrics and logs for your containerized applications and microservices. This enables comprehensive monitoring of cluster performance and health.

Scientist-Friendly Tool – This launch includes a custom HyperPod CLI for job management, Kubeflow Training Operators for distributed training, Kueue for scheduling, and integration with SageMaker Managed MLflow for experiment tracking. It also works with SageMaker’s distributed training libraries, which provide Model Parallel and Data Parallel optimizations to significantly reduce training time. These libraries, combined with auto-resumption of jobs, enable efficient and uninterrupted training of large models.

Flexible Resource Utilization – This integration enhances developer experience and scalability for FM workloads. Data scientists can efficiently share compute capacity across training and inference tasks. You can use your existing Amazon EKS clusters or create and attach new ones to HyperPod compute, and bring your own tools for job submission, queuing, and monitoring.

To get started with Amazon SageMaker HyperPod on Amazon EKS, you can explore resources such as the SageMaker HyperPod EKS Workshop, the aws-do-hyperpod project, and the awsome-distributed-training project. This release is generally available in the AWS Regions where Amazon SageMaker HyperPod is available except Europe (London). For pricing information, visit the Amazon SageMaker Pricing page.

This blog post was a collaborative effort. I would like to thank Manoj Ravi, Adhesh Garg, Tomonori Shimomura, Alex Iankoulski, Anoop Saha, and the entire team for their significant contributions in compiling and refining the information presented here. Their collective expertise was crucial in creating this comprehensive article.

– Eli.

Stability AI’s best image generating models now in Amazon Bedrock

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/stability-ais-best-image-generating-models-now-in-amazon-bedrock/

Starting today, you can use three new text-to-image models from Stability AI in Amazon Bedrock: Stable Image Ultra, Stable Diffusion 3 Large, and Stable Image Core. These models greatly improve performance in multi-subject prompts, image quality, and typography and can be used to rapidly generate high-quality visuals for a wide range of use cases across marketing, advertising, media, entertainment, retail, and more.

These models excel in producing images with stunning photorealism, boasting exceptional detail, color, and lighting, addressing common challenges like rendering realistic hands and faces. The models’ advanced prompt understanding allows them to interpret complex instructions involving spatial reasoning, composition, and style.

The three new Stability AI models available in Amazon Bedrock cover different use cases:

Stable Image Ultra – Produces the highest quality, photorealistic outputs perfect for professional print media and large format applications. Stable Image Ultra excels at rendering exceptional detail and realism.

Stable Diffusion 3 Large – Strikes a balance between generation speed and output quality. Ideal for creating high-volume, high-quality digital assets like websites, newsletters, and marketing materials.

Stable Image Core – Optimized for fast and affordable image generation, great for rapidly iterating on concepts during ideation.

This table summarizes the models’ key features:

| Features          | Stable Image Ultra               | Stable Diffusion 3 Large                        | Stable Image Core                                                    |
|-------------------|----------------------------------|-------------------------------------------------|----------------------------------------------------------------------|
| Parameters        | 16 billion                       | 8 billion                                       | 2.6 billion                                                          |
| Input             | Text                             | Text or image                                   | Text                                                                 |
| Typography        | Tailored for large-scale display | Tailored for large-scale display                | Versatility and readability across different sizes and applications |
| Visual aesthetics | Photorealistic image output      | Highly realistic with finer attention to detail | Good rendering; not as detail-oriented                               |

One of the key improvements of Stable Image Ultra and Stable Diffusion 3 Large compared to Stable Diffusion XL (SDXL) is text quality in generated images, with fewer errors in spelling and typography thanks to its innovative Diffusion Transformer architecture, which implements two separate sets of weights for image and text but enables information flow between the two modalities.

Here are a few images created with these models.

Stable Image Ultra – Prompt: photo, realistic, a woman sitting in a field watching a kite fly in the sky, stormy sky, highly detailed, concept art, intricate, professional composition.

Stable Diffusion 3 Large – Prompt: comic-style illustration, male detective standing under a streetlamp, noir city, wearing a trench coat, fedora, dark and rainy, neon signs, reflections on wet pavement, detailed, moody lighting.

Stable Image Core – Prompt: professional 3d render of a white and orange sneaker, floating in center, hovering, floating, high quality, photorealistic.

Use cases for the new Stability AI models in Amazon Bedrock
Text-to-image models offer transformative potential for businesses across various industries and can significantly streamline creative workflows in marketing and advertising departments, enabling rapid generation of high-quality visuals for campaigns, social media content, and product mockups. By expediting the creative process, companies can respond more quickly to market trends and reduce time-to-market for new initiatives. Additionally, these models can enhance brainstorming sessions, providing instant visual representations of concepts that can spark further innovation.

For e-commerce businesses, AI-generated images can help create diverse product showcases and personalized marketing materials at scale. In the realm of user experience and interface design, these tools can quickly produce wireframes and prototypes, accelerating the design iteration process. The adoption of text-to-image models can lead to significant cost savings, increased productivity, and a competitive edge in visual communication across various business functions.

Here are some example use cases across different industries:

Advertising and Marketing

  • Stable Image Ultra for luxury brand advertising and photorealistic product showcases
  • Stable Diffusion 3 Large for high-quality product marketing images and print campaigns
  • Use Stable Image Core for rapid A/B testing of visual concepts for social media ads

E-commerce

  • Stable Image Ultra for high-end product customization and made-to-order items
  • Stable Diffusion 3 Large for most product visuals across an e-commerce site
  • Stable Image Core to quickly generate product images and keep listings up-to-date

Media and Entertainment

  • Stable Image Ultra for ultra-realistic key art, marketing materials, and game visuals
  • Stable Diffusion 3 Large for environment textures, character art, and in-game assets
  • Stable Image Core for rapid prototyping and concept art exploration

Now, let’s see these new models in action, first using the AWS Management Console, then with the AWS Command Line Interface (AWS CLI) and AWS SDKs.

Using the new Stability AI models in the Amazon Bedrock console
In the Amazon Bedrock console, I choose Model access from the navigation pane to enable access to the three new models in the Stability AI section.

Now that I have access, I choose Image in the Playgrounds section of the navigation pane. For the model, I choose Stability AI and Stable Image Ultra.

As prompt, I type:

A stylized picture of a cute old steampunk robot with in its hands a sign written in chalk that says "Stable Image Ultra in Amazon Bedrock".

I leave all other options to their default values and choose Run. After a few seconds, I get what I asked. Here’s the image:

A stylized picture of a cute old steampunk robot with in its hands a sign written in chalk that says "Stable Image Ultra in Amazon Bedrock".

Using Stable Image Ultra with the AWS CLI
While I am still in the console Image playground, I choose the three small dots in the corner of the playground window and then View API request. In this way, I can see the AWS Command Line Interface (AWS CLI) command equivalent to what I just did in the console:

aws bedrock-runtime invoke-model \
--model-id stability.stable-image-ultra-v1:0 \
--body "{\"prompt\":\"A stylized picture of a cute old steampunk robot with in its hands a sign written in chalk that says \\\"Stable Image Ultra in Amazon Bedrock\\\".\",\"mode\":\"text-to-image\",\"aspect_ratio\":\"1:1\",\"output_format\":\"jpeg\"}" \
--cli-binary-format raw-in-base64-out \
--region us-west-2 \
invoke-model-output.txt

To use Stable Image Core or Stable Diffusion 3 Large, I can replace the model ID.

The previous command outputs the image in Base64 format inside a JSON object in a text file.

To get the image with a single command, I send the output JSON to standard output and use the jq tool to extract the Base64-encoded image so that it can be decoded on the fly. The result is written to the img.png file. Here’s the full command:

aws bedrock-runtime invoke-model \
--model-id stability.stable-image-ultra-v1:0 \
--body "{\"prompt\":\"A stylized picture of a cute old steampunk robot with in its hands a sign written in chalk that says \\\"Stable Image Ultra in Amazon Bedrock\\\".\",\"mode\":\"text-to-image\",\"aspect_ratio\":\"1:1\",\"output_format\":\"jpeg\"}" \
--cli-binary-format raw-in-base64-out \
--region us-west-2 \
/dev/stdout | jq -r '.images[0]' | base64 --decode > img.png

Using Stable Image Ultra with AWS SDKs
Here’s how you can use Stable Image Ultra with the AWS SDK for Python (Boto3). This simple application interactively asks for a text-to-image prompt and then calls Amazon Bedrock to generate the image.

import base64
import boto3
import json
import os

MODEL_ID = "stability.stable-image-ultra-v1:0"

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

print("Enter a prompt for the text-to-image model:")
prompt = input()

# Build the request body; all other generation options keep their default values.
body = {
    "prompt": prompt,
    "mode": "text-to-image"
}
response = bedrock_runtime.invoke_model(modelId=MODEL_ID, body=json.dumps(body))

# The model returns a JSON document containing the generated image encoded in Base64.
model_response = json.loads(response["body"].read())
base64_image_data = model_response["images"][0]

# Find the first available img_<number>.png file name in the output directory,
# creating the directory if it does not exist yet.
i, output_dir = 1, "output"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
while os.path.exists(os.path.join(output_dir, f"img_{i}.png")):
    i += 1

# Decode the Base64 data and write the image to disk.
image_data = base64.b64decode(base64_image_data)

image_path = os.path.join(output_dir, f"img_{i}.png")
with open(image_path, "wb") as file:
    file.write(image_data)

print(f"The generated image has been saved to {image_path}")

The application writes the resulting image to an output directory, creating it if it doesn’t already exist. To avoid overwriting existing files, the code looks for existing files and uses the first available file name of the form img_<number>.png.

More examples of how to use Stable Diffusion models are available in the Code Library of the AWS Documentation.

Customer voices
Learn from Ken Hoge, Global Alliance Director, Stability AI, how Stable Diffusion models are reshaping the industry from text-to-image to video, audio, and 3D, and how Amazon Bedrock empowers customers with an all-in-one, secure, and scalable solution.

Step into a world where reading comes alive with Nicolette Han, Product Owner, Stride Learning. With support from Amazon Bedrock and AWS, Stride Learning’s Legend Library is transforming how young minds engage with and comprehend literature, using AI to create stunning, safe illustrations for children’s stories.

Things to know
The new Stability AI models – Stable Image Ultra,  Stable Diffusion 3 Large, and Stable Image Core – are available today in Amazon Bedrock in the US West (Oregon) AWS Region. With this launch, Amazon Bedrock offers a broader set of solutions to boost your creativity and accelerate content generation workflows. See the Amazon Bedrock pricing page to understand costs for your use case.

You can find more information on Stable Diffusion 3 in the research paper that describes in detail the underlying technology.

To start, see the Stability AI models section of the Amazon Bedrock User Guide. To discover how others are using generative AI in their solutions and learn with deep-dive technical content, visit community.aws.

Danilo

The latest AWS Heroes have arrived – September 2024

Post Syndicated from Taylor Jacobsen original https://aws.amazon.com/blogs/aws/the-latest-aws-heroes-have-arrived-september-2024/

The AWS Heroes program recognizes outstanding individuals who are making meaningful contributions within the AWS community. These technical experts generously share their insights, best practices, and innovative solutions to help others create efficiencies and build faster on AWS. Heroes are thought leaders who have demonstrated a commitment to empowering the broader AWS community through their significant contributions and leadership.

Meet our newest cohort of AWS Heroes!

Faye Ellis – London, United Kingdom

Community Hero Faye Ellis is a Principal Training Architect at Pluralsight, where she specializes in helping organizations and individuals develop their AWS skills, and has taught AWS to millions of people worldwide. She is also committed to making a rewarding cloud career achievable for people all around the world. With over a decade of experience in the IT industry, she uses her expertise in designing and supporting mission-critical systems to help explain cloud technology in a way that is accessible and easy to understand.

Ilanchezhian Ganesamurthy – Chennai, India

Community Hero Ilanchezhian Ganesamurthy is currently the Director – Generative AI (GenAI) and Conversational AI (CAI) at Tietoevry, a leading Nordic IT services company. Since 2015, he has been actively involved with the AWS User Group India, and in 2018, he took on the role of co-organizer for the AWS User Group Chennai, which has grown to 4,800 members. Ilan champions diversity and inclusion, recognizing the importance of fostering the next generation of cloud experts. He is a strong supporter of AWS Cloud Clubs, leveraging his industry connections to help the Chennai chapter organize events and networking to empower aspiring cloud professionals.

Jaehyun Shin – Seoul, Korea

Community Hero Jaehyun Shin is a Site Reliability Engineer at MUSINSA, a Korean online fashion retailer. In 2017, he joined the AWS Korea User Group (AWSKRUG), where he has since served as a co-owner of the AWSKRUG Serverless and Seongsu Groups. During this time, Jaehyun was an AWS Community Builder, leveraging his expertise to mentor and nurture new Korea User Group leaders. He has also been an active event organizer for AWS Community Days and hands-on labs, further strengthening the AWS community in Korea.

Jimmy Dahlqvist – Malmö, Sweden

Serverless Hero Jimmy Dahlqvist is a Lead Cloud Architect and Advisor at Sigma Technology Cloud, an AWS Advanced Tier Services Partner and one of Sweden’s major consulting companies. In 2024, he started Serverless-Handbook as the home base of all his serverless adventures. Jimmy is also an AWS Certification Subject Matter Expert, and regularly participates in workshops ensuring AWS Certifications are fair for everyone.

Lee Gilmore – Newcastle, United Kingdom

Serverless Hero Lee Gilmore is a Principal Solutions Architect at Leighton, an AWS Consulting Partner based in Newcastle, North East England. With over two decades of experience in the tech industry, he has spent the past ten years specializing in serverless and cloud-native technologies. Lee is passionate about domain-driven design and event-driven architectures, which are central to his work. Additionally, he regularly authors in-depth articles on serverlessadvocate.com, shares full open source solutions on GitHub, and frequently speaks at both local and international events.

Maciej Walkowiak – Berlin, Germany

DevTools Hero Maciej Walkowiak is an independent Java consultant based in Berlin, Germany. For nearly two decades, he has been helping companies ranging from startups to enterprises in architecting and developing fast, scalable, and easy-to-maintain Java applications. The great majority of these applications are based on the Spring Framework and Spring Boot, which are his favorite tools for building software. Since 2015, he has been deeply involved in the Spring ecosystem, and leads the Spring Cloud AWS project on GitHub—the bridge between AWS APIs and the Spring programming model.

Minoru Onda – Tokyo, Japan

Community Hero Minoru Onda is a Technology Evangelist at KDDI Agile Development Center Corporation (KAG). He joined the Japan AWS User Group (JAWS-UG) in 2021 and now leads the operations of three communities: the Tokyo chapter, the SRE chapter, and NW-JAWS. In recent years, he has been focusing on utilizing Generative AI on AWS, and co-authored an introductory technical book on Amazon Bedrock with community members, which was published in Japan.

Learn More

Visit the AWS Heroes website if you’d like to learn more about the AWS Heroes program or to connect with a Hero near you.

Taylor

AWS named as a Leader in the first Gartner Magic Quadrant for AI Code Assistants

Post Syndicated from Channy Yun (윤석찬) original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-the-first-gartner-magic-quadrant-for-ai-code-assistants/

On August 19th, 2024, Gartner published its first Magic Quadrant for AI Code Assistants, which includes Amazon Web Services (AWS). Amazon Q Developer qualified for inclusion, having launched in general availability on April 30, 2024. AWS was ranked as a Leader for its ability to execute and completeness of vision.

We believe this Leader placement reflects our rapid pace of innovation, which makes the whole software development lifecycle easier and increases developer productivity with enterprise-grade access controls and security.

The Gartner Magic Quadrant evaluates 12 AI code assistants based on their Ability to Execute, which measures a vendor’s capacity to deliver its products or services effectively, and Completeness of Vision, which assesses a vendor’s understanding of the market and its strategy for future growth, according to Gartner’s report, How Markets and Vendors Are Evaluated in Gartner Magic Quadrants.

Here is the graphical representation of the 2024 Gartner Magic Quadrant for AI Code Assistants.

Here is the quote from Gartner’s report:

Amazon Web Services (AWS) is a Leader in this Magic Quadrant. Its product, Amazon Q Developer (formerly CodeWhisperer), is focused on assisting and automating developer tasks using AI. For example, Amazon Q Developer helps with code suggestions and transformation, testing and security, as well as feature development. Its operations are geographically diverse, and its clients are of all sizes. AWS is focused on delivering AI-driven solutions that enhance the software development life cycle (SDLC), automating complex tasks, optimizing performance, ensuring security, and driving innovation.

My team focuses on creating content about Amazon Q Developer that directly supports software developers’ jobs-to-be-done, enabled and enhanced by generative AI, in the Amazon Q Developer Center and on Community.aws.

I’ve had the chance to talk with our customers to ask why they choose Amazon Q Developer. They said it helps them accelerate and complete tasks across much more of the SDLC than general AI code assistants: coding, testing, and upgrading, as well as troubleshooting, performing security scanning and fixes, optimizing AWS resources, and creating data engineering pipelines.

Here are the highlights that customers talked about more often:

Available everywhere you need it – You can use Amazon Q Developer in any of the following integrated development environments (IDEs): Visual Studio Code, JetBrains IDEs, AWS Toolkit with Amazon Q, JupyterLab, Amazon EMR Studio, Amazon SageMaker Studio, and AWS Glue Studio. You can also use Amazon Q Developer in the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS documentation, AWS Support, AWS Console Mobile Application, Amazon CodeCatalyst, or through Slack and Microsoft Teams with AWS Chatbot. According to Safe Software, “Amazon Q knows all the ways to make use of the many tools that AWS provides. Because we are now able to accomplish more, we will be able to extend our automations into other AWS services and make use of Amazon Q to help us get there.” To learn more, visit Amazon Q Developer features and Amazon Q Developer customers.

Customizing code recommendations – You can get code recommendations based on your internal code base. Amazon Q Developer accelerates onboarding to a new code base and generates even more relevant inline code recommendations and chat responses (in preview) by making it aware of your internal libraries, APIs, best practices, and architectural patterns. Your organization’s administrators can securely connect Amazon Q Developer to your internal code bases to create multiple customizations. According to National Australia Bank (NAB), the bank has now added specific suggestions, tailored to the NAB coding standards, using the Amazon Q customization capability. They’re seeing increased acceptance rates of 60 percent with customization. To learn more, visit Customizing suggestions in the AWS documentation.

Upgrading your Java applications – Amazon Q Developer Agent for code transformation automates the process of upgrading and transforming your legacy Java applications. According to an internal Amazon study, Amazon has migrated tens of thousands of production applications from Java 8 or 11 to Java 17 with assistance from Amazon Q Developer. This represents a savings of over 4,500 years of development work for over a thousand developers (when compared to manual upgrades) and performance improvements worth $260 million in annual cost savings. Transformations from Windows to cross-platform .NET are also coming soon! To learn more, visit Upgrading language versions with the Amazon Q Developer Agent for code transformation in the AWS documentation.

Access the complete 2024 Gartner Magic Quadrant for AI Code Assistants report to learn more.

Channy

Gartner Magic Quadrant for AI Code Assistants, Arun Batchu, Philip Walsh, Matt Brasier, Haritha Khandabattu, 19 August, 2024.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

AWS Weekly Roundup: AWS Parallel Computing Service, Amazon EC2 status checks, and more (September 2, 2024)

Post Syndicated from Esra Kayabali original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-parallel-computing-service-amazon-ec2-status-checks-and-more-september-2-2024/

With the arrival of September, AWS re:Invent 2024 is now three months away, and I am very excited about the upcoming services and announcements at the conference. I remember attending re:Invent 2019, just before the COVID-19 pandemic. It was the biggest in-person re:Invent with 60,000+ attendees, and it was my second one. It was amazing to be in that atmosphere! Registration is now open for AWS re:Invent 2024. Come join us in Las Vegas for five exciting days of keynotes, breakout sessions, chalk talks, interactive learning opportunities, and career-changing connections!

Now let’s look at last week’s new announcements.

Last week’s launches
Here are the launches that got my attention.

Announcing AWS Parallel Computing Service – AWS Parallel Computing Service (AWS PCS) is a new managed service that lets you run and scale high performance computing (HPC) workloads on AWS. You can build scientific and engineering models and run simulations using a fully managed Slurm scheduler with built-in technical support and a rich set of customization options. Tailor your HPC environment to your specific needs and integrate it with your preferred software stack. Build complete HPC clusters that integrate compute, storage, networking, and visualization resources, and seamlessly scale from zero to thousands of instances. To learn more, visit AWS Parallel Computing Service and read Channy’s blog post.

Amazon EC2 status checks now support reachability health of attached EBS volumes – You can now use Amazon EC2 status checks to directly monitor if the Amazon EBS volumes attached to your instances are reachable and able to complete I/O operations. With this new status check, you can quickly detect attachment issues or volume impairments that may impact the performance of your applications running on Amazon EC2 instances. You can further integrate these status checks within Auto Scaling groups to monitor the health of EC2 instances and replace impacted instances to ensure high availability and reliability of your applications. Attached EBS status checks can be used along with the instance status and system status checks to monitor the health of your instances. To learn more, refer to the Status checks for Amazon EC2 instances documentation.
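If you want to poll the new check programmatically, here is a minimal sketch with the AWS SDK for Python (Boto3). The describe_instance_status call and the existing instance and system checks are standard; the AttachedEbsStatus field name and the instance ID are assumptions for illustration, so verify the exact response shape in the current EC2 API reference.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instance_status(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance ID
    IncludeAllInstances=True,             # also report instances that are not running
)

for status in response["InstanceStatuses"]:
    instance_id = status["InstanceId"]
    system_check = status["SystemStatus"]["Status"]
    instance_check = status["InstanceStatus"]["Status"]
    # Assumption: the new attached EBS check is reported alongside the existing checks.
    ebs_check = status.get("AttachedEbsStatus", {}).get("Status", "not-reported")
    print(f"{instance_id}: system={system_check}, instance={instance_check}, ebs={ebs_check}")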

Amazon QuickSight now supports sharing views of embedded dashboards – You can now share views of embedded dashboards in Amazon QuickSight. This feature lets you add more collaborative capabilities to your application with embedded QuickSight dashboards, and you can also enable personalization capabilities such as bookmarks for anonymous users. Using dashboard or console embedding with the QuickSight Embedding SDK, you can generate a shareable link to your application page that encapsulates the QuickSight reference and displays only your changes while staying within the application. QuickSight Readers can then send this shareable link to their peers. When a peer accesses the shared link, they are taken to the page in the application that contains the embedded QuickSight dashboard. For more information, refer to the Embedded view documentation.

Amazon Q Business launches IAM federation for user identity authentication – Amazon Q Business is a fully managed service that deploys a generative AI business expert for your enterprise data. You can use the Amazon Q Business IAM federation feature to connect your applications directly to your identity provider to source user identity and user attributes for these applications. Previously, you had to sync your user identity information from your identity provider into AWS IAM Identity Center, and then connect your Amazon Q Business applications to IAM Identity Center for user authentication. At launch, Amazon Q Business IAM federation will support the OpenID Connect (OIDC) and SAML2.0 protocols for identity provider connectivity. To learn more, visit Amazon Q Business documentation.

Amazon Bedrock now supports cross-Region inference – Amazon Bedrock announces support for cross-Region inference, an optional feature that enables you to seamlessly manage traffic bursts by utilizing compute across different AWS Regions. If you are using on-demand mode, you’ll be able to get higher throughput limits (up to 2x your allocated in-Region quotas) and enhanced resilience during periods of peak demand by using cross-Region inference. By opting in, you no longer have to spend time and effort predicting demand fluctuations. Instead, cross-Region inference dynamically routes traffic across multiple Regions, ensuring optimal availability for each request and smoother performance during high-usage periods. You can control where your inference data flows by selecting from a pre-defined set of Regions, helping you comply with applicable data residency requirements and sovereignty laws. Find the list at Supported Regions and models for cross-Region inference. To get started, refer to the Amazon Bedrock documentation or this Machine Learning blog.
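To use cross-Region inference from code, you pass an inference profile ID where you would normally pass a model ID. Here is a small sketch with the AWS SDK for Python (Boto3); the Converse API call is standard, but the inference profile ID shown is only a placeholder, so look up the profiles available to your account in the Amazon Bedrock console or documentation.

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder inference profile ID; using a profile instead of a plain model ID
# lets Bedrock route the request across the Regions defined by that profile.
inference_profile_id = "us.anthropic.claude-3-5-sonnet-20240620-v1:0"

response = bedrock_runtime.converse(
    modelId=inference_profile_id,
    messages=[
        {"role": "user", "content": [{"text": "Explain cross-Region inference in one sentence."}]}
    ],
)

print(response["output"]["message"]["content"][0]["text"])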

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

We launched existing services and instance types in additional Regions:

Other AWS events
AWS GenAI Lofts are collaborative spaces and immersive experiences that showcase AWS’s cloud and AI expertise, while providing startups and developers with hands-on access to AI products and services, exclusive sessions with industry leaders, and valuable networking opportunities with investors and peers. Find a GenAI Loft location near you and don’t forget to register.

Gen AI loft workshop

credit: Antje Barth

Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:

AWS Summits are free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. AWS Summits for this year are coming to an end. There are three left that you can still register for: Jakarta (September 5), Toronto (September 11), and Ottawa (October 9).

AWS Community Days feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world. While AWS Summits 2024 are almost over, AWS Community Days are in full swing. Upcoming AWS Community Days are in Belfast (September 6), SF Bay Area (September 13), where our own Antje Barth is a keynote speaker, Argentina (September 14), and Armenia (September 14).

Browse all upcoming AWS led in-person and virtual events here.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Esra

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Announcing AWS Parallel Computing Service to run HPC workloads at virtually any scale

Post Syndicated from Channy Yun (윤석찬) original https://aws.amazon.com/blogs/aws/announcing-aws-parallel-computing-service-to-run-hpc-workloads-at-virtually-any-scale/

Today we are announcing AWS Parallel Computing Service (AWS PCS), a new managed service that helps customers set up and manage high performance computing (HPC) clusters so they can seamlessly run their simulations at virtually any scale on AWS. Using the Slurm scheduler, they can work in a familiar HPC environment, accelerating their time to results instead of worrying about infrastructure.

In November 2018, we introduced AWS ParallelCluster, an AWS-supported open source cluster management tool that helps you deploy and manage HPC clusters in the AWS Cloud. With AWS ParallelCluster, customers can also quickly build and deploy proof of concept and production HPC compute environments. They can use the AWS ParallelCluster command line interface, API, Python library, and user interface, installed from open source packages. They are responsible for updates, which can include tearing down and redeploying clusters. Many customers, though, have asked us for a fully managed AWS service that eliminates the operational overhead of building and operating HPC environments.

AWS PCS provides HPC environments managed by AWS and is accessible through the AWS Management Console, AWS SDKs, and AWS Command Line Interface (AWS CLI). Your system administrators can create managed Slurm clusters that use their compute and storage configurations, identity, and job allocation preferences. AWS PCS uses Slurm, a highly scalable, fault-tolerant job scheduler used across a wide range of HPC customers, for scheduling and orchestrating simulations. End users such as scientists, researchers, and engineers can log in to AWS PCS clusters to run and manage HPC jobs, use interactive software on virtual desktops, and access data. They can bring their workloads to AWS PCS quickly, without significant effort to port code.

You can use fully managed NICE DCV remote desktops for remote visualization, and access job telemetry or application logs to enable specialists to manage your HPC workflows in one place.

AWS PCS is designed for a wide range of traditional and emerging, compute or data-intensive, engineering and scientific workloads across areas such as computational fluid dynamics, weather modeling, finite element analysis, electronic design automation, and reservoir simulations using familiar ways of preparing, executing, and analyzing simulations and computations.

Getting started with AWS Parallel Computing Service
To try out AWS PCS, you can use our tutorial for creating a simple cluster in the AWS documentation. First, you create a virtual private cloud (VPC) with an AWS CloudFormation template and shared storage in Amazon Elastic File System (Amazon EFS) within your account for the AWS Region where you will try AWS PCS. To learn more, visit Create a VPC and Create shared storage in the AWS documentation.

1. Create a cluster
In the AWS PCS console, choose Create cluster, a persistent resource for managing resources and running workloads.

Next, enter your cluster name and choose the controller size of your Slurm scheduler. You can choose Small (up to 32 nodes and 256 jobs), Medium (up to 512 nodes and 8,192 jobs), or Large (up to 2,048 nodes and 16,384 jobs) for the limits of cluster workloads. In the Networking section, choose your created VPC, subnet to launch the cluster, and security group applied to your cluster.

Optionally, you can set Slurm configuration options such as the idle time before compute nodes scale down, a directory for Prolog and Epilog scripts on launched compute nodes, and the resource selection algorithm parameter used by Slurm.

Choose Create cluster. It takes some time for the cluster to be provisioned.

2. Create compute node groups
After creating your cluster, you can create compute node groups, a virtual collection of Amazon Elastic Compute Cloud (Amazon EC2) instances that AWS PCS uses to provide interactive access to a cluster or run jobs in a cluster. When you define a compute node group, you specify common traits such as EC2 instance types, minimum and maximum instance count, target VPC subnets, Amazon Machine Image (AMI), purchase option, and custom launch configuration. Compute node groups require an instance profile to pass an AWS Identity and Access Management (IAM) role to an EC2 instance and an EC2 launch template that AWS PCS uses to configure EC2 instances it launches. To learn more, visit Create a launch template and Create an instance profile in the AWS documentation.

To create a compute node group in the console, go to your cluster and choose the Compute node groups tab and the Create compute node group button.

You can create two compute node groups: a login node group to be accessed by end users and a job node group to run HPC jobs.

To create a compute node group running HPC jobs, enter a compute node name and select a previously-created EC2 launch template, IAM instance profile, and subnets to launch compute nodes in your cluster VPC.

Next, choose your preferred EC2 instance types to use when launching compute nodes and the minimum and maximum instance count for scaling. I chose the hpc6a.48xlarge instance type and set the scaling limit to eight instances. For a login node, you can choose a smaller instance, such as one c6i.xlarge instance. You can also choose either the On-Demand or Spot EC2 purchase option if the instance type supports it. Optionally, you can choose a specific AMI.

Choose Create. It takes some time for the compute node group to be provisioned. To learn more, visit Create a compute node group to run jobs and Create a compute node group for login nodes in the AWS documentation.

3. Create and run your HPC jobs
After creating your compute node groups, you submit a job to a queue to run it. The job remains in the queue until AWS PCS schedules it to run on a compute node group, based on available provisioned capacity. Each queue is associated with one or more compute node groups, which provide the necessary EC2 instances to do the processing.

To create a queue in the console, go to your cluster and choose the Queues tab and the Create queue button.

Enter your queue name and choose your compute node groups assigned to your queue.

Choose Create and wait while the queue is being created.

When the login compute node group is active, you can use AWS Systems Manager to connect to the EC2 instance it created. Go to the Amazon EC2 console and choose your EC2 instance of the login compute node group. To learn more, visit Create a queue to submit and manage jobs and Connect to your cluster in the AWS documentation.

To run a job using Slurm, you prepare a submission script that specifies the job requirements and submit it to a queue with the sbatch command. Typically, this is done from a shared directory so the login and compute nodes have a common space for accessing files.

You can also run a message passing interface (MPI) job in AWS PCS using Slurm. To learn more, visit Run a single node job with Slurm or Run a multi-node MPI job with Slurm in the AWS documentation.

You can connect a fully-managed NICE DCV remote desktop for visualization. To get started, use the CloudFormation template from HPC Recipes for AWS GitHub repository.

In this example, I used the OpenFOAM motorBike simulation to calculate the steady flow around a motorcycle and rider. This simulation was run with 288 cores of three hpc6a instances. The output can be visualized in the ParaView session after logging in to the web interface of DCV instance.

Finally, after you are done HPC jobs with the cluster and node groups that you created, you should delete the resources that you created to avoid unnecessary charges. To learn more, visit Delete your AWS resources in the AWS documentation.

Things to know
Here are a couple of things that you should know about this feature:

  • Slurm versions – AWS PCS initially supports Slurm 23.11 and offers mechanisms designed to enable customers to upgrade their Slurm major versions once new versions are added. Additionally, AWS PCS is designed to automatically update the Slurm controller with patch versions. To learn more, visit Slurm versions in the AWS documentation.
  • Capacity Reservations – You can reserve EC2 capacity in a specific Availability Zone and for a specific duration using On-Demand Capacity Reservations to make sure that you have the necessary compute capacity available when you need it. To learn more, visit Capacity Reservations in the AWS documentation.
  • Network file systems – You can attach network storage volumes where data and files can be written and accessed, including Amazon FSx for NetApp ONTAP, Amazon FSx for OpenZFS, and Amazon File Cache as well as Amazon EFS and Amazon FSx for Lustre. You can also use self-managed volumes, such as NFS servers. To learn more, visit Network file systems in the AWS documentation.

Now available
AWS Parallel Computing Service is now available in the US East (N. Virginia), AWS US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm) Regions.

AWS PCS launches all resources in your AWS account. You will be billed appropriately for those resources. For more information, see the AWS PCS Pricing page.

Give it a try and send feedback to AWS re:Post or through your usual AWS Support contacts.

Channy

P.S. Special thanks to Matthew Vaughn, a principal developer advocate at AWS for his contribution in creating a HPC testing environment.

AWS Weekly Roundup: S3 Conditional writes, AWS Lambda, JAWS Pankration, and more (August 26, 2024)

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-s3-conditional-writes-aws-lambda-jaws-pankration-and-more-august-26-2024/

The AWS User Group Japan (JAWS-UG) hosted JAWS PANKRATION 2024 themed ‘No Border’. This is a 24-hour online event where AWS Heroes, AWS Community Builders, AWS User Group leaders, and others from around the world discuss topics ranging from cultural discussions to technical talks. One of the speakers at this event, Kevin Tuei, an AWS Community Builder based in Kenya, highlighted the importance of building in public and sharing your knowledge with others, a very fitting talk for this kind of event.

Last week’s launches
Here are some launches that got my attention during the previous week.

Amazon S3 now supports conditional writes – We’ve added support for conditional writes in Amazon S3, which check for the existence of an object before creating it. With this feature, you can now simplify how distributed applications with multiple clients concurrently update data in parallel across shared datasets. Each client can conditionally write objects, making sure that it does not overwrite any objects already written by another client.
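Here is a minimal sketch of a conditional write with the AWS SDK for Python (Boto3): the object is created only if no object with that key exists yet. It assumes a boto3 version recent enough to expose the IfNoneMatch parameter on put_object; the bucket and key names are placeholders.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    s3.put_object(
        Bucket="amzn-s3-demo-bucket",    # placeholder bucket name
        Key="checkpoints/run-42.json",   # placeholder key
        Body=b'{"status": "started"}',
        IfNoneMatch="*",                 # only succeed if the key does not already exist
    )
    print("Object created; this client won the write.")
except ClientError as err:
    # If another client already created the object, S3 returns 412 Precondition Failed.
    if err.response["ResponseMetadata"]["HTTPStatusCode"] == 412:
        print("Object already exists; skipping the write.")
    else:
        raise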

AWS Lambda introduces recursive loop detection APIs – With the recursive loop detection APIs you can now set recursive loop detection configuration on individual AWS Lambda functions. This allows you to turn off recursive loop detection on functions that intentionally use recursive patterns, avoiding disruption of these workloads. Using these APIs, you can avoid disruption to any intentionally recursive workflows as Lambda expands support of recursive loop detection to other AWS services. Configure recursive loop detection for Lambda functions through the Lambda Console, the AWS command line interface (CLI), or Infrastructure as Code tools like AWS CloudFormation, AWS Serverless Application Model (AWS SAM), or AWS Cloud Development Kit (CDK). This new configuration option is supported in AWS SAM CLI version 1.123.0 and CDK v2.153.0.
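As a hedged sketch of what the new configuration looks like from the AWS SDK for Python (Boto3), the following assumes a boto3 version that includes the put_function_recursion_config and get_function_recursion_config operations; the function name is a placeholder.

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Allow this function to participate in an intentional recursive pattern
# instead of having Lambda terminate the loop.
lambda_client.put_function_recursion_config(
    FunctionName="my-intentionally-recursive-function",  # placeholder name
    RecursiveLoop="Allow",  # use "Terminate" to keep the default protection
)

# Read the current setting back to confirm the change.
config = lambda_client.get_function_recursion_config(
    FunctionName="my-intentionally-recursive-function"
)
print(config["RecursiveLoop"])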

General availability of Amazon Bedrock batch inference API – You can now use Amazon Bedrock to process prompts in batch to get responses for model evaluation, experimentation, and offline processing. Using the batch API makes it more efficient to run inference with foundation models (FMs). It also allows you to aggregate responses and analyze them in batches. To get started, visit Run batch inference.
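Here is a hedged sketch of starting a batch inference job with the AWS SDK for Python (Boto3). The create_model_invocation_job operation and the S3 input/output configuration follow the feature description, but the job name, model ID, role ARN, and bucket names are placeholders; check the Amazon Bedrock API reference for the exact request structure before relying on it.

import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

response = bedrock.create_model_invocation_job(
    jobName="nightly-eval-batch",                               # placeholder
    modelId="anthropic.claude-3-haiku-20240307-v1:0",           # placeholder
    roleArn="arn:aws:iam::123456789012:role/BedrockBatchRole",  # placeholder
    inputDataConfig={
        "s3InputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/batch-input/"}
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/batch-output/"}
    },
)

print(response["jobArn"])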

Other AWS news
Launched in July 2024, AWS GenAI Lofts is a global tour designed to foster innovation and community in the evolving landscape of generative artificial intelligence (AI) technology. The lofts bring collaborative pop-up spaces to key AI hotspots around the world, offering developers, startups, and AI enthusiasts a platform to learn, build, and connect. The events are ongoing. Find a location near you and be sure to attend soon.

Upcoming AWS events
AWS Summits – These are free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Whether you’re in the Americas, Asia Pacific & Japan, or EMEA region, learn more about future AWS Summit events happening in your area. On a personal note, I look forward to being one of the keynote speakers at the AWS Summit Johannesburg happening this Thursday. Registrations are still open and I look forward to seeing you there if you’ll be attending.

AWS Community Days – Join an AWS Community Day event just like the one I mentioned at the beginning of this post to participate in technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from your area. If you’re in New York, there’s an event happening in your area this week.

You can browse all upcoming in-person and virtual events here.

That’s all for this week. Check back next Monday for another Weekly Roundup!

– Veliswa

Now open — AWS Asia Pacific (Malaysia) Region

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/now-open-aws-asia-pacific-malaysia-region/

In March of last year, Jeff Barr announced the plan for an AWS Region in Malaysia. Today, I’m pleased to share the general availability of the AWS Asia Pacific (Malaysia) Region with three Availability Zones and API name ap-southeast-5.

The AWS Asia Pacific (Malaysia) Region is the first infrastructure Region in Malaysia and the thirteenth Region in Asia Pacific, joining the existing Asia Pacific Regions in Hong Kong, Hyderabad, Jakarta, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, and Tokyo and the Mainland China Beijing and Ningxia Regions.

The Petronas Twin Towers in the heart of Kuala Lumpur’s central business district.

The new AWS Region in Malaysia will play a pivotal role in supporting the Malaysian government’s strategic Madani Economy Framework. This initiative aims to improve the living standards of all Malaysians by 2030 while supporting innovation in Malaysia and across ASEAN. The construction and operation of the new AWS Region is estimated to add approximately $12.1 billion (MYR 57.3 billion) to Malaysia’s gross domestic product (GDP) and will support an average of more than 3,500 full-time equivalent jobs at external businesses annually through 2038.

The AWS Region in Malaysia will help to meet the high demand for cloud services while supporting innovation in Malaysia and across Southeast Asia.

AWS in Malaysia
In 2016, Amazon Web Services (AWS) established a presence with its first AWS office in Malaysia. Since then, AWS has provided continuous investments in infrastructure and technology to help drive digital transformations in Malaysia in support of hundreds of thousands of active customers each month.

Amazon CloudFront – In 2017, AWS announced the launch of the first edge location in Malaysia, which helps improve performance and availability for end users. Today, there are four Amazon CloudFront locations in Malaysia.

AWS Direct Connect – To continue helping our customers in Malaysia improve application performance, secure data, and reduce networking costs, in 2017, AWS announced the opening of additional Direct Connect locations in Malaysia. Today, there are two AWS Direct Connect locations in Malaysia.

AWS Outposts – As a fully managed service that extends AWS infrastructure and AWS services, AWS Outposts is ideal for applications that need to run on-premises to meet low latency requirements. Since 2020, customers in Malaysia have been able to order AWS Outposts to be installed at their datacenters and on-premises locations.

AWS customers in Malaysia
Cloud adoption in Malaysia has been steadily gaining momentum in recent years. Here are some examples of AWS customers in Malaysia and how they are using AWS for various workloads:

PayNet – PayNet is Malaysia’s national payments network and shared central infrastructure for the financial market in Malaysia. PayNet uses AWS to run critical national payment workloads, including the MyDebit online cashless payments system and e-payment reporting.

Pos Malaysia Berhad (Pos Malaysia) – Pos Malaysia is the national post and parcel service provider, holding the sole mandate to deliver services under the universal postal service obligation for Malaysia. They migrated critical applications to AWS, which increased their business agility and ability to deliver enhanced customer experiences. Also, they scaled their compute capacity to handle deliveries to more than 11 million addresses and a network of more than 3,500 retail touchpoints using Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS), ensuring disruption-free services.

Deriv – Deriv, one of the world’s largest online brokers, is using Amazon Q Business to increase productivity, efficiency, and innovation in its operations across customer support, marketing, and recruiting departments. With Amazon Q Business, Deriv has been able to boost productivity and reduce onboarding time by 45 percent.

Asia Pacific University – As one of the leading tech universities in Malaysia, Asia Pacific University (APU) uses AWS serverless technology such as Lambda to reduce operational costs. The automated scalability of AWS services has led to high availability and faster deployment that ensure APU’s applications and services are accessible to the students and staff at all times, enhancing the overall user experience. 

Aerodyne – Aerodyne Group is a DT3 (Drone Tech, Data Tech, and Digital Transformation) solutions provider of drone-based enterprise solutions. They’re running their DRONOS software as a service (SaaS) platform on AWS to help drone operators worldwide grow their businesses.

Building cloud skills together
AWS and various organizations in Malaysia have been working closely to build necessary cloud skills for builders in Malaysia. Here are some of the initiatives:

Program AKAR powered by AWS re/Start – Program AKAR is the first financial services-aligned cloud skills program initiated by AWS and PayNet. This new program aims to bridge the growing skills gap in Malaysia’s digital economy by equipping university students with transferrable skills for careers in the sector. As part of this initial collaboration, PayNet, AWS re/Start, and WEPS have committed to starting the program with 100 students in 2024, with the first 50 from Asia Pacific University serving as a pilot. 

AWS Academy — AWS Academy aims to bridge the gap between industry and academia by preparing students for industry-recognized certifications and careers in the cloud with a free and ready-to-teach cloud computing curriculum. AWS Academy currently runs courses in 48 Malaysian universities, covering various domains. Since 2018, 23,000 students have been trained through this program.

AWS Skills Guild at PETRONAS – PETRONAS, a global energy and solutions provider with a presence in over 50 countries, has been an AWS customer since 2014. AWS is also collaborating with PETRONAS to train their employees using the AWS Skills Guild program.

AWS’s contribution to sustainability in Malaysia
With The Climate Pledge, Amazon is committed to reaching net-zero carbon across its business by 2040 and is on a path to powering its operations with 100 percent renewable energy by 2025.

In September 2023, AWS announced its collaboration with Petronas and Gentari, a global clean energy company, to accelerate sustainability and decarbonization efforts in the global energy transition. Shortly after, in December 2023, AWS customer PKT Logistics Group became the first Malaysian company to join over 300 global companies in The Climate Pledge to accelerate the world’s path to net-zero carbon.

In July 2024, AWS and Zero Waste Management collaborated on the first-ever AWS InCommunities Malaysia initiative, Green Wira Programme, to train educators to build sustainability initiatives in schools to advance Malaysia’s sustainable future.

Amazon is committed to investing and innovating across its businesses to help create a more sustainable future.

Things to know
AWS Community in Malaysia – Malaysia is also home to one AWS Hero, nine AWS Community Builders and about 9,000 community members of three AWS User Groups in various cities in Malaysia. If you’re interested in joining AWS User Groups Malaysia, visit their Meetup and Facebook pages.

AWS Global footprint – With this launch, AWS now spans 108 Availability Zones within 34 geographic Regions around the world. We have also announced plans for 18 more Availability Zones and six more AWS Regions in Mexico, New Zealand, the Kingdom of Saudi Arabia, Taiwan, Thailand, and the AWS European Sovereign Cloud.

Available now – The new Asia Pacific (Malaysia) Region is ready to support your business, and you can find a detailed list of the services available in this Region on the AWS Services by Region page.

To learn more, please visit the AWS Global Infrastructure page, and start building on ap-southeast-5!

Happy building!
— Donnie

Add macOS to your continuous integration pipelines with AWS CodeBuild

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/add-macos-to-your-continuous-integration-pipelines-with-aws-codebuild/

Starting today, you can build applications on macOS with AWS CodeBuild. You can now build artifacts on managed Apple M2 machines that run on macOS 14 Sonoma. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.

Building, testing, signing, and distributing applications for Apple systems (iOS, iPadOS, watchOS, tvOS, and macOS) requires the use of Xcode, which runs exclusively on macOS. When you build for Apple systems in the AWS Cloud, it is very likely you configured your continuous integration and continuous deployment (CI/CD) pipeline to run on Amazon Elastic Compute Cloud (Amazon EC2) Mac instances.

Since we launched Amazon EC2 Mac in 2020, I have spent a significant amount of time with our customers in various industries and geographies, helping them configure and optimize their pipelines on macOS. In the simplest form, a customer’s pipeline might look like the following diagram.

iOS build pipeline on EC2 Mac

The pipeline starts when there is a new commit or pull request on the source code repository. The repository agent installed on the machine triggers various scripts to configure the environment, build and test the application, and eventually deploy it to App Store Connect.

Amazon EC2 Mac drastically simplifies the management and automation of macOS machines. As I like to describe it, an EC2 Mac instance has all the things I love from Amazon EC2 (Amazon Elastic Block Store (Amazon EBS) volumes, snapshots, virtual private clouds (VPCs), security groups, and more) applied to Mac minis running macOS in the cloud.

However, customers are left with two challenges. The first is to prepare the Amazon Machine Image (AMI) with all the required tools for the build. A minimum build environment requires Xcode, but it is very common to install Fastlane (and Ruby), as well as other build or development tools and libraries. Most organizations require multiple build environments for multiple combinations of macOS and Xcode versions.

The second challenge is to scale your build fleet according to the number and duration of builds. Large organizations typically have hundreds or thousands of builds per day, requiring dozens of build machines. Scaling in and out of that fleet helps to save on costs. EC2 Mac instances are reserved for your dedicated use. One instance is allocated to one dedicated host. Scaling a fleet of dedicated hosts requires a specific configuration.

To address these challenges and simplify the configuration and management of your macOS build machines, today we introduce CodeBuild for macOS.

CodeBuild for macOS is based on the recently introduced reserved capacity fleet, which contains instances powered by Amazon EC2 that are maintained by CodeBuild. With reserved capacity fleets, you configure a set of dedicated instances for your build environment. These machines remain idle, ready to process builds or tests immediately, which reduces build durations. With reserved capacity fleets, your machines are always running and will continue to incur costs as long as they’re provisioned.

CodeBuild provides a standard disk image (AMI) to run your builds. It contains preinstalled versions of Xcode, Fastlane, Ruby, Python, Node.js, and other popular tools for a development and build environment. The full list of tools installed is available in the documentation. Over time, we will provide additional disk images with updated versions of these tools. You can also bring your own custom disk image if you desire.

In addition, CodeBuild makes it easy to configure auto scaling. You tell us how much capacity you want, and we manage everything from there.
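If you prefer to script the fleet creation instead of using the console, here is a hedged sketch with the AWS SDK for Python (Boto3). The create_fleet operation exists on the CodeBuild client, but the environmentType and computeType values shown here are assumptions based on the feature description, so confirm the supported values for macOS fleets in the CodeBuild API reference.

import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")

fleet = codebuild.create_fleet(
    name="macos-build-fleet",
    baseCapacity=2,                       # number of always-on build machines
    environmentType="MAC_ARM",            # assumption: value for Apple silicon macOS
    computeType="BUILD_GENERAL1_MEDIUM",  # assumption: check supported sizes for macOS
)

print(fleet["fleet"]["arn"])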

Let’s see CodeBuild for macOS in action
To show you how it works, I create a CI/CD pipeline for my pet project: getting started with AWS Amplify on iOS. This tutorial and its accompanying source code explain how to create a simple iOS app with a cloud-based backend. The app uses a GraphQL API (AWS AppSync), a NoSQL database (Amazon DynamoDB), a file-based storage (Amazon Simple Storage Service (Amazon S3)), and user authentication (Amazon Cognito). AWS Amplify for Swift is the piece that glues all these services together.

The tutorial and the source code of the app are available in a Git repository. It includes scripts to automate the build, test, and deployment of the app.

Configuring a new CI/CD pipeline with CodeBuild for macOS involves the following high-level steps:

  1. Create the build project.
  2. Create the dedicated fleet of machines.
  3. Configure one or more build triggers.
  4. Add a pipeline definition file (buildspec.yaml) to the project.

To get started, I open the AWS Management Console, select CodeBuild, and select Create project.

codebuild mac - 1

I enter a Project name and configure the connection to the Source code repository. I use GitHub in this example. CodeBuild also supports GitLab and BitBucket. The documentation has an up-to-date list of supported source code repositories.

codebuild mac - 2

For the Provisioning model, I select Reserved capacity. This is the only model where Amazon EC2 Mac instances are available. I don’t have a fleet defined yet, so I decide to create one on the fly while creating the build project. I select Create fleet.

codebuild mac - 3

On the Compute fleet configuration page, I enter a Compute fleet name and select macOS as Operating system. Under Compute, I select the amount of memory and the number of vCPUs needed for my build project, and the number of instances I want under Capacity.

For this example, I am happy to use the Managed image. It includes Xcode 15.4 and the simulator runtime for iOS 17.5, among other packages. You can read the list of packages preinstalled on this image in the documentation.

When finished, I select Create fleet to return to the CodeBuild project creation page.

CodeBuild - create fleet

As a next step, I tell CodeBuild to create a new service role to define the permissions I want for my build environment. In the context of this project, I must include permissions to pull an Amplify configuration and access AWS Secrets Manager. I’m not sharing step-by-step instructions to do so, but the sample project code contains the list of the permissions I added.

codebuild mac - 4

I can choose between providing my set of build commands in the project definition or in a buildspec.yaml file included in my project. I select the latter.

codebuild mac - 5

This is optional, but I want to upload the build artifact to an S3 bucket where I can archive each build. In the Artifact 1 – Primary section, I therefore select Amazon S3 as Type, and I enter a Bucket name and artifact Name. The file name to upload is specified in the buildspec.yaml file.

codebuild mac - 6

Down on the page, I configure the project trigger to add a GitHub WebHook. This will configure CodeBuild to start the build every time a commit or pull request is sent to my project on GitHub.

codebuild - webhook

Finally, I select the orange Create project button at the bottom of the page to create this project.

Testing my builds
My project already includes build scripts to prepare the build, build the project, run the tests, and deploy it to Apple’s TestFlight.

codebuild - project scripts

I add a buildspec.yaml file at the root of my project to orchestrate these existing scripts.

version: 0.2

phases:

  install:
    commands:
      - code/ci_actions/00_install_rosetta.sh
  pre_build:
    commands:
      - code/ci_actions/01_keychain.sh
      - code/ci_actions/02_amplify.sh
  build:
    commands:
      - code/ci_actions/03_build.sh
      - code/ci_actions/04_local_tests.sh
  post_build:
    commands:
      - code/ci_actions/06_deploy_testflight.sh
      - code/ci_actions/07_cleanup.sh
artifacts:
  name: $(date +%Y-%m-%d)-getting-started.ipa
  files:
    - 'getting started.ipa'
  base-directory: 'code/build-release'

I add this file to my Git repository and push it to GitHub with the following command: git add buildspec.yaml && git commit -m "add buildspec" && git push

On the console, I can observe that the build has started.

codebuild - build history

When I select the build, I can see the log files or select Phase details to receive a high-level status of each phase of the build.

codebuild - phase details

When the build is successful, I can see the iOS application IPA file uploaded to my S3 bucket.

aws s3 ls

The last build script that CodeBuild executes uploads the binary to App Store Connect. I can observe new builds in the TestFlight section of App Store Connect.

App Store Connect

Things to know
It takes 8-10 minutes to prepare an Amazon EC2 Mac instance before it can accept its very first build. This is not specific to CodeBuild. The builds you submit during the machine preparation time are queued and will run in order as soon as the machine is available.

CodeBuild for macOS works with reserved fleets. Contrary to on-demand fleets, where you pay per minute of build, reserved fleets are charged for the time the build machines are reserved for your exclusive usage, even when no builds are running. The capacity reservation follows the Amazon EC2 Mac 24-hour minimum allocation period, as required by the Software License Agreement for macOS (article 3.A.ii).

A fleet of machines can be shared across CodeBuild projects on your AWS account. The machines in the fleet are reserved for your exclusive use. Only CodeBuild can access the machines.

CodeBuild cleans the working directory between builds, but the machines are reused for other builds. It allows you to use the CodeBuild local cache mechanism to quickly restore selected files after a build. If you build different projects on the same fleet, be sure to reset any global state, such as the macOS keychain, and build artifacts, such as the SwiftPM and Xcode package caches, before starting a new build.

When you work with custom build images, be sure they are built for a 64-bit Mac-Arm architecture. You also must install and start the AWS Systems Manager Agent (SSM Agent). CodeBuild uses the SSM Agent to install its own agent and to manage the machine. Finally, make sure the AMI is available to the CodeBuild organization ARN.

CodeBuild for macOS is available in the following AWS Regions: US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). These are the same Regions that offer Amazon EC2 Mac M2 instances.

Get started today and create your first CodeBuild project on macOS.

— seb

AWS Weekly Roundup: G6e instances, Karpenter, Amazon Prime Day metrics, AWS Certifications update and more (August 19, 2024)

Post Syndicated from Prasad Rao original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-g6e-instances-karpenter-amazon-prime-day-metrics-aws-certifications-update-and-more-august-19-2024/

You know what I find more exciting than the Amazon Prime Day sale? Finding out how Amazon Web Services (AWS) makes it all happen. Every year, I wait eagerly for Jeff Barr’s annual post to read the chart-topping metrics. The scale never ceases to amaze me.

This year, Channy Yun and Jeff Barr bring us behind the scenes of how AWS powered Prime Day 2024 for record-breaking sales. I will let you read the post for full details, but one metric that blows my mind every year is that of Amazon Aurora. On Prime Day, 6,311 Amazon Aurora database instances processed more than 376 billion transactions, stored 2,978 terabytes of data, and transferred 913 terabytes of data.

Amazon Box with checkbox showing a record breaking prime day event powered by AWS

Other news I’m excited to share is that registration is open for two new AWS Certification exams. You can now register for the beta version of the AWS Certified AI Practitioner and AWS Certified Machine Learning Engineer – Associate. These certifications are for everyone—from line-of-business professionals to experienced machine learning (ML) engineers—and will help individuals prepare for in-demand artificial intelligence and machine learning (AI/ML) careers. You can prepare for your exam by following a four-step exam prep plan for AWS Certified AI Practitioner and AWS Certified Machine Learning Engineer – Associate.

Last week’s launches
Here are some launches that got my attention:

General availability of Amazon Elastic Compute Cloud (Amazon EC2) G6e instances – Powered by NVIDIA L40S Tensor Core GPUs, G6e instances can be used for a wide range of ML and spatial computing use cases. You can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio.

Release of Karpenter 1.0 – Karpenter is a flexible, efficient, and high-performance Kubernetes compute management solution. You can use Karpenter with Amazon Elastic Kubernetes Service (Amazon EKS) or any conformant Kubernetes cluster. To learn more, visit the Karpenter 1.0 launch post.

Drag-and-drop UI for Amazon SageMaker Pipelines – With this launch, you can now quickly create, execute, and monitor an end-to-end AI/ML workflow to train, fine-tune, evaluate, and deploy models without writing code. You can drag and drop various steps of the workflow and connect them together in the UI to compose an AI/ML workflow.

Split, move, and modify Amazon EC2 On-Demand Capacity Reservations – With the new capabilities for managing Amazon EC2 On-Demand Capacity Reservations, you can split your Capacity Reservations, move capacity between Capacity Reservations, and modify your Capacity Reservation’s instance eligibility attribute. To learn more about these features, refer to Split off available capacity from an existing Capacity Reservation.
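Here is a rough boto3 sketch of the three operations; the operation and parameter names reflect my reading of the EC2 API and may differ, and the Capacity Reservation IDs are placeholders, so verify against the EC2 API reference before relying on this.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

source_cr = "cr-0123456789abcdef0"        # placeholder Capacity Reservation ID
destination_cr = "cr-0fedcba9876543210"   # placeholder Capacity Reservation ID

# Split off two instances of available capacity into a new Capacity Reservation.
ec2.create_capacity_reservation_by_splitting(
    SourceCapacityReservationId=source_cr,
    InstanceCount=2,
)

# Move one instance of unused capacity from one Capacity Reservation to another.
ec2.move_capacity_reservation_instances(
    SourceCapacityReservationId=source_cr,
    DestinationCapacityReservationId=destination_cr,
    InstanceCount=1,
)

# Modify the instance eligibility (instance match criteria) of a Capacity Reservation.
ec2.modify_capacity_reservation(
    CapacityReservationId=source_cr,
    InstanceMatchCriteria="targeted",
)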

Document-level sync reports in Amazon Q Business – This new feature of Amazon Q Business provides you with a comprehensive document-level report including granular indexing status, metadata, and access control list (ACL) details for every document processed during a data source sync job. You get visibility into the status of every document Amazon Q Business attempted to crawl and index, as well as the ability to troubleshoot why certain documents were not returned in the expected answers.

Landing zone version selection in AWS Control Tower – Starting with landing zone version 3.1, you can update or reset your landing zone in place on its current version, or upgrade to a version of your choice. To learn more, visit Select a landing zone version in the AWS Control Tower user guide.
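If you manage AWS Control Tower programmatically, an in-place version update could look roughly like the boto3 sketch below; the landing zone identifier and target version are placeholders, and the operation names should be verified against the Control Tower landing zone APIs.

import boto3

controltower = boto3.client("controltower", region_name="us-east-1")

# Placeholder landing zone identifier (the landing zone ARN).
landing_zone_id = "arn:aws:controltower:us-east-1:111122223333:landingzone/EXAMPLE"

# Read the current configuration so the manifest can be reused as-is.
current = controltower.get_landing_zone(landingZoneIdentifier=landing_zone_id)
manifest = current["landingZone"]["manifest"]

# Update the landing zone in place to a version of your choice.
controltower.update_landing_zone(
    landingZoneIdentifier=landing_zone_id,
    manifest=manifest,
    version="3.3",  # example target version
)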

Launch of AWS Support Official channel on AWS re:Post – You now have access to curated content for operating at scale on AWS, authored by AWS Support and AWS Managed Services (AMS) experts. In this new channel, you can find technical solutions for complex problems, operational best practices, and insights into AWS Support and AMS offerings. To learn more, visit the AWS Support Official channel on re:Post.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Regional expansion of AWS Services
Here are some of the expansions of AWS services into new AWS Regions that happened this week:

Amazon VPC Lattice is now available in 7 additional Regions – Amazon VPC Lattice is now available in US West (N. California), Africa (Cape Town), Europe (Milan), Europe (Paris), Asia Pacific (Mumbai), Asia Pacific (Seoul), and South America (São Paulo). With this launch, Amazon VPC Lattice is now generally available in 18 AWS Regions.

Amazon Q in QuickSight is now available in 5 additional Regions – Amazon Q in QuickSight is now generally available in Asia Pacific (Mumbai), Canada (Central), Europe (Ireland), Europe (London), and South America (São Paulo), in addition to the existing US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) Regions.

AWS Wickr is now available in the Europe (Zurich) Region – AWS Wickr adds Europe (Zurich) to the US East (N. Virginia), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (London), Europe (Frankfurt), and Europe (Stockholm) Regions where it is already available.

You can browse the full list of AWS Services available by Region.

Upcoming AWS events
Check your calendars and sign up for these AWS events:

AWS re:Invent 2024 – Dive into the first-round session catalog. Explore all the different learning opportunities at AWS re:Invent this year and start building your agenda today. You’ll find sessions for all interests and learning styles.

AWS Summits – The 2024 AWS Summit season is starting to wrap up! Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Jakarta (September 5) and Toronto (September 11).

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Colombia (August 24), New York (August 28), Belfast (September 6), and Bay Area (September 13).

AWS GenAI Lofts – Meet AWS AI experts and attend talks, workshops, fireside chats, and Q&As with industry leaders. All lofts are free and are carefully curated to offer something for everyone to help you accelerate your journey with AI. There are lofts scheduled in San Francisco (August 14–September 27), São Paulo (September 2–November 20), London (September 30–October 25), Paris (October 8–November 25), and Seoul (November).

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Prasad

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!