All posts by Donnie Prakoso

AWS Weekly Roundup: New AWS Heroes, Amazon API Gateway, Amazon Q and more (June 10, 2024)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-new-aws-heroes-amazon-api-gateway-amazon-q-and-more-june-10-2024/

In the last AWS Weekly Roundup, Channy reminded us how life has its ups and downs. That’s just how life is. But that doesn’t mean we should face it alone. Farouq Mousa, AWS Community Builder, is fighting brain cancer, and the daughter of Allen Helton, AWS Serverless Hero, is fighting leukemia.

If you have a moment, please visit their campaign pages and give your support.

Meanwhile, we’ve just finished a few AWS Summits in India, Korea, and Thailand. As always, I had so much fun working together at the Developer Lounge with AWS Heroes, AWS Community Builders, and AWS User Group leaders. Here’s a photo with everyone.

Last Week’s Launches
Here are some launches that caught my attention last week:

Welcome, new AWS Heroes! — Last week, we announced a new cohort of AWS Heroes, a worldwide group of AWS experts who go above and beyond to share knowledge and empower their communities.

Amazon API Gateway increased integration timeout limit — If you’re using Regional REST APIs or private REST APIs in Amazon API Gateway, you can now increase the integration timeout limit beyond 29 seconds. This allows you to run workloads that require longer timeouts.
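As a rough sketch of what that change can look like with the AWS SDK for Python (boto3), the snippet below raises the integration timeout on one method of a Regional REST API. The API ID, resource ID, method, stage, and 60-second value are assumptions for illustration; check the announcement for the exact prerequisites in your account.

import boto3

apigateway = boto3.client("apigateway")

# Hypothetical identifiers for illustration only.
REST_API_ID = "a1b2c3d4e5"
RESOURCE_ID = "abc123"

# Raise the integration timeout for the GET method to 60 seconds (60,000 ms).
# Previously, values above 29,000 ms were rejected for REST APIs.
apigateway.update_integration(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/timeoutInMillis", "value": "60000"}
    ],
)

# Redeploy the stage so the new timeout takes effect.
apigateway.create_deployment(restApiId=REST_API_ID, stageName="prod")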

Amazon Q offers inline completion in the command line — Now, Amazon Q Developer provides real-time AI-generated code suggestions as you type in your command line. As a regular command line interface (CLI) user, I’m really excited about this.

New common control library in AWS Audit Manager — This announcement helps you save time when mapping enterprise controls into AWS Audit Manager. Check out Danilo’s post, where he elaborates on how you can simplify risk and compliance assessments with the new common control library.

Amazon Inspector container image scanning for Amazon CodeCatalyst and GitHub Actions — If you need to integrate software vulnerability checks into your CI/CD, you can use Amazon Inspector. This native integration with GitHub Actions and Amazon CodeCatalyst streamlines your development pipeline.

Ingest streaming data with Amazon OpenSearch Ingestion and Amazon Managed Streaming for Apache Kafka — With this new capability, you can build more efficient data pipelines for your complex analytics use cases and seamlessly index data from your Amazon MSK Serverless clusters in Amazon OpenSearch Service.

Amazon Titan Text Embeddings V2 now available in Amazon Bedrock Knowledge Base — You can now embed your data into a vector database using Amazon Titan Text Embeddings V2, which helps you retrieve relevant information for various tasks. Here’s a quick look at the model:

Max tokens: 8,192
Languages: 100+ in pre-training
Fine-tuning supported: No
Normalization supported: Yes
Vector size: 256, 512, 1,024 (default)
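If you want to try the model directly, here’s a minimal sketch using the Bedrock Runtime API with boto3. It assumes you already have model access in your Region, and the sample text and the 1,024-dimension choice are just illustrations of the options in the table above.

import json

import boto3

# Assumes access to Titan Text Embeddings V2 has been granted in this Region.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Hello from the AWS Weekly Roundup.",
    "dimensions": 1024,  # 256, 512, or 1,024 (default), per the table above
    "normalize": True,   # normalization is supported
})

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=body,
    accept="application/json",
    contentType="application/json",
)

embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # 1024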

From Community.aws
Here are my three personal favorite posts from community.aws:

Upcoming AWS events
Check your calendars and sign up for these AWS and AWS Community events:

  • AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Japan (June 20), Washington, DC (June 26–27), and New York (July 10).

  • AWS re:Inforce — Join us for AWS re:Inforce (June 10–12) in Philadelphia, PA. AWS re:Inforce is a learning conference focused on AWS security solutions, cloud security, compliance, and identity. Connect with the AWS teams that build the security tools and meet AWS customers to learn about their security journeys.

  • AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), New Zealand (August 15), Nigeria (August 24), and New York (August 28).

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Donnie

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Simplify custom contact center insights with Amazon Connect analytics data lake

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/simplify-custom-contact-center-insights-with-amazon-connect-analytics-data-lake/

Analytics are vital to the success of a contact center. Having insights into each touchpoint of the customer experience allows you to accurately measure performance and adapt to shifting business demands. While you can find common metrics in the Amazon Connect console, sometimes you need more detailed, custom reporting based on the unique needs of your business.

Starting today, the Amazon Connect analytics data lake is generally available. Announced in preview last year, this new capability helps you eliminate the need to build and maintain complex data pipelines. The Amazon Connect analytics data lake is zero-ETL capable, so no extract, transform, and load (ETL) work is needed.

Here’s a quick look at the Amazon Connect analytics data lake:

Improving your customer experience with Amazon Connect
Amazon Connect analytics data lake helps you unify disparate data sources, including customer contact records and agent activity, into a single location. With your data in a centralized location, you can analyze contact center performance and gain insights while reducing the costs associated with implementing complex data pipelines.

With the Amazon Connect analytics data lake, you can access and analyze contact center data, such as contact trace records and Amazon Connect Contact Lens data. This gives you the flexibility to prepare and analyze that data with Amazon Athena and use the business intelligence (BI) tools of your choice, such as Amazon QuickSight and Tableau.

Get started with the Amazon Connect analytics data lake
To get started with the Amazon Connect analytics data lake, you first need an Amazon Connect instance set up. You can follow the steps on the Create an Amazon Connect instance page to create a new instance. Because I’ve already created my Amazon Connect instance, I’ll go straight to showing you how to get started with the Amazon Connect analytics data lake.

First, I navigate to the Amazon Connect console and select my instance.

Then, on the next page, I can set up my analytics data lake by navigating to Analytics tools and selecting Add data share.

This brings up a pop-up dialog, and I first need to define the target AWS account ID. With this option, I can set up a centralized account to receive all data from Amazon Connect instances running in multiple accounts. Then, under Data types, I can select the types I need to share with the target AWS account. To learn more about the data types that you can share in the Amazon Connect analytics data lake, please visit Associate tables for Analytics data lake.

Once it’s done, I can see the list of all the target AWS account IDs with which I have shared all the data types.

Besides using the AWS Management Console, I can also use the AWS Command Line Interface (AWS CLI) to associate my tables with the analytics data lake. The following is a sample command:

$> aws connect batch-associate-analytics-data-set --cli-input-json file:///input_batch_association.json

Where input_batch_association.json is a JSON file that contains association details. Here’s a sample:

{
    "InstanceId": "<YOUR_INSTANCE_ID>",
    "DataSetIds": [
        "<DATA_SET_ID>"
    ],
    "TargetAccountId": "<YOUR_ACCOUNT_ID>"
}
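If you prefer the AWS SDK over the CLI, the equivalent call with boto3 looks roughly like the following; the instance, data set, and account IDs are placeholders.

import boto3

connect = boto3.client("connect")

# Placeholders for illustration; use your own instance, data set, and account IDs.
response = connect.batch_associate_analytics_data_set(
    InstanceId="<YOUR_INSTANCE_ID>",
    DataSetIds=["<DATA_SET_ID>"],
    TargetAccountId="<YOUR_ACCOUNT_ID>",
)
print(response)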

Next, I need to approve (or reject) the request in the AWS Resource Access Manager (RAM) console in the target account. RAM is a service to help you securely share resources across AWS accounts. I navigate to AWS RAM and select Resource shares in the Shared with me section.

Then, I select the resource and select Accept resource share.

At this stage, I can access shared resources from Amazon Connect. Now, I can start creating linked tables from shared tables in AWS Lake Formation. In the Lake Formation console, I navigate to the Tables page and select Create table.

I need to create a Resource link to a shared table. Then, I fill in the details and select the available Database and the Shared table’s region.

Then, when I select Shared table, it will list all the available shared tables that I can access.

Once I select the shared table, it will automatically populate Shared table’s database and Shared table’s owner ID. Once I’m happy with the configuration, I select Create.

To run some queries on the data, I go to the Amazon Athena console. The following is an example of a query that I ran:
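The screenshot doesn’t reproduce here, so the following is a hedged sketch of a similar query run through the Athena API with boto3. The database, table, column, and S3 output location are assumptions; replace them with the resource link you created in Lake Formation and your own results bucket.

import time

import boto3

athena = boto3.client("athena")

# Hypothetical names for illustration only.
DATABASE = "connect_datalake"
QUERY = (
    "SELECT initiation_method, COUNT(*) AS contacts "
    "FROM contact_record GROUP BY initiation_method"
)
OUTPUT = "s3://my-athena-results-bucket/connect/"

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])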

With this configuration, I have access to certain Amazon Connect data types. I can even visualize the data by integrating with Amazon QuickSight. The following screenshot shows some visuals in the Amazon QuickSight dashboard with data from Amazon Connect.

Customer voice
During the preview period, we heard lots of feedback from our customers about the Amazon Connect analytics data lake. Here’s what one of our customers has to say:

Joulica is an analytics platform supporting insights for software like Amazon Connect and Salesforce. Tony McCormack, founder and CEO of Joulica, said, “Our core business is providing real-time and historical contact center analytics to Amazon Connect customers of all sizes. In the past, we frequently had to set up complex data pipelines, and so we are excited about using Amazon Connect analytics data lake to simplify the process of delivering actionable intelligence to our shared customers.”

Things you need to know

  • Pricing — With the Amazon Connect analytics data lake, you can use up to 2 years of data without any additional charges in Amazon Connect. You only pay for the services you use to interact with the data.
  • Availability — Amazon Connect analytics data lake is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), and Europe (Frankfurt, London).
  • Learn more — For more information, please visit the Analytics data lake documentation page.

Happy building,
Donnie

Amazon Q Developer, now generally available, includes new capabilities to reimagine developer experience

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/amazon-q-developer-now-generally-available-includes-new-capabilities-to-reimagine-developer-experience/

When Amazon Web Services (AWS) launched Amazon Q Developer as a preview last year, it changed my experience of interacting with AWS services and, at the same time, maximizing the potential of AWS services on a daily basis. Trained on 17 years of AWS knowledge and experience, this generative artificial intelligence (generative AI)–powered assistant helps me build applications on AWS, research best practices, perform troubleshooting, and resolve errors.

Today, we are announcing the general availability of Amazon Q Developer. In this announcement, we have a few updates, including new capabilities. Let’s get started.

New: Amazon Q Developer has knowledge of your AWS account resources
This new capability helps you understand and manage your cloud infrastructure on AWS. With it, you can list and describe your AWS resources using natural language prompts, minimizing the friction of navigating the AWS Management Console and compiling information from documentation pages.

To get started, you can navigate to the AWS Management Console and select the Amazon Q Developer icon.

With this new capability, I can ask Amazon Q Developer to list all of my AWS resources. For example, if I ask Amazon Q Developer, “List all of my Lambda functions,” Amazon Q Developer returns the response with a set of my AWS Lambda functions as requested, as well as deep links so I can navigate to each resource easily.

Prompt for you to try: List all of my Lambda functions.

I can also list my resources residing in other AWS Regions without having to navigate through the AWS Management Console.

Prompt for you to try: List my Lambda functions in the Singapore Region.

Not only that, this capability can also generate AWS Command Line Interface (AWS CLI) commands so I can make changes immediately. Here, I ask Amazon Q Developer to change the timeout configuration for my Lambda function.

Prompt for you to try: Change the timeout for Lambda function <NAME of AWS LAMBDA FUNCTION> in the Singapore Region to 10 seconds.

I can see Amazon Q Developer generated an AWS CLI command for me to perform the action. Next, I can copy and paste the command into my terminal to perform the change.

$> aws lambda update-function-configuration --function-name <AWS_LAMBDA_FUNCTION_NAME> --region ap-southeast-1 --timeout 10
{
    "FunctionName": "<AWS_LAMBDA_FUNCTION_NAME>",
    "FunctionArn": "arn:aws:lambda:ap-southeast-1:<ACCOUNT_ID>:function:<AWS_LAMBDA_FUNCTION_NAME>",
    "Runtime": "python3.8",
    "Role": "arn:aws:iam::<ACCOUNT_ID>:role/service-role/-role-1o58f7qb",
    "Handler": "lambda_function.lambda_handler",
    "CodeSize": 399,
    "Description": "",
    "Timeout": 10,
...
<truncated for brevity> }

What I really like about this capability is that it minimizes the time and effort needed to get my account information in the AWS Management Console and generate AWS CLI commands so I can immediately implement any changes that I need. This helps me focus on my workflow to manage my AWS resources.

Amazon Q Developer can now help you understand your costs (preview)
To fully maximize the value of cloud spend, I need to have a thorough understanding of my cloud costs. With this capability, I can get answers to AWS cost-related questions using natural language. This capability works by retrieving and analyzing cost data from AWS Cost Explorer.
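If you’re curious what the underlying data looks like, here’s a small sketch that pulls the same kind of numbers directly from the Cost Explorer API with boto3. The Q1 2024 date range is an assumption for illustration, and Amazon Q’s own retrieval may differ.

import boto3

ce = boto3.client("ce")

# Q1 2024 as an example range; the End date is exclusive.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Sum the per-service cost across the quarter and print the top three.
totals = {}
for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        totals[service] = totals.get(service, 0.0) + amount

for service, amount in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{service}: ${amount:,.2f}")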

Recently, I’ve been building a generative AI demo using Amazon SageMaker JumpStart, and this comes at the right time because I need to know the total spend. So, I ask Amazon Q Developer the following prompt to find my spend in Q1 this year.

Prompt for you to try: What were the top three highest-cost services in Q1?

From the Amazon Q response, I can investigate this result further by selecting the Cost Explorer URL, which brings me to the AWS Cost Explorer dashboard. Then, I can follow up with this prompt:

Prompt for you to try: List services in my account which have the most increment month over month. Provide details and analysis.

In short, this capability makes it easier for me to develop a deep understanding and get valuable insights into my cloud spending.

Amazon Q extension for IDEs
As part of the update, we also released an Amazon Q integrated development environment (IDE) extension for Visual Studio Code and JetBrains IDEs. Now, you will see two extensions in the IDE marketplaces: (1) Amazon Q and (2) AWS Toolkit.

If you’re a new user, after installing the Amazon Q extension, you will see a sign-in page in the IDE with two options: using AWS Builder ID or single sign-on. You can continue to use Amazon Q normally.

For existing users, you will need to update the AWS Toolkit extension in your IDE. Once the update is finished, if you have existing Amazon Q and Amazon CodeWhisperer connections, even if they’re expired, the new Amazon Q extension will be installed automatically for you.

If you’re using Visual Studio 2022, you can use Amazon Q Developer as part of the AWS Toolkit for Visual Studio 2022 extension.

Free access for advanced capabilities in IDE
As you might know, you can use an AWS Builder ID to start using Amazon Q Developer in your preferred IDEs. Now, with this announcement, you have free access to two existing advanced capabilities of Amazon Q Developer in the IDE: the Amazon Q Developer Agent for software development and the Amazon Q Developer Agent for code transformation. I’m really excited about this update!

With the Amazon Q Developer Agent for software development, Amazon Q Developer can help you develop code features for projects in your IDE. To get started, you enter /dev in the Amazon Q Developer chat panel. My colleague Séb shared with me the following screenshot when he was using this capability for his support case project. He used the following prompt to generate an implementation plan for creating a new API in AWS Lambda:

Prompt for you to try: Add an API to list all support cases. Expose this API as a new Lambda function

Amazon Q Developer then provides an initial plan, and you can keep iterating on the plan until you’re confident it covers everything you need. Then, you can accept the plan and select Insert code.

The other capability you can access using your AWS Builder ID is the Amazon Q Developer Agent for code transformation, which helps you upgrade your Java applications in IntelliJ or Visual Studio Code. Danilo described this capability last year, and you can see his thorough journey in Upgrade your Java applications with Amazon Q Code Transformation (preview).

Improvements in Amazon Q Developer Agent for Code Transformation
The new transformation plan provides details specific to my application to help me understand the overall upgrade process. To get started, I enter /transform in the Amazon Q Developer chat and provide the necessary details for Amazon Q to start upgrading my Java project.

In the first step, Amazon Q identifies and provides details on the Java Development Kit (JDK) version, dependencies, and related code that needs to be updated. The dependency upgrades now include upgrading popular frameworks to their latest major versions. For example, if you’re building with Spring Boot, it now gets upgraded to version 3 as part of the Java 17 upgrade.

In this step, if Amazon Q identifies any deprecated code that Java language specifications recommend replacing, it will make those updates automatically during the upgrade. This is a new enhancement to Amazon Q capabilities and is available now.

In the third step, this capability will build and run unit tests on the upgraded code, including fixing any issues to ensure the code compilation process will run smoothly after the upgrade.

With this capability, you can upgrade Java 8 and 11 applications that are built using Apache Maven to Java version 17. To get started with the Amazon Q Developer Agent for code transformation capability, you can read and follow the steps at Upgrade language versions with Amazon Q Code Transformation. We also have sample code for you to try this capability.

Things to know

  • Availability — To learn more about the availability of Amazon Q Developer capabilities, please visit the Amazon Q Developer FAQs page.
  • Pricing — Amazon Q Developer now offers two pricing tiers: Free, and Pro at $19 per user per month.
  • Free self-paced course on AWS Skill Builder — Amazon Q Introduction is a 15-minute course that provides a high-level overview of Amazon Q, a generative AI–powered assistant, and the use cases and benefits of using it. This course is part of Amazon’s AI Ready initiative to provide free AI skills training to 2 million people globally by 2025.

Visit our Amazon Q Developer Center to find deep-dive technical content and to discover how you can speed up your software development work.

Happy building,
Donnie

AWS Weekly Roundup: Amazon EC2 G6 instances, Mistral Large on Amazon Bedrock, AWS Deadline Cloud, and more (April 8, 2024)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-mistral-large-aws-clean-rooms-ml-aws-deadline-cloud-and-more-april-8-2024/

We’re just two days away from AWS Summit Sydney (April 10–11) and a month away from the AWS Summit season in Southeast Asia, starting with the AWS Summit Singapore (May 7) and the AWS Summit Bangkok (May 30). If you happen to be in Sydney, Singapore, or Bangkok around those dates, please join us.

Last Week’s Launches
If you haven’t read last week’s Weekly Roundup yet, Channy wrote about the AWS Chips Taste Test, a new initiative from Jeff Barr as part of April Fools’ Day.

Here are some launches that caught my attention last week:

New Amazon EC2 G6 instances — We announced the general availability of Amazon EC2 G6 instances powered by NVIDIA L4 Tensor Core GPUs. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases. G6 instances deliver up to 2x higher performance for deep learning inference and graphics workloads compared to Amazon EC2 G4dn instances. To learn more, visit the Amazon EC2 G6 instance page.

Mistral Large is now available in Amazon Bedrock — Veliswa wrote about the availability of the Mistral Large foundation model, as part of the Amazon Bedrock service. You can use Mistral Large to handle complex tasks that require substantial reasoning capabilities. In addition, Amazon Bedrock is now available in the Paris AWS Region.

Amazon Aurora zero-ETL integration with Amazon Redshift now in additional Regions — Zero-ETL integration announcements were my favorite launches last year. This zero-ETL integration simplifies transferring data between the two services, allowing customers to move data from Amazon Aurora to Amazon Redshift without manual extract, transform, and load (ETL) processes. With this announcement, zero-ETL integration between Amazon Aurora and Amazon Redshift is now supported in 11 additional Regions.

Announcing AWS Deadline Cloud — If you’re working on films, TV shows, commercials, games, or industrial design and handling complex rendering management for teams creating 2D and 3D visual assets, then you’ll be excited about AWS Deadline Cloud. This new managed service simplifies the deployment and management of render farms for media and entertainment workloads.

AWS Clean Rooms ML is Now Generally Available — Last year, I wrote about the preview of AWS Clean Rooms ML. In that post, I elaborated on a new capability of AWS Clean Rooms that helps you and your partners apply machine learning (ML) models on your collective data without copying or sharing raw data with each other. Now, AWS Clean Rooms ML is available for you to use.

Knowledge Bases for Amazon Bedrock now supports private network policies for OpenSearch Serverless — Here’s exciting news for you who are building with Amazon Bedrock. Now, you can implement Retrieval-Augmented Generation (RAG) with Knowledge Bases for Amazon Bedrock using Amazon OpenSearch Serverless (OSS) collections that have a private network policy.

Amazon EKS extended support for Kubernetes versions now generally available — If you’re running Kubernetes version 1.21 or higher, extended support for Kubernetes versions helps you stay up to date with the latest Kubernetes features and security improvements on Amazon EKS.

AWS Lambda Adds Support for Ruby 3.3 — Coding in Ruby? Now, AWS Lambda supports Ruby 3.3 as its runtime. This update allows you to take advantage of the latest features and improvements in the Ruby language.

Amazon EventBridge Console Enhancements — The Amazon EventBridge console has been updated with new features and improvements, making it easier for you to manage your event-driven applications with a better user experience.

Private Access to the AWS Management Console in Commercial Regions — If you need to restrict access to personal AWS accounts from the company network, you can use AWS Management Console Private Access. With this launch, you can use AWS Management Console Private Access in all commercial AWS Regions.

From community.aws
community.aws is a home for us builders to share what we learn while building on AWS. Here are my top three posts from last week:

Other AWS News 
Here are some additional news items, open-source projects, and Twitch shows that you might find interesting:

Build On Generative AI – Join Tiffany and Darko to learn more about generative AI, see their demos, and discuss different aspects of generative AI with guest speakers. Streaming every Monday on Twitch at 9:00 AM US PT.

AWS open source news and updates – If you’re looking for various open-source projects and tools from the AWS community, please read the AWS open-source newsletter maintained by my colleague, Ricardo.

Upcoming AWS events
Check your calendars and sign up for these AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Amsterdam (April 9), Sydney (April 10–11), London (April 24), Singapore (May 7), Berlin (May 15–16), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Dubai (May 29), Thailand (May 30), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore cloud security in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania for two-and-a-half days of immersive cloud security learning designed to help drive your business initiatives.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Poland (April 11), Bay Area (April 12), Kenya (April 20), and Turkey (May 18).

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Donnie

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Run and manage open source InfluxDB databases with Amazon Timestream

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/run-and-manage-open-source-influxdb-databases-with-amazon-timestream/

Starting today, you can use InfluxDB as a database engine in Amazon Timestream. This support makes it easy for you to run near real-time time-series applications using InfluxDB and open source APIs, including open source Telegraf agents that collect time-series observations.

Now you have two database engines to choose from in Timestream: Timestream for LiveAnalytics and Timestream for InfluxDB.

You should use the Timestream for InfluxDB engine if your use cases require near real-time time-series queries or specific features in InfluxDB, such as using Flux queries. Another option is the existing Timestream for LiveAnalytics engine, which is suitable if you need to ingest more than tens of gigabytes of time-series data per minute and run SQL queries on petabytes of time-series data in seconds.

With InfluxDB support in Timestream, you can use a managed instance that is automatically configured for optimal performance and availability. Furthermore, you can increase resiliency by configuring multi-Availability Zone support for your InfluxDB databases.

Timestream for InfluxDB and Timestream for LiveAnalytics complement each other for low-latency and large-scale ingestion of time-series data.

Getting started with Timestream for InfluxDB
Let me show you how to get started.

First, I create an InfluxDB instance. I navigate to the Timestream console, go to InfluxDB databases in Timestream for InfluxDB and select Create Influx database.

On the next page, I specify the database credentials for the InfluxDB instance.

I also specify my instance class in Instance configuration and the storage type and volume to suit my needs.

In the next part, I can choose a multi-AZ deployment, which synchronously replicates data to a standby database in a different Availability Zone or just a single instance of InfluxDB. In the multi-AZ deployment, if a failure is detected, Timestream for InfluxDB will automatically fail over to the standby instance without data loss.

Then, I configure how to connect to my InfluxDB instance in Connectivity configuration. Here, I have the flexibility to define the network type, virtual private cloud (VPC), subnets, and database port. I can also make my InfluxDB instance publicly accessible by specifying public subnets and setting public access to Publicly Accessible, which allows Amazon Timestream to assign a public IP address to my InfluxDB instance. If you choose this option, make sure that you have proper security measures in place to protect your InfluxDB instances.

In this demo, I set my InfluxDB instance as Not publicly accessible, which also means I can only access it through the VPC and subnets I defined in this section.

Once I configure my database connectivity, I can define the database parameter group and the log delivery settings. In Parameter group, I can define specific configurable parameters that I want to use for my InfluxDB database. In the log delivery settings, I can also define the Amazon Simple Storage Service (Amazon S3) bucket to which the system logs are exported. To learn more about the required AWS Identity and Access Management (IAM) policy for the Amazon S3 bucket, visit this page.

Once I’m happy with the configuration, I select Create Influx database.

Once my InfluxDB instance is created, I can see more information on the detail page.

With the InfluxDB instance created, I can also access the InfluxDB user interface (UI). If I had configured my instance as publicly accessible, I could access the UI directly from the console by selecting InfluxDB UI. However, as shown in the setup, I configured my InfluxDB instance as not publicly accessible, so I need to access the InfluxDB UI through SSH tunneling via an Amazon Elastic Compute Cloud (Amazon EC2) instance in the same VPC as my InfluxDB instance.

With the URL endpoint from the detail page, I navigate to the InfluxDB UI and use the username and password I configured in the creation process.

With access to the InfluxDB UI, I can now create a token to interact with my InfluxDB instance.

I can also use the Influx command line interface (CLI) to create a token. Before I can create the token, I create a configuration to interact with my InfluxDB instance. The following is the sample command to create a configuration:

influx config create --config-name demo \
    --host-url https://<TIMESTREAM for INFLUX DB ENDPOINT> \
    --org demo-org \
    --username-password [USERNAME] \
    --active

With the InfluxDB configuration created, I can now create an operator, all-access, or read/write token. The following is an example of creating an all-access token, which grants permissions to all resources in the organization that I defined:

influx auth create --org demo-org --all-access

With the required token for my use case, I can use various tools, such as the Influx CLI, Telegraf agent, and InfluxDB client libraries, to start ingesting data into my InfluxDB instance. Here, I’m using the Influx CLI to write sample home sensor data in the line protocol format, which you can also get from the InfluxDB documentation page.

influx write \
  --bucket demo-bucket \
  --precision s "
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
"

Finally, I can query the data using the InfluxDB UI. I navigate to the Data Explorer page in the InfluxDB UI, create a simple Flux script, and select Submit.
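Outside the UI, client libraries can run the same kind of query. Here’s a minimal sketch with the influxdb-client Python package, assuming the endpoint, token, organization, and bucket from the earlier steps; the Flux query itself is a simple illustrative example.

from influxdb_client import InfluxDBClient

# Endpoint, token, org, and bucket come from the setup steps above.
client = InfluxDBClient(
    url="https://<TIMESTREAM_FOR_INFLUXDB_ENDPOINT>:8086",
    token="<YOUR_TOKEN>",
    org="demo-org",
)

# Average temperature per room over the sample "home" data written above.
flux = '''
from(bucket: "demo-bucket")
  |> range(start: 2022-01-01T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "home" and r._field == "temp")
  |> mean()
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.values.get("room"), record.get_value())

client.close()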

Timestream for InfluxDB makes it easier for you to develop applications using InfluxDB, while continuing to use your existing tools to interact with the database. With the multi-AZ configuration, you can increase the availability of your InfluxDB data without worrying about the underlying infrastructure.

AWS and InfluxDB partnership
Celebrating this launch, here’s what Paul Dix, Founder and Chief Technology Officer at InfluxData, said about this partnership:

“The future of open source is powered by the public cloud—reaching the broadest community through simple entry points and practical user experience. Amazon Timestream for InfluxDB delivers on that vision. Our partnership with AWS turns InfluxDB open source into a force multiplier for real-time insights on time-series data, making it easier than ever for developers to build and scale their time-series workloads on AWS.”

Things to know
Here is some additional information that you need to know:

Availability – Timestream for InfluxDB is now generally available in the following AWS Regions: US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), and Europe (Frankfurt, Ireland, Stockholm).

Migration scenario – To migrate from a self-managed InfluxDB instance, you can simply restore a backup from an existing InfluxDB database into Timestream for InfluxDB. If you need to migrate from the existing Timestream for LiveAnalytics engine to Timestream for InfluxDB, you can leverage Amazon S3. Read more about migration for various use cases on the Migrating data from self-managed InfluxDB to Timestream for InfluxDB page.

Supported version – Timestream for InfluxDB currently supports the open source 2.7.5 version of InfluxDB.

Pricing – To learn more about pricing, please visit Amazon Timestream pricing.

Demo – To see Timestream for InfluxDB in action, have a look at this demo created by my colleague, Derek:

Start building time-series applications and dashboards with millisecond response times using Timestream for InfluxDB. To learn more, visit the Amazon Timestream for InfluxDB page.

Happy building!
Donnie

Mistral AI models now available on Amazon Bedrock

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/mistral-ai-models-now-available-on-amazon-bedrock/

Last week, we announced that Mistral AI models are coming to Amazon Bedrock. In that post, we elaborated on a few reasons why Mistral AI models may be a good fit for you. Mistral AI offers a balance of cost and performance, fast inference speed, transparency and trust, and is accessible to a wide range of users.

Today, we’re excited to announce the availability of two high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B, on Amazon Bedrock. Mistral AI is the 7th foundation model provider offering cutting-edge models in Amazon Bedrock, joining other leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. This integration provides you the flexibility to choose optimal high-performing foundation models in Amazon Bedrock.

Mistral 7B is the first foundation model from Mistral AI, supporting English text generation tasks with natural coding capabilities. It is optimized for low latency with a low memory requirement and high throughput for its size. Mixtral 8x7B is a popular, high-quality, sparse Mixture-of-Experts (MoE) model that is ideal for text summarization, question answering, text classification, text completion, and code generation.

Here’s a quick look at Mistral AI models on Amazon Bedrock:

Getting Started with Mistral AI Models
To get started with Mistral AI models in Amazon Bedrock, first you need to get access to the models. On the Amazon Bedrock console, select Model access, and then select Manage model access. Next, select Mistral AI models, and then select Request model access.

Once you have access to the selected Mistral AI models, you can test them with your prompts using Chat or Text in the Playgrounds section.

Programmatically Interact with Mistral AI Models
You can also use the AWS Command Line Interface (AWS CLI) and AWS SDKs to make various calls using Amazon Bedrock APIs. The following is sample code in Python that interacts with the Amazon Bedrock Runtime API using the AWS SDK:

import boto3
import json

bedrock = boto3.client(service_name="bedrock-runtime")

prompt = "<s>[INST] INSERT YOUR PROMPT HERE [/INST]"

body = json.dumps({
    "prompt": prompt,
    "max_tokens": 512,
    "top_p": 0.8,
    "temperature": 0.5,
})

modelId = "mistral.mistral-7b-instruct-v0:2"

accept = "application/json"
contentType = "application/json"

response = bedrock.invoke_model(
    body=body,
    modelId=modelId,
    accept=accept,
    contentType=contentType
)

print(json.loads(response.get('body').read()))

Mistral AI models in action
By integrating your application with AWS SDK to invoke Mistral AI models using Amazon Bedrock, you can unlock possibilities to implement various use cases. Here are a few of my personal favorite use cases using Mistral AI models with sample prompts. You can see more examples on Prompting Capabilities from the Mistral AI documentation page.

Text summarization — Mistral AI models extract the essence from lengthy articles so you quickly grasp key ideas and core messaging.

You are a summarization system. In clear and concise language, provide three short summaries in bullet points of the following essay.

# Essay:
{insert essay text here}

Personalization — The core AI capabilities of understanding language, reasoning, and learning allow Mistral AI models to personalize answers with more human-quality text. The accuracy, explanation capabilities, and versatility of Mistral AI models make them useful for personalization tasks, because they can deliver content that aligns closely with individual users.

You are a mortgage lender customer service bot, and your task is to create personalized email responses to address customer questions. Answer the customer's inquiry using the provided facts below. Ensure that your response is clear, concise, and directly addresses the customer's question. Address the customer in a friendly and professional manner. Sign the email with "Lender Customer Support."

# Facts
<INSERT FACTS AND INFORMATION HERE>

# Email
{insert customer email here}

Code completion — Mistral AI models have an exceptional understanding of natural language and code-related tasks, which is essential for projects that need to juggle computer code and regular language. Mistral AI models can help generate code snippets, suggest bug fixes, and optimize existing code, accelerating your development process.

[INST] You are a code assistant. Your task is to generate a valid JSON object based on the following properties:
name: 
lastname: 
address: 
Just generate the JSON object without explanations:
[/INST]

Things You Have to Know
Here is some additional information for you:

  • Availability — Mistral AI’s Mixtral 8x7B and Mistral 7B models in Amazon Bedrock are available in the US West (Oregon) Region.
  • Deep dive into Mistral 7B and Mixtral 8x7B — If you want to learn more about Mistral AI models on Amazon Bedrock, you might also enjoy this article titled “Mistral AI – Winds of Change” prepared by my colleague, Mike.

Now Available
Mistral AI models are available today in Amazon Bedrock, and we can’t wait to see what you’re going to build. Get yourself started by visiting Mistral AI on Amazon Bedrock.

Happy building,
Donnie

Mistral AI models coming soon to Amazon Bedrock

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/mistral-ai-models-coming-soon-to-amazon-bedrock/

Mistral AI, an AI company based in France, is on a mission to elevate publicly available models to state-of-the-art performance. They specialize in creating fast and secure large language models (LLMs) that can be used for various tasks, from chatbots to code generation.

We’re pleased to announce that two high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B, will be available soon on Amazon Bedrock. AWS is bringing Mistral AI to Amazon Bedrock as our 7th foundation model provider, joining other leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. With these two Mistral AI models, you will have the flexibility to choose the optimal, high-performing LLM for your use case to build and scale generative AI applications using Amazon Bedrock.

Overview of Mistral AI Models
Here’s a quick overview of these two highly anticipated Mistral AI models:

  • Mistral 7B is the first foundation model from Mistral AI, supporting English text generation tasks with natural coding capabilities. It is optimized for low latency with a low memory requirement and high throughput for its size. This model is powerful and supports various use cases from text summarization and classification, to text completion and code completion.
  • Mixtral 8x7B is a popular, high-quality sparse Mixture-of-Experts (MoE) model that is ideal for text summarization, question and answering, text classification, text completion, and code generation.

Choosing the right foundation model is key to building successful applications. Let’s have a look at a few highlights that demonstrate why Mistral AI models could be a good fit for your use case:

  • Balance of cost and performance — One prominent highlight is that Mistral AI models strike a remarkable balance between cost and performance. The use of sparse MoE makes these models efficient, affordable, and scalable, while keeping costs under control.
  • Fast inference speed — Mistral AI models have an impressive inference speed and are optimized for low latency. The models also have a low memory requirement and high throughput for their size. This feature matters most when you want to scale your production use cases.
  • Transparency and trust — Mistral AI models are transparent and customizable. This enables organizations to meet stringent regulatory requirements.
  • Accessible to a wide range of users — Mistral AI models are accessible to everyone. This helps organizations of any size integrate generative AI features into their applications.

Available Soon
Mistral AI’s publicly available models are coming soon to Amazon Bedrock. As usual, subscribe to this blog so that you will be among the first to know when these models are available on Amazon Bedrock.

Learn more

Stay tuned,
Donnie

AWS Weekly Roundup — AWS Lambda, AWS Amplify, Amazon OpenSearch Service, Amazon Rekognition, and more — December 18, 2023

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-lambda-aws-amplify-amazon-opensearch-service-amazon-rekognition-and-more-december-18-2023/

My memories of Amazon Web Services (AWS) re:Invent 2023 are still fresh even as I wrap up my activities in Jakarta after participating in AWS Community Day Indonesia. It was a great experience, from delivering chalk talks and having thoughtful discussions with AWS service teams, to meeting with AWS Heroes, AWS Community Builders, and AWS User Group leaders. AWS re:Invent brings the global AWS community together to learn, connect, and be inspired by innovation. For me, that spirit of connection is what makes AWS re:Invent always special.

Here’s a quick look at my highlights from AWS re:Invent and AWS Community Day Indonesia:

If you missed AWS re:Invent, you can watch the keynotes and sessions on demand. Also, check out the AWS News Editorial Team’s Top announcements of AWS re:Invent 2023 for all the major launches.

Recent AWS launches
Here are some of the launches that caught my attention in the past two weeks:

Query MySQL and PostgreSQL with AWS Amplify – In this post, Channy wrote how you can now connect your MySQL and PostgreSQL databases to AWS Amplify with just a few clicks. It generates a GraphQL API to query your database tables using AWS CDK.

Migration Assistant for Amazon OpenSearch Service – With this self-service solution, you can smoothly migrate from your self-managed clusters to Amazon OpenSearch Service managed clusters or serverless collections.

AWS Lambda simplifies connectivity to Amazon RDS and RDS Proxy – Now you can connect your AWS Lambda functions to Amazon RDS or RDS Proxy using the AWS Lambda console. With a guided workflow, this improvement helps minimize the complexity and effort of quickly launching a database instance and correctly connecting a Lambda function.

New no-code dashboard application to visualize IoT data – With this announcement, you can now visualize and interact with operational data from AWS IoT SiteWise using a new open source Internet of Things (IoT) dashboard.

Amazon Rekognition improves Face Liveness accuracy and user experience – This launch provides higher accuracy in detecting spoofed faces for your face-based authentication applications.

AWS Lambda supports additional concurrency metrics for improved quota monitoring – Add CloudWatch metrics for your Lambda quotas, to improve visibility into concurrency limits.

AWS Malaysia now supports 3D-Secure authentication – This launch enables 3DS2 transaction authentication required by banks and payment networks, facilitating your secure online payments.

Announcing AWS CloudFormation template generation for Amazon EventBridge Pipes – With this announcement, you can now streamline the deployment of your EventBridge resources with CloudFormation templates, accelerating event-driven architecture (EDA) development.

Enhanced data protection for CloudWatch Logs – With the enhanced data protection, CloudWatch Logs helps identify and redact sensitive data in your logs, preventing accidental exposure of personal data.

Send SMS via Amazon SNS in Asia Pacific – With this announcement, now you can use SMS messaging across Asia Pacific from the Jakarta Region.

Lambda adds support for Python 3.12 – This launch brings the latest Python version to your Lambda functions.

CloudWatch Synthetics upgrades Node.js runtime – Now you can use Node.js 16.1 runtimes for your canary functions.

Manage EBS Volumes for your EC2 fleets – This launch simplifies attaching and managing EBS volumes across your EC2 fleets.

See you next year!
This is the last AWS Weekly Roundup for this year, and we’d like to thank you for being our wonderful readers. We’ll be back to share more launches for you on January 8, 2024.

Happy holidays!

Donnie

IDE extension for AWS Application Composer enhances visual modern applications development with AI-generated IaC

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/ide-extension-for-aws-application-composer-enhances-visual-modern-applications-development-with-ai-generated-iac/

Today, I’m happy to share the integrated development environment (IDE) extension for AWS Application Composer. Now you can use AWS Application Composer directly in your IDE to visually build modern applications and iteratively develop your infrastructure as code templates with Amazon CodeWhisperer.

Announced in preview at AWS re:Invent 2022 and generally available since March 2023, Application Composer is a visual builder that makes it easier for developers to visualize, design, and iterate on an application architecture by dragging, grouping, and connecting AWS services on a visual canvas. Application Composer simplifies building modern applications by providing an easy-to-use visual drag-and-drop interface and generates IaC templates in real time.

AWS Application Composer also lets you work with AWS CloudFormation resources. In September, AWS Application Composer announced support for 1000+ AWS CloudFormation resources. This provides you the flexibility to define configuration for your AWS resources at a granular level.

Building modern applications with modern tools
The IDE extension for AWS Application Composer provides you with the same visual drag-and-drop experience and functionality as what it offers you in the console. Utilizing the visual canvas in your IDE means you can quickly prototype your ideas and focus on your application code.

With Application Composer running in your IDE, you can also use the various tools available in your IDE. For example, you can seamlessly integrate IaC templates generated real-time by Application Composer with AWS Serverless Application Model (AWS SAM) to manage and deploy your serverless applications.

In addition to making Application Composer available in your IDE, this extension lets you get generative AI–powered code suggestions in the CloudFormation template in real time while visualizing the application architecture in split view. You can pair and synchronize Application Composer’s visualization and CloudFormation template editing side by side in the IDE without context switching between consoles to iterate on your designs. This minimizes hand coding and increases your productivity.

Using AWS Application Composer in Visual Studio Code
First, I need to install the latest AWS Toolkit for Visual Studio Code plugin. If you already have the AWS Toolkit plugin installed, you only need to update the plugin to start using Application Composer.

To start using Application Composer, I don’t need to authenticate into my AWS account. With Application Composer available on my IDE, I can open my existing AWS CloudFormation or AWS SAM templates.

Another method is to create a new blank file, then right-click on the file and select Open with Application Composer to start designing my application visually.

This will provide me with a blank canvas. Here I have both code and visual editors at the same time to build a simple serverless API using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Any changes that I make on the canvas will also be reflected in real time on my IaC template.

I get a consistent experience, just as when I use the Application Composer console. For example, if I make modifications to my AWS Lambda function, the extension also creates the relevant files in my local folder.

With IaC templates available in my local folder, it’s easier for me to manage my applications with AWS SAM CLI. I can create continuous integration and continuous delivery (CI/CD) with sam pipeline or deploy my stack with sam deploy.

One of the features that accelerates my development workflow is the built-in Sync feature that seamlessly integrates with AWS SAM command sam sync. This feature syncs my local application changes to my AWS account, which is helpful for me to do testing and validation before I deploy my applications into a production environment.

Developing IaC templates with generative AI
With this new capability, I can use generative AI code suggestions to quickly get started with any of CloudFormation’s 1000+ resources. This also means that it’s now even easier to include standard IaC resources to extend my architecture.

For example, say I need to use Amazon MQ, which is a standard IaC resource, and I need to modify some configurations for its AWS CloudFormation resource using Application Composer. In the Resource configuration section, I change some values as needed, then choose Generate. Application Composer provides code suggestions that I can accept and incorporate into my IaC template.

This capability helps me improve my development velocity by eliminating context switching. I can design my modern applications on the AWS Application Composer canvas and use tools such as Amazon CodeWhisperer and AWS SAM to accelerate my development workflow.

Things to know
Here are a couple of things to note:

Supported IDE – At launch, this new capability is available for Visual Studio Code.

Pricing – The IDE extension for AWS Application Composer is available at no charge.

Get started with IDE extension for AWS Application Composer by installing the latest AWS Toolkit for Visual Studio Code.

Happy coding!
Donnie

AWS Clean Rooms Differential Privacy enhances privacy protection of your users data (preview)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-clean-rooms-differential-privacy-enhances-privacy-protection-of-your-users-data-preview/

Starting today, you can use AWS Clean Rooms Differential Privacy (preview) to help protect the privacy of your users with mathematically backed and intuitive controls in a few steps. As a fully managed capability of AWS Clean Rooms, no prior differential privacy experience is needed to help you prevent the reidentification of your users.

AWS Clean Rooms Differential Privacy obfuscates the contribution of any individual’s data in generating aggregate insights in collaborations so that you can run a broad range of SQL queries to generate insights about advertising campaigns, investment decisions, clinical research, and more.

Quick overview on differential privacy
Differential privacy is not new. It is a strong, mathematical definition of privacy compatible with statistical and machine learning based analysis, and has been used by the United States Census Bureau as well as companies with vast amounts of data.

Differential privacy helps with a wide variety of use cases involving large datasets, where adding or removing a few individuals has a small impact on the overall result, such as population analyses using count queries, histograms, benchmarking, A/B testing, and machine learning.

The following illustration shows how differential privacy works when it is applied to SQL queries.

When an analyst runs a query, differential privacy adds a carefully calibrated amount of error (also referred to as noise) to query results at run-time, masking the contribution of individuals while still keeping the query results accurate enough to provide meaningful insights. The noise is carefully fine-tuned to mask the presence or absence of any possible individual in the dataset.

Differential privacy also has another component called privacy budget. The privacy budget is a finite resource consumed each time a query is run and thus controls the number of queries that can be run on your datasets, helping ensure that the noise cannot be averaged out to reveal any private information about an individual. When the privacy budget is fully exhausted, no more queries can be run on your tables until it is increased or refreshed.
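To make those mechanics a little more concrete, here’s a small conceptual sketch of the general technique, not AWS Clean Rooms’ actual implementation: Laplace noise calibrated by an epsilon value masks a count, and each query draws down a finite budget. The epsilon values and the count are illustrative assumptions.

import numpy as np

TOTAL_EPSILON = 1.0      # overall privacy budget (illustrative)
EPSILON_PER_QUERY = 0.1  # budget consumed by each query (illustrative)
SENSITIVITY = 1          # one individual changes a count by at most 1

def noisy_count(count: int, epsilon: float) -> int:
    # Laplace mechanism: the noise scale grows as epsilon shrinks
    # (more privacy protection means more noise).
    noise = np.random.laplace(loc=0.0, scale=SENSITIVITY / epsilon)
    return round(count + noise)

budget_remaining = TOTAL_EPSILON
true_count = 3_227_643  # the overlap count from the demo below

for query in range(3):
    if budget_remaining < EPSILON_PER_QUERY:
        print("Privacy budget exhausted; no more queries allowed.")
        break
    budget_remaining -= EPSILON_PER_QUERY
    print(f"Query {query + 1}: {noisy_count(true_count, EPSILON_PER_QUERY)} "
          f"(budget remaining: {budget_remaining:.1f})")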

However, differential privacy is not easy to implement because this technique requires an in-depth understanding of mathematically rigorous formulas and theories to apply it effectively. Configuring differential privacy is also a complex task because customers need to calculate the right level of noise in order to preserve the privacy of their users without negatively impacting the utility of query results.

Customers also want to enable their partners to conduct a wide variety of analyses including highly complex and customized queries on their data. This requirement is hard to support with differential privacy because of the intricate nature of the calculations involved in calibrating the noise while processing various query components such as aggregations, joins, and transformations.

We created AWS Clean Rooms Differential Privacy to help you protect the privacy of your users with mathematically backed controls in a few clicks.

How differential privacy works in AWS Clean Rooms
While differential privacy is quite a sophisticated technique, AWS Clean Rooms Differential Privacy makes it easy for you to apply it and protect the privacy of your users with mathematically backed, flexible, and intuitive controls. You can begin using it with just a few steps after starting or joining an AWS Clean Rooms collaboration as a member with abilities to contribute data.

You create a configured table, which is a reference to your table in the AWS Glue Data Catalog, and choose to turn on differential privacy while adding a custom analysis rule to the configured table.

Next, you associate the configured table to your AWS Clean Rooms collaboration and configure a differential privacy policy in the collaboration to make your table available for querying. You can use a default policy to quickly complete the setup or customize it to meet your specific requirements. As part of this step, you will configure the following:

Privacy budget
Quantified as a value that we call epsilon, the privacy budget controls the level of privacy protection. It is a common, finite resource that is applied to all of your tables protected with differential privacy in the collaboration because the goal is to preserve the privacy of your users, whose information can be present in multiple tables. The privacy budget is consumed every time a query is run on your tables. You have the flexibility to increase the privacy budget value at any time during the collaboration and to have it automatically refresh each calendar month.

Noise added per query
Measured in terms of the number of users whose contributions you want to obscure, this input parameter governs the rate at which the privacy budget is depleted.

In general, you need to balance your privacy needs against the number of queries you want to permit and the accuracy of those queries. AWS Clean Rooms makes it easy for you to complete this step by helping you understand the resulting utility you are providing to your collaboration partner. You can also use the interactive examples to understand how your chosen settings would impact the results for different types of SQL queries.
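If you prefer to script this step, the privacy policy can also be created through the AWS Clean Rooms API. The following is a minimal AWS CLI sketch, not the exact setup used in this walkthrough: the membership identifier, epsilon value, and noise setting are placeholders, and the shorthand parameter syntax reflects my own environment, so check the CLI reference for the current shape.

# Sketch: create a differential privacy budget template for a collaboration membership.
# The membership ID and the parameter values below are placeholders.
aws cleanrooms create-privacy-budget-template \
    --membership-identifier <MEMBERSHIP_ID> \
    --auto-refresh CALENDAR_MONTH \
    --privacy-budget-type DIFFERENTIAL_PRIVACY \
    --parameters 'differentialPrivacy={epsilon=3,usersNoisePerQuery=30}'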

Now that you have successfully enabled differential privacy protection for your data, let’s see AWS Clean Rooms Differential Privacy in action. For this demo, let’s assume I am your partner in the AWS Clean Rooms collaboration.

Here, I’m running a query to count the number of overlapping customers and the result shows there are 3,227,643 values for tv.customer_id.

Now, if I run the same query again after removing records about an individual from the coffee_customers table, it shows a different result, 3,227,604 values for tv.customer_id. This variability in the query results prevents me from identifying individuals by observing differences between query results.

I can also see the impact of differential privacy, including the remaining queries I can run.
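For reference, the same kind of count query can also be submitted programmatically with the StartProtectedQuery API instead of the console. The sketch below is an assumption based on my setup: the membership identifier, join columns, and S3 output location are placeholders, and the shorthand syntax may differ from what your CLI version expects.

# Sketch: run a protected SQL query in the collaboration and write results to S3.
# All identifiers, table aliases, and the results bucket below are placeholders.
aws cleanrooms start-protected-query \
    --type SQL \
    --membership-identifier <MEMBERSHIP_ID> \
    --sql-parameters 'queryString=SELECT COUNT(DISTINCT tv.customer_id) FROM tv INNER JOIN coffee_customers c ON tv.customer_id = c.customer_id' \
    --result-configuration 'outputConfiguration={s3={resultFormat=CSV,bucket=<RESULTS_BUCKET>,keyPrefix=dp-demo/}}'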

Available for preview
Join this preview and start protecting the privacy of your users with AWS Clean Rooms Differential Privacy. During this preview period, you can use AWS Clean Rooms Differential Privacy wherever AWS Clean Rooms is available. To learn more on how to get started, visit the AWS Clean Rooms Differential Privacy page.

Happy collaborating!
Donnie

AWS Clean Rooms ML helps customers and partners apply ML models without sharing raw data (preview)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-clean-rooms-ml-helps-customers-and-partners-apply-ml-models-without-sharing-raw-data-preview/

Today, we’re introducing AWS Clean Rooms ML (preview), a new capability of AWS Clean Rooms that helps you and your partners apply machine learning (ML) models on your collective data without copying or sharing raw data with each other. With this new capability, you can generate predictive insights using ML models while continuing to protect your sensitive data.

During this preview, AWS Clean Rooms ML introduces its first model specialized to help companies create lookalike segments for marketing use cases. With AWS Clean Rooms ML lookalike, you can train your own custom model, and you can invite partners to bring a small sample of their records to collaborate and generate an expanded set of similar records while protecting everyone’s underlying data.

In the coming months, AWS Clean Rooms ML will release a healthcare model. This will be the first of many models that AWS Clean Rooms ML will support next year.

AWS Clean Rooms ML helps you unlock a variety of opportunities to generate insights. For example:

  • Airlines can take signals about loyal customers, collaborate with online booking services, and offer promotions to users with similar characteristics.
  • Auto lenders and car insurers can identify prospective auto insurance customers who share characteristics with a set of existing lease owners.
  • Brands and publishers can model lookalike segments of in-market customers and deliver highly relevant advertising experiences.
  • Research institutions and hospital networks can find candidates similar to existing clinical trial participants to accelerate clinical studies (coming soon).

AWS Clean Rooms ML lookalike modeling helps you apply an AWS managed, ready-to-use model that is trained in each collaboration to generate lookalike datasets in a few clicks, saving months of development work to build, train, tune, and deploy your own model.

How to use AWS Clean Rooms ML to generate predictive insights
Today I will show you how to use lookalike modeling in AWS Clean Rooms ML and assume you have already set up a data collaboration with your partner. If you want to learn how to do that, check out the AWS Clean Rooms Now Generally Available — Collaborate with Your Partners without Sharing Raw Data post.

With your collective data in the AWS Clean Rooms collaboration, you can work with your partners to apply ML lookalike modeling to generate a lookalike segment. It works by taking a small sample of representative records from your data, creating a machine learning (ML) model, and then applying that model to identify an expanded set of similar records from your business partner’s data.

The following screenshot shows the overall workflow for using AWS Clean Rooms ML.

By using AWS Clean Rooms ML, you don’t need to build complex and time-consuming ML models on your own. AWS Clean Rooms ML trains a custom, private ML model, which saves months of your time while still protecting your data.

Eliminating the need to share data
As ML models are natively built within the service, AWS Clean Rooms ML helps you protect your dataset and your customers’ information because you don’t need to share your data to build your ML model.

You can specify the training dataset using the AWS Glue Data Catalog table, which contains user-item interactions.

Under Additional columns to train, you can define numerical and categorical data. This is useful if you need to add more features to your dataset, such as the number of seconds spent watching a video, the topic of an article, or the product category of an e-commerce item.

Applying custom-trained AWS-built models
Once you have defined your training dataset, you can now create a lookalike model. A lookalike model is a machine learning model used to find similar profiles in your partner’s dataset without either party having to share their underlying data with each other.

When creating a lookalike model, you need to specify the training dataset. From a single training dataset, you can create many lookalike models. You also have the flexibility to define the date window in your training dataset using Relative range or Absolute range. This is useful when you have data that is constantly updated within AWS Glue, such as articles read by users.
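Model creation can also be scripted against the Clean Rooms ML API. The following is a rough sketch only: the service command name, parameter names, model name, and training dataset ARN all reflect my preview environment and are placeholders, so confirm them against the current Clean Rooms ML CLI reference before using this.

# Sketch: create a lookalike (audience) model from a previously defined training dataset.
# The model name, region, account ID, and dataset identifier below are placeholders.
aws cleanroomsml create-audience-model \
    --name my-lookalike-model \
    --training-dataset-arn arn:aws:cleanrooms-ml:us-east-1:111122223333:training-dataset/<TRAINING_DATASET_ID> \
    --description "Lookalike model trained on user-item interactions"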

Easy-to-tune ML models
After you create a lookalike model, you need to configure it to use in AWS Clean Rooms collaboration. AWS Clean Rooms ML provides flexible controls that enable you and your partners to tune the results of the applied ML model to garner predictive insights.

On the Configure lookalike model page, you can choose which Lookalike model you want to use and define the Minimum matching seed size you need. This seed size defines the minimum number of profiles in your seed data that overlap with profiles in the training data.

You also have the flexibility to choose whether the partner in your collaboration receives metrics in Metrics to share with other members.

With your lookalike models properly configured, you can now make the ML models available for your partners by associating the configured lookalike model with a collaboration.

Creating lookalike segments
Once the lookalike models have been associated, your partners can now start generating insights by selecting Create lookalike segment and choosing the associated lookalike model for your collaboration.

Here on the Create lookalike segment page, your partners need to provide the Seed profiles. Examples of seed profiles include your top customers or all customers who purchased a specific product. The resulting lookalike segment will contain profiles from the training data that are most similar to the profiles from the seed.

Lastly, your partner gets Relevance metrics for the lookalike segment generated by the ML model. At this stage, you can use the Score to make a decision.

Export data and use programmatic API
You also have the option to export the lookalike segment data. Once it’s exported, the data is available in JSON format and you can process this output by integrating with AWS Clean Rooms API and your applications.
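As a quick sketch of the programmatic side, once an export lands in Amazon S3 you can pull the files down and inspect them with standard tooling. The bucket, prefix, and file names below are placeholders, and the exact layout of the exported files depends on your export configuration.

# Sketch: download an exported lookalike segment and inspect the JSON output locally.
# Bucket name and prefix are placeholders for your own export destination.
aws s3 cp s3://<EXPORT_BUCKET>/<EXPORT_PREFIX>/ ./lookalike-segment/ --recursive

# Pretty-print one of the exported files (requires jq).
jq '.' ./lookalike-segment/<EXPORTED_FILE>.json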

Join the preview
AWS Clean Rooms ML is now in preview and available via AWS Clean Rooms in US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Seoul, Singapore, Sydney, Tokyo), and Europe (Frankfurt, Ireland, London). Support for additional models is in the works.

Learn how to apply machine learning with your partners without sharing underlying data on the AWS Clean Rooms ML page.

Happy collaborating!
— Donnie

New Amazon Q in QuickSight uses generative AI assistance for quicker, easier data insights (preview)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-amazon-q-in-quicksight-uses-generative-ai-assistance-for-quicker-easier-data-insights-preview/

Today, I’m happy to share that Amazon Q in QuickSight is available for preview. Now you can experience the Generative BI capabilities in Amazon QuickSight announced on July 26, as well as two additional capabilities for business users.

Turning insights into impact faster with Amazon Q in QuickSight
With this announcement, business users can now generate compelling, sharable stories examining their data, see executive summaries of dashboards that surface key insights in seconds, and confidently answer questions about data not covered by dashboards and reports with a reimagined Q&A experience.

Before we go deeper into each capability, here’s a quick summary:

  • Stories — This is a new and visually compelling way to present and share insights. Stories can be automatically generated in minutes using natural language prompts, customized using point-and-click options, and shared securely with others.
  • Executive summaries — With this new capability, Amazon Q helps you to understand key highlights in your dashboard.
  • Data Q&A — This capability provides a new and easy-to-use natural language Q&A experience to help you get answers to questions beyond what is available in existing dashboards and reports.

To get started, you need to enable Preview Q Generative Capabilities in Preview manager.

Once enabled, you’re ready to experience what Amazon Q in QuickSight brings for business users and business analysts building dashboards.

Stories automatically builds formatted narratives
Business users often need to share their findings from data with others to inform team decisions; this has historically involved taking data out of the business intelligence (BI) system. Stories are a new feature that enables business users to create beautifully formatted narratives that describe data and include visuals, images, and text in document or slide format, directly within QuickSight, where they can easily be shared with others.

Now, business users can use natural language to ask Amazon Q to build a story about their data by starting from the Amazon Q Build menu on an Amazon QuickSight dashboard. Amazon Q extracts data insights and statistics from selected visuals, then uses large language models (LLMs) to build a story in multiple parts, examining what the data may mean to the business and suggesting ideas to achieve specific goals.

For example, a sales manager can ask, “Build me a story about overall sales performance trends. Break down data by product and region. Suggest some strategies for improving sales.” Or, “Write a marketing strategy that uses regional sales trends to uncover opportunities that increase revenue.” Amazon Q will build a story exploring specific data insights, including strategies to grow sales.

Once built, business users get point-and-click tools augmented with artificial intelligence (AI)-driven rewriting capabilities to customize stories using a rich text editor to refine the message, add ideas, and highlight important details.

Stories can also be easily and securely shared with other QuickSight users by email.

Executive summaries deliver a quick snapshot of important information
Executive summaries are now available with a single click using the Amazon Q Build menu in Amazon QuickSight. Amazon QuickSight automatically determines interesting facts and statistics, then uses LLMs to write about interesting trends.

This new capability saves time in examining detailed dashboards by providing an at-a-glance view of key insights described using natural language.

The executive summaries feature provides two advantages. First, it helps business users generate all the key insights without the need to browse through tens of visuals on the dashboard and understand the changes in each. Second, it enables readers to find key insights in the context of dashboards and reports with minimal effort.

New data Q&A experience
Once an interesting insight is discovered, business users frequently need to dig in to understand data more deeply than they can from existing dashboards and reports. Natural language query (NLQ) solutions designed to solve this problem frequently expect that users already know what fields may exist or how they should be combined to answer business questions. However, business users aren’t always experts in underlying data schemas, and their questions frequently come in more general terms, like “How were sales last week in NY?” Or, “What’s our top campaign?”

The new Q&A experience, accessed within dashboards and reports, helps business users confidently answer questions about data. It includes AI-suggested questions, a profile of what data can be asked about, and automatically generated multi-visual answers with narrative summaries explaining the data context.

Furthermore, Amazon Q brings the ability to answer vague questions and offer alternatives for specific data. For example, customers can ask a vague question, such as “Top products,” and Amazon Q will provide an answer that breaks down products by sales and offers alternatives for products by customer count and products by profit. Amazon Q explains the answer context in a narrative that summarizes total sales and the number of products, and calls out the sales for the top product.

Customers can search for specific data values and even a single word, such as the product name “contactmatcher.” Amazon Q returns a complete set of data related to that product and provides a natural language breakdown explaining important insights like total units sold. Specific visuals from the answers can also be added to a pinboard for easy future access.

Watch the demo
To see these new capabilities in action, have a look at the demo.

Things to Know
Here are a few additional things that you need to know:

Join the preview
Amazon Q in QuickSight product page

Happy building!
— Donnie

Amazon Q brings generative AI-powered assistance to IT pros and developers (preview)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/amazon-q-brings-generative-ai-powered-assistance-to-it-pros-and-developers-preview/

Today, we are announcing the preview of Amazon Q, a new type of generative artificial intelligence (AI) powered assistant that is specifically for work and can be tailored to a customer’s business.

Amazon Q brings a set of capabilities to support developers and IT professionals. Now you can use Amazon Q to get started building applications on AWS, research best practices, resolve errors, and get assistance in coding new features for your applications. For example, Amazon Q Code Transformation can now perform Java application upgrades from version 8 or 11 to version 17.

Amazon Q is available in multiple areas of AWS to provide quick access to answers and ideas wherever you work. Here’s a quick look at Amazon Q, including in your integrated development environment (IDE):

Building applications together with Amazon Q
Application development is a journey. It involves a continuous cycle of researching, developing, deploying, optimizing, and maintaining. At each stage, there are many questions—from figuring out the right AWS services to use, to troubleshooting issues in the application code.

Trained on 17 years of AWS knowledge and best practices, Amazon Q is designed to help you at each stage of development with a new experience for building applications on AWS. With Amazon Q, you minimize the time and effort you need to gain the knowledge required to answer AWS questions, explore new AWS capabilities, learn unfamiliar technologies, and architect solutions that fuel innovation.

Let us show you some capabilities of Amazon Q.

1. Conversational Q&A capability
You can interact with the Amazon Q conversational Q&A capability to get started, learn new things, research best practices, and iterate on how to build applications on AWS without needing to shift focus away from the AWS console.

To start using this feature, you can select the Amazon Q icon on the right-hand side of the AWS Management Console.

For example, you can ask, “What are AWS serverless services to build serverless APIs?” Amazon Q provides concise explanations along with references you can use to follow up on your questions and validate the guidance. You can also use Amazon Q to follow up on and iterate on your questions, and it will respond with deeper answers and additional references.

There are times when we have questions for a use case with fairly specific requirements. With Amazon Q, you can elaborate on your use cases in more detail to provide context.

For example, you can ask Amazon Q, “I’m planning to create serverless APIs with 100k requests/day. Each request needs to look up data in the database. What are the best services for this workload?” Amazon Q responds with a list of AWS services you can use and limits its answer to services that can be accurately referenced and verified against best practices.

Here is some additional information that you might want to note:

2. Optimize Amazon EC2 instance selection
Choosing the right Amazon Elastic Compute Cloud (Amazon EC2) instance type for your workload can be challenging with all the options available. Amazon Q aims to make this easier by providing personalized recommendations.

To use this feature, you can ask Amazon Q, “Which instance families should I use to deploy a Web App Server for hosting an application?” This feature is also available when you choose to launch an instance in the Amazon EC2 console. In Instance type, you can select Get advice on instance type selection. This will show a dialog to define your requirements.

Your requirements are automatically translated into a prompt on the Amazon Q chat panel. Amazon Q returns a list of suggested EC2 instance types that are suitable for your use case. This capability helps you pick the right instance type and settings so your workloads run smoothly and more cost-efficiently.

This capability to provide EC2 instance type recommendations based on your use case is available in preview in all commercial AWS Regions.

3. Troubleshoot and solve errors directly in the console
Amazon Q can also help you solve errors for various AWS services directly in the console. With Amazon Q’s proposed solutions, you can avoid slow manual log checks and research.

Let’s say that you have an AWS Lambda function that tries to interact with an Amazon DynamoDB table but, for a reason that isn’t yet clear, fails to run. Now, with Amazon Q, you can troubleshoot and resolve this issue faster by selecting Troubleshoot with Amazon Q.

Amazon Q provides a concise analysis of the error, which helps you understand the root cause of the problem and the proposed resolution. With this information, you can follow the steps described by Amazon Q to fix the issue.

In just a few minutes, you will have a solution for your issue, saving significant time without disrupting your development workflow. The Amazon Q capability to help you troubleshoot errors in the console is available in preview in the US West (Oregon) Region for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon ECS, and AWS Lambda.

4. Network troubleshooting assistance
You can also ask Amazon Q to assist you in troubleshooting network connectivity issues caused by network misconfiguration in your current AWS account. For this capability, Amazon Q works with Amazon VPC Reachability Analyzer to check your connections and inspect your network configuration to identify potential issues.

This makes it easy to diagnose and resolve AWS networking problems: you can ask Amazon Q questions such as “Why can’t I SSH to my EC2 instance?” or “Why can’t I reach my web server from the Internet?”

Then, in the response text, you can select preview experience here to get explanations that help you troubleshoot network connectivity-related issues.
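Amazon Q builds on the same kinds of checks you can run yourself with VPC Reachability Analyzer. If you want to reproduce a “Why can’t I SSH to my EC2 instance?” analysis from the CLI, a sketch looks like the following; the instance, internet gateway, path, and analysis IDs are placeholders.

# Sketch: analyze reachability from an internet gateway to an EC2 instance on port 22.
# The resource IDs below are placeholders for your own environment.
aws ec2 create-network-insights-path \
    --source igw-0123456789abcdef0 \
    --destination i-0123456789abcdef0 \
    --protocol tcp \
    --destination-port 22

# Start the analysis using the path ID returned above, then inspect the findings.
aws ec2 start-network-insights-analysis \
    --network-insights-path-id nip-0123456789abcdef0
aws ec2 describe-network-insights-analyses \
    --network-insights-analysis-ids nia-0123456789abcdef0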

Here are a few things you need to know:

5. Integration and conversational capabilities within your IDEs
As we mentioned, Amazon Q is also available in supported IDEs. This allows you to ask questions and get help within your IDE by chatting with Amazon Q or invoking actions by typing / in the chat box.

To get started, you need to install or update the latest AWS Toolkit and sign in to Amazon CodeWhisperer. Once you’re signed in to Amazon CodeWhisperer, it will automatically activate the Amazon Q conversational capability in the IDE. With Amazon Q enabled, you can now start chatting to get coding assistance.

You can ask Amazon Q to describe your source code file.

From here, you can improve your application, for example, by integrating it with Amazon DynamoDB. You can ask Amazon Q, “Generate code to save data into DynamoDB table called save_data() accepting data parameter and return boolean status if the operation successfully runs.”

Once you’ve reviewed the generated code, you can do a manual copy and paste into the editor. You can also select Insert at cursor to place the generated code into the source code directly.

This feature makes it easy to stay focused on building applications because you don’t have to leave your IDE to get answers and context-specific coding guidance. You can try the preview of this feature in Visual Studio Code and JetBrains IDEs.

6. Feature development capability
Another exciting feature that Amazon Q provides is guiding you interactively from idea to building new features within your IDE and Amazon CodeCatalyst. You can go from a natural language prompt to application features in minutes, with interactive step-by-step instructions and best practices, right from your IDE. With a prompt, Amazon Q will attempt to understand your application structure and break down your prompt into logical, atomic implementation steps.

To use this capability, you can start by invoking an action command /dev in Amazon Q and describe the task you need Amazon Q to process.

Then, from here, you can review the plan, collaborate, and guide Amazon Q in the chat on the specific areas that need to be implemented.

Additional capabilities to help you ship features faster with complete pull requests are available if you’re using Amazon CodeCatalyst. In Amazon CodeCatalyst, you can assign a new or an existing issue to Amazon Q, and it will process an end-to-end development workflow for you. Amazon Q will review the existing code, propose a solution approach, seek feedback from you on the approach, generate merge-ready code, and publish a pull request for review. All you need to do afterward is review the proposed solutions from Amazon Q.

The following screenshots show a pull request created by Amazon Q in Amazon CodeCatalyst.

Here are a couple of things that you should know:

  • Amazon Q feature development capability is currently in preview in Visual Studio Code and Amazon CodeCatalyst
  • To use this capability in IDE, you need to have the Amazon CodeWhisperer Professional tier. Learn more on the Amazon CodeWhisperer pricing page.

7. Upgrade applications with Amazon Q Code Transformation
With Amazon Q, you can now upgrade an entire application within a few hours by starting a guided code transformation. This capability, called Amazon Q Code Transformation, simplifies maintaining, migrating, and upgrading your existing applications.

To start, navigate to the CodeWhisperer section and then select Transform. Amazon Q Code Transformation automatically analyzes your existing codebase, generates a transformation plan, and completes the key transformation tasks suggested by the plan.

Some additional information about this feature:

  • Amazon Q Code Transformation is available in preview today in the AWS Toolkit for IntelliJ IDEA and the AWS Toolkit for Visual Studio Code.
  • To use this capability, you need to have the Amazon CodeWhisperer Professional tier during the preview.
  • During preview, you can upgrade Java 8 and 11 applications to version 17, a Java Long-Term Support (LTS) release.

Get started with Amazon Q today
With Amazon Q, you have an AI expert by your side to answer questions, write code faster, troubleshoot issues, optimize workloads, and even help you code new features. These capabilities simplify every phase of building applications on AWS.

Amazon Q lets you engage with AWS Support agents directly from the Q interface if additional assistance is required, eliminating any dead ends in the customer’s self-service experience. The integration with AWS Support is available in the console and will honor the entitlements of your AWS Support plan.

Learn more

— Donnie & Channy

AWS Step Functions Workflow Studio is now available in AWS Application Composer

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-step-functions-workflow-studio-is-now-available-in-aws-application-composer/

Today, we’re announcing that AWS Step Functions Workflow Studio is now available in AWS Application Composer. This new integration brings together the development of workflows and application resources into a unified visual infrastructure as code (IaC) builder.

Now, you can have a seamless transition between authoring workflows with AWS Step Functions Workflow Studio and defining resources with AWS Application Composer. This announcement allows you to create and manage all resources at any stage of your development journey. You can visualize the full application in AWS Application Composer, then zoom into the workflow details with AWS Step Functions Workflow Studio—all within a single interface.

Seamlessly build workflow and modern application
To help you design and build modern applications, we launched AWS Application Composer in March 2023. With AWS Application Composer, you can use a visual builder to compose and configure serverless applications from AWS services backed by deployment-ready IaC.

In various use cases of building modern applications, you may also need to orchestrate microservices, automate mission-critical business processes, create event-driven applications that respond to infrastructure changes, or build machine learning (ML) pipelines. To solve these challenges, you can use AWS Step Functions, a fully managed service that makes it easier to coordinate distributed application components using visual workflows. To simplify workflow development, in 2021 we introduced AWS Step Functions Workflow Studio, a low-code visual tool for rapid workflow prototyping and development across 12,000+ API actions from over 220 AWS services.

While AWS Step Functions Workflow Studio brings simplicity to building workflows, customers who wanted to deploy workflows using IaC had to manually define their state machine resource and migrate their workflow definitions into the IaC template.

Better together: AWS Step Functions Workflow Studio in AWS Application Composer
With this new integration, you can now design AWS Step Functions workflows in AWS Application Composer using a drag-and-drop interface. This accelerates the path from prototyping to production deployment and iterating on existing workflows.

You can start by composing your modern application with AWS Application Composer. Within the canvas, you can add a workflow by adding an AWS Step Functions state machine resource. This new capability provides you with the ability to visually design and build a workflow with an intuitive interface to connect workflow steps to resources.

How it works
Let me walk you through how you can use AWS Step Functions Workflow Studio in AWS Application Composer. For this demo, let’s say that I need to improve handling e-commerce transactions by building a workflow and integrating with my existing serverless APIs.

First, I navigate to AWS Application Composer. Because I already have an existing project that includes application code and IaC templates from AWS Application Composer, I don’t need to build anything from scratch.

I open the menu and select Project folder to open the files in my local development machine.

Then, I select the path of my local folder, and AWS Application Composer automatically detects the IaC template that I currently have.

Then, AWS Application Composer visualizes the diagram in the canvas. What I really like about using this approach is that AWS Application Composer activates Local sync mode, which automatically syncs and saves any changes in IaC templates into my local project.

Here, I have a simple serverless API running on Amazon API Gateway, which invokes an AWS Lambda function and integrates with Amazon DynamoDB.

Now, I’m ready to make some changes to my serverless API. I configure another route on Amazon API Gateway and add an AWS Step Functions state machine to start building my workflow.

When I configure my Step Functions state machine, I can start editing my workflow by selecting Edit in Workflow Studio.

This opens Step Functions Workflow Studio within the AWS Application Composer canvas. I have the same experience as Workflow Studio in the AWS Step Functions console. I can use the canvas to add actions, flows, and patterns to my Step Functions state machine.

I start building my workflow, and here’s the result that I exported using Export PNG image in Workflow Studio.

But here’s where this new capability really helps me as a developer. In the workflow definition, I use various AWS resources, such as AWS Lambda functions and Amazon DynamoDB. If I need to reference the AWS resources I defined in AWS Application Composer, I can use an AWS CloudFormation substitution.

With AWS CloudFormation substitutions, I can add a substitution using an AWS CloudFormation convention, which is a dynamic reference to a value that is provided in the IaC template. I am using a placeholder substitution here so I can map it to an AWS resource in the AWS Application Composer canvas in a later step.

I can also define the AWS CloudFormation substitution for my Amazon DynamoDB table.

At this stage, I’m happy with my workflow. To review the Amazon States Language as my AWS Step Functions state machine definition, I can also open the Code tab. Now I don’t need to manually copy and paste this definition into IaC templates. I only need to save my work and choose Return to Application Composer.

Here, I can see that my AWS Step Functions state machine is updated both in the visual diagram and in the state machine definition section.

If I scroll down, I will find AWS CloudFormation Definition Substitutions for resources that I defined in Workflow Studio. I can manually replace the mapping here, or I can use the canvas.

To use the canvas, I simply drag and drop the respective resources in my Step Functions state machine and in the Application Composer canvas. Here, I connect the Inventory Process task state with a new AWS Lambda function. Also, my Step Functions state machine tasks can reference existing resources.

When I choose Template, the state machine definition is integrated with the other AWS Application Composer resources. With this IaC template, I can easily deploy the application using the AWS Serverless Application Model Command Line Interface (AWS SAM CLI) or AWS CloudFormation.
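Since Local sync mode keeps the template on my machine, deployment is a standard AWS SAM CLI run from the project folder. Here’s a minimal sketch; the template file name and stack name are placeholders for my project.

# Sketch: deploy the Application Composer template (including the state machine) with AWS SAM.
# The template file and stack name are placeholders; --resolve-s3 lets SAM manage the deployment bucket.
sam deploy \
    --template-file template.yaml \
    --stack-name ecommerce-workflow-demo \
    --capabilities CAPABILITY_IAM \
    --resolve-s3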

Things to know
Here is some additional information for you:

Pricing – The AWS Step Functions Workflow Studio in AWS Application Composer comes at no additional cost.

Availability – This feature is available in all AWS Regions where Application Composer is available.

AWS Step Functions Workflow Studio in AWS Application Composer provides you with an easy-to-use experience to integrate your workflow into modern applications. Get started and learn more about this feature on the AWS Application Composer page.

Happy building!
— Donnie

Amazon EKS Pod Identity simplifies IAM permissions for applications on Amazon EKS clusters

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/amazon-eks-pod-identity-simplifies-iam-permissions-for-applications-on-amazon-eks-clusters/

Starting today, you can use Amazon EKS Pod Identity to simplify how your applications access AWS services. This enhancement provides you with a seamless and easy-to-configure experience that lets you define the required IAM permissions for your applications in Amazon Elastic Kubernetes Service (Amazon EKS) clusters so they can connect to AWS services outside the cluster.

Amazon EKS Pod Identity helps you address the growing challenge of managing permissions across many of your EKS clusters.

Simplifying experience with Amazon EKS Pod Identity
In 2019, we introduced IAM roles for service accounts (IRSA). IRSA lets you associate an IAM role with a Kubernetes service account. This helps you to implement the principle of least privilege by giving pods only the permissions they need. This approach prioritizes pods in IAM and helps developers configure applications with fine-grained permissions that enable the least privileged access to AWS services.

Now, with Amazon EKS Pod Identity, it’s even easier to configure and automate granting AWS permissions to Kubernetes identities. As the cluster administrator, you no longer need to switch between Amazon EKS and IAM services to authenticate your applications to all AWS resources.

The overall workflow to start using Amazon EKS Pod Identity can be summarized in a few simple steps:

  • Step 1: Create an IAM role with required permissions for your application and specify pods.eks.amazonaws.com as the service principal in its trust policy.
  • Step 2: Install Amazon EKS Pod Identity Agent add-on using the Amazon EKS console or AWS Command Line Interface (AWS CLI).
  • Step 3: Map the role to a service account directly in the Amazon EKS console, APIs, or AWS CLI.

Once it’s done, any new pods that use that service account will automatically be configured to receive IAM credentials.
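A quick way to see this in action is to check the environment of a pod that uses the associated service account. This is a sketch only; the pod and namespace names are placeholders, and the exact variable names you see may vary by agent version.

# Sketch: confirm that the Pod Identity Agent injected container credential settings.
# Pod name and namespace are placeholders for your own workload.
kubectl exec -n <NAMESPACE> <POD_NAME> -- env | grep AWS_CONTAINER
# Look for variables such as AWS_CONTAINER_CREDENTIALS_FULL_URI and
# AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE, which the AWS SDKs use to fetch credentials.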

Let’s get started
Let me show you how you can get started with EKS Pod Identity. For the demo in this post, I need to configure permission for a simple API running in my Amazon EKS cluster, which will return the list of files in my Amazon Simple Storage Service (Amazon S3) bucket.

First, I need to create an IAM role to provide the required permissions so my applications can run properly. In my case, I need to configure permissions to access my S3 bucket.

Next, on the same IAM role, I need to configure its trust policy and configure the principal to pods.eks.amazonaws.com. The following is the IAM template that I use:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}
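If you prefer the AWS CLI over the console, creating the role from this trust policy and attaching the S3 permissions takes two commands. This is a sketch of my setup: the role name is a placeholder, and it assumes the trust policy above is saved locally as trust-policy.json.

# Sketch: create the role with the trust policy above (saved as trust-policy.json)
# and attach read-only access to Amazon S3. The role name is a placeholder.
aws iam create-role \
    --role-name eks-pod-identity-s3-demo \
    --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
    --role-name eks-pod-identity-s3-demo \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess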

At this stage, my IAM role is ready, and now we need to configure the Amazon EKS Pod Identity Agent in my cluster. For this article, I’m using my existing EKS cluster. If you want to learn how to do that, visit Getting started with Amazon EKS.

Moving on, I navigate to the Amazon EKS dashboard and then select my EKS cluster.

In my EKS cluster page, I need to select the Add-ons tab and then choose Get more add-ons.

Then, I need to add the Amazon EKS Pod Identity Agent add-on.

On the next page, I can add additional configuration if needed. In this case, I leave the default configuration and choose Next.

Then, I just need to review my add-on configuration and choose Create.

After a few minutes, the Amazon EKS Pod Identity Agent add-on is active for my cluster.

Once I have Amazon EKS Pod Identity in my cluster, I need to associate the IAM role to my Kubernetes pods.

I need to navigate to the Access tab in my EKS cluster. In the Pod Identity associations section, I select Create Pod Identity association to map my IAM role to Kubernetes pods.

Here, I use the IAM role that I created in the beginning. I also need to define my Kubernetes namespace and service account. If they don’t exist yet, I can type in the name of the namespace and service account. If they already exist, I can select them from the dropdown. Then, I choose Create.

Those are all the steps I need to do to configure IAM permissions for my applications running on Amazon EKS with EKS Pod Identity. Now, I can see my IAM role is listed in Pod Identity associations.

When I test my API running on Amazon EKS, it runs as expected and returns the list of files in my S3 bucket.

curl -X GET https://<API-URL> -H "Accept: application/json"

{
    "files": [
        "test-file-1.md",
        "test-file-2.md"
    ]
}

I found that Amazon EKS Pod Identity simplifies the experience of managing IAM roles for my applications running on Amazon EKS. I can easily reuse IAM roles across multiple EKS clusters without needing to update the role trust policy each time a new cluster is created.

New AWS APIs to configure EKS Pod Identity
You also have the flexibility to configure Amazon EKS Pod Identity for your cluster using AWS CLI. Amazon EKS Pod Identity provides a new set of APIs that you can use.

For example, I can use aws eks create-addon to install the Amazon EKS Pod Identity Agent add-on into my cluster. Here’s the AWS CLI command:

$ aws eks create-addon \
--cluster-name <CLUSTER_NAME> \
--addon-name eks-pod-identity-agent \
--addon-version v1.0.0-eksbuild.1

{
    "addon": {
        "addonName": "eks-pod-identity-agent",
        "clusterName": "<CLUSTER_NAME>",
        "status": "CREATING",
        "addonVersion": "v1.0.0-eksbuild.1",
        "health": {
            "issues": []
        },
        "addonArn": "<ARN>",
        "createdAt": 1697734297.597,
        "modifiedAt": 1697734297.612,
        "tags": {}
    }
}

Another example of what you can do with AWS APIs is to map the IAM role into your Kubernetes pods.

$ aws eks create-pod-identity-association \
  --cluster-name <CLUSTER_NAME> \
  --namespace <NAMESPACE> \
  --service-account <SERVICE_ACCOUNT_NAME> \
  --role-arn <IAM_ROLE_ARN>

Things to know

Availability – Amazon EKS Pod Identity is available in all AWS Regions supported by Amazon EKS, except the AWS GovCloud (US-East), AWS GovCloud (US-West), China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD).

Pricing – Amazon EKS Pod Identity is available at no charge.

Supported Amazon EKS clusters – Amazon EKS Pod Identity supports Amazon EKS clusters running Kubernetes version 1.24 and above. See EKS Pod Identity cluster versions for more information.

Supported AWS SDK versions – You need to update your application to use the latest AWS SDK versions. Check out AWS developer tools to find out how to install and update your AWS SDK.

Get started today and visit EKS Pod Identities documentation page to learn more about how to simplify IAM management for your applications.

Happy building!
Donnie

Amazon Managed Service for Prometheus collector provides agentless metric collection for Amazon EKS

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/amazon-managed-service-for-prometheus-collector-provides-agentless-metric-collection-for-amazon-eks/

Today, I’m happy to announce a new capability, Amazon Managed Service for Prometheus collector, to automatically and agentlessly discover and collect Prometheus metrics from Amazon Elastic Kubernetes Service (Amazon EKS). Amazon Managed Service for Prometheus collector consists of a scraper that discovers and collects metrics from Amazon EKS applications and infrastructure without needing to run any collectors in-cluster.

This new capability provides fully managed Prometheus-compatible monitoring and alerting with Amazon Managed Service for Prometheus. One of the significant benefits is that the collector is fully managed, automatically right-sized, and scaled for your use case. This means you don’t have to run any compute for collectors to collect the available metrics. This helps you optimize metric collection costs to monitor your applications and infrastructure running on EKS.

With this launch, Amazon Managed Service for Prometheus now supports two major modes of Prometheus metrics collection: AWS managed collection, a fully managed and agentless collector, and customer managed collection.

Getting started with Amazon Managed Service for Prometheus Collector
Let’s take a look at how to use AWS managed collectors to ingest metrics using this new capability into a workspace in Amazon Managed Service for Prometheus. Then, we will evaluate the collected metrics in Amazon Managed Service for Grafana.

When you create a new EKS cluster using the Amazon EKS console, you now have the option to enable AWS managed collector by selecting Send Prometheus metrics to Amazon Managed Service for Prometheus. In the Destination section, you can also create a new workspace or select your existing Amazon Managed Service for Prometheus workspace. You can learn more about how to create a workspace by following the getting started guide.

Then, you have the flexibility to define your scraper configuration using the editor or upload your existing configuration. The scraper configuration controls how you would like the scraper to discover and collect metrics. To see possible values you can configure, please visit the Prometheus Configuration page.

Once you’ve finished the EKS cluster creation, you can go to the Observability tab on your cluster page to see the list of scrapers running in your EKS cluster.

The next step is to configure your EKS cluster to allow the scraper to access metrics. You can find the steps and information on Configuring your Amazon EKS cluster.

Once your EKS cluster is properly configured, the collector will automatically discover metrics from your EKS cluster and nodes. To visualize the metrics, you can use Amazon Managed Grafana integrated with your Prometheus workspace. Visit the Set up Amazon Managed Grafana for use with Amazon Managed Service for Prometheus page to learn more.

The following is a screenshot of metrics ingested by the collectors and visualized in an Amazon Managed Grafana workspace. From here, you can run a simple query to get the metrics that you need.

Using AWS CLI and APIs
Besides using the Amazon EKS console, you can also use the APIs or AWS Command Line Interface (AWS CLI) to add an AWS managed collector. This approach is useful if you want to add an AWS managed collector into an existing EKS cluster or make some modifications to the existing collector configuration.

To create a scraper, you can run the following command:

aws amp create-scraper \
    --source eksConfiguration="{clusterArn=<EKS-CLUSTER-ARN>,securityGroupIds=[<SG-SECURITY-GROUP-ID>],subnetIds=[<SUBNET-ID>]}" \
    --scrape-configuration configurationBlob=<BASE64-CONFIGURATION-BLOB> \
    --destination=ampConfiguration={workspaceArn="<WORKSPACE_ARN>"}

You can get most of the parameter values from the respective AWS console, such as your EKS cluster ARN and your Amazon Managed Service for Prometheus workspace ARN. You also need to provide the scraper configuration as the configurationBlob parameter.
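As a starting point, a minimal sample-configuration.yml could look like the following sketch; the job name and scrape interval are placeholders, and the full set of supported options is described on the Prometheus Configuration page.

# Sketch: write a minimal scraper configuration that discovers and scrapes pods every 30 seconds.
# The job name and interval are placeholders; tailor relabeling and additional jobs to your cluster.
cat > sample-configuration.yml <<'EOF'
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
EOF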

Once you’ve defined the scraper configuration, you need to encode the configuration file into base64 before passing it in the API call. The following is the command that I use on my development machine to encode sample-configuration.yml into base64 and copy it to the clipboard (pbcopy is available on macOS; on Linux, you can pipe to a tool like xclip instead).

$ base64 sample-configuration.yml | pbcopy

Now Available
The Amazon Managed Service for Prometheus collector capability is now available to all AWS customers in all AWS Regions where Amazon Managed Service for Prometheus is supported.

Learn more:

Happy building!
Donnie

Amazon Aurora MySQL zero-ETL integration with Amazon Redshift is now generally available

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/amazon-aurora-mysql-zero-etl-integration-with-amazon-redshift-is-now-generally-available/

“Data is at the center of every application, process, and business decision,” wrote Swami Sivasubramanian, VP of Database, Analytics, and Machine Learning at AWS, and I couldn’t agree more. A common pattern customers use today is to build data pipelines to move data from Amazon Aurora to Amazon Redshift. These solutions help them gain insights to grow sales, reduce costs, and optimize their businesses.

To help you focus on creating value from data instead of preparing data for analysis, we announced Amazon Aurora zero-ETL integration with Amazon Redshift at AWS re:Invent 2022 and in public preview for Amazon Aurora MySQL-Compatible Edition in June 2023.

Now generally available: Amazon Aurora MySQL zero-ETL integration with Amazon Redshift
Today, we announced the general availability of Amazon Aurora MySQL zero-ETL integration with Amazon Redshift. With this fully managed solution, you no longer need to build and maintain complex data pipelines in order to derive time-sensitive insights from your transactional data to inform critical business decisions.

This zero-ETL integration between Amazon Aurora and Amazon Redshift unlocks opportunities for you to run near real-time analytics and machine learning (ML) on petabytes of transactional data in Amazon Redshift. As this data gets written into Aurora, it will be available in Amazon Redshift within seconds.

It also enables you to run consolidated analytics from multiple Aurora MySQL database clusters in Amazon Redshift to derive holistic insights across many applications or partitions. Amazon Aurora MySQL zero-ETL integration with Amazon Redshift processes over 1 million transactions per minute (an equivalent of 17.5 million insert/update/delete row operations per minute) from multiple Aurora databases and makes them available in Amazon Redshift in less than 15 seconds (p50 latency lag).

Furthermore, you can take advantage of the analytics and built-in ML capabilities of Amazon Redshift, such as materialized views, cross-Region data sharing, and federated access to multiple data stores and data lakes.

Let’s get started
In this article, I’ll highlight some steps along with information on how you can get started easily. I will use my existing Amazon Aurora MySQL serverless database and Amazon Redshift data warehouse.

To get started, I need to navigate to Amazon RDS and select Create zero-ETL integration on the Zero-ETL integrations page.

On the Create zero-ETL integration page, I need to follow a few steps to configure the integration for my Amazon Aurora database cluster and my Amazon Redshift data warehouse.

First, I define an identifier for my integration and select Next.

On the next page, I need to select the source database by selecting Browse RDS databases.

Here, I can select my existing database as the source.

The next step asks me for the target Amazon Redshift data warehouse. Here, I have the flexibility to choose an Amazon Redshift Serverless or RA3 data warehouse in my account or in a different account. I select Browse Redshift data warehouses.

Then, I choose the target data warehouse.

Because Amazon Aurora needs to replicate into the data warehouse, we need to add an additional resource policy and add the Aurora database as an authorized integration source in the Amazon Redshift data warehouse.

I can solve this by manually updating the resource policy in the Amazon Redshift console or by letting Amazon RDS fix it for me. I tick the checkbox.

The next page shows me the changes that Amazon RDS will make on my behalf. I select Continue.

On the next page, I can configure the tags and also the encryption. By default, zero-ETL integration encrypts your data using AWS Key Management Service (AWS KMS), and I have the option to use my own key.

Then, I need to review all the configurations and select Create zero-ETL integration to create the integration.
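The console steps above can also be scripted. A rough AWS CLI equivalent is shown below; the integration name and both ARNs are placeholders for my Aurora cluster and Redshift namespace, and you should confirm the current parameter names in the RDS CLI reference.

# Sketch: create the zero-ETL integration from the CLI.
# The integration name, Aurora cluster ARN, and Redshift namespace ARN are placeholders.
aws rds create-integration \
    --integration-name my-zero-etl-integration \
    --source-arn arn:aws:rds:us-east-1:111122223333:cluster:<AURORA_CLUSTER_ID> \
    --target-arn arn:aws:redshift-serverless:us-east-1:111122223333:namespace/<NAMESPACE_ID>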

After a few minutes, my zero-ETL integration is successfully created. Then, I switch to Amazon Redshift, and on the Zero-ETL integrations page, I can see my recently created zero-ETL integration.

Since the integration does not yet have a target database inside Amazon Redshift, I need to create one.

Now the integration configuration is complete. On this page, I can see the integration status is active, and there is one table that has been replicated.

For testing, I create a new table in my Amazon Aurora database and insert a record into this table.

Then I switch to the Redshift query editor v2 inside Amazon Redshift. Here I can connect to the database that I created as part of the integration. By running a simple query, I can see that my data is already available inside Amazon Redshift.
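If you’d rather check from the command line than the query editor, the Redshift Data API works too. Here’s a sketch; the workgroup, database, schema, and table names are placeholders from my setup.

# Sketch: query the replicated table through the Redshift Data API.
# Workgroup, database, schema, and table names are placeholders.
aws redshift-data execute-statement \
    --workgroup-name my-redshift-workgroup \
    --database zeroetl_db \
    --sql "SELECT * FROM <SCHEMA_NAME>.<TABLE_NAME> LIMIT 10"

# Fetch the results using the statement ID returned by the previous command.
aws redshift-data get-statement-result --id <STATEMENT_ID>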

I found this zero-ETL integration very convenient for two reasons. First, I could unify all data from multiple database clusters together and analyze it in aggregate. Second, within seconds of the transactional data being written into Amazon Aurora MySQL, this zero-ETL integration seamlessly made the data available in Amazon Redshift.

Things to know

Availability – Amazon Aurora zero-ETL integration with Amazon Redshift is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

Supported Database Engines – Amazon Aurora zero-ETL Integration with Amazon Redshift currently supports MySQL-compatible editions of Amazon Aurora. Support for Amazon Aurora PostgreSQL-Compatible Edition is a work in progress.

Pricing –  Amazon Aurora zero-ETL integration with Amazon Redshift is provided at no additional cost. You pay for existing Amazon Aurora and Amazon Redshift resources used to create and process the change data created as part of a zero-ETL integration.

We’re one step closer to helping you focus more on creating value from data instead of preparing it for analysis. To learn more on how to get started, please visit the Amazon Aurora MySQL zero-ETL integration with Amazon Redshift page.

Happy integrating!
— Donnie

New Customization Capability in Amazon CodeWhisperer Generates Even Better Suggestions (Preview)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-customization-capability-in-amazon-codewhisperer-generates-even-better-suggestions-preview/

An AI coding companion, such as Amazon CodeWhisperer, aims to improve developers’ productivity by helping them write code quickly and securely. However, in particular cases, developers need to have code recommendations based on their internal libraries and APIs they extensively use every day.

As most existing AI coding companion tools are trained only on open-source code, they lack the capability to customize code recommendations using private code repositories. This limitation presents a variety of challenges for developers. Developers have difficulty learning how to use internal libraries correctly and avoiding security problems. For large codebases, it can take hours of reading documentation to understand what code needs to be written to complete a task.

Now in Preview —  Amazon CodeWhisperer Customization Capability
Today, I’m excited to announce Amazon CodeWhisperer customization capability (in preview) that enables organizations to customize CodeWhisperer to generate specific code recommendations from private code repositories. With this feature, developers who are part of Amazon CodeWhisperer Professional tier can now receive real-time code recommendations that include their internal libraries, APIs, packages, classes, and methods.

Let’s say that you’re a developer working for a hypothetical food delivery company called AnyCompany. You’re given a task to process a list of unassigned food deliveries around the driver’s current location. Previously, CodeWhisperer would not know the correct internal APIs for processing unassigned food deliveries or getting the driver’s current location because this information isn’t publicly available.

Now, with the customization capability, you can ask CodeWhisperer to provide recommendations that include specific code related to the company’s internal services. The following screenshot shows how CodeWhisperer generates code based on the internal codebase just from a set of comments.

With the customization capability drawing on your internal codebase, CodeWhisperer now understands your intent, determines which internal and public APIs are best suited to the task, and generates code recommendations.

How It Works
The explanation above described how you can use CodeWhisperer customization capability as a developer. Now, let me share how it works and how you can get started. 

To create a customization, you need to complete the following steps as a CodeWhisperer administrator. 

  1. Administer your end users as CodeWhisperer administrator.
  2. Connect to existing repositories. You can connect one or more code repositories in your GitHub, GitLab, or Bitbucket account using AWS CodeStar Connections, or manually upload all of your code into an Amazon Simple Storage Service (Amazon S3) bucket.
  3. Create a customization. CodeWhisperer will customize its model based on your codebase.
  4. Activate the customization for your team members. Once the customization is created, you can review and manually activate the customization to make it available automatically in your team members’ IDEs.

This capability provides two main advantages: real-time customized code recommendations that are specific to your organization, and protection of valuable intellectual property. Organizations can now promote the use of code that meets their quality and security standards, based on the code in their existing repositories.

Furthermore, CodeWhisperer helps to ensure the security of your code by providing the option to encrypt your customization data using customer managed keys in AWS Key Management Service (AWS KMS). This customization data is deleted once the customization job finishes.

Let’s Get Started
Let me show you how you can use the Amazon CodeWhisperer customization capability.

To get started, I need to create a customization. I need to have administrator access to navigate to the Create customization page on the Amazon CodeWhisperer dashboard.

On the Create customization page, I can connect the private code repositories I want CodeWhisperer to train on. Currently, the CodeWhisperer customization capability supports connections to GitHub, GitLab, and Bitbucket via AWS CodeStar Connections. If I have code that is not in any code repository, I can also manually upload it to an S3 bucket and define the Amazon S3 URI.
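If you go the Amazon S3 route, uploading a local repository is a single command. This is a sketch only; the local folder, bucket name, and prefix are placeholders, and you may also want to exclude build artifacts.

# Sketch: upload a local repository to S3 as a customization data source.
# Folder, bucket, and prefix are placeholders; .git metadata is excluded from the upload.
aws s3 sync ./my-internal-repo s3://<CUSTOMIZATION_BUCKET>/codewhisperer/my-internal-repo/ \
    --exclude ".git/*"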

The following screenshot shows that I have existing connections with my code repositories using AWS CodeStar Connections. I can also create a new connection by selecting Create new connection.

Then, I can select Create Customization so CodeWhisperer can start training the model based on the code available in the connection. The duration of this process depends on the size of the code repositories.

When the customization is ready, CodeWhisperer will not activate it automatically. This gives me the flexibility to activate the customizations just when I need them. But before I demonstrate that, I’d like to explain the evaluation score.

In short, the evaluation score helps me to measure the customization’s accuracy in predicting and providing code recommendations based on the code in my repositories. It provides a score in one of three categories: 1) Very Good, with a score ranging from 7–10; 2) Fair, with a score ranging from 4–7; and 3) Poor, with a score ranging from 0–4. It’s recommended to activate the customization if the evaluation score is 6 or higher. If the evaluation score is less than desired, I need to make sure that I’m providing enough code for customization and provide a new code dataset that extensively contains references to internal APIs.

Here, I can see the Evaluation score for my customization is 8, and I’m happy with this result. Then, I can select Activate to start using this customization.

Once I have activated the customizations, I can define access to selected customizations by selecting Add users. Now, I can give access to the customizations to selected team members who have been added as users for the Amazon CodeWhisperer Professional tier. To do that, I can follow the guide on the Administering end users page.

Then, once my team members sign in via AWS Toolkit in their IDEs, they will see the available customizations and can start using them. 

With Amazon CodeWhisperer, I can create multiple customizations by providing different code repositories. This feature is useful if I want to build customizations for code recommendations for certain teams. 

As an administrator, I can also monitor the performance of each customization by navigating to the CodeWhisperer dashboard page. This page summarizes useful data such as user activity, how many lines of code were suggested by CodeWhisperer and accepted by my team members, and how many security scans have successfully been run from IDEs.

The Amazon CodeWhisperer customization capability supports the same IDEs as Amazon CodeWhisperer through the AWS Toolkit, such as Visual Studio Code, JetBrains IntelliJ, Visual Studio, and AWS Cloud9. It also supports the most popular programming languages, including Python, Java, JavaScript, TypeScript, and C#.

Join the Public Preview
By securely leveraging your internal codebase, Amazon CodeWhisperer unlocks the full potential of generative AI-powered coding that is customized to your unique requirements.

Join the public preview now and learn more on how to get started on the Amazon CodeWhisperer Customization page.

Happy coding!
Donnie

AWS Weekly Roundup: Amazon EC2 M2 Pro Mac, Amazon Corretto 21, Amazon CloudWatch Synthetics, and more (Sept. 25, 2023)

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-ec2-m2-pro-mac-amazon-coretto-21-amazon-cloudwatch-synthetics-and-more-sept-25-2023/

This week, I’m in Jakarta to support AWS User Group Indonesia and AWS Cloud Day Indonesia. Yesterday, I attended a community event – a collaboration between AWS User Group Indonesia and Hacktiv8 with “Innovating Yourself as Early-Stage Developers” as the main theme. We had a blast and I had a wonderful time connecting with speakers and developers.

Next up, AWS Cloud Day Indonesia. I’ll be at the Developer Lounge, come and say hi!

Last Week’s Launches
Here are some of the launches that caught my attention last week:

Add Your Swift Packages to AWS CodeArtifact – In this article, Seb describes how Swift developers who write code for Apple platforms (iOS, iPadOS, macOS, tvOS, watchOS, or visionOS) or for Swift applications running on the server side can use AWS CodeArtifact to securely store and retrieve their package dependencies. What I really like is how developers can still use standard developer tools, such as Xcode, xcodebuild, and the Swift Package Manager (the swift package command), to interact with AWS CodeArtifact and facilitate integration into the development workflow.

Amazon EC2 M2 Pro Mac Instances Built on Apple Silicon M2 Pro Mac Mini Computers – Channy wrote about how developers can use Amazon EC2 M2 Pro Mac instances to run memory-intensive build and test workloads, modernize their CI/CD, and accelerate their time to market. With 2x RAM, 1.5x CPU cores, and more than 2x GPU cores compared to EC2 M1 Mac instances, Apple developers can now run more tests in parallel using multiple Xcode simulators.

Synthetics Python runtime version 2.0 for Amazon CloudWatch Synthetics – With Amazon CloudWatch Synthetics, you can continually verify your customer experience and discover issues before your customers do by creating canaries. Canaries are configurable scripts that run on a schedule to monitor your endpoints and APIs. With this announcement, you can now use the Synthetics Python runtime version syn-python-selenium-2.0 to create canaries.
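As a rough sketch of what this looks like from the AWS CLI, the following creates a canary on the new runtime; the canary name, buckets, IAM role, and handler are placeholders I made up for illustration, not values from the announcement.

# Create a canary on the new Python Selenium runtime (names and ARNs are examples)
$ aws synthetics create-canary \
    --name demo-api-canary \
    --runtime-version syn-python-selenium-2.0 \
    --artifact-s3-location s3://my-canary-artifacts-bucket/ \
    --execution-role-arn arn:aws:iam::123456789012:role/my-canary-role \
    --schedule Expression="rate(5 minutes)" \
    --code S3Bucket=my-canary-code-bucket,S3Key=canary.zip,Handler=canary.handler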

Amazon QuickSight adds new layout and sparkline to KPI visual – Effortlessly design visually appealing KPIs on Amazon QuickSight with these new updates. QuickSight introduces a range of enhancements with a user-friendly experience, including templated KPI layouts, support for sparklines, improvements in conditional formatting, and a revamped format pane.

Amazon Location Service announces a price reduction of up to 75 percent for tracking and geofencing – Amazon Location Service just announced a four-tiered pricing model for tracking and geofencing to help you scale and cost-effectively run your operations and business. If you use geofencing, you might see your bill decrease by 20 percent to 70 percent, and tracking by up to 75 percent.

Amazon Corretto 21 is now generally available – Happy news for Java developers. Amazon Corretto 21 with long-term support (LTS) is generally available for Linux, Windows, and macOS.

AWS App Runner launches improvements for Auto-Scaling configuration management – Now you can use new APIs and parameters for the AWS App Runner service to manage your App Runner services and define your auto-scaling configurations (ASC). For example, you can set a default ASC, update an existing ASC, and list all App Runner services that are using an ASC resource.
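Here is a hedged sketch of how that workflow could look with the AWS CLI; the configuration name and capacity values are examples, and the subcommand names are my reading of the announced APIs, so treat them as assumptions and check the App Runner documentation.

# Create an auto-scaling configuration (name and values are examples)
$ aws apprunner create-auto-scaling-configuration \
    --auto-scaling-configuration-name my-asc \
    --max-concurrency 100 \
    --min-size 1 \
    --max-size 10

# Set it as the default for newly created services (assumed subcommand name)
$ aws apprunner update-default-auto-scaling-configuration \
    --auto-scaling-configuration-arn <asc-arn-from-the-previous-output>

# List services that use this configuration (assumed subcommand name)
$ aws apprunner list-services-for-auto-scaling-configuration \
    --auto-scaling-configuration-arn <asc-arn-from-the-previous-output>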

Amazon SNS message data protection with redaction or masking – With Amazon SNS, now you can discover and protect certain types of personally identifiable information (PII) and protected health information (PHI). You can define your data protection policies, and SNS will scan messages in real time for sensitive data.
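As a hedged sketch, a masking policy attached to a topic could look something like the following; the policy name, topic ARN, and data identifier are examples, and the exact policy shape is my assumption based on the SNS data protection policy format, so verify it against the documentation.

# data-protection-policy.json (example: mask email addresses found in inbound messages)
{
  "Name": "mask-pii-example",
  "Version": "2021-06-01",
  "Statement": [
    {
      "DataDirection": "Inbound",
      "Principal": ["*"],
      "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation": { "Deidentify": { "MaskConfig": {} } }
    }
  ]
}

# Attach the policy to a topic (topic ARN is an example)
$ aws sns put-data-protection-policy \
    --resource-arn arn:aws:sns:us-east-1:123456789012:my-topic \
    --data-protection-policy file://data-protection-policy.json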

Upcoming AWS and Community Events
Check your calendars and sign up for these AWS events:

And let’s learn from our fellow builders and join AWS Community Days:

  • AWS Community Day Zimbabwe (Sept. 30),
  • AWS Community Day Chile (Sept. 30),
  • AWS Community Day Bulgaria (Oct. 7).

Visit the landing page to check out all the upcoming AWS Community Days.

Happy building!
— Donnie

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

New — Deliver Interactive Real-Time Live Streams with Amazon IVS

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-deliver-interactive-real-time-live-streams-with-amazon-ivs/

Live streaming is becoming an increasingly popular way to connect customers with their favorite influencers and brands through interactive live video experiences. Our customers, DeNA and Rooter, rely on Amazon Interactive Video Service (Amazon IVS), a fully managed live streaming solution, to build engaging live stream and interactive video experiences for their audiences.

In March, we introduced Amazon IVS support for multiple hosts in live streams to provide further flexibility in building interactive experiences by using a resource called a stage. A stage is a virtual space where participants can exchange audio and video in real time.

However, latency is still a critical component to engaging audiences and enriching the overall experience. The lower the latency, the better it is to connect with live audiences in a direct and personal way. Previously, Amazon IVS supported real-time live streaming for up to 12 hosts via stages with around 3–5 seconds latency for viewers via channels. This latency gap restricts the ability to build interactive experiences with direct engagement for wider audiences.

Introducing Amazon IVS Real-Time Streaming
Today, I’m excited to share that with Amazon IVS Real-Time Streaming, you now can deliver real-time live streams to 10,000 viewers with up to 12 hosts from a stage, with latency that can be under 300 milliseconds from host to viewer.

This feature unlocks the opportunity for you to build interactive video experiences for social media applications or for latency-sensitive use cases like auctions.

Now you will no longer have to compromise to achieve real-time latency for viewers. You can avoid such workarounds as using multiple AWS services or external tools. Instead, you can simply use Amazon IVS as a centralized service to deliver real-time interactive live streams, and you don’t even need to enable anything on your account to start using this feature.

Deliver Real-Time Streams with the Amazon IVS Broadcast SDK
To deliver real-time streams, you need to interact with a stage resource and use the Amazon IVS Broadcast SDK available on iOS, Android, and web. With a stage, you can create a virtual space for participants to join as either viewers or hosts with real-time latency that can be under 300 ms.

You can use a stage to build an experience where hosts and viewers can go live together. For example, inviting viewers to become hosts and join other hosts in a Q&A session, delivering a singing competition, or having multiple guests in a talk show.

We published an overview on how to get started with a stage resource on the Add multiple hosts to live streams with Amazon IVS page. Let me do a quick refresher for the overall flow and how to interact with a stage resource.

First, you need to create a stage. You can do this via the console or programmatically using the Amazon IVS API. The following command is an example of how to create a stage using the create-stage API and AWS CLI.

$ aws ivs-realtime create-stage \
    --region us-east-1 \
    --name demo-realtime

{
    "stage": {
        "arn": "arn:aws:ivs:us-east-1:xyz:stage/mEvTj9PDyBwQ",
        "name": "demo-realtime",
        "tags": {}
    }
}

A key concept for a stage resource that enables participants to join as a host or a viewer is the participant token. A participant token is an authorization token that lets your participants publish or subscribe to a stage. When you’re using the create-stage API, you can also generate participant tokens and add additional information by using attributes, including custom user IDs and display names. The API responds with stage details and participant tokens.

$ aws ivs-realtime create-stage \
    --region us-east-1 \
    --name demo-realtime \
    --participant-token-configurations 'userId=test-1,capabilities=[PUBLISH,SUBSCRIBE],attributes={demo-attribute=test-1}'

{
    "participantTokens": [
        {
            "attributes": {
                "demo-attribute": "test-1"
            },
            "capabilities": [
                "PUBLISH",
                "SUBSCRIBE"
            ],
            "participantId": "p7HIfs3v9GIo",
            "token": "TOKEN",
            "userId": "test-1"
        }
    ],
    "stage": {
        "arn": "arn:aws:ivs:us-east-1:xyz:stage/mEvTj9PDyBwQ",
        "name": "demo-realtime",
        "tags": {}
    }
}

In addition to the create-stage API, you can also programmatically generate participant tokens using the create-participant-token API. Currently, there are two capability values for a participant token: PUBLISH and SUBSCRIBE. If you need to invite a participant to host, you need to add the PUBLISH capability while creating the participant token. With the PUBLISH capability, the participant can send their video and audio to the stage.

Here is an example of how you can generate a participant token.

$ aws ivs-realtime create-participant-token \
    --region us-east-1 \
    --capabilities PUBLISH \
    --stage-arn ARN \
    --user-id test-2

{
    "participantToken": {
        "capabilities": [
            "PUBLISH"
        ],
        "expirationTime": "2023-07-23T23:48:57+00:00",
        "participantId": "86KGafGbrXpK",
        "token": "TOKEN",
        "userId": "test-2"
    }
}

Once you have generated a participant token, you need to distribute it to your respective clients using, for example, a WebSocket message. Then, within your client applications using the Amazon IVS Broadcast SDK, you can use this participant token to let your users join the stage as hosts or viewers. To learn more about how you can interact with a stage resource, you can review the sample demos for iOS and Android, and the supporting serverless application for the real-time demo.

At this point, you’re able to deliver real-time live streams using a stage to 10,000 viewers. If you need to extend the stream to a wider audience, you can use your stage as the input for a channel and use the Amazon IVS Low-Latency Streaming capability. With a channel, you can deliver high concurrency video from a single source with low latency that can be under 5 seconds to millions of viewers. You can learn more on how to publish a stage to a channel on the Amazon IVS Broadcast SDK documentation page, which includes information for iOS, Android, and web.

Layered Encoding Feature for Amazon IVS Real-Time Streaming Capability
End users prefer a live stream with good quality. However, the quality of the live stream depends on various factors, such as the health of their network connections and device performance.

The most common scenario is that all viewers receive a single version of the video, regardless of their optimal viewing configuration. For example, if the host produces high-quality video, the live stream can be enjoyed by viewers with good connections, but viewers with slower connections would experience loading delays or even an inability to watch the video. Conversely, if the host can only produce low-quality video, viewers with good connections get lower-quality video than they could handle, while viewers with slower connections have a better experience.

To address this issue, with this announcement we also released the layered encoding feature for the Amazon IVS Real-Time Streaming capability. With layered encoding (also known as simulcast), when you publish to a stage, Amazon IVS automatically sends multiple variations of the video and audio. This ensures your viewers can continue to enjoy the stream at the best quality they can receive based on their network conditions.

Customer Voices
During the private preview period, we heard lots of feedback from our customers about Amazon IVS Real-Time Streaming.

Whatnot is a live stream shopping platform and marketplace that allows collectors and enthusiasts to connect with their community to buy and sell products they’re passionate about. “Scaling live video auctions to our global community is one of our major engineering challenges. Ensuring real-time latency is fundamental to maintaining the integrity and excitement of our auction experience. By leveraging Amazon IVS Real-Time Streaming, we can confidently scale our operations worldwide, assuring a seamless and high-quality real-time video experience across our entire user base, whether on web or mobile platforms,” said Ludo Antonov, VP of Engineering.

Available Now
Amazon IVS Real-Time Streaming is available in all AWS Regions where Amazon IVS is available. To use Amazon IVS Real-Time Streaming, you pay an hourly rate for the duration that you have hosts or viewers connected to the stage resource as a participant.

Learn more about benefits, use cases, how to get started, and pricing details for Amazon IVS’s Real-Time Streaming and Low-Latency Streaming capabilities on the Amazon IVS page.

Happy streaming!
Donnie