All posts by Danilo Poccia

AWS Weekly Roundup: Amazon Bedrock, AWS CodeBuild, Amazon CodeCatalyst, and more (April 29, 2024)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-bedrock-aws-codebuild-amazon-codecatalyst-and-more-april-29-2024/

This was a busy week for Amazon Bedrock with many new features! Using GitHub Actions with AWS CodeBuild is much easier. Also, Amazon Q in Amazon CodeCatalyst can now manage more complex issues.

I was amazed to meet so many new and old friends at the AWS Summit London. To give you a quick glimpse, here’s AWS Hero Yan Cui starting his presentation at the AWS Community stage.

AWS Community at the AWS Summit London 2024

Last week’s launches
With so many interesting new features, I start with generative artificial intelligence (generative AI) and then move to the other topics. Here’s what got my attention:

Amazon Bedrock – For supported architectures such as Llama, Mistral, or Flan T5, you can now import custom models and access them on demand. Model evaluation is now generally available to help you evaluate, compare, and select the best foundation models (FMs) for your specific use case. You can now access Meta’s Llama 3 models.

Agents for Amazon Bedrock – Agent creation is now simplified, and return of control lets you define an action schema and get control back to perform those actions without needing to create a specific AWS Lambda function. Agents also added support for Anthropic Claude 3 Haiku and Sonnet to help you build faster and more intelligent agents.

Knowledge Bases for Amazon Bedrock – You can now ingest data from up to five data sources and provide more complete answers. In the console, you can now chat with one of your documents without needing to set up a vector database (read more in this Machine Learning blog post).

Guardrails for Amazon Bedrock – The capability to implement safeguards based on your use cases and responsible AI policies is now available with new safety filters and privacy controls.

Amazon Titan – The new watermark detection feature is now generally available in Amazon Bedrock. In this way, you can identify images generated by Amazon Titan Image Generator using the invisible watermark present in all images it produces.

Amazon CodeCatalyst – Amazon Q can now split complex issues into separate, simpler tasks that can then be assigned to a user or back to Amazon Q. CodeCatalyst now also supports approval gates within a workflow. Approval gates pause a workflow that is building, testing, and deploying code so that a user can validate whether it should be allowed to proceed.

Amazon EC2 – You can now remove an automatically assigned public IPv4 address from an EC2 instance. If you no longer need the automatically assigned public IPv4 (for example, because you are migrating to using a private IPv4 address for SSH with EC2 instance connect), you can use this option to quickly remove the automatically assigned public IPv4 address and reduce your public IPv4 costs.

Network Load Balancer – Now supports Resource Map in the AWS Management Console, a tool that displays all your NLB resources and their relationships in a visual format on a single page. Note that Application Load Balancer already supports Resource Map in the console.

AWS CodeBuild – Now supports managed GitHub Action self-hosted runners. You can configure CodeBuild projects to receive GitHub Actions workflow job events and run them on CodeBuild ephemeral hosts.

Amazon Route 53 – You can now define a standard DNS configuration in the form of a Profile, apply this configuration to multiple VPCs, and share it across AWS accounts.

AWS Direct Connect – Hosted connections now support capacities up to 25 Gbps. Before, the maximum was 10 Gbps. Higher bandwidths simplify deployments of applications such as advanced driver assistance systems (ADAS), media and entertainment (M&E), artificial intelligence (AI), and machine learning (ML).

NoSQL Workbench for Amazon DynamoDB – A revamped operation builder user interface to help you better navigate, run operations, and browse your DynamoDB tables.

Amazon GameLift – Now supports, in preview, end-to-end development of containerized workloads, including deployment and scaling on premises, in the cloud, or in hybrid configurations. You can use containers for building, deploying, and running game server packages.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

GQL, the new ISO standard for graphs, has arrived – GQL, which stands for Graph Query Language, is the first new ISO database language since the introduction of SQL in 1987.

Authorize API Gateway APIs using Amazon Verified Permissions and Amazon Cognito – Externalizing authorization logic for application APIs can yield multiple benefits. Here’s an example of how to use Cedar policies to secure a REST API.

Build and deploy a 1 TB/s file system in under an hour – A very nice walkthrough of something that, until recently, was not so easy to do.

Let’s Architect! Discovering Generative AI on AWS – A new episode in this amazing series of posts that provides a broad introduction to the domain and then shares a mix of videos, blog posts, and hands-on workshops.

Building scalable, secure, and reliable RAG applications using Knowledge Bases for Amazon Bedrock – This post explores the new features (including AWS CloudFormation support) and how they align with the AWS Well-Architected Framework.

Using the unified CloudWatch Agent to send traces to AWS X-Ray – With added support for the collection of AWS X-Ray and OpenTelemetry traces, you can now provision a single agent to capture metrics, logs, and traces.

The executive’s guide to generative AI for sustainability – A guide for implementing a generative AI roadmap within sustainability strategies.

AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community. Check out Ricardo’s page for the latest updates.

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Singapore (May 7), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).

GOTO EDA Day London – Join us in London on May 14 to learn about event-driven architectures (EDA) for building highly scalable, fault-tolerant, and extensible applications. This conference is organized by GOTO, AWS, and partners.

Browse all upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Danilo

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Agents for Amazon Bedrock: Introducing a simplified creation and configuration experience

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/agents-for-amazon-bedrock-introducing-a-simplified-creation-and-configuration-experience/

With Agents for Amazon Bedrock, applications can use generative artificial intelligence (generative AI) to run tasks across multiple systems and data sources. Starting today, these new capabilities streamline the creation and management of agents:

Quick agent creation – You can now quickly create an agent and optionally add instructions and action groups later, providing flexibility and agility for your development process.

Agent builder – All agent configurations can now be accessed and edited in the new agent builder section of the console.

Simplified configuration – Action groups can use a simplified schema that just lists functions and parameters without having to provide an API schema.

Return of control – You can skip using an AWS Lambda function and return control to the application invoking the agent. In this way, the application can directly integrate with systems outside AWS or call internal endpoints hosted in any Amazon Virtual Private Cloud (Amazon VPC) without the need to integrate the required networking and security configurations with a Lambda function.

Infrastructure as code – You can use AWS CloudFormation to deploy and manage agents with the new simplified configuration, ensuring consistency and reproducibility across environments for your generative AI applications.

Let’s see how these enhancements work in practice.

Creating an agent using the new simplified console
To test the new experience, I want to build an agent that can help me reply to an email containing customer feedback. I can use generative AI, but a single invocation of a foundation model (FM) is not enough because I need to interact with other systems. To do that, I use an agent.

In the Amazon Bedrock console, I choose Agents from the navigation pane and then Create Agent. I enter a name for the agent (customer-feedback) and a description. Using the new interface, I proceed and create the agent without providing additional information at this stage.

Console screenshot.

I am now presented with the Agent builder, the place where I can access and edit the overall configuration of an agent. In the Agent resource role, I leave the default setting as Create and use a new service role so that the AWS Identity and Access Management (IAM) role assumed by the agent is automatically created for me. For the model, I select Anthropic and Claude 3 Sonnet.

Console screenshot.

In Instructions for the Agent, I provide clear and specific instructions for the task the agent has to perform. Here, I can also specify the style and tone I want the agent to use when replying. For my use case, I enter:

Help reply to customer feedback emails with a solution tailored to the customer account settings.

In Additional settings, I select Enabled for User input so that the agent can ask for additional details when it does not have enough information to respond. Then, I choose Save to update the configuration of the agent.

I now choose Add in the Action groups section. Action groups are the way agents can interact with external systems to gather more information or perform actions. I enter a name (retrieve-customer-settings) and a description for the action group:

Retrieve customer settings including customer ID.

The description is optional but, when provided, is passed to the model to help choose when to use this action group.

Console screenshot.

In Action group type, I select Define with function details so that I only need to specify functions and their parameters. The other option here (Define with API schemas) corresponds to the previous way of configuring action groups using an API schema.

Action group functions can be associated with a Lambda function call or configured to return control to the user or application invoking the agent so that they can provide a response to the function. The option to return control is useful for four main use cases:

  • When it’s easier to call an API from an existing application (for example, the one invoking the agent) than building a new Lambda function with the correct authentication and network configurations as required by the API
  • When the duration of the task goes beyond the maximum Lambda function timeout of 15 minutes so that I can handle the task with an application running in containers or virtual servers or use a workflow orchestration such as AWS Step Functions
  • When I have time-consuming actions because, with the return of control, the agent doesn’t wait for the action to complete before proceeding to the next step, and the invoking application can run actions asynchronously in the background while the orchestration flow of the agent continues
  • When I need a quick way to mock the interaction with an API during the development and testing of an agent

In Action group invocation, I can specify the Lambda function that will be invoked when this action group is identified by the model during orchestration. I can ask the console to quickly create a new Lambda function, select an existing Lambda function, or return control so that the user or application invoking the agent is asked to provide the details needed to generate a response. I select Return Control to show how that works in the console.

Console screenshot.

I configure the first function of the action group. I enter a name (retrieve-customer-settings-from-crm) and the following description for the function:

Retrieve customer settings from CRM including customer ID using the customer email in the sender/from fields of the email.

Console screenshot.

In Parameters, I add email with Customer email as the description. This is a parameter of type String and is required by this function. I choose Add to complete the creation of the action group.
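
If you prefer to script this configuration instead of using the console, the same action group can be created with the bedrock-agent API. Here’s a minimal sketch using the AWS SDK for Python (Boto3); the agent ID is a placeholder, and the exact shape of the function schema and of the return-of-control executor is my reading of the new simplified configuration, so verify it against the CreateAgentActionGroup API reference.

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Hypothetical agent ID; in practice, use the ID returned when creating the agent.
AGENT_ID = "AGENT1234"

# Sketch: create the retrieve-customer-settings action group with the simplified
# function details schema and return of control (no Lambda function needed).
bedrock_agent.create_agent_action_group(
    agentId=AGENT_ID,
    agentVersion="DRAFT",
    actionGroupName="retrieve-customer-settings",
    description="Retrieve customer settings including customer ID.",
    actionGroupExecutor={"customControl": "RETURN_CONTROL"},
    functionSchema={
        "functions": [
            {
                "name": "retrieve-customer-settings-from-crm",
                "description": (
                    "Retrieve customer settings from CRM including customer ID "
                    "using the customer email in the sender/from fields of the email."
                ),
                "parameters": {
                    "email": {
                        "description": "Customer email",
                        "type": "string",
                        "required": True,
                    }
                },
            }
        ]
    },
)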

Because, for my use case, I expect many customers to have issues when logging in, I add another action group (named check-login-status) with the following description:

Check customer login status.

This time, I select the option to create a new Lambda function so that I can handle these requests in code.

For this action group, I configure a function (named check-customer-login-status-in-login-system) with the following description:

Check customer login status in login system using the customer ID from settings.

In Parameters, I add customer_id, another required parameter of type String. Then, I choose Add to complete the creation of the second action group.

When I open the configuration of this action group, I see the name of the Lambda function that has been created in my account. There, I choose View to open the Lambda function in the console.

Console screenshot.

In the Lambda console, I edit the starting code that has been provided and implement my business case:

import json

def lambda_handler(event, context):
    print(event)
    
    agent = event['agent']
    actionGroup = event['actionGroup']
    function = event['function']
    parameters = event.get('parameters', [])

    # Execute your business logic here. For more information,
    # refer to: https://docs.aws.amazon.com/bedrock/latest/userguide/agents-lambda.html
    if actionGroup == 'check-login-status' and function == 'check-customer-login-status-in-login-system':
        response = {
            "status": "unknown"
        }
        for p in parameters:
            if p['name'] == 'customer_id' and p['type'] == 'string' and p['value'] == '12345':
                response = {
                    "status": "not verified",
                    "reason": "the email address has not been verified",
                    "solution": "please verify your email address"
                }
    else:
        response = {
            "error": "Unknown action group {} or function {}".format(actionGroup, function)
        }
    
    responseBody =  {
        "TEXT": {
            "body": json.dumps(response)
        }
    }

    action_response = {
        'actionGroup': actionGroup,
        'function': function,
        'functionResponse': {
            'responseBody': responseBody
        }

    }

    dummy_function_response = {'response': action_response, 'messageVersion': event['messageVersion']}
    print("Response: {}".format(dummy_function_response))

    return dummy_function_response

I choose Deploy in the Lambda console. The function is configured with a resource-based policy that allows Amazon Bedrock to invoke the function. For this reason, I don’t need to update the IAM role used by the agent.

I am ready to test the agent. Back in the Amazon Bedrock console, with the agent selected, I look for the Test Agent section. There, I choose Prepare to prepare the agent and test it with the latest changes.

As input to the agent, I provide this sample email:

From: [email protected]

Subject: Problems logging in

Hi, when I try to log into my account, I get an error and cannot proceed further. Can you check? Thank you, Danilo

In the first step, the agent orchestration decides to use the first action group (retrieve-customer-settings) and function (retrieve-customer-settings-from-crm). This function is configured to return control, and in the console, I am asked to provide the output of the action group function. The customer email address is provided as the input parameter.

Console screenshot.

To simulate an interaction with an application, I reply in JSON syntax and choose Submit:

{ "customer id": 12345 }

In the next step, the agent has the information required to use the second action group (check-login-status) and function (check-customer-login-status-in-login-system) to call the Lambda function. In return, the Lambda function provides this JSON payload:

{
  "status": "not verified",
  "reason": "the email address has not been verified",
  "solution": "please verify your email address"
}

Using this content, the agent can complete its task and suggest the correct solution for this customer.
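
Outside of the console, an application handles return of control programmatically with the InvokeAgent API. Here’s a hedged sketch using the boto3 bedrock-agent-runtime client; the agent ID and alias ID are placeholders, and the event and field names reflect my understanding of the return-of-control flow, so check them against the InvokeAgent API reference.

import json
import uuid
import boto3

runtime = boto3.client("bedrock-agent-runtime")

# Placeholders: use your own agent ID and alias ID.
AGENT_ID = "AGENT1234"
AGENT_ALIAS_ID = "ALIAS1234"
SESSION_ID = str(uuid.uuid4())


def consume(response):
    """Print completion chunks and return the returnControl event, if any."""
    return_control = None
    for event in response["completion"]:
        if "chunk" in event:
            print(event["chunk"]["bytes"].decode(), end="")
        elif "returnControl" in event:
            return_control = event["returnControl"]
    return return_control


# First call: pass the sample customer email as the input text.
rc = consume(runtime.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId=SESSION_ID,
    inputText="Subject: Problems logging in - Hi, when I try to log into my account, I get an error...",
))

if rc:
    # The agent returned control: the application performs the CRM lookup itself
    # (hard-coded here) and sends the result back to continue the orchestration.
    consume(runtime.invoke_agent(
        agentId=AGENT_ID,
        agentAliasId=AGENT_ALIAS_ID,
        sessionId=SESSION_ID,
        sessionState={
            "invocationId": rc["invocationId"],
            "returnControlInvocationResults": [{
                "functionResult": {
                    "actionGroup": "retrieve-customer-settings",
                    "function": "retrieve-customer-settings-from-crm",
                    "responseBody": {"TEXT": {"body": json.dumps({"customer id": 12345})}},
                }
            }],
        },
    ))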

Console screenshot.

I am satisfied with the result, but I want to know more about what happened under the hood. I choose Show trace, where I can see the details of each step of the agent orchestration. This helps me understand the agent’s decisions and correct the configuration of the action groups if they are not used as I expect.

Console screenshot.

Things to know
You can use the new simplified experience to create and manage Agents for Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) AWS Regions.

You can now create an agent without having to specify an API schema or provide a Lambda function for the action groups. You just need to list the parameters that the action group needs. When invoking the agent, you can choose to return control with the details of the operation to perform so that you can handle the operation in your existing applications, for example when the duration is longer than the maximum Lambda function timeout.

CloudFormation support for Agents for Amazon Bedrock has been released recently and is now being updated to support the new simplified syntax.

To learn more:

Danilo

Import custom models in Amazon Bedrock (preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/import-custom-models-in-amazon-bedrock-preview/

With Amazon Bedrock, you have access to a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies that make it easier to build and scale generative AI applications. Some of these models provide publicly available weights that can be fine-tuned and customized for specific use cases. However, deploying customized FMs in a secure and scalable way is not an easy task.

Starting today, Amazon Bedrock adds in preview the capability to import custom weights for supported model architectures (such as Meta Llama 2, Llama 3, and Mistral) and serve the custom model using On-Demand mode. You can import models with weights in Hugging Face safetensors format from Amazon SageMaker and Amazon Simple Storage Service (Amazon S3).

In this way, you can use Amazon Bedrock with existing customized models such as Code Llama, a code-specialized version of Llama 2 that was created by further training Llama 2 on code-specific datasets, or use your data to fine-tune models for your own unique business case and import the resulting model in Amazon Bedrock.

Let’s see how this works in practice.

Bringing a custom model to Amazon Bedrock
In the Amazon Bedrock console, I choose Imported models from the Foundation models section of the navigation pane. Now, I can create a custom model by importing model weights from an Amazon Simple Storage Service (Amazon S3) bucket or from an Amazon SageMaker model.

I choose to import model weights from an S3 bucket. In another browser tab, I download the MistralLite model from the Hugging Face website using this pull request (PR) that provides weights in safetensors format. The pull request is currently Ready to merge, so it might be part of the main branch when you read this. MistralLite is a fine-tuned Mistral-7B-v0.1 language model with enhanced capabilities for processing long contexts of up to 32K tokens.

When the download is complete, I upload the files to an S3 bucket in the same AWS Region where I will import the model. Here are the MistralLite model files in the Amazon S3 console:

Console screenshot.

Back at the Amazon Bedrock console, I enter a name for the model and keep the proposed import job name.

Console screenshot.

I select Model weights in the Model import settings and browse S3 to choose the location where I uploaded the model weights.

Console screenshot.

To authorize Amazon Bedrock to access the files on the S3 bucket, I select the option to create and use a new AWS Identity and Access Management (IAM) service role. I use the View permissions details link to check what will be in the role. Then, I submit the job.

About ten minutes later, the import job is completed.

Console screenshot.

Now, I see the imported model in the console. The list also shows the model Amazon Resource Name (ARN) and the creation date.

Console screenshot.

I choose the model to get more information, such as the S3 location of the model files.

Console screenshot.

In the model detail page, I choose Open in playground to test the model in the console. In the text playground, I type a question using the prompt template of the model:

<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>

The imported MistralLite model is quick to reply and describes some of those challenges.

Console screenshot.

In the playground, I can tune responses for my use case using configurations such as temperature and maximum length or add stop sequences specific to the imported model.

To see the syntax of the API request, I choose the three small vertical dots at the top right of the playground.

Console screenshot.

I choose View API syntax and run the command using the AWS Command Line Interface (AWS CLI):

aws bedrock-runtime invoke-model \
--model-id arn:aws:bedrock:us-east-1:123412341234:imported-model/a82bkefgp20f \
--body "{\"prompt\":\"<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>\",\"max_tokens\":512,\"top_k\":200,\"top_p\":0.9,\"stop\":[],\"temperature\":0.5}" \
--cli-binary-format raw-in-base64-out \
--region us-east-1 \
invoke-model-output.txt

The output is similar to what I got in the playground. As you can see, for imported models, the model ID is the ARN of the imported model. I can use the model ID to invoke the imported model with the AWS CLI and AWS SDKs.
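
For example, here’s a small sketch of the same invocation using the AWS SDK for Python (Boto3), with the placeholder model ARN from the CLI example above:

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder ARN; use the ARN of your imported model.
model_id = "arn:aws:bedrock:us-east-1:123412341234:imported-model/a82bkefgp20f"

body = {
    "prompt": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
    "max_tokens": 512,
    "top_k": 200,
    "top_p": 0.9,
    "stop": [],
    "temperature": 0.5,
}

response = bedrock_runtime.invoke_model(modelId=model_id, body=json.dumps(body))
print(json.loads(response["body"].read()))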

Things to know
You can bring your own weights for supported model architectures to Amazon Bedrock in the US East (N. Virginia) AWS Region. The model import capability is currently available in preview.

When using custom weights, Amazon Bedrock serves the model with On-Demand mode, and you only pay for what you use with no time-based term commitments. For detailed information, see Amazon Bedrock pricing.

The ability to import models is managed using AWS Identity and Access Management (IAM), and you can allow this capability only to the roles in your organization that need to have it.

With this launch, it’s now easier to build and scale generative AI applications using custom models with security and privacy built in.

To learn more:

Danilo

Run large-scale simulations with AWS Batch multi-container jobs

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/run-large-scale-simulations-with-aws-batch-multi-container-jobs/

Industries like automotive, robotics, and finance are increasingly implementing computational workloads like simulations, machine learning (ML) model training, and big data analytics to improve their products. For example, automakers rely on simulations to test autonomous driving features, robotics companies train ML algorithms to enhance robot perception capabilities, and financial firms run in-depth analyses to better manage risk, process transactions, and detect fraud.

Some of these workloads, including simulations, are especially complicated to run due to their diversity of components and intensive computational requirements. A driving simulation, for instance, involves generating 3D virtual environments, vehicle sensor data, vehicle dynamics controlling car behavior, and more. A robotics simulation might test hundreds of autonomous delivery robots interacting with each other and other systems in a massive warehouse environment.

AWS Batch is a fully managed service that can help you run batch workloads across a range of AWS compute offerings, including Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and Amazon EC2 Spot or On-Demand Instances. Traditionally, AWS Batch only allowed single-container jobs and required extra steps to merge all components into a monolithic container. It also did not allow using separate “sidecar” containers, which are auxiliary containers that complement the main application by providing additional services like data logging. This additional effort required coordination across multiple teams, such as software development, IT operations, and quality assurance (QA), because any code change meant rebuilding the entire container.

Now, AWS Batch offers multi-container jobs, making it easier and faster to run large-scale simulations in areas like autonomous vehicles and robotics. These workloads are usually divided between the simulation itself and the system under test (also known as an agent) that interacts with the simulation. These two components are often developed and optimized by different teams. With the ability to run multiple containers per job, you get the advanced scaling, scheduling, and cost optimization offered by AWS Batch, and you can use modular containers representing different components like 3D environments, robot sensors, or monitoring sidecars. In fact, customers such as IPG Automotive, MORAI, and Robotec.ai are already using AWS Batch multi-container jobs to run their simulation software in the cloud.

Let’s see how this works in practice using a simplified example and have some fun trying to solve a maze.

Building a Simulation Running on Containers
In production, you will probably use existing simulation software. For this post, I built a simplified version of an agent/model simulation. If you’re not interested in code details, you can skip this section and go straight to how to configure AWS Batch.

For this simulation, the world to explore is a randomly generated 2D maze. The agent has the task to explore the maze to find a key and then reach the exit. In a way, it is a classic pathfinding problem with three locations.

Here’s a sample map of a maze where I highlighted the start (S), end (E), and key (K) locations.

Sample ASCII maze map.

The separation of agent and model into two separate containers allows different teams to work on each of them separately. Each team can focus on improving their own part, for example, to add details to the simulation or to find better strategies for how the agent explores the maze.

Here’s the code of the maze model (app.py). I used Python for both examples. The model exposes a REST API that the agent can use to move around the maze and know if it has found the key and reached the exit. The maze model uses Flask for the REST API.

import json
import random
from flask import Flask, request, Response

ready = False

# How map data is stored inside a maze
# with size (width x height) = (4 x 3)
#
#    012345678
# 0: +-+-+ +-+
# 1: | |   | |
# 2: +-+ +-+-+
# 3: | |   | |
# 4: +-+-+ +-+
# 5: | | | | |
# 6: +-+-+-+-+
# 7: Not used

class WrongDirection(Exception):
    pass

class Maze:
    UP, RIGHT, DOWN, LEFT = 0, 1, 2, 3
    OPEN, WALL = 0, 1
    

    @staticmethod
    def distance(p1, p2):
        (x1, y1) = p1
        (x2, y2) = p2
        return abs(y2-y1) + abs(x2-x1)


    @staticmethod
    def random_dir():
        return random.randrange(4)


    @staticmethod
    def go_dir(x, y, d):
        if d == Maze.UP:
            return (x, y - 1)
        elif d == Maze.RIGHT:
            return (x + 1, y)
        elif d == Maze.DOWN:
            return (x, y + 1)
        elif d == Maze.LEFT:
            return (x - 1, y)
        else:
            raise WrongDirection(f"Direction: {d}")


    def __init__(self, width, height):
        self.width = width
        self.height = height        
        self.generate()
        

    def area(self):
        return self.width * self.height
        

    def min_length(self):
        return self.area() / 5
    

    def min_distance(self):
        return (self.width + self.height) / 5
    

    def get_pos_dir(self, x, y, d):
        if d == Maze.UP:
            return self.maze[y][2 * x + 1]
        elif d == Maze.RIGHT:
            return self.maze[y][2 * x + 2]
        elif d == Maze.DOWN:
            return self.maze[y + 1][2 * x + 1]
        elif d ==  Maze.LEFT:
            return self.maze[y][2 * x]
        else:
            raise WrongDirection(f"Direction: {d}")


    def set_pos_dir(self, x, y, d, v):
        if d == Maze.UP:
            self.maze[y][2 * x + 1] = v
        elif d == Maze.RIGHT:
            self.maze[y][2 * x + 2] = v
        elif d == Maze.DOWN:
            self.maze[y + 1][2 * x + 1] = v
        elif d ==  Maze.LEFT:
            self.maze[y][2 * x] = v
        else:
            WrongDirection(f"Direction: {d}  Value: {v}")


    def is_inside(self, x, y):
        return 0 <= y < self.height and 0 <= x < self.width


    def generate(self):
        self.maze = []
        # Close all borders
        for y in range(0, self.height + 1):
            self.maze.append([Maze.WALL] * (2 * self.width + 1))
        # Get a random starting point on one of the borders
        if random.random() < 0.5:
            sx = random.randrange(self.width)
            if random.random() < 0.5:
                sy = 0
                self.set_pos_dir(sx, sy, Maze.UP, Maze.OPEN)
            else:
                sy = self.height - 1
                self.set_pos_dir(sx, sy, Maze.DOWN, Maze.OPEN)
        else:
            sy = random.randrange(self.height)
            if random.random() < 0.5:
                sx = 0
                self.set_pos_dir(sx, sy, Maze.LEFT, Maze.OPEN)
            else:
                sx = self.width - 1
                self.set_pos_dir(sx, sy, Maze.RIGHT, Maze.OPEN)
        self.start = (sx, sy)
        been = [self.start]
        pos = -1
        solved = False
        generate_status = 0
        old_generate_status = 0                    
        while len(been) < self.area():
            (x, y) = been[pos]
            sd = Maze.random_dir()
            for nd in range(4):
                d = (sd + nd) % 4
                if self.get_pos_dir(x, y, d) != Maze.WALL:
                    continue
                (nx, ny) = Maze.go_dir(x, y, d)
                if (nx, ny) in been:
                    continue
                if self.is_inside(nx, ny):
                    self.set_pos_dir(x, y, d, Maze.OPEN)
                    been.append((nx, ny))
                    pos = -1
                    generate_status = len(been) / self.area()
                    if generate_status - old_generate_status > 0.1:
                        old_generate_status = generate_status
                        print(f"{generate_status * 100:.2f}%")
                    break
                elif solved or len(been) < self.min_length():
                    continue
                else:
                    self.set_pos_dir(x, y, d, Maze.OPEN)
                    self.end = (x, y)
                    solved = True
                    pos = -1 - random.randrange(len(been))
                    break
            else:
                pos -= 1
                if pos < -len(been):
                    pos = -1
                    
        self.key = None
        while self.key is None:
            kx = random.randrange(self.width)
            ky = random.randrange(self.height)
            if (Maze.distance(self.start, (kx,ky)) > self.min_distance()
                and Maze.distance(self.end, (kx,ky)) > self.min_distance()):
                self.key = (kx, ky)


    def get_label(self, x, y):
        if (x, y) == self.start:
            c = 'S'
        elif (x, y) == self.end:
            c = 'E'
        elif (x, y) == self.key:
            c = 'K'
        else:
            c = ' '
        return c

                    
    def map(self, moves=[]):
        map = ''
        for py in range(self.height * 2 + 1):
            row = ''
            for px in range(self.width * 2 + 1):
                x = int(px / 2)
                y = int(py / 2)
                if py % 2 == 0: #Even rows
                    if px % 2 == 0:
                        c = '+'
                    else:
                        v = self.get_pos_dir(x, y, self.UP)
                        if v == Maze.OPEN:
                            c = ' '
                        elif v == Maze.WALL:
                            c = '-'
                else: # Odd rows
                    if px % 2 == 0:
                        v = self.get_pos_dir(x, y, self.LEFT)
                        if v == Maze.OPEN:
                            c = ' '
                        elif v == Maze.WALL:
                            c = '|'
                    else:
                        c = self.get_label(x, y)
                        if c == ' ' and [x, y] in moves:
                            c = '*'
                row += c
            map += row + '\n'
        return map


app = Flask(__name__)

@app.route('/')
def hello_maze():
    return "<p>Hello, Maze!</p>"

@app.route('/maze/map', methods=['GET', 'POST'])
def maze_map():
    if not ready:
        return Response(status=503, retry_after=10)
    if request.method == 'GET':
        return '<pre>' + maze.map() + '</pre>'
    else:
        moves = request.get_json()
        return maze.map(moves)

@app.route('/maze/start')
def maze_start():
    if not ready:
        return Response(status=503, retry_after=10)
    start = { 'x': maze.start[0], 'y': maze.start[1] }
    return json.dumps(start)

@app.route('/maze/size')
def maze_size():
    if not ready:
        return Response(status=503, retry_after=10)
    size = { 'width': maze.width, 'height': maze.height }
    return json.dumps(size)

@app.route('/maze/pos/<int:y>/<int:x>')
def maze_pos(y, x):
    if not ready:
        return Response(status=503, retry_after=10)
    pos = {
        'here': maze.get_label(x, y),
        'up': maze.get_pos_dir(x, y, Maze.UP),
        'down': maze.get_pos_dir(x, y, Maze.DOWN),
        'left': maze.get_pos_dir(x, y, Maze.LEFT),
        'right': maze.get_pos_dir(x, y, Maze.RIGHT),

    }
    return json.dumps(pos)


WIDTH = 80
HEIGHT = 20
maze = Maze(WIDTH, HEIGHT)
ready = True

The only requirement for the maze model (in requirements.txt) is the Flask module.

To create a container image running the maze model, I use this Dockerfile.

FROM --platform=linux/amd64 public.ecr.aws/docker/library/python:3.12-alpine

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

COPY . .

CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0", "--port=5555"]

Here’s the code for the agent (agent.py). First, the agent asks the model for the size of the maze and the starting position. Then, it applies its own strategy to explore and solve the maze. In this implementation, the agent chooses its route at random, trying to avoid following the same path more than once.

import random
import requests
from requests.adapters import HTTPAdapter, Retry

HOST = '127.0.0.1'
PORT = 5555

BASE_URL = f"http://{HOST}:{PORT}/maze"

UP, RIGHT, DOWN, LEFT = 0, 1, 2, 3
OPEN, WALL = 0, 1

s = requests.Session()

retries = Retry(total=10,
                backoff_factor=1)

s.mount('http://', HTTPAdapter(max_retries=retries))

r = s.get(f"{BASE_URL}/size")
size = r.json()
print('SIZE', size)

r = s.get(f"{BASE_URL}/start")
start = r.json()
print('START', start)

y = start['y']
x = start['x']

found_key = False
been = {(x, y)}  # visited positions, stored as (x, y) tuples
moves = [(x, y)]
moves_stack = [(x, y)]

while True:
    r = s.get(f"{BASE_URL}/pos/{y}/{x}")
    pos = r.json()
    if pos['here'] == 'K' and not found_key:
        print(f"({x}, {y}) key found")
        found_key = True
        been = {(x, y)}
        moves_stack = [(x, y)]
    if pos['here'] == 'E' and found_key:
        print(f"({x}, {y}) exit")
        break
    dirs = list(range(4))
    random.shuffle(dirs)
    for d in dirs:
        nx, ny = x, y
        if d == UP and pos['up'] == 0:
            ny -= 1
        if d == RIGHT and pos['right'] == 0:
            nx += 1
        if d == DOWN and pos['down'] == 0:
            ny += 1
        if d == LEFT and pos['left'] == 0:
            nx -= 1 

        if nx < 0 or nx >= size['width'] or ny < 0 or ny >= size['height']:
            continue

        if (nx, ny) in been:
            continue

        x, y = nx, ny
        been.add((x, y))
        moves.append((x, y))
        moves_stack.append((x, y))
        break
    else:
        if len(moves_stack) > 0:
            x, y = moves_stack.pop()
        else:
            print("No moves left")
            break

print(f"Solution length: {len(moves)}")
print(moves)

r = s.post(f'{BASE_URL}/map', json=moves)

print(r.text)

s.close()

The only dependency of the agent (in requirements.txt) is the requests module.

This is the Dockerfile I use to create a container image for the agent.

FROM --platform=linux/amd64 public.ecr.aws/docker/library/python:3.12-alpine

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

COPY . .

CMD [ "python3", "agent.py"]

You can easily run this simplified version of a simulation locally, but the cloud allows you to run it at larger scale (for example, with a much bigger and more detailed maze) and to test multiple agents to find the best strategy to use. In a real-world scenario, the improvements to the agent would then be implemented into a physical device such as a self-driving car or a robot vacuum cleaner.
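
If you want to try it locally before moving to the cloud, a small launcher script can start the maze model and then run the agent against it. This is just a sketch; it assumes app.py and agent.py are in the current directory and that the flask and requests packages are installed.

import os
import subprocess
import sys

env = dict(os.environ, FLASK_APP="app.py")

# Start the maze model on port 5555, the port the agent expects.
model = subprocess.Popen(
    [sys.executable, "-m", "flask", "run", "--host=0.0.0.0", "--port=5555"],
    env=env,
)
try:
    # The agent retries its first requests, so it tolerates the model starting up.
    subprocess.run([sys.executable, "agent.py"], check=True)
finally:
    model.terminate()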

Running a simulation using multi-container jobs
To run a job with AWS Batch, I need to configure three resources:

  • The compute environment in which to run the job
  • The job queue in which to submit the job
  • The job definition describing how to run the job, including the container images to use

In the AWS Batch console, I choose Compute environments from the navigation pane and then Create. Now, I have the choice of using Fargate, Amazon EC2, or Amazon EKS. Fargate allows me to closely match the resource requirements that I specify in the job definitions. However, simulations usually require access to a large but static amount of resources and use GPUs to accelerate computations. For this reason, I select Amazon EC2.

Console screenshot.

I select the Managed orchestration type so that AWS Batch can scale and configure the EC2 instances for me. Then, I enter a name for the compute environment and select the service-linked role (that AWS Batch created for me previously) and the instance role that is used by the ECS container agent (running on the EC2 instances) to make calls to the AWS API on my behalf. I choose Next.

Console screenshot.

In the Instance configuration settings, I choose the size and type of the EC2 instances. For example, I can select instance types that have GPUs or use the Graviton processor. I do not have specific requirements and leave all the settings to their default values. For Network configuration, the console already selected my default VPC and the default security group. In the final step, I review all configurations and complete the creation of the compute environment.

Now, I choose Job queues from the navigation pane and then Create. Then, I select the same orchestration type I used for the compute environment (Amazon EC2). In the Job queue configuration, I enter a name for the job queue. In the Connected compute environments dropdown, I select the compute environment I just created and complete the creation of the queue.

Console screenshot.

I choose Job definitions from the navigation pane and then Create. As before, I select Amazon EC2 for the orchestration type.

To use more than one container, I disable the Use legacy containerProperties structure option and move to the next step. By default, the console creates a legacy single-container job definition if there’s already a legacy job definition in the account. That’s my case. For accounts without legacy job definitions, the console has this option disabled.

Console screenshot.

I enter a name for the job definition. Then, I have to think about which permissions this job requires. The container images I want to use for this job are stored in Amazon ECR private repositories. To allow AWS Batch to download these images to the compute environment, in the Task properties section, I select an Execution role that gives read-only access to the ECR repositories. I don’t need to configure a Task role because the simulation code is not calling AWS APIs. For example, if my code was uploading results to an Amazon Simple Storage Service (Amazon S3) bucket, I could select here a role giving permissions to do so.

In the next step, I configure the two containers used by this job. The first one is the maze-model. I enter the name and the image location. Here, I can specify the resource requirements of the container in terms of vCPUs, memory, and GPUs. This is similar to configuring containers for an ECS task.

Console screenshot.

I add a second container for the agent and enter name, image location, and resource requirements as before. Because the agent needs to access the maze as soon as it starts, I use the Dependencies section to add a container dependency. I select maze-model for the container name and START as the condition. If I don’t add this dependency, the agent container can fail before the maze-model container is running and able to respond. Because both containers are flagged as essential in this job definition, the overall job would terminate with a failure.

Console screenshot.

I review all configurations and complete the job definition. Now, I can start a job.
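
The same job definition can also be created programmatically. Here’s a hedged sketch using the boto3 Batch client; the image URIs and execution role ARN are placeholders, and the exact ecsProperties structure is my reading of the new multi-container job definitions, so verify it against the RegisterJobDefinition API reference.

import boto3

batch = boto3.client("batch")

# Placeholders: use your own ECR image URIs and execution role ARN.
MAZE_IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/maze-model:latest"
AGENT_IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/agent:latest"
EXECUTION_ROLE_ARN = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"

batch.register_job_definition(
    jobDefinitionName="maze-simulation",
    type="container",
    ecsProperties={
        "taskProperties": [
            {
                "executionRoleArn": EXECUTION_ROLE_ARN,
                "containers": [
                    {
                        "name": "maze-model",
                        "image": MAZE_IMAGE,
                        "essential": True,
                        "resourceRequirements": [
                            {"type": "VCPU", "value": "1"},
                            {"type": "MEMORY", "value": "2048"},
                        ],
                    },
                    {
                        "name": "agent",
                        "image": AGENT_IMAGE,
                        "essential": True,
                        # Start the agent only after the maze model container has started.
                        "dependsOn": [{"containerName": "maze-model", "condition": "START"}],
                        "resourceRequirements": [
                            {"type": "VCPU", "value": "1"},
                            {"type": "MEMORY", "value": "2048"},
                        ],
                    },
                ],
            }
        ]
    },
)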

In the Jobs section of the navigation pane, I submit a new job. I enter a name and select the job queue and the job definition I just created.

Console screenshot.

In the next steps, I don’t need to override any configuration and create the job. After a few minutes, the job has succeeded, and I have access to the logs of the two containers.

Console screenshot.

The agent solved the maze, and I can get all the details from the logs. Here’s the output of the job to see how the agent started, picked up the key, and then found the exit.

SIZE {'width': 80, 'height': 20}
START {'x': 0, 'y': 18}
(32, 2) key found
(79, 16) exit
Solution length: 437
[(0, 18), (1, 18), (0, 18), ..., (79, 14), (79, 15), (79, 16)]

In the map, the red asterisks (*) follow the path used by the agent between the start (S), key (K), and exit (E) locations.

ASCII-based map of the solved maze.

Increasing observability with a sidecar container
When running complex jobs using multiple components, it helps to have more visibility into what these components are doing. For example, if there is an error or a performance problem, this information can help you find where and what the issue is.

To instrument my application, I use AWS Distro for OpenTelemetry (ADOT).
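
As an illustration of what that instrumentation could look like in the agent container, here’s a minimal sketch that sends OpenTelemetry traces to an ADOT Collector sidecar listening on the default OTLP gRPC endpoint (localhost:4317). It assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed, and the span name and attributes are only examples.

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Send spans to the collector sidecar running in the same job (localhost:4317).
provider = TracerProvider(resource=Resource.create({"service.name": "maze-agent"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("maze-agent")

with tracer.start_as_current_span("solve-maze") as span:
    # ... run the agent loop here and record what happened ...
    span.set_attribute("maze.width", 80)     # example attributes
    span.set_attribute("maze.height", 20)
    span.set_attribute("solution.length", 437)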

Using telemetry data collected in this way, I can set up dashboards (for example, using CloudWatch or Amazon Managed Grafana) and alarms (with CloudWatch or Prometheus) that help me better understand what is happening and reduce the time to solve an issue. More generally, a sidecar container can help integrate telemetry data from AWS Batch jobs with your monitoring and observability platforms.

Things to know
AWS Batch support for multi-container jobs is available today in the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs in all AWS Regions where Batch is offered. For more information, see the AWS Services by Region list.

There is no additional cost for using multi-container jobs with AWS Batch. In fact, there is no additional charge for using AWS Batch. You only pay for the AWS resources you create to store and run your application, such as EC2 instances and Fargate containers. To optimize your costs, you can use Reserved Instances, Savings Plans, EC2 Spot Instances, and Fargate in your compute environments.

Using multi-container jobs accelerates development times by reducing job preparation efforts and eliminates the need for custom tooling to merge the work of multiple teams into a single container. It also simplifies DevOps by defining clear component responsibilities so that teams can quickly identify and fix issues in their own areas of expertise without distraction.

To learn more, see how to set up multi-container jobs in the AWS Batch User Guide.

Danilo

AWS Weekly Roundup — New models for Amazon Bedrock, CloudFront embedded POPs, and more — March 4, 2024

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-new-models-for-amazon-bedrock-cloudfront-embedded-pops-and-more-march-4-2024/

This has been a busy week – we introduced a new kind of Amazon CloudFront infrastructure, more efficient ways to analyze data stored on Amazon Simple Storage Service (Amazon S3), and new generative AI capabilities.

Last week’s launches
Here’s what got my attention:

Amazon Bedrock – Mistral AI’s Mixtral 8x7B and Mistral 7B foundation models are now generally available on Amazon Bedrock. More details in Donnie’s post. Here’s a deep dive into Mistral 7B and Mixtral 8x7B models, by my colleague Mike.

Knowledge Bases for Amazon Bedrock – With hybrid search support, you can improve the relevance of retrieved results, especially for keyword searches. More information and examples in this post on the AWS Machine Learning Blog.

Amazon CloudFront – We announced the availability of embedded Points of Presence (POPs), a new type of CloudFront infrastructure deployed closest to end viewers, within internet service provider (ISP) and mobile network operator (MNO) networks. Embedded POPs are custom-built to deliver large scale live-stream video, video-on-demand (VOD), and game downloads. Today, CloudFront has 600+ embedded POPs deployed across 200+ cities globally.

Amazon Kinesis Data Streams – To help you analyze and visualize the data in your streams in real-time, you can now run SQL queries with one click in the AWS Management Console.

Amazon EventBridge – API destinations now support content-type header customization. By defining your own content-type, you can unlock more HTTP targets for API destinations, including support for CloudEvents. Read more in this X/Twitter thread by Nik, principal engineer at AWS Lambda.

Amazon MWAA – You can now create Apache Airflow version 2.8 environments on Amazon Managed Workflows for Apache Airflow (MWAA). More in this AWS Big Data blog post.

Amazon CloudWatch Logs – With CloudWatch Logs support for IPv6, you can simplify your network stack by running Amazon CloudWatch log groups on a dual-stack network that supports both IPv4 and IPv6. You can find more information on AWS services that support IPv6 in the documentation.

SQL Workbench for Amazon DynamoDB – As you use this client-side application to help you visualize and build scalable, high-performance data models, you can now clone tables between development environments. With this feature, you can develop and test your code with Amazon DynamoDB tables in the same state across multiple development environments.

AWS Cloud Development Kit (AWS CDK)  – The new AWS AppConfig Level 2 (L2) constructs simplify provisioning of AWS AppConfig resources, including feature flags and dynamic configuration data.

Amazon Location Service – You can now use the authentication libraries for iOS and Android platforms to simplify the integration of Amazon Location Service into mobile apps. The libraries support API key and Amazon Cognito authentication.

Amazon SageMaker – You can now accelerate Amazon SageMaker Model Training using the Amazon S3 Express One Zone storage class to gain faster load times for training data, checkpoints, and model outputs. S3 Express One Zone is purpose-built to deliver the fastest cloud object storage for performance-critical applications, and delivers consistent single-digit millisecond request latency and high throughput.

Amazon Data Firehose – Now supports message extraction for CloudWatch Logs. CloudWatch log records use a nested JSON structure, and the message in each record is embedded within header information. It’s now easier to filter out the header information and deliver only the embedded message to the destination, reducing the cost of subsequent processing and storage.

Amazon OpenSearch – Terraform now supports Amazon OpenSearch Ingestion deployments, a fully managed data ingestion tier for Amazon OpenSearch Service that allows you to ingest and process petabyte-scale data before indexing it in Amazon OpenSearch-managed clusters and serverless collections. Read more in this AWS Big Data blog post.

AWS Mainframe Modernization – AWS Blu Age Runtime is now available for seamless deployment on Amazon ECS on AWS Fargate to run modernized applications in serverless containers.

AWS Local Zones – A new Local Zone in Atlanta helps applications that require single-digit millisecond latency for use cases such as real-time gaming, hybrid migrations, media and entertainment content creation, live video streaming, engineering simulations, and more.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
Here are some additional projects, programs, and news items that you might find interesting.

The PartyRock Hackathon is closing this month, and there is still time to join and make apps without code! Here’s the screenshot of a quick app that I built to help me plan what to do when I visit a new place.

PartyRock (sample) Trip Planner application.

Use RAG for drug discovery with Knowledge Bases for Amazon Bedrock – A very interesting use case for generative AI.

Here’s a complete solution for building a robust text-to-SQL generator that creates complex queries, self-corrects, and queries diverse data sources.

A nice overview of .NET 8 Support on AWS, the latest Long Term Support (LTS) version of cross-platform .NET.

Introducing the AWS WAF traffic overview dashboard – A new tool to help you make informed decisions about your security posture for applications protected by AWS WAF.

Some tips on how to improve the speed and reduce the cost of high performance computing (HPC) deployments with Mountpoint for Amazon S3, an open source file client that you can use to mount an S3 bucket on your compute instances, accessing it as a local file system.

My colleague Ricardo writes this weekly open source newsletter, in which he highlights new open source projects, tools, and demos from the AWS Community.

Upcoming AWS events
You can feel it in the air–the AWS Summits season is coming back! The first ones will be in Europe: you can join us in Paris (April 3), Amsterdam (April 9), and London (April 24). On March 12, you can meet public sector industry leaders and AWS experts at the AWS Public Sector Symposium in Brussels.

AWS Innovate is a series of online events designed to help you develop the right skills to design, deploy, and operate infrastructure and applications. AWS Innovate Generative AI + Data Edition for Americas is on March 14. It follows the editions for Asia Pacific & Japan and EMEA that we held in February.

There are still a few AWS Community re:Invent re:Cap events organized by volunteers from AWS User Groups and AWS Cloud Clubs around the world to learn about the latest announcements from AWS re:Invent.

You can browse all upcoming in-person and virtual events here.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Danilo

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS.

DNS over HTTPS is now available in Amazon Route 53 Resolver

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/dns-over-https-is-now-available-in-amazon-route-53-resolver/

Starting today, Amazon Route 53 Resolver supports using the DNS over HTTPS (DoH) protocol for both inbound and outbound Resolver endpoints. As the name suggests, DoH uses HTTP or HTTP/2 over TLS to encrypt the data exchanged for Domain Name System (DNS) resolutions.

Using TLS encryption, DoH increases privacy and security by preventing eavesdropping and manipulation of DNS data as it is exchanged between a DoH client and the DoH-based DNS resolver.

This helps you implement a zero-trust architecture where no actor, system, network, or service operating outside or within your security perimeter is trusted and all network traffic is encrypted. Using DoH also helps follow recommendations such as those described in this memorandum of the US Office of Management and Budget (OMB).

DNS over HTTPS support in Amazon Route 53 Resolver
You can use Amazon Route 53 Resolver to resolve DNS queries in hybrid cloud environments. For example, it allows DNS requests from anywhere within your hybrid network to access AWS services. To do so, you can set up inbound and outbound Resolver endpoints:

  • Inbound Resolver endpoints allow DNS queries to your VPC from your on-premises network or another VPC. (Diagram: Amazon Route 53 Resolver inbound endpoint architecture.)
  • Outbound Resolver endpoints allow DNS queries from your VPC to your on-premises network or another VPC. (Diagram: Amazon Route 53 Resolver outbound endpoint architecture.)

After you configure the Resolver endpoints, you can set up rules that specify the name of the domains for which you want to forward DNS queries from your VPC to an on-premises DNS resolver (outbound) and from on-premises to your VPC (inbound).

Now, when you create or update an inbound or outbound Resolver endpoint, you can specify which protocols to use:

  • DNS over port 53 (Do53), which uses either UDP or TCP to send the packets.
  • DNS over HTTPS (DoH), which uses TLS to encrypt the data.
  • Both, depending on which one is used by the DNS client.
  • For FIPS compliance, there is a specific implementation (DoH-FIPS) for inbound endpoints.

Let’s see how this works in practice.

Using DNS over HTTPS with Amazon Route 53 Resolver
In the Route 53 console, I choose Inbound endpoints from the Resolver section of the navigation pane. There, I choose Create inbound endpoint.

I enter a name for the endpoint, select the VPC, the security group, and the endpoint type (IPv4, IPv6, or dual-stack). To allow using both encrypted and unencrypted DNS resolutions, I select Do53, DoH, and DoH-FIPS in the Protocols for this endpoint option.

Console screenshot.

After that, I configure the IP addresses for DNS queries. I select two Availability Zones and, for each, a subnet. For this setup, I use the option to have the IP addresses automatically selected from those available in the subnet.

After I complete the creation of the inbound endpoint, I configure the DNS server in my network to forward requests for the amazonaws.com domain (used by AWS service endpoints) to the inbound endpoint IP addresses.

Similarly, I create an outbound Resolver endpoint and select both Do53 and DoH as protocols. Then, I create forwarding rules that specify the domains for which the outbound Resolver endpoint should forward requests to the DNS servers in my network.

Now, when the DNS clients in my hybrid environment use DNS over HTTPS in their requests, DNS resolutions are encrypted. Optionally, I can enforce encryption and select only DoH in the configuration of inbound and outbound endpoints.
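
The same setup can be automated. As a hedged sketch, here’s how creating an inbound endpoint with the new protocol options might look using the boto3 Route 53 Resolver client; the security group and subnet IDs are placeholders, and the Protocols values follow the console naming, so check them against the CreateResolverEndpoint API reference.

import uuid
import boto3

resolver = boto3.client("route53resolver")

# Placeholders: use your own security group and subnet IDs.
response = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="hybrid-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[
        {"SubnetId": "subnet-0123456789abcdef0"},
        {"SubnetId": "subnet-0fedcba9876543210"},
    ],
    # Allow unencrypted DNS, DNS over HTTPS, and the FIPS variant for inbound endpoints.
    Protocols=["Do53", "DoH", "DoH-FIPS"],
)
print(response["ResolverEndpoint"]["Id"])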

Things to know
DNS over HTTPS support for Amazon Route 53 Resolver is available today in all AWS Regions where Route 53 Resolver is offered, including GovCloud Regions and Regions based in China.

DNS over port 53 continues to be the default for inbound or outbound Resolver endpoints. In this way, you don’t need to update your existing automation tooling unless you want to adopt DNS over HTTPS.

There is no additional cost for using DNS over HTTPS with Resolver endpoints. For more information, see Route 53 pricing.

Start using DNS over HTTPS with Amazon Route 53 Resolver to increase privacy and security for your hybrid cloud environments.

Danilo

Upgrade your Java applications with Amazon Q Code Transformation (preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/upgrade-your-java-applications-with-amazon-q-code-transformation-preview/

As our applications age, it takes more and more effort just to keep them secure and running smoothly. Developers managing the upgrades must spend time relearning the intricacies and nuances of breaking changes and performance optimizations others have already discovered in past upgrades. As a result, it’s difficult to balance the focus between new features and essential maintenance work.

Today, we are introducing in preview Amazon Q Code Transformation. This new capability simplifies upgrading and modernizing existing application code using Amazon Q, a new type of assistant powered by generative artificial intelligence (AI). Amazon Q is specifically designed for work and can be tailored to your business.

Amazon Q Code Transformation can perform Java application upgrades now, from version 8 and 11 to version 17, a Java Long-Term Support (LTS) release, and it will soon be able to transform Windows-based .NET Framework applications to cross-platform .NET.

Previously, developers could spend two to three days upgrading each application. Our internal testing shows that the transformation capability can upgrade an application in minutes compared to the days or weeks typically required for manual upgrades. For example, an internal Amazon team of five people successfully upgraded one thousand production applications from Java 8 to 17 in two days. It took, on average, 10 minutes to upgrade each application, and the longest one took less than an hour.

Amazon Q Code Transformation automatically analyzes the existing code, generates a transformation plan, and completes the transformation tasks suggested by the plan. While doing so, it identifies and updates package dependencies and refactors deprecated and inefficient code components, switching to new language frameworks and incorporating security best practices. Once complete, you can review the transformed code, complete with build and test results, before accepting the changes.

In this way, you can keep applications updated and supported in just a few steps, gain performance benefits, and remove vulnerabilities from using unsupported versions, freeing up time to focus on new business requirements. Let’s see how this works in practice.

Upgrading a Java application from version 8 to 17
I am using IntelliJ IDEA in this walkthrough (the same is available for Visual Studio Code). To have Amazon Q Code Transformation in my IDE, I install the latest version of the AWS Toolkit for IntelliJ IDEA and sign in using the AWS IAM Identity Center credentials provided by my organization. Note that to access Amazon Q Code Transformation, the CodeWhisperer administrator needs to explicitly give access to Amazon Q features in the profile used by the organization.

I open an old project that I never had the time to update to a more recent version of Java. The project is using Apache Maven to manage the build. The project object model (POM) file (pom.xml), an XML representation of the project, is in the root directory.
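
Before starting the transformation, I like to confirm that the project still builds cleanly on the current JDK so I have a known-good baseline to compare against. A quick check from the project root (assuming Maven and the Java 8 JDK are installed) could be:

# Show the Maven and JDK versions in use
mvn -version

# Build and run the existing tests on the current Java version
mvn clean verify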

First, in the project settings, I check that the project is configured to use the correct SDK version (1.8 in this case). I choose AWS Toolkit on the left pane and then the Amazon Q + CodeWhisperer tab. In the Amazon Q (Preview) section, I choose Transform.

IDE screenshot.

This opens a dialog where I check that the correct Maven module is selected for the upgrade before proceeding with the transformation.

IDE screenshot.

I follow the progress in the Transformation Hub window. The upgrade completes in a few minutes for my small application, while larger ones might take more than an hour to complete.

The end-to-end application upgrade consists of three steps:

  1. Identifying and analyzing the application – The code is copied to a managed environment in the cloud where the build process is set up based on the instructions in the repository. At this stage, the components to be upgraded are identified.
  2. Creating a transformation plan – The code is analyzed to create a transformation plan that lists the steps that Amazon Q Code Transformation will take to upgrade the code, including updating dependencies, building the upgraded code, and then iteratively fixing any build errors encountered during the upgrade.
  3. Code generation, build testing, and finalization – The transformation plan is followed iteratively to update existing code and configuration files, generate new files where needed, perform build validation using the tests provided with the code, and fix issues identified in failed builds.

IDE screenshot.

After a few minutes, the transformation terminates successfully. From here, I can open the plan and a summary of the transformation. I choose View diff to see the proposed changes. In the Apply Patch dialog, I see a recap of the files that have been added, modified, or deleted.

IDE screenshot.

First, I select the pom.xml file and then choose Show Difference (the icon with the left/right arrows) to have a side-by-side view of the current code in the project and the proposed changes. For example, I see that the version of one of the dependencies (Project Lombok) has been increased for compatibility with the target Java version.

IDE screenshot.

In the Java file, the annotations used by the upgraded dependency have been updated. With the new version, @With has been promoted, and @Wither (which was experimental) deprecated. These changes are reflected in the import statements.

IDE screenshot.

There is also a summary file that I keep in the code repo to quickly look up the changes made to complete the upgrade.

I spend some time reviewing the files. Then, I choose OK to accept all changes.

Now the patch has been successfully applied, and the proposed changes merged with the code. I commit changes to my repo and move on to focus on business-critical changes that have been waiting for the migration to be completed.

Things to know
The preview of Amazon Q Code Transformation is available today for customers on the Amazon CodeWhisperer Professional Tier in the AWS Toolkit for IntelliJ IDEA and the AWS Toolkit for Visual Studio Code. To use Amazon Q Code Transformation, the CodeWhisperer administrator needs to give access to the profile used by the organization.

There is no additional cost for using Amazon Q Code Transformation during the preview. You can upgrade Java 8 and 11 applications that are built using Apache Maven to Java version 17. The project must have the POM file (pom.xml) in the root directory. We’ll soon add the option to transform Windows-based .NET Framework applications to cross-platform .NET and help accelerate migrations to Linux.

Once a transformation job is complete, you can use a diff view to verify and accept the proposed changes. The final transformation summary provides details of the dependencies updated and code files changed by Amazon Q Code Transformation. It also provides details of any build failures encountered in the final build of the upgraded code that you can use to fix the issues and complete the upgrade.

Combining Amazon’s long-term investments in automated reasoning and static code analysis with the power of generative AI, Amazon Q Code Transformation incorporates foundation models that we found to be essential for context-specific code transformations that often require updating a long tail of Java libraries with backward-incompatible changes.

In addition to generative AI-powered code transformations built by AWS, Amazon Q Code Transformation uses parts of OpenRewrite to further accelerate Java upgrades for customers. At AWS, many of our services are built with open source components and promoting the long-term sustainability of these communities is critical to us and our customers. That is why it’s important for us to contribute back to communities like OpenRewrite, helping ensure the whole industry can continue to benefit from their innovations. AWS plans to contribute the OpenRewrite recipes and improvements developed as part of Amazon Q Code Transformation back to open source.

“The ability for software to adapt at a much faster pace is one of the most fundamental advantages any business can have. That’s why we’re excited to see AWS using OpenRewrite, the open source automated code refactoring technology, as a component of their service,” said Jonathan Schneider, CEO and Co-founder of Moderne (the sponsor of OpenRewrite). “We’re happy to have AWS join the OpenRewrite community and look forward to their contributions to make it even easier to migrate frameworks, patch vulnerabilities, and update APIs.”

Upgrade your Java applications now
Amazon Q Code Transformation product page

Danilo

Announcing throughput increase and dead letter queue redrive support for Amazon SQS FIFO queues

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/announcing-throughput-increase-and-dead-letter-queue-redrive-support-for-amazon-sqs-fifo-queues/

With Amazon Simple Queue Service (Amazon SQS), you can send, store, and receive messages between software components at any volume. Today, Amazon SQS has introduced two new capabilities for first-in, first-out (FIFO) queues:

  • Maximum throughput has been increased up to 70,000 transactions per second (TPS) per API action in selected AWS Regions, supporting sending or receiving up to 700,000 messages per second with batching.
  • Dead letter queue (DLQ) redrive support to handle messages that are not consumed after a specific number of retries in a way similar to what was already available for standard queues.

Let’s take a more in-depth look at how these work in practice.

FIFO queues throughput increase up to 70K TPS
FIFO queues are designed for applications that require messages to be processed exactly once and in the order in which they are sent. While standard queues have unlimited throughput, FIFO queues have an upper quota on the number of TPS per API action.

Standard and FIFO queues support batch actions that can send and receive up to 10 messages with a single API call (up to a maximum total payload of 256 KB). This means that a FIFO queue can process up to 10 times more messages per second than its per-API-action TPS quota.
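
For example, this is a minimal sketch of sending several messages to a FIFO queue with a single SendMessageBatch call (the queue URL, group ID, and deduplication IDs are placeholders):

# Send multiple messages with one request (a batch can contain up to 10 entries)
aws sqs send-message-batch \
    --queue-url https://sqs.eu-west-1.amazonaws.com/123412341234/my-source-queue.fifo \
    --entries '[
      {"Id": "1", "MessageBody": "first",  "MessageGroupId": "orders", "MessageDeduplicationId": "msg-1"},
      {"Id": "2", "MessageBody": "second", "MessageGroupId": "orders", "MessageDeduplicationId": "msg-2"},
      {"Id": "3", "MessageBody": "third",  "MessageGroupId": "orders", "MessageDeduplicationId": "msg-3"}
    ]'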

At launch in 2016, FIFO queues supported up to 300 TPS per API action (3,000 messages per second with batching). This was enough for many use cases, but some customers asked for more throughput.

With high throughput mode launched in 2021, FIFO queues introduced a tenfold increase of the maximum throughput and could process up to 3,000 TPS per API action, depending on the Region. One year later, that quota was doubled to up to 6,000 TPS per API action.

This year, Amazon SQS has already increased FIFO queue throughput quota two times, to up to 9,000 TPS per API action in August and up to 18,000 TPS per API action in October (depending on the Region).

Today, the Amazon SQS team has been able to increase the FIFO queue throughput quota again, allowing you to process up to 70,000 TPS per API action (up to 700,000 messages per second with batching) in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions. This is more than two hundred times the maximum throughput at launch.

DLQ redrive support for FIFO queues
With Amazon SQS, messages that are not consumed after a specific number of retries can automatically be moved to a DLQ. There, messages can be analyzed to understand the reason why they have not been processed correctly. Sometimes there is a bug or a misconfiguration in the consumer application. Other times the messages contain invalid data from the source applications that needs to be fixed to allow the messages to be processed again.

Either way, you can define a plan to reprocess these messages. For example, you can fix the consumer application and redrive all messages to the source queue. Or you can create a dedicated queue where a custom application receives the messages, fixes their content, and then sends them to the source queue.

To simplify moving the messages back to the source queue or to a different queue, Amazon SQS allows you to create a redrive task. Redrive tasks are already available for standard queues. Starting today, you can also start a redrive task for FIFO queues.

Using the Amazon SQS console, I create a first queue (my-dlq.fifo) to be used as a DLQ. To redrive messages back to the source FIFO queue, the queue type must match, so this is also a FIFO queue.

Then, I create a source FIFO queue (my-source-queue.fifo) to handle messages as usual. When I create the source queue, I configure the first queue (my-dlq.fifo) as the DLQ and specify 3 as the Maximum receives condition under which messages are moved from the source queue to the DLQ.

Console screenshot.

When a message has been received by a consumer for more than the number of times specified by this condition, Amazon SQS moves the message to the DLQ. The original message ID is retained and can be used to uniquely track the message.

To test this setup, I use the console to send a message to the source queue. Then, I use the AWS Command Line Interface (AWS CLI) to receive the message multiple times without deleting it.

aws sqs receive-message --queue-url https://sqs.eu-west-1.amazonaws.com/123412341234/my-source-queue.fifo
{
    "Messages": [
        {
            "MessageId": "ef2f1c72-4bfe-4093-a451-03fe2dbd4d0f",
            "ReceiptHandle": "...",
            "MD5OfBody": "0f445a578fbcb0c06ca8aeb90a36fcfb",
            "Body": "My important message."
        }
    ]
}

To receive the same message more than once, I wait for the time specified in the queue visibility timeout to pass (30 seconds by default).

After the third time, the message is not in the source queue because it has been moved to the DLQ. When I try to receive messages from the source queue, the list is empty.

aws sqs receive-message --queue-url https://sqs.eu-west-1.amazonaws.com/123412341234/my-source-queue.fifo
{
    "Messages": []
}

To confirm that the message has been moved, I poll the DLQ to see if the message is there.

aws sqs receive-message --queue-url https://sqs.eu-west-1.amazonaws.com/123412341234/my-dlq.fifo  
{
    "Messages": [
        {
            "MessageId": "ef2f1c72-4bfe-4093-a451-03fe2dbd4d0f",
            "ReceiptHandle": "...",
            "MD5OfBody": "0f445a578fbcb0c06ca8aeb90a36fcfb",
            "Body": "My important message."
        }
    ]
}

Now that the message is in the DLQ, I can investigate why the message has not been processed (well, I know the reason this time) and decide whether to redrive messages from the DLQ using the Amazon SQS console or the new redrive API that was introduced a few months ago. For this example, I use the console. Back on the Amazon SQS console, I select the DLQ queue and choose Start DLQ redrive.

In Redrive configuration, I choose to redrive the messages to the source queue. Optionally, I can specify another FIFO queue as a custom destination. I use System optimized in Velocity control settings to redrive messages with the maximum number of messages per second optimized by Amazon SQS. Optionally, if there is a large number of messages in the DLQ, I can configure a custom maximum rate of messages per second to avoid overloading consumers.

Console screenshot.

Before starting the redrive task, I can use the Inspect messages section to poll and check messages. I already decided what to do, so I choose DLQ redrive to start the task. I have only one message to process, so the redrive task completes very quickly.

Console screenshot.

As expected, the message is back in the source queue and is ready to be processed again.

Console screenshot.
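
As an alternative to the console, the same redrive can be started with the message move task API mentioned above. A minimal sketch, with hypothetical queue ARNs:

# Start moving messages from the DLQ back to the source FIFO queue
aws sqs start-message-move-task \
    --source-arn arn:aws:sqs:eu-west-1:123412341234:my-dlq.fifo \
    --destination-arn arn:aws:sqs:eu-west-1:123412341234:my-source-queue.fifo \
    --max-number-of-messages-per-second 10

# Check the status of the redrive task
aws sqs list-message-move-tasks \
    --source-arn arn:aws:sqs:eu-west-1:123412341234:my-dlq.fifo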

Things to know
Dead letter queue (DLQ) redrive support for FIFO queues is available today in all AWS Regions where Amazon SQS is offered, with the exception of GovCloud Regions and those based in China.

In the DLQ configuration, the maximum number of receives should be between 1 and 1,000.

There is no additional cost for using high throughput mode or a DLQ. Every Amazon SQS action counts as a request. A single request can send or receive from 1 to 10 messages, up to a maximum total payload of 256 KB. You pay based on the number of requests, and requests are priced differently between standard and FIFO queues.

As part of the AWS Free Tier, there is no cost for the first million requests per month for standard queues and for the first million requests per month for FIFO queues. For more information, see Amazon SQS pricing.

With these updates and the increased throughput, you can cover the vast majority of use cases with FIFO queues.

Use Amazon SQS FIFO queues to have high throughput, exactly-once processing, and first-in-first-out delivery.

Danilo

Replication failback and increased IOPS are new for Amazon EFS

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/replication-failback-and-increased-iops-are-new-for-amazon-efs/

Today, Amazon Elastic File System (Amazon EFS) has introduced two new capabilities:

  • Replication failback – Failback support for EFS replication makes it easier and more cost-effective to synchronize changes between EFS file systems when performing disaster recovery (DR) workflows. You can now quickly replicate incremental changes from your secondary back to your primary file system after disaster events and other DR-related activities.
  • Increased IOPS – Amazon EFS now supports up to 250,000 read IOPS and up to 50,000 write IOPS per file system, making it easier to run more IOPS-heavy workloads at any scale for virtual servers, containers, and serverless functions that require shared storage.

Let’s see more in depth how these work in practice.

Introducing Amazon EFS replication failback
With Amazon EFS replication, you can create a replica of your file system in the same or in another AWS Region. When replication is enabled, Amazon EFS automatically keeps the primary (source) and secondary (destination) file systems synchronized. To help you meet your compliance and business continuity goals, EFS replication is designed to provide a recovery point objective (RPO) and a recovery time objective (RTO) measured in minutes.

Now, with failback support, you can respond to disaster recovery (DR) events, conduct planned business continuity tests, and manage other DR-related activities with greater speed and cost efficiency. Failback support allows you to switch the direction of replication between the primary and secondary file systems. EFS replication keeps the two file systems in sync by copying only incremental changes, eliminating the need to make full copies of your data or use a self-managed, custom solution to complete a recovery workflow.

Using Amazon EFS replication failback
I have a file system replicated to another Region. As part of a periodic DR test, I want to switch to using the secondary file system and then revert back to the primary file system, preserving all the changes made on the secondary file system. To do so, I can use EFS Replication failback in just a few steps.

First, I delete the replication from the primary (source) to the secondary (destination) file system. After this, the secondary file system becomes writable. To do so, in the Amazon EFS console, I check I am in the correct Region and select the secondary file system. In the Replication tab, I choose Delete replication and confirm deletion. I can also start from the primary file system. In that case, the Delete replication link in the Replication tab opens a new browser tab and asks to confirm deletion like before.

I can now use the secondary file system and change its data as needed.

To go back to using the primary file system, I create a “reverse replication” from the secondary to the primary file system. To do so, I check I am in the correct Region and select the secondary file system. In the Replication tab, I choose Create replication and the new option Replicate to existing file system. Then, I select the Region of the primary file system and use the console to browse the EFS file systems in that Region and choose the primary one.

Console screenshot.

The console warns me that Replication overwrite protection is enabled for the primary file system. I follow the Disable protection link to open a new browser tab and edit the primary file system to disable replication overwrite protection.

Console screenshot.

Now, I go back to the browser tab where I am creating the failback replication from the secondary to the primary file system. I refresh the protection check and choose to create the replication.

Console screenshot.

In the following dialog, I confirm that I want Amazon EFS to write to the primary file system.

Console screenshot.

To know when the primary file system is back in sync, I check the Last synced timestamp in the Replication tab, which indicates that all changes made to the source file system before that time are replicated to the destination. Optionally, I can look at the TimeSinceLastSync metric (expressed in minutes) in Amazon CloudWatch to understand how data is being replicated.

Console screenshot.

When the primary file system is back in sync, I delete the replication from the secondary to the primary file system. To complete the restore of the original configuration, I again create the replication from the primary to the secondary file system.
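
The same failback steps can be scripted with the EFS API. Here’s a hedged sketch, assuming fs-1111aaaa is the primary file system and fs-2222bbbb is the secondary; the IDs and Region are placeholders, and each command must be run in the appropriate Region for the file system it targets (check the EFS documentation for the exact requirements):

# 1. Delete the original replication so the secondary file system becomes writable
aws efs delete-replication-configuration --source-file-system-id fs-1111aaaa

# 2. Disable replication overwrite protection on the primary file system
aws efs update-file-system-protection --file-system-id fs-1111aaaa \
    --replication-overwrite-protection DISABLED

# 3. Create the reverse (failback) replication from the secondary to the primary
aws efs create-replication-configuration --source-file-system-id fs-2222bbbb \
    --destinations Region=us-east-1,FileSystemId=fs-1111aaaa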

Increased IOPS per file system
The Amazon EFS team has been able to increase IOPS again! The last time they did it was just a few months back. Starting today, an EFS file system can handle up to 50,000 write IOPS (a 2x improvement) and up to 250,000 read IOPS (a 4.5x improvement) when working with frequently-accessed data from a high-performance cache managed by Amazon EFS.

You can monitor the percentage utilization of your file system’s available IOPS using the PercentIOLimit CloudWatch metric. This metric considers the maximum IOPS for writes and uncached reads, including combinations of the two. Reads from the cache are not included in the PercentIOLimit metric.
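
For example, here’s a sketch of retrieving the recent maximum of that metric with the CloudWatch CLI (the file system ID and time window are placeholders):

# Highest PercentIOLimit over a one-hour window, in 5-minute periods
aws cloudwatch get-metric-statistics \
    --namespace AWS/EFS \
    --metric-name PercentIOLimit \
    --dimensions Name=FileSystemId,Value=fs-1111aaaa \
    --start-time 2023-11-26T10:00:00Z \
    --end-time 2023-11-26T11:00:00Z \
    --period 300 \
    --statistics Maximum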

With these performance improvements, you can run even more IOPS-demanding workloads on Amazon EFS, such as machine learning (ML) training, fine-tuning, and inference. Other use cases that can benefit from the increased IOPS are data science user shares, SaaS applications, and media processing.

Things to know
EFS replication failback is available in all AWS Regions where EFS is available. There are no additional costs for using replication failback. You pay for the usual replication and file system changes as described in Amazon EFS pricing.

The increased IOPS limits are immediately available for all file systems using the Elastic Throughput mode in all Regions where EFS is available. You don’t need to do anything to benefit from these performance improvements. To achieve the maximum IOPS, your application needs sufficient parallelization, for example, by using multiple clients and distributing the load across a large number of files. For more information, see the performance tips in the user guide.

Learn more
Amazon EFS product page

Danilo

Use natural language to query Amazon CloudWatch logs and metrics (preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/use-natural-language-to-query-amazon-cloudwatch-logs-and-metrics-preview/

To make it easy to interact with your operational data, Amazon CloudWatch is introducing natural language query generation for Logs and Metrics Insights today. With this capability, powered by generative artificial intelligence (AI), you can describe in English the insights you are looking for, and a Logs or Metrics Insights query will be automatically generated.

This feature provides three main capabilities for CloudWatch Logs and Metrics Insights:

  • Generate new queries from a description or a question to help you get started easily.
  • Query explanation to help you learn the language, including more advanced features.
  • Refine existing queries using guided iterations.

Let’s see how these work in practice with a few examples. I’ll cover logs first and then metrics.

Generate CloudWatch Logs Insights queries with natural language
In the CloudWatch console, I select Logs Insights in the Logs section. I then select the log group of an AWS Lambda function that I want to investigate.

I choose the Query generator button to open a new Prompt field where I enter what I need using natural language:

Tell me the duration of the 10 slowest invocations

Then, I choose Generate new query. The following Log Insights query is automatically generated:

fields @timestamp, @requestId, @message, @logStream, @duration 
| filter @type = "REPORT" and @duration > 1000
| sort @duration desc
| limit 10

Console screenshot.

I choose Run query to see the results.

Console screenshot.

I find that now there’s too much information in the output. I prefer to see only the data I need, so I enter the following sentence in the Prompt and choose Update query.

Show only timestamps and latency

The query is updated based on my input and only the timestamp and duration are returned:

fields @timestamp, @duration 
| filter @type = "REPORT" and @duration > 1000
| sort @duration desc
| limit 10

I run the updated query and get a result that is easier for me to read.

Console screenshot.
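
The generated queries are standard Logs Insights queries, so they can also be run outside the console. Here’s a minimal sketch with the CloudWatch Logs CLI, assuming the function’s log group is /aws/lambda/my-function and using a hypothetical time range in epoch seconds:

# Start the query over the selected time range
aws logs start-query \
    --log-group-name /aws/lambda/my-function \
    --start-time 1700000000 \
    --end-time 1700003600 \
    --query-string 'fields @timestamp, @duration | filter @type = "REPORT" and @duration > 1000 | sort @duration desc | limit 10'

# Fetch the results using the queryId returned by the previous command
aws logs get-query-results --query-id <query-id>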

Now, I want to know if there are any errors in the log. I enter this sentence in the Prompt and generate a new query:

Count the number of ERROR messages

As requested, the generated query is counting the messages that contain the ERROR string:

fields @message
| filter @message like /ERROR/
| stats count()

I run the query and find out that there are more errors than I expected. I need more information.

Console screenshot.

I use this prompt to update the query and get a better distribution of the errors:

Show the errors per hour

The updated query uses the bin() function to group the results in one-hour intervals.

fields @timestamp, @message
| filter @message like /ERROR/
| stats count(*) by bin(1h)

Let’s see a more advanced query about memory usage. I select the log groups of a few Lambda functions and type:

Show invocations with the most over-provisioned memory grouped by log stream

Before generating the query, I choose the gear icon to toggle the options to include my prompt and an explanation as comment. Here’s the result (I split the explanation over multiple lines for readability):

# Show invocations with the most over-provisioned memory grouped by log stream

fields @logStream, @memorySize/1000/1000 as memoryMB, @maxMemoryUsed/1000/1000 as maxMemoryUsedMB, (@memorySize/1000/1000 - @maxMemoryUsed/1000/1000) as overProvisionedMB 
| stats max(overProvisionedMB) as maxOverProvisionedMB by @logStream 
| sort maxOverProvisionedMB desc

# This query finds the amount of over-provisioned memory for each log stream by
# calculating the difference between the provisioned and maximum memory used.
# It then groups the results by log stream and calculates the maximum
# over-provisioned memory for each log stream. Finally, it sorts the results
# in descending order by the maximum over-provisioned memory to show
# the log streams with the most over-provisioned memory.

Now, I have the information I need to understand these errors. On the other side, I also have EC2 workloads. How are those instances running? Let’s look at some metrics.

Generate CloudWatch Metrics Insights queries with natural language
In the CloudWatch console, I select All metrics in the Metrics section. Then, in the Query tab, I use the Editor. If you prefer, the Query generator is also available in the Builder.

I choose Query generator like before. Then, I enter what I need using plain English:

Which 10 EC2 instances have the highest CPU utilization?

I choose Generate new query and get a result using the Metrics Insights syntax.

SELECT AVG("CPUUtilization")
FROM SCHEMA("AWS/EC2", InstanceId)
GROUP BY InstanceId
ORDER BY AVG() DESC
LIMIT 10

To see the graph, I choose Run.

Console screenshot.

Well, it looks like my EC2 instances are not doing much. This result shows how those instances are using the CPU, but what about storage? I enter this in the prompt and choose Update query:

How about the most EBS writes?

The updated query replaces the average CPU utilization with the sum of bytes written to all EBS volumes attached to the instance. It keeps the limit to only show the top 10 results.

SELECT SUM("EBSWriteBytes")
FROM SCHEMA("AWS/EC2", InstanceId)
GROUP BY InstanceId
ORDER BY SUM() DESC
LIMIT 10

I run the query and, by looking at the result, I have a better understanding of how storage is being used by my EC2 instances.
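
These Metrics Insights queries are not limited to the console. As a sketch, the CPU utilization query generated above could also run through the GetMetricData API by passing the SQL statement as an expression (the time range here is a placeholder):

# Run a Metrics Insights query through GetMetricData
aws cloudwatch get-metric-data \
    --start-time 2023-11-26T10:00:00Z \
    --end-time 2023-11-26T11:00:00Z \
    --metric-data-queries '[
      {
        "Id": "top_cpu",
        "Expression": "SELECT AVG(CPUUtilization) FROM SCHEMA(\"AWS/EC2\", InstanceId) GROUP BY InstanceId ORDER BY AVG() DESC LIMIT 10",
        "Period": 300
      }
    ]'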

Try entering some requests and run the generated queries over your logs and metrics to see how this works with your data.

Things to know
Amazon CloudWatch natural language query generation for logs and metrics is available in preview in the US East (N. Virginia) and US West (Oregon) AWS Regions.

There is no additional cost for using natural language query generation during the preview. You only pay for the cost of running the queries according to CloudWatch pricing.

Generated queries are produced by generative AI and dependent on factors including the data selected and available in your account. For these reasons, your results may vary.

When generating a query, you can include your original request and an explanation of the query as comments. To do so, choose the gear icon in the bottom right corner of the query edit window and toggle those options.

This new capability can help you generate and update queries for logs and metrics, saving you time and effort. This approach allows engineering teams to scale their operations without worrying about specific data knowledge or query expertise.

Use natural language to analyze your logs and metrics with Amazon CloudWatch.

Danilo

Amazon CloudWatch Logs now offers automated pattern analytics and anomaly detection

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amazon-cloudwatch-logs-now-offers-automated-pattern-analytics-and-anomaly-detection/

Searching through log data to find operational or business insights often feels like looking for a needle in a haystack. It usually requires you to manually filter and review individual log records. To help you with that, Amazon CloudWatch has added new capabilities to automatically recognize and cluster patterns among log records, extract noteworthy content and trends, and notify you of anomalies using advanced machine learning (ML) algorithms trained using decades of Amazon and AWS operational data.

Specifically, CloudWatch now offers the following:

  • The Patterns tab on the Logs Insights page finds recurring patterns in your query results and lets you analyze them in detail. This makes it easier to find what you’re looking for and drill down into new or unexpected content in your logs.
  • The Compare button in the time interval selector on the Logs Insights page lets you quickly compare the query result for the selected time range to a previous period, such as the previous day, week, or month. In this way, it takes less time to see what has changed compared to a previous stable scenario.
  • The Log Anomalies page in the Logs section of the navigation pane automatically surfaces anomalies found in your logs while they are processed during ingestion.

Let’s see how these work in practice with a typical troubleshooting journey. I will look at some application logs to find key patterns, compare two time periods to understand what changed, and finally see how detecting anomalies can help discover issues.

Finding recurring patterns in the logs
In the CloudWatch console, I choose Logs Insights from the Logs section of the navigation pane. To start, I have selected which log groups I want to query. In this case, I select a log group of a Lambda function that I want to inspect and choose Run query.

In the Patterns tab, I see the patterns that have been found in these log groups. One of the patterns seems to be an error. I can select it to quickly add it as a filter to my query and focus on the logs that contain this pattern. For now, I choose the magnifying glass icon to analyze the pattern.

Console screenshot.

In the Pattern inspect window, a histogram with the occurrences of the pattern in the selected time period is shown. After the histogram, samples from the logs are provided.

Console screenshot.

The variable parts of the pattern (such as numbers) have been extracted as “tokens.” I select the Token values tab to see the values for a token. I can select a token value to quickly add it as a filter to the query and focus on the logs that contain this pattern with this specific value.

Console screenshot.

I can also look at the Related patterns tab to see other logs that typically occurred at the same time as the pattern I am analyzing. For example, if I am looking at an ERROR log that was always written alongside a DEBUG log showing more details, I would see that relationship there.
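
Pattern clustering is also exposed in the Logs Insights query language itself through the pattern command. As a hedged sketch (the log group name and time range are placeholders, and the command syntax is the one documented at the time of writing), the error patterns could be computed from the CLI:

# Cluster the ERROR log events into recurring patterns
aws logs start-query \
    --log-group-name /aws/lambda/my-function \
    --start-time 1700000000 \
    --end-time 1700003600 \
    --query-string 'filter @message like /ERROR/ | pattern @message'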

Comparing logs with a previous period
To better understand what is happening, I choose the Compare button in the time interval selector. This updates the query to compare results with a previous period. For example, I choose Previous day to see what changed compared to yesterday.

Console screenshot.

In the Patterns tab, I notice that there has actually been a 10 percent decrease in the number of errors, so the current situation might not be too bad.

I choose the magnifying glass icon on the pattern with severity type ERROR to see a full comparison of the two time periods. The graph overlaps the occurrences of the pattern over the two periods (now and yesterday in this case) inside the selected time range (one hour).

Console screenshot.

Errors are decreasing but are still there. To reduce those errors, I make some changes to the application. I come back after some time to compare the logs, and a new ERROR pattern is found that was not present in the previous time period.

Console screenshot.

My update probably broke something, so I roll back to the previous version of the application. The errors I saw before are still there, but for now I keep that version as it is because their number is acceptable for my use case.

Detecting anomalies in the log
I am reassured by the decrease in errors that I discovered comparing the logs. But how can I know if something unexpected is happening? Anomaly detection for CloudWatch Logs looks for unexpected patterns in the logs as they are processed during ingestion and can be enabled at log group level.

I select Log groups in the navigation pane and type a filter to see the same log group I was looking at before. I choose Configure in the Anomaly detection column and select an Evaluation frequency of 5 minutes. Optionally, I can use a longer interval (up to 60 minutes) and add patterns to process only specific log events for anomaly detection.
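
Anomaly detection can also be enabled programmatically. This is a hedged sketch (the log group ARN is a placeholder, and the parameter names follow the CloudWatch Logs API at the time of writing):

# Create an anomaly detector for a log group, evaluated every 5 minutes
aws logs create-log-anomaly-detector \
    --log-group-arn-list arn:aws:logs:eu-west-1:123412341234:log-group:/aws/lambda/my-function \
    --evaluation-frequency FIVE_MIN

# List the anomalies found by a detector
aws logs list-anomalies --anomaly-detector-arn <anomaly-detector-arn>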

After I activate anomaly detection for this log group, incoming logs are constantly evaluated against historical baselines. I wait for a few minutes and, to see what has been found, I choose Log anomalies from the Logs section of the navigation pane.

Console screenshot.

To simplify this view, I can suppress anomalies that I am not interested in following. For now, I choose one of the anomalies in order to inspect the corresponding pattern in a way similar to before.

Console screenshot.

After this additional check, I am convinced there are no urgent issues with my application. With all the insights I collected with these new capabilities, I can now focus on the errors in the logs to understand how to solve them.

Things to know
Amazon CloudWatch automated log pattern analytics is available today in all commercial AWS Regions where Amazon CloudWatch Logs is offered, excluding the China (Beijing), China (Ningxia), and Israel (Tel Aviv) Regions.

The patterns and compare query features are charged according to existing Logs Insights query costs. Comparing a one-hour time period against another one-hour time period is equivalent to running a single query over a two-hour time period. Anomaly detection is included as part of your log ingestion fees, and there is no additional charge for this feature. For more information, see CloudWatch pricing.

Simplify how you analyze logs with CloudWatch automated log pattern analytics.

Danilo

Optimize your storage costs for rarely-accessed files with Amazon EFS Archive

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/optimize-your-storage-costs-for-rarely-accessed-files-with-amazon-efs-archive/

Today, we are introducing EFS Archive, a new storage class for Amazon Elastic File System (Amazon EFS) optimized for long-lived data that is rarely accessed.

With this launch, Amazon EFS supports three Regional storage classes:

  • EFS Standard – Powered by SSD storage and designed to deliver submillisecond latency for active data.
  • EFS Infrequent Access (EFS IA) – Cost-optimized for data accessed only a few times a quarter, and that doesn’t need the submillisecond latencies of EFS Standard.
  • EFS Archive – Cost-optimized for long-lived data accessed a few times a year or less and offering similar performance to EFS IA.

All Regional storage classes deliver gigabytes-per-second throughput and hundreds of thousands of IOPS performance and are designed for eleven nines of durability.

You don’t need to manually pick and choose a storage class for your file systems because EFS lifecycle management can automatically migrate files across storage classes based on their access patterns. This allows you to have a single shared file system that contains files processed in very different ways: from active latency-sensitive to cold rarely-accessed data.

Many datasets have subsets of data that are valuable for generating insights but aren’t often used. With EFS Archive, you can store rarely accessed data cost-effectively while keeping it in the same shared file system as other data. This simplified storage approach allows end users and applications to collaborate on large shared datasets in one place, making it easier and quicker to set up and scale analytics workloads.

Using EFS Archive, you can optimize costs for workloads with large file-based datasets that contain a mix of active and inactive data such as user shares, machine learning (ML) training datasets, SaaS applications, and data retained for regulatory compliance like financial transactions and medical records.

Let’s see how this works in practice.

Using EFS Archive storage
To use the new EFS Archive storage class, I need to configure lifecycle management for the file system. In the Amazon EFS console, I select one of my file systems and choose Edit. To use EFS Archive storage, the file system Throughput mode must be Elastic. Elastic Throughput is the recommended choice for most workloads because it is designed to provide applications with as much throughput as they need with pay-as-you-use pricing.

Console screenshot.

Now, I configure Lifecycle management to transition files into EFS IA or EFS Archive based on my workload’s access patterns.

Console screenshot.

My workloads rarely use files older than one month. Files older than a quarter are not used by normal activities but need to be kept for a longer time. Based on these considerations, I select to automatically transition files to EFS IA after 30 days and to EFS Archive after 90 days since the last access. These are the default settings for new file systems.

When one of my old files is accessed, it’s usually an indicator that it is being used in a new analysis, so it’ll become active again for some period. For this reason, I use the option to transition files back to Standard storage on their first access in IA or Archive storage.

I save changes, and that’s it! This file system will now automatically use different storage classes based on how files are being processed by my applications.
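
The same lifecycle policy can be applied with the EFS API. A minimal sketch, with a placeholder file system ID:

# Transition files to IA after 30 days and to Archive after 90 days since the last access,
# and move them back to Standard storage on their first access
aws efs put-lifecycle-configuration \
    --file-system-id fs-1111aaaa \
    --lifecycle-policies '[
      {"TransitionToIA": "AFTER_30_DAYS"},
      {"TransitionToArchive": "AFTER_90_DAYS"},
      {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"}
    ]'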

Things to know
EFS Archive is available today in all AWS Regions where Amazon EFS is offered, excluding those based in China.

To offer a more cost-optimized experience for colder, rarely-accessed files, EFS Archive offers 50 percent lower storage cost than EFS IA with a three times higher request charge when data is accessed. For more information, see Amazon EFS pricing.

You can use EFS Archive with existing file systems by configuring the file system lifecycle policies. New file systems are created by default with a lifecycle policy that automatically transitions files to EFS IA after 30 days and to EFS Archive after 90 days since the last access.

Optimize your storage costs by configuring lifecycle management for your Amazon EFS file systems.

Danilo

Introducing Amazon CloudFront KeyValueStore: A low-latency datastore for CloudFront Functions

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/introducing-amazon-cloudfront-keyvaluestore-a-low-latency-datastore-for-cloudfront-functions/

Amazon CloudFront allows you to securely deliver static and dynamic content with low latency and high transfer speeds. With CloudFront Functions, you can perform latency-sensitive customizations for millions of requests per second. For example, you can use CloudFront Functions to modify headers, normalize cache keys, rewrite URLs, or authorize requests.

Today, we are introducing CloudFront KeyValueStore, a secure global low-latency key value datastore that allows read access from within CloudFront Functions, enabling advanced customizable logic at the CloudFront edge locations.

Previously, you had to embed configuration data inside the function code. For example, data for determining if a URL should be redirected and which URL to redirect the viewer to. When embedding configuration data with the function code, every small change in configuration requires a code change and a redeployment of the function code. Updating and deploying code for every new lookup addition introduces the risk of making inadvertent changes to code. Also, the maximum function size is 10 KB, making it difficult for many use cases to fit all the data within the code.

With CloudFront KeyValueStore, you can now update the data associated with a function and the function code independently from each other. This simplifies function code and makes it easy to update data without the need to deploy code changes.

Let’s see how this works in practice.

Creating a CloudFront key value store
In the CloudFront console, I choose Functions from the navigation pane. In the KeyValueStores tab, I choose Create KeyValueStore.

Here, I have the option to import key value pairs from a JSON file in an Amazon Simple Storage Service (Amazon S3) bucket. I am not doing that now because I want to start with no keys. I enter a name and description and complete the creation of the key value store.

Console screenshot.

When the key value store has been created, I choose Edit in the Key value pairs section and then Add pair. I type hello for the key and Hello World for the value and save the changes. I can add more keys and values, but one key is enough for now.

Console screenshot.

When I update a key value store, changes are propagated to all CloudFront edge locations in a few seconds so that it can be used with low latency by the functions that are associated with the key value store. Let’s see how that works.
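
Key value pairs can also be managed from the CLI or the SDKs. Here’s a hedged sketch of adding or updating a key (the key value store ARN is a placeholder; the cloudfront-keyvaluestore commands use the ETag of the store for optimistic locking):

# Get the current ETag of the key value store
ETAG=$(aws cloudfront-keyvaluestore describe-key-value-store \
    --kvs-arn arn:aws:cloudfront::123412341234:key-value-store/EXAMPLE-ID \
    --query ETag --output text)

# Add or update a key value pair
aws cloudfront-keyvaluestore put-key \
    --kvs-arn arn:aws:cloudfront::123412341234:key-value-store/EXAMPLE-ID \
    --key hello --value "Hello World" \
    --if-match "$ETAG"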

Using CloudFront KeyValueStore from CloudFront Functions
In the CloudFront console, I choose Functions in the navigation pane and then Create function. I type a name for the function, select the cloudfront-js-2.0 runtime, and complete the creation of the function. Then, I use the new option to associate the key value store with this function.

Console screenshot.

I copy the key value store ID from the console to use it in the following function code:

import cf from 'cloudfront';

const kvsId = '<KEY_VALUE_STORE_ID>';

// This fails if the key value store is not associated with the function
const kvsHandle = cf.kvs(kvsId);

async function handler(event) {
    // Use the first part of the pathname as key, for example http(s)://domain/<key>/something/else
    const key = event.request.uri.split('/')[1]
    let value = "Not found" // Default value
    try {
        value = await kvsHandle.get(key);
    } catch (err) {
        console.log(`Kvs key lookup failed for ${key}: ${err}`);
    }
    var response = {
        statusCode: 200,
        statusDescription: 'OK',
        body: {
            encoding: 'text',
            data: `Key: ${key} Value: ${value}\n`
        }
    };
    return response;
}

This function uses the first part of the path of the request as key and responds with the name of the key and its value.

I save the changes and publish the function. In the Publish tab of the function, I associate the function with a CloudFront distribution that I created before. I use the Viewer Request event type and Default (*) cache behavior to intercept all requests to the distribution.

In the console, I go back to the list of functions and wait for the function to be deployed. Then, I use curl from the command line to download content from the distribution and test the result of the function.

First, I try with a couple of paths that invoke the function and look up the key I created before (hello):

curl https://distribution-domain.cloudfront.net/hello
Key: hello Value: Hello World

curl https://distribution-domain.cloudfront.net/hello/world
Key: hello Value: Hello World

It works! Then, I try with a different path to see that the default value I use in the code is returned when the key is not found.

curl https://distribution-domain.cloudfront.net/hi
Key: hi Value: Not found

Now that this first example works, let’s try something more advanced and useful.

Rewriting the URL using configuration data in CloudFront KeyValueStore
Let’s build a function that uses the content of the URL in the HTTP request to look up in a key value store the custom path that CloudFront should use to make the actual request. This function can help manage the multiple services that are part of a website.

For example, I want to update the blog platform I use for my website. The old blog has origin path /blog-v1 while the new blog has origin path /blog-v2.

Architectural diagram.

At first, I am still using the old blog. In the CloudFront console, I add the blog key to the key value store with value blog-v1.

Then, I create the following function and associate it with the distribution using Viewer Request event and Default (*) cache behavior to intercept all requests to the distribution.

import cf from 'cloudfront';

const kvsId = "<KEY_VALUE_STORE_ID>";

// This fails if the key value store is not associated with the function
const kvsHandle = cf.kvs(kvsId);

async function handler(event) {
    const request = event.request;
    // Use the first segment of the pathname as key
    // For example http(s)://domain/<key>/something/else
    const pathSegments = request.uri.split('/')
    const key = pathSegments[1]
    try {
        // Replace the first path of the pathname with the value of the key
        // For example http(s)://domain/<value>/something/else
        pathSegments[1] = await kvsHandle.get(key);
        const newUri = pathSegments.join('/');
        console.log(`${request.uri} -> ${newUri}`)
        request.uri = newUri;
    } catch (err) {
        // No change to the pathname if the key is not found
        console.log(`${request.uri} | ${err}`);
    }
    return request;
}

Now, when I type blog at the beginning of the URL path, the request will actually go to the blog-v1 path. CloudFront will make the HTTP request to the old blog because blog-v1 is the origin path used by the old blog.

For example, if I type https://distribution-domain.cloudfront.net/blog/index.html in a browser, I see the old blog (V1).

Browser screenshot showing blog V1.

In the console, I update the blog key with value blog-v2. I access the same URL after a few seconds, and now I reach the new blog (V2).

Browser screenshot showing blog V2.

As you can see, the public URL is the same, but the content has changed. More generally, this function assumes that URLs do not change between the two blog versions.

I can now add more keys for the different services that are part of my website (blog, support, help, commerce, and so on) and set their values to use the correct URL path for each of them. When I add a new version for one of them (for example, I migrate to a new commerce platform), I can configure a new origin and update the corresponding key to use the new origin path.

This is just an example of the flexibility you get when you separate configuration data from code. If you are already using CloudFront Functions, you can simplify your code by using CloudFront KeyValueStore.

Things to know
CloudFront KeyValueStore is available today in all edge locations globally. With CloudFront KeyValueStore, you pay only for what you use based on the read/write operations from the public API and the read operations from within CloudFront Functions. For more information, see CloudFront pricing.

You can manage a key value store using the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs. AWS CloudFormation support is coming soon. The maximum size of a key value store is 5 MB, and you can associate a single key value store with each function. The maximum size of a key is 512 bytes, and values can be up to 1 KB in size. When creating a key value store, you can import key/value data using a source file on Amazon S3 with this JSON structure:

{
  "data":[
    {
      "key":"key1",
      "value":"val1"
    },
    {
      "key":"key2",
      "value":"val2"
    }
  ]
}

Importing key/value data at creation can help automate the setup of a new environment (such as test or dev) and easily replicate the configuration from one environment to another (such as preproduction to production).

Simplify the way you add custom logic at the edge using CloudFront KeyValueStore.

Danilo

Happy anniversary, Amazon CloudFront: 15 years of evolution and internet advancements

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/happy-anniversary-amazon-cloudfront-15-years-of-evolution-and-internet-advancements/

I can’t believe it’s been 15 years since Amazon CloudFront was launched! When Amazon S3 became available in 2006, developers loved the flexibility and started to build a new kind of globally distributed application where storage was not a bottleneck. These applications needed to be performant, reliable, and cost-efficient for every user on the planet. So in 2008 a small team (a “two-pizza team”) launched CloudFront in just 200 days. Jeff Barr hinted at the new and yet unnamed service in September and introduced CloudFront two months later.

Since the beginning, CloudFront has provided an easy way to distribute content to end users with low latency, high data transfer speeds, and no long-term commitments. What started as a simple cache for Amazon S3 quickly evolved into a fully featured content delivery network. Now CloudFront delivers applications at blazing speeds across the globe, supporting live sporting events such as NFL, Cricket World Cup, and FIFA World Cup.

At the same time, we also want to provide you with the best tools to secure applications. In 2015, we announced AWS WAF integration with CloudFront to provide fast and secure access control at the edge. Then, we focused on developing robust threat intelligence by combining signals across services. This threat intelligence integrates with CloudFront, adding AWS Shield to protect applications from common exploits and distributed denial of service (DDoS) attacks. For example, we recently detected an unusual spike in HTTP/2 requests to Amazon CloudFront. We quickly realized that CloudFront had automatically mitigated a new type of HTTP request flood DDoS event.

A lot also happens at lower levels than HTTP. For example, when you serve your application with CloudFront, all of the packets received by the application are inspected by a fully inline DDoS mitigation system which doesn’t introduce any observable latency. In this way, L3/L4 DDoS attacks against CloudFront distributions are mitigated in real time.

We also made under-the-hood improvements like s2n-tls (short for “signal to noise”), an open-source implementation of the TLS protocol that has been designed to be small and fast with simplicity as a priority. Another similar improvement is s2n-quic, an open-source QUIC protocol implementation written in Rust.

With CloudFront, you can also control access to content through a number of capabilities. You can restrict access to only authenticated viewers or, through geo-restriction capability, configure the specific geographic locations that can access content.

Security is always important, but not every organization has dedicated security experts on staff. To make robust security more accessible, CloudFront now includes built-in protections such as one-click web application firewall setup, security recommendations, and an intuitive security dashboard. With these integrated security features, teams can put critical safeguards in place without deep security expertise. Our goal is to empower all customers to easily implement security best practices.

Web applications delivery
During the past 15 years, web applications have become much more advanced and essential to end users. When CloudFront launched, our focus was helping deliver content stored in S3 buckets. Dynamic content was introduced to optimize web applications where portions of a website change for each user. Dynamic content also improves access to APIs that need to be delivered globally.

As applications became more distributed, we looked at ways to help developers make efficient use of CloudFront’s global footprint and resources at the edge. To allow customization and personalization of content close to end users and minimize latency, Lambda@Edge was introduced.

When fewer compute resources are needed, CloudFront Functions can run lightweight JavaScript functions across edge locations for low-latency HTTP manipulations and personalized content delivery. Recently, CloudFront Functions expanded to further customize responses, including modifying HTTP status codes and response bodies.

Today, CloudFront handles over 3 trillion HTTP requests daily and uses a global network of more than 600 points of presence and 13 Regional edge caches in more than 100 cities across 50 countries. This scale helps power the most demanding online events. For example, during the 2023 Amazon Prime Day, CloudFront handled peak loads of over 500 million HTTP requests per minute, totaling over 1 trillion HTTP requests.

Amazon CloudFront has more than 600,000 active developers building and delivering applications to end users. To help teams work at their full speed, CloudFront introduced continuous deployment so developers can test and validate configuration changes on a portion of traffic before full deployment.

Media and entertainment
It’s now common to stream music, movies, and TV series to our homes, but 15 years ago, renting DVDs was still the norm. Running streaming servers was technically complex, requiring long-term contracts to access the global infrastructure needed for high performance.

First, we added support for audio and video streaming capabilities using custom protocols since technical standards were still evolving. To handle large audiences and simplify cost-effective delivery of live events, CloudFront launched live HTTP streaming and, shortly after, improved support for both Flash-based (popular at the time) and Apple iOS devices.

As the media industry continued moving to internet-based delivery, AWS acquired Elemental, a pioneer in software-defined video solutions. Integrating Elemental offerings helped provide services, software, and appliances that efficiently and economically scale video infrastructures for use cases such as broadcast and content production.

The evolution of technologies and infrastructure makes new ways of communicating possible, such as when NASA did the first-ever live 4K stream from space using CloudFront.

Today, the world’s largest events and leading video platforms rely on CloudFront to deliver massive video catalogs and live stream content to millions. For example, CloudFront delivered streams for the FIFA World Cup 2022 on behalf of more than 19 major broadcasters globally. More recently, CloudFront handled over 120 Tbps of peak data transfer during one of the Thursday Night Football games of the NFL season on Prime Video and helped deliver the Cricket World Cup to millions of viewers across the globe.

What’s next?
Many things have changed during these 15 years but the focus on security, performance, and scalability stays the same. At AWS, it’s always Day 1, and the CloudFront team is constantly looking for ways to improve based on your feedback.

The rise of botnets is driving an ever-evolving and highly dynamic threat landscape. Layer 7 DDoS attacks are becoming increasingly prevalent, and the pervasiveness of bot traffic keeps increasing. As this occurs, we are evolving how we mitigate threats at the network border, at the edge, and in the Region, making it simpler for customers to configure the right security options.

Web applications are becoming more complex and interactive, and viewer expectations for latency and resiliency are even more stringent. This will drive new innovation. As new applications use generative artificial intelligence (AI), needs will evolve. These trends will continue to grow, so our investments will be focused on improving security and edge compute capabilities to support these new use cases.

With the current macroeconomic environment, many customers, especially small and medium-sized businesses and startups, look at how they can reduce their costs. Providing optimal price-performance has always been a priority for CloudFront. Cacheable data transferred to CloudFront edge locations from AWS resources does not incur additional fees. Also, 1 TB of data transfer from CloudFront to the internet per month is included in the free tier. CloudFront operates on a pay-as-you-go model with no upfront costs or minimum usage requirements. For more info, see CloudFront pricing.

As we approach AWS re:Invent, take note of these sessions that can help you learn about the latest innovations and connect with experts:

To learn more on how to speed up your websites and APIs and keep them protected, see the Application Security and Performance section of the AWS Developer Center.

Reduce latency and improve the security for your applications with Amazon CloudFront.

Danilo

New – Multi-account search in AWS Resource Explorer

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-multi-account-search-in-aws-resource-explorer/

With AWS Resource Explorer, you can search for and discover your resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Kinesis data streams, and Amazon DynamoDB tables, across AWS Regions. Starting today, you can also search across accounts within your organization.

It takes just a few minutes to turn on and configure Resource Explorer for an entire organization or a specific organizational unit (OU) and use simple free-form text and filtered searches to find relevant AWS resources across accounts and Regions.

Multi-account search is available in the Resource Explorer console, anywhere in the AWS Management Console through the unified search bar (the search bar at the top of every AWS console page), and through the AWS Command Line Interface (AWS CLI), AWS SDKs, and AWS Chatbot. In this way, you can locate a resource quickly, navigate to the appropriate account and service, and take action.

In a well-architected environment, multiple AWS accounts are used to help isolate and manage business applications and data. You can now use Resource Explorer to simplify how you explore your resources across accounts and act on them at scale. For example, Resource Explorer can help you locate impacted resources across your entire organization when investigating increased operational costs, troubleshooting a performance issue, or remediating a security alert.

Let’s see how this works in practice.

Setting up multi-account search
You can set up multi-account search for your organization in four steps:

  1. Enable trusted access for AWS Account Management.
  2. Configure Resource Explorer in every account in the organization or in the OU you want to search through. You can do that in just a few clicks using AWS Systems Manager Quick Setup. Optionally, you can use AWS CloudFormation, or other management tools you are comfortable with.
  3. Although it is not mandatory, we suggest creating a delegated admin account for AWS Account Management. To centralize the permissions required for multi-account search, we then recommend using the delegated admin account to create the Resource Explorer multi-account views.
  4. Finally, you can create a multi-account view to start searching across the organization.

Create a multi-account view
I already implemented the first three steps in the previous list. Using the delegated admin account, I go to the Resource Explorer console. There, I choose Views in the Explore resources section and create a view.

I enter a name for the view and select Organization-wide resources visibility. In this way, I can allow visibility of resources located in accounts across my entire organization or in specific OUs. For this view, I select the whole organization.

Console screenshot.

For the Region, I select the one where I have the aggregator index. The aggregator index contains a replicated copy of the local index from every other Region where Resource Explorer has been turned on. Optionally, I can use a filter to limit which resources should be included in this view. I choose to include all resources and additional resource attributes such as tags.

Console screenshot.

Then, I complete the creation of the view. Now, by granting access to the view, I can control who can access what resource information in Resource Explorer.
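If you prefer to automate this step, the view can also be created programmatically. Here’s a minimal sketch using the AWS SDK for Python (Boto3); it assumes Resource Explorer is already configured with the aggregator index in this Region, and the organization ARN and view name are hypothetical values used only for illustration:

import boto3

# Resource Explorer API (boto3 client name: "resource-explorer-2").
# Run this in the Region that hosts the aggregator index.
explorer = boto3.client("resource-explorer-2")

# Hypothetical organization ARN; replace with your own.
ORG_ARN = "arn:aws:organizations::111122223333:organization/o-exampleorgid"

# Create a view scoped to the whole organization that also includes
# tags as an additional resource attribute.
view = explorer.create_view(
    ViewName="organization-wide-view",
    Scope=ORG_ARN,
    IncludedProperties=[{"Name": "tags"}],
)

print(view["View"]["ViewArn"])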

Using multi-account search
To try the new multi-account view, I choose Resource search from the Explore resources section of the navigation pane. In my query, I want to see if there are Amazon ElastiCache resources for an old version of Redis. I type elasticache:* redis3.2 in the Query field.

Console screenshot.

In the results, I see the different AWS accounts and Regions where these resources are based. For resources in my account, there is a link in the first column that opens that resource in the console. For resources in other accounts, I can use the console with the appropriate account and service to get more information or take action.
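The same query can also be run programmatically with an AWS SDK. Here’s a minimal sketch using the AWS SDK for Python (Boto3); the view ARN is a hypothetical placeholder for the multi-account view created above:

import boto3

explorer = boto3.client("resource-explorer-2")

# Hypothetical ARN of the multi-account view created earlier; replace with your own.
VIEW_ARN = "arn:aws:resource-explorer-2:eu-west-1:111122223333:view/organization-wide-view/example-id"

# Same free-form query used in the console: ElastiCache resources still on Redis 3.2.
response = explorer.search(
    QueryString="elasticache:* redis3.2",
    ViewArn=VIEW_ARN,
    MaxResults=50,
)

# Each result includes the owning account and the Region of the resource.
for resource in response["Resources"]:
    print(resource["OwningAccountId"], resource["Region"], resource["Arn"])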

Things to know
Multi-account search is available in the following AWS Regions: [[ Regions ]].

There is no additional charge for using AWS Resource Explorer, including for multi-account searches.

To share views with other accounts in an organization, we suggest you use the delegated admin account to create the view with the necessary visibility in terms of resources, Regions, and accounts within the organization and then use AWS Resource Access Manager to share access to the view. For example, you can create a view for a specific OU and then share the view with an account in that OU.
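As a sketch of what the sharing step might look like with the AWS SDK for Python (Boto3), assuming hypothetical ARNs and account IDs:

import boto3

ram = boto3.client("ram")

# Hypothetical view ARN and member account ID used only for illustration.
VIEW_ARN = "arn:aws:resource-explorer-2:eu-west-1:111122223333:view/ou-view/example-id"
MEMBER_ACCOUNT_ID = "444455556666"

# Share the Resource Explorer view with another account in the organization.
share = ram.create_resource_share(
    name="resource-explorer-ou-view",
    resourceArns=[VIEW_ARN],
    principals=[MEMBER_ACCOUNT_ID],
)

print(share["resourceShare"]["resourceShareArn"])

The account the view is shared with can then reference the shared view ARN in its own searches.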

Search for and discover relevant resources across accounts in your organization and across Regions with AWS Resource Explorer.

Danilo

AWS Weekly Roundup – CloudFront security dashboard, EBS snapshots improvements, and more – November 13, 2023

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-cloudfront-security-dashboard-ebs-snapshots-improvements-and-more-november-13-2023/

This week, it was really difficult to choose what to recap here because, as we’re getting closer to AWS re:Invent, service teams are delivering new capabilities at an incredible pace.

Last week’s launches
Here are some of the launches that caught my attention last week:

Amazon Aurora – Aurora MySQL zero-ETL integration with Amazon Redshift is now generally available. Get a walk-through in our AWS News Blog post. Here’s a recap of data integration innovations at AWS. Optimized reads for Aurora PostgreSQL provide up to 8x improved query latency and up to 30 percent cost savings for I/O-intensive applications. Here’s more of a deep dive from the AWS Database Blog.

Amazon EBS – You can now block public sharing of EBS snapshots. Read more about how that works in the launch post.

Amazon Data Lifecycle Manager – Support for pre- and post-script automation of EBS snapshots simplifies application-consistent snapshots. Here’s how to use it with Windows applications.

AWS Health – There’s now improved visibility into planned lifecycle events like end of standard support of a Kubernetes version in Amazon EKS, Amazon RDS certificate rotations, and end of support for other open source software. Here’s how it works.

Amazon CloudFront – Unified security dashboard to enable, monitor, and manage common security protections for your web applications directly from the CloudFront console. Read more at Introducing CloudFront Security Dashboard, a Unified CDN and Security Experience.

Amazon Connect – Reduced outbound telephony pricing across Europe and South America. It’s also easier now to deliver persistent chat experiences for end users.

AWS Lambda – Busy week for the Lambda team! There is now support for Amazon Linux 2023 as both a managed runtime and a container base image. More details in this Compute Blog post. There’s also enhanced auto scaling for Kafka event sources (the Compute Blog has a post with more details) and a faster polling scale-up rate for AWS Lambda functions configured with an Amazon SQS event source.

AWS CodeBuild – Now supports AWS Lambda compute to build and test software packages. Read about how it works in this post.

Amazon SQS – Now supports JSON protocol to reduce latency and client-side CPU usage. More in the launch post. There’s also a new integration for Amazon SQS in the Amazon EventBridge Pipes console (the week before that, Amazon Kinesis Data Streams was also integrated into the EventBridge Pipes console).

Amazon SNS – FIFO topics now support 3,000 messages per second by default.

Amazon EventBridge – There are 22 additional Amazon CloudWatch metrics to help you monitor the performance of your event buses. More info in this post from the AWS Compute Blog.

Amazon OpenSearch Service – Neural search makes it easier to create and manage semantic search applications.

Amazon Timestream – The UNLOAD statement simplifies exporting time-series data for additional insights.

Amazon Comprehend – New trust and safety features with toxicity detection and prompt safety classification. Read how to apply that to generative AI applications using LangChain.

AWS App Runner – Now available in London, Mumbai, and Paris AWS Regions.

AWS Application Migration Service – Support for AWS App2Container replatforming of .NET and Java-based applications.

Amazon FSx for OpenZFS – Now available in ten additional AWS Regions with support for additional deployment types in seven Regions.

AWS Global Accelerator – There’s now IPv6 support for Network Load Balancer (NLB) endpoints. It was already available for Application Load Balancers (ALBs) and Amazon Elastic Compute Cloud (Amazon EC2) instances.

Amazon GuardDuty – New machine learning (ML) capability enhances threat detection for Amazon EKS.

Other AWS news
Some other news and blog posts that you might have missed:

AWS Local Zones Credit Program – If you have low-latency or data residency requirements for your application, our Local Zones Credit Program can get you started. Fill out our form to receive $500 in AWS credits and apply it to a Local Zones workload.

Amazon CodeWhisperer – Customizing coding companions for organizations and optimizing for sustainability.

Sharing what we have learned – Creating a correction of errors document to understand what went wrong and what would be done to prevent it from happening again.

Good tips for containers – Securing API endpoints using Amazon API Gateway and Amazon VPC Lattice.

Another post in this amazing series – Let’s Architect! Tools for developers.

A few highlights from Community.AWS:

Don’t miss the latest AWS open source newsletter by my colleague Ricardo.

Upcoming AWS events
Check your calendars and sign up for these AWS events:

AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Uruguay (November 14), Central Asia (Kazakhstan, Uzbekistan, Kyrgyzstan, and Mongolia on November 17–18), and Guatemala (November 18).

AWS re:Invent (November 27 – December 1) – Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Browse the session catalog and attendee guides and check out the highlights for generative AI. In the AWS re:Invent Builder Hub you can find developer-focused sessions, events, competitions, and content.

Here you can browse all upcoming AWS-led in-person and virtual events and developer-focused events.

And that’s all from me for this week. We’re now taking a break. The next weekly roundup will be after re:Invent!

Danilo

This post is part of our Weekly Roundup series. Check back for a quick roundup of interesting news and announcements from AWS!

New for Amazon SQS – Update the AWS SDK to reduce latency

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-sqs-update-the-aws-sdk-to-reduce-latency/

With Amazon SQS, you can send and receive messages between software components at any scale. It was one of the first AWS services I used and as a Solutions Architect, I helped many customers take advantage of asynchronous communications using message queues. In fact, Amazon SQS has been generally available since July 2006 and, under the hood, has always used the same wire protocol based on XML that we call AWS Query protocol.

Today, I am happy to announce that Amazon SQS now supports a JSON-based wire protocol for all APIs. The AWS JSON protocol avoids many of the shortcomings of AWS Query protocol.

AWS JSON is more efficient than the previous XML-based protocol and can reduce both latency and client-side CPU usage when sending and receiving SQS messages. For example, for a request that sends a simple “hello world” message, the response body size using the old AWS Query protocol is about 400 bytes. The content length of the same SendMessage response using the new AWS JSON protocol is less than 1/3 of the previous size.

Using the New JSON-Based Protocol with Amazon SQS
This is the nicest part of this launch! To benefit from the AWS JSON protocol, you just need to update the AWS SDK to the latest version. While building this new capability, the SQS team made sure that no code changes are needed to use the new JSON-based wire protocol.
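As a minimal sketch with the AWS SDK for Python (Boto3), here’s ordinary SQS code that sends and receives a message; nothing in it references the wire protocol, so the same code runs over AWS Query with an older SDK and over AWS JSON with an up-to-date one (the queue name is just an example):

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue used only for illustration.
queue_url = sqs.create_queue(QueueName="my-test-queue")["QueueUrl"]

# Send a message; with a recent SDK, this request and its response
# are serialized with the AWS JSON protocol under the hood.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello world")

# Receive and delete the message using the same, unchanged API calls.
received = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5
)
for message in received.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])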

For example, we ran a benchmark using the AWS SDK for Java to compare the old and new wire protocols. We expect similar results from the other AWS SDKs. Based on AWS performance tests for a 5 KB message payload, the JSON protocol for Amazon SQS reduces end-to-end message processing latency by up to 23 percent and reduces application client-side CPU and memory usage. These numbers depend on the actual implementation and can differ from what you’ll see for your own applications.

Availability and Pricing
Amazon SQS support for the new JSON protocol is available today in all AWS Regions where SQS is offered. All generally available AWS SDKs now support AWS JSON for SQS APIs. To get the benefits of this optimization, you just need to update the AWS SDK to the latest version available.

There is no change in pricing when using the AWS JSON protocol. For more information, see Amazon SQS pricing.

If you’re interested in more details on how AWS protocols work, you can have a look at Smithy, the language we use at AWS for defining services and SDKs.

After almost 17 years of being generally available, we’re improving and innovating around our oldest service. Just a few months back, we introduced server-side encryption with Amazon SQS-managed encryption keys (SSE-SQS) by default for newly created queues, increased the default quota for high throughput mode for FIFO queues, and announced support for attribute-based access control (ABAC) using queue tags. More recently, Amazon SQS added new APIs to manage dead-letter queue (DLQ) redrive programmatically. At AWS, it’s always day 1!

Update the AWS SDK to improve CPU and memory usage and reduce latency when using Amazon SQS.

Danilo

New for Amazon Comprehend – Toxicity Detection

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-comprehend-toxicity-detection/

With Amazon Comprehend, you can extract insights from text without being a machine learning expert. Using its built-in models, Comprehend can analyze the syntax of your input documents and find entities, events, key phrases, personally identifiable information (PII), and the overall sentiment or sentiments associated with specific entities (such as brands or products).

Today, we are adding the capability to detect toxic content. This new capability helps you build safer environments for your end users. For example, you can use toxicity detection to improve the safety of applications open to external contributions such as comments. When using generative AI, toxicity detection can be used to check the input prompts and the output responses from large language models (LLMs).

You can use toxicity detection with the AWS Command Line Interface (AWS CLI) and AWS SDKs. Let’s see how this works in practice with a few examples using the AWS CLI, an AWS SDK, and an application that checks the input and output of an LLM.

Using Amazon Comprehend Toxicity Detection with AWS CLI
The new detect-toxic-content subcommand in the AWS CLI detects toxicity in text. The output contains one result for each input text segment. For each text segment, a list of labels is provided, each with a score (between 0 and 1).

Amazon Comprehend toxicity detection API

For example, this AWS CLI command analyzes one text segment and returns one Labels section and an overall Toxicity score for the segment between 0 and 1:

aws comprehend detect-toxic-content --language-code en --text-segments Text="'Good morning, it\'s a beautiful day.'"
{
    "ResultList": [
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.00039999998989515007
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.01510000042617321
                },
                {
                    "Name": "INSULT",
                    "Score": 0.004699999932199717
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 9.999999747378752e-05
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.0006000000284984708
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.03889999911189079
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.016899999231100082
                }
            ],
            "Toxicity": 0.012299999594688416
        }
    ]
}

As expected, all scores are close to zero, and no toxicity was detected in this text.

To pass input as a file, I first use the AWS CLI --generate-cli-skeleton option to generate a skeleton of the JSON syntax used by the detect-toxic-content command:

aws comprehend detect-toxic-content --generate-cli-skeleton
{
    "TextSegments": [
        {
            "Text": ""
        }
    ],
    "LanguageCode": "en"
}

I write the output to a file and add three text segments (I don’t reproduce here the toxic text used for this test). This time, different levels of toxic content are found. Each Labels section is related to the corresponding input text segment.

aws comprehend detect-toxic-content --cli-input-json file://input.json
{
    "ResultList": [
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.03020000085234642
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.12549999356269836
                },
                {
                    "Name": "INSULT",
                    "Score": 0.0738999992609024
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 0.024399999529123306
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.09510000050067902
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.023900000378489494
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.15549999475479126
                }
            ],
            "Toxicity": 0.06650000065565109
        },
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.03400000184774399
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.2676999866962433
                },
                {
                    "Name": "INSULT",
                    "Score": 0.1981000006198883
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 0.03139999881386757
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.1777999997138977
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.013000000268220901
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.8395000100135803
                }
            ],
            "Toxicity": 0.41280001401901245
        },
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.9997000098228455
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.39469999074935913
                },
                {
                    "Name": "INSULT",
                    "Score": 0.9265999794006348
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 0.04650000110268593
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.4203999936580658
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.3353999853134155
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.12409999966621399
                }
            ],
            "Toxicity": 0.8180999755859375
        }
    ]
}

Using Amazon Comprehend Toxicity Detection with AWS SDKs
Similar to what I did with the AWS CLI, I can use an AWS SDK to programmatically detect toxicity in my applications. The following Python script uses the AWS SDK for Python (Boto3) to detect toxicity in the text segments and print the labels if the score is greater than a specified threshold. In the code, I redacted the content of the second and third text segments and replaced it with ***.

import boto3

comprehend = boto3.client('comprehend')

THRESHOLD = 0.2
response = comprehend.detect_toxic_content(
    TextSegments=[
        {
            "Text": "You can go through the door go, he's waiting for you on the right."
        },
        {
            "Text": "***"
        },
        {
            "Text": "***"
        }
    ],
    LanguageCode='en'
)

result_list = response['ResultList']

for i, result in enumerate(result_list):
    labels = result['Labels']
    detected = [ l for l in labels if l['Score'] > THRESHOLD ]
    if len(detected) > 0:
        print("Text segment {}".format(i + 1))
        for d in detected:
            print("{} score {:.2f}".format(d['Name'], d['Score']))

I run the Python script. The output contains the labels and the scores detected in the second and third text segments. No toxicity is detected in the first text segment.

Text segment 2
HATE_SPEECH score 0.27
VIOLENCE_OR_THREAT score 0.84
Text segment 3
PROFANITY score 1.00
HATE_SPEECH score 0.39
INSULT score 0.93
HARASSMENT_OR_ABUSE score 0.42
SEXUAL score 0.34

Using Amazon Comprehend Toxicity Detection with LLMs
I deployed the Mistral 7B model using Amazon SageMaker JumpStart as described in this blog post.

To avoid toxicity in the responses of the model, I built a Python script with three functions:

  • query_endpoint invokes the Mistral 7B model using the endpoint deployed by SageMaker JumpStart.
  • check_toxicity uses Comprehend to detect toxicity in a text and return a list of the detected labels.
  • avoid_toxicity takes as input a list of the detected labels and returns a message describing what to do to avoid toxicity.

The query to the LLM goes through only if no toxicity is detected in the input prompt. Then, the response from the LLM is printed only if no toxicity is detected in output. In case toxicity is detected, the script provides suggestions on how to fix the input prompt.

Here’s the code of the Python script:

import json
import boto3

comprehend = boto3.client('comprehend')
sagemaker_runtime = boto3.client("runtime.sagemaker")

ENDPOINT_NAME = "<REPLACE_WITH_YOUR_SAGEMAKER_JUMPSTART_ENDPOINT>"
THRESHOLD = 0.2


def query_endpoint(prompt):
    payload = {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": 68,
            "no_repeat_ngram_size": 3,
        },
    }
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME, ContentType="application/json", Body=json.dumps(payload).encode("utf-8")
    )
    model_predictions = json.loads(response["Body"].read())
    generated_text = model_predictions[0]["generated_text"]
    return generated_text


def check_toxicity(text):
    response = comprehend.detect_toxic_content(
        TextSegments=[
            {
                "Text":  text
            }
        ],
        LanguageCode='en'
    )

    labels = response['ResultList'][0]['Labels']
    detected = [ l['Name'] for l in labels if l['Score'] > THRESHOLD ]

    return detected


def avoid_toxicity(detected):
    formatted = [ d.lower().replace("_", " ") for d in detected ]
    message = (
        "Avoid content that is toxic and is " +
        ", ".join(formatted) + ".\n"
    )
    return message


prompt = "Building a website can be done in 10 simple steps:"

detected_labels = check_toxicity(prompt)

if len(detected_labels) > 0:
    # Toxicity detected in the input prompt
    print("Please fix the prompt.")
    print(avoid_toxicity(detected_labels))
else:
    response = query_endpoint(prompt)

    detected_labels = check_toxicity(response)

    if len(detected_labels) > 0:
        # Toxicity detected in the output response
        print("Here's an improved prompt:")
        prompt = avoid_toxicity(detected_labels) + prompt
        print(prompt)
    else:
        print(response)

You won’t get a toxic response with the sample prompt in the script, but it’s reassuring to know that you can set up an automated process to check and mitigate toxicity if it happens.

Availability and Pricing
Toxicity detection for Amazon Comprehend is available today in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney).

When using toxicity detection, there are no long-term commitments, and you pay based on the number of input characters in units of 100 characters (1 unit = 100 characters), with a minimum charge of 3 units (300 characters) per request. For more information, see Amazon Comprehend pricing.
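As an illustration, assuming partial units are rounded up, a request with 450 input characters would be billed as 5 units, while a request with 120 characters would still be billed at the 3-unit minimum.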

Improve the safety of your online communities and simplify the adoption of LLMs in your applications with toxicity detection.

Danilo

AWS Weekly Roundup – re:Post Selections, SNS and SQS FIFO improvements, multi-VPC ENI attachments, and more – October 30, 2023

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-repost-selections-sns-and-sqs-fifo-improvements-multi-vpc-eni-attachments-and-more-october-30-2023/

It’s less than a month to AWS re:Invent, but interesting news doesn’t slow down in the meantime. This week is my turn to help keep you up to date!

Last week’s launches
Here are some of the launches that caught my attention last week:

AWS re:Post – With re:Post, you have access to a community of experts that helps you become even more successful on AWS. With Selections, community members can organize knowledge in an aggregated view to create learning paths or curated content sets.

Amazon SNS – First-in-First-out (FIFO) topics now support the option to store and replay messages without needing to provision a separate archival resource. This improves the durability of your event-driven applications and can help you recover from downstream failure scenarios. Find out more in this AWS Compute Blog post – Archiving and replaying messages with Amazon SNS FIFO. Also, you can now use custom data identifiers to protect not only common sensitive data (such as names, addresses, and credit card numbers) but also domain-specific sensitive data, such as your company’s employee IDs. You can find additional info on this feature in this AWS Security blog post – Mask and redact sensitive data published to Amazon SNS using managed and custom data identifiers.

Amazon SQS – With the increased throughput quota for FIFO high throughput mode, you can process up to 18,000 transactions per second, per API action. Note the throughput quota depends on the AWS Region.

Amazon OpenSearch Service – OpenSearch Serverless now supports automated time-based data deletion with new index lifecycle policies. To deliver accurate and low-latency vector search queries, OpenSearch can now intelligently evaluate the optimal filtering strategy, such as pre-filtering with approximate nearest neighbor (ANN) or filtering with exact k-nearest neighbor (k-NN). Also, OpenSearch Service now supports Internet Protocol Version 6 (IPv6).

Amazon EC2 – With multi-VPC ENI attachments, you can launch an instance with a primary elastic network interface (ENI) in one virtual private cloud (VPC) and attach a secondary ENI from another VPC. This helps maintain network-level segregation, but still allows specific workloads (like centralized appliances and databases) to communicate with each other across VPCs.

AWS CodePipeline – With parameterized pipelines, you can dynamically pass input parameters to a pipeline execution. You can now start a pipeline execution when a specific git tag is applied to a commit in the source repository.

Amazon MemoryDB – Now supports Graviton3-based R7g nodes that deliver up to 28 percent increased throughput compared to R6g. These nodes also deliver higher networking bandwidth.

Other AWS news
Here are a few posts from some of the other AWS and cloud blogs that I follow:

Networking & Content Delivery Blog – Some of the technical management and hardware decisions we make when building AWS network infrastructure: A Continuous Improvement Model for Interconnects within AWS Data Centers

Interconnect monitoring service infrastructure diagram

DevOps Blog – To help enterprise customers understand how many of developers use CodeWhisperer, how often they use it, and how often they accept suggestions: Introducing Amazon CodeWhisperer Dashboard and CloudWatch Metrics

Front-End Web & Mobile Blog – How to restrict access to your GraphQL APIs to consumers within a private network: Architecture Patterns for AWS AppSync Private APIs

Architecture Blog – Another post in this super interesting series: Let’s Architect! Designing systems for stream data processing

A serverless streaming data pipeline using Amazon Kinesis and AWS Glue

From Community.AWS: Load Testing WordPress Amazon Lightsail Instances and Future-proof Your .NET Apps With Foundation Model Choice and Amazon Bedrock.

Don’t miss the latest AWS open source newsletter by my colleague Ricardo.

Upcoming AWS events
Check your calendars and sign up for these AWS events:

AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Jaipur (November 4), Vadodara (November 4), Brasil (November 4), Central Asia (Kazakhstan, Uzbekistan, Kyrgyzstan, and Mongolia on November 17-18), and Guatemala (November 18).

AWS re:Invent (November 27 – December 1) – Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Browse the session catalog and attendee guides and check out the highlights for generative AI.

Here you can browse all upcoming AWS-led in-person and virtual events and developer-focused events.

And that’s all from me for this week. On to the next one!

Danilo

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!