Tag Archives: announcements

Amazon DocumentDB zero-ETL integration with Amazon OpenSearch Service is now available

Post Syndicated from Kaarthiik Thota original https://aws.amazon.com/blogs/big-data/amazon-documentdb-zero-etl-integration-with-amazon-opensearch-service-is-now-available/

Today, we are announcing the general availability of Amazon DocumentDB (with MongoDB compatibility) zero-ETL integration with Amazon OpenSearch Service.

Amazon DocumentDB provides native text search and vector search capabilities. With Amazon OpenSearch Service, you can perform advanced search analytics, such as fuzzy search, synonym search, cross-collection search, and multilingual search, on Amazon DocumentDB data.

Zero-ETL integration simplifies your architecture for advanced search analytics. It frees you from performing undifferentiated heavy lifting tasks and the costs associated with building and managing data pipeline architecture and data synchronization between the two services.

In this post, we show you how to configure zero-ETL integration of Amazon DocumentDB with OpenSearch Service using Amazon OpenSearch Ingestion. It involves performing a full load of Amazon DocumentDB data and continuously streaming the latest data to Amazon OpenSearch Service using change streams. For other ingestion methods, see the documentation.

Solution overview

At a high level, this solution involves the following steps:

  1. Enable change streams on the Amazon DocumentDB collections.
  2. Create the OpenSearch Ingestion pipeline.
  3. Load sample data on the Amazon DocumentDB cluster.
  4. Verify the data in OpenSearch Service.

Prerequisites

To implement this solution, you need the following prerequisites:

  • An Amazon DocumentDB cluster (the source for the integration)
  • An OpenSearch Service domain (the destination)
  • An Amazon S3 bucket that the OpenSearch Ingestion pipeline can use as a temporary staging area for data synchronization
  • An AWS Secrets Manager secret that holds the Amazon DocumentDB credentials
  • An IAM role with permissions to read from the Amazon DocumentDB cluster, use the S3 bucket, and write to the OpenSearch Service domain

Zero-ETL performs an initial full load of your collection by running a collection scan on the primary instance of your Amazon DocumentDB cluster. Depending on the size of the data, this can take several minutes to complete, and you may notice elevated resource consumption on your cluster during the load.

Enable change streams on the Amazon DocumentDB collections

Amazon DocumentDB change stream events comprise a time-ordered sequence of data changes due to inserts, updates, and deletes on your data. We use these change stream events to transmit data changes from the Amazon DocumentDB cluster to the OpenSearch Service domain.

Change streams are disabled by default; you can enable them at the individual collection level, database level, or cluster level. To enable change streams on your collections, complete the following steps:

  1. Connect to Amazon DocumentDB using mongo shell.
  2. Enable change streams on your collection with the following code. For this post, we use the Amazon DocumentDB database inventory and collection product:
    db.adminCommand({modifyChangeStreams: 1,
        database: "inventory",
        collection: "product", 
        enable: true});

If you have more than one collection for which you want to stream data into OpenSearch Service, enable change streams for each collection. If you want to enable it at the database or cluster level, see Enabling Change Streams.
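If you prefer to script this step instead of using the mongo shell, the following is a minimal sketch using the pymongo driver; the connection string is a placeholder, and an empty collection name applies the setting to every collection in the named database, per the Amazon DocumentDB change streams documentation.

from pymongo import MongoClient

# Placeholder connection string for your Amazon DocumentDB cluster
client = MongoClient(
    "mongodb://<user>:<password>@<cluster-endpoint>:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
)

# Enable change streams for every collection in the inventory database;
# an empty "collection" value applies the setting database-wide
client.admin.command({
    "modifyChangeStreams": 1,
    "database": "inventory",
    "collection": "",
    "enable": True,
})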

We recommend enabling change streams only for the collections you need to stream.

Create an OpenSearch Ingestion pipeline

OpenSearch Ingestion is a fully managed data collector that delivers real-time log and trace data to OpenSearch Service domains. OpenSearch Ingestion is powered by the open source data collector Data Prepper. Data Prepper is part of the open source OpenSearch project.

With OpenSearch Ingestion, you can filter, enrich, transform, and deliver your data for downstream analysis and visualization. OpenSearch Ingestion is serverless, so you don’t need to worry about scaling your infrastructure, operating your ingestion fleet, and patching or updating the software.

For a comprehensive overview of OpenSearch Ingestion, visit Amazon OpenSearch Ingestion, and for more information about the Data Prepper open source project, visit Data Prepper.

To create an OpenSearch Ingestion pipeline, complete the following steps:

  1. On the OpenSearch Service console, choose Pipelines in the navigation pane.
  2. Choose Create pipeline.
  3. For Pipeline name, enter a name (for example, zeroetl-docdb-to-opensearch).
  4. Set up pipeline capacity for compute resources to automatically scale your pipeline based on the current ingestion workload.
  5. Enter the minimum and maximum number of Ingestion OpenSearch Compute Units (OCUs). In this example, we use the default pipeline capacity settings of a minimum of 1 Ingestion OCU and a maximum of 4 Ingestion OCUs.

Each OCU is a combination of approximately 8 GiB of memory and 2 vCPUs that can handle an estimated 8 GiB of data per hour. OpenSearch Ingestion supports up to 96 OCUs, and it automatically scales up and down based on your ingest workload demand.

  1. Choose the configuration blueprint and under Use case in the navigation pane, choose ZeroETL.
  2. Select Zero-ETL with DocumentDB to build the pipeline configuration.

This pipeline is a combination of a source part from the Amazon DocumentDB settings and a sink part for OpenSearch Service.

You must specify an AWS Identity and Access Management (IAM) role (sts_role_arn) in several places in the pipeline configuration, with the necessary permissions to read data from the Amazon DocumentDB database and collection and write to an OpenSearch Service domain. This role is assumed by the OpenSearch Ingestion pipeline to make sure the right security posture is always maintained when moving the data from source to destination. To learn more, see Setting up roles and users in Amazon OpenSearch Ingestion.
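For reference, the following is a minimal boto3 sketch of creating such a role with the osis-pipelines.amazonaws.com trust relationship noted in the blueprint comments; the role name is a placeholder, and you still need to attach permissions policies for Amazon DocumentDB, Amazon S3, and OpenSearch Service separately.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets OpenSearch Ingestion pipelines assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "osis-pipelines.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# "Example-Role" is a placeholder name; attach the DocumentDB, S3, and
# OpenSearch Service permissions policies to this role in a separate step
iam.create_role(
    RoleName="Example-Role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)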

You need one OpenSearch Ingestion pipeline per Amazon DocumentDB collection.

version: "2"
documentdb-pipeline:
  source:
    documentdb:
      acknowledgments: true
      host: "<<docdb-2024-01-03-20-31-17.cluster-abcdef.us-east-1.docdb.amazonaws.com>>"
      port: 27017
      authentication:
        username: ${{aws_secrets:secret:username}}
        password: ${{aws_secrets:secret:password}}
      aws:
        sts_role_arn: "<<arn:aws:iam::123456789012:role/Example-Role>>"
      
      s3_bucket: "<<bucket-name>>"
      s3_region: "<<bucket-region>>" 
      # Optional s3_prefix for OpenSearch Ingestion to write the records
      # s3_prefix: "<<path_prefix>>"
      collections:
        # collection format: <databaseName>.<collectionName>
        - collection: "<<databaseName.collectionName>>"
          export: true
          stream: true
  sink:
    - opensearch:
        # REQUIRED: Provide an AWS OpenSearch endpoint
        hosts: [ "<<https://search-mydomain-1a2a3a4a5a6a7a8a9a0a9a8a7a.us-east-1.es.amazonaws.com>>" ]
        index: "<<index_name>>"
        index_type: custom
        document_id: "${getMetadata(\"primary_key\")}"
        action: "${getMetadata(\"opensearch_action\")}"
        # DocumentDB record creation or event timestamp
        document_version: "${getMetadata(\"document_version\")}"
        document_version_type: "external"
        aws:
          # REQUIRED: Provide a Role ARN with access to the domain. This role should have a trust relationship with osis-pipelines.amazonaws.com
          sts_role_arn: "<<arn:aws:iam::123456789012:role/Example-Role>>"
          # Provide the region of the domain.
          region: "<<us-east-1>>"
          # Enable the 'serverless' flag if the sink is an Amazon OpenSearch Serverless collection
          # serverless: true
          # serverless_options:
            # Specify a name here to create or update network policy for the serverless collection
            # network_policy_name: "network-policy-name"
          
extension:
  aws:
    secrets:
      secret:
        # Secret name or secret ARN
        secret_id: "<<my-docdb-secret>>"
        region: "<<us-east-1>>"
        sts_role_arn: "<<arn:aws:iam::123456789012:role/Example-Role>>"
        refresh_interval: PT1H 

Provide the following parameters from the blueprint:

  • Amazon DocumentDB endpoint – Provide your Amazon DocumentDB cluster endpoint.
  • Amazon DocumentDB collection – Provide your Amazon DocumentDB database name and collection name in the format dbname.collection within the collections section. For example, inventory.product.
  • s3_bucket – Provide your S3 bucket name along with the AWS Region and S3 prefix. This will be used temporarily to hold the data from Amazon DocumentDB for data synchronization.
  • OpenSearch hosts – Provide the OpenSearch Service domain endpoint for the host and provide the preferred index name to store the data.
  • secret_id – Provide the ARN for the secret for the Amazon DocumentDB cluster along with its Region.
  • sts_role_arn – Provide the ARN for the IAM role that has permissions for the Amazon DocumentDB cluster, S3 bucket, and OpenSearch Service domain.

To learn more, see Creating Amazon OpenSearch Ingestion pipelines.

  1. After entering all the required values, validate the pipeline configuration for any errors.
  2. When designing a production workload, deploy your pipeline within a VPC: select Attach to VPC, then choose your VPC, subnets, and security groups, along with the corresponding VPC CIDR range.

The security group inbound rule should have access to the Amazon DocumentDB port. For more information, refer to Securing Amazon OpenSearch Ingestion pipelines within a VPC.
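For example, the following boto3 sketch (with hypothetical security group IDs) allows the pipeline's security group to reach the Amazon DocumentDB port used in the pipeline configuration:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # use your cluster's Region

# Hypothetical IDs: the first security group is attached to the DocumentDB
# cluster, the second to the OpenSearch Ingestion pipeline
ec2.authorize_security_group_ingress(
    GroupId="sg-0docdbclusterexample",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 27017,
            "ToPort": 27017,
            "UserIdGroupPairs": [{"GroupId": "sg-0osispipelineexample"}],
        }
    ],
)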

Load sample data on the Amazon DocumentDB cluster

Complete the following steps to load the sample data:

  1. Connect to your Amazon DocumentDB cluster.
  2. Insert some documents into the collection product in the inventory database by running the following commands. For creating and updating documents on Amazon DocumentDB, refer to Working with Documents.
    use inventory;
    
     db.product.insertMany([
       {
          "Item":"Ultra GelPen",
          "Colors":[
             "Violet"
          ],
          "Inventory":{
             "OnHand":100,
             "MinOnHand":35
          },
          "UnitPrice":0.99
       },
       {
          "Item":"Poster Paint",
          "Colors":[
             "Red",
             "Green",
             "Blue",
             "Black",
             "White"
          ],
          "Inventory":{
             "OnHand":47,
             "MinOnHand":50
          }
       },
       {
          "Item":"Spray Paint",
          "Colors":[
             "Black",
             "Red",
             "Green",
             "Blue"
          ],
          "Inventory":{
             "OnHand":47,
             "MinOnHand":50,
             "OrderQnty":36
          }
       }
    ])

Verify the data in OpenSearch Service

You can use the OpenSearch Dashboards dev console to search for the synchronized items within a few seconds. For more information, see Creating and searching for documents in Amazon OpenSearch Service.
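Outside of the dev console, you can also query the index programmatically. The following is a minimal sketch using the opensearch-py client; the domain endpoint, credentials, and index name are placeholders, and it assumes a domain that uses the internal user database for basic authentication.

from opensearchpy import OpenSearch

# Placeholder endpoint, credentials, and index name
client = OpenSearch(
    hosts=[{"host": "search-mydomain-xxxxxxxxxx.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("master-user", "master-password"),
    use_ssl=True,
    verify_certs=True,
)

# Search for one of the documents inserted earlier
response = client.search(
    index="product-index",
    body={"query": {"match": {"Item": "Ultra GelPen"}}},
)
print(response["hits"]["hits"])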

To verify the change data capture (CDC), run the following command to update the OnHand and MinOnHand fields for the existing document item Ultra GelPen in the product collection:

db.product.updateOne({
   "Item":"Ultra GelPen"
},
{
   "$set":{
      "Inventory":{
         "OnHand":300,
         "MinOnHand":100
      }
   }
});

Verify the CDC for the update to the document for the item Ultra GelPen on the OpenSearch Service index.

Monitor the CDC pipeline

You can monitor the state of the pipeline by checking its status on the OpenSearch Service console. Additionally, Amazon CloudWatch provides real-time metrics and logs, which lets you set up alerts when user-defined thresholds are breached.
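For example, the following boto3 sketch lists the pipeline metrics that OpenSearch Ingestion publishes to CloudWatch; the AWS/OSIS namespace is an assumption based on the service's CloudWatch integration, so confirm the namespace and metric names in the monitoring documentation before building alarms on them.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # pipeline Region

# Assumed namespace for OpenSearch Ingestion metrics; verify in the documentation
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="AWS/OSIS"):
    for metric in page["Metrics"]:
        print(metric["MetricName"], metric["Dimensions"])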

Clean up

To avoid additional charges, clean up the AWS resources that you created while following this post. Follow these steps to clean up your AWS account:

  1. On the OpenSearch Service console, choose Domains under Managed clusters in the navigation pane.
  2. Select the domain you want to delete and choose Delete.
  3. Choose Pipelines under Ingestion in the navigation pane.
  4. Select the pipeline you want to delete and on the Actions menu, choose Delete.
  5. On the Amazon S3 console, select the S3 bucket and choose Delete.

Conclusion

In this post, you learned how to enable zero-ETL integration between Amazon DocumentDB change data streams and OpenSearch Service. To learn more about zero-ETL integrations available with other data sources, see Working with Amazon OpenSearch Ingestion pipeline integrations.


About the Authors

Praveen Kadipikonda is a Senior Analytics Specialist Solutions Architect at AWS based out of Dallas. He helps customers build efficient, performant, and scalable analytic solutions. He has worked with building databases and data warehouse solutions for over 15 years.

Kaarthiik Thota is a Senior Amazon DocumentDB Specialist Solutions Architect at AWS based out of London. He is passionate about database technologies and enjoys helping customers solve problems and modernize applications using NoSQL databases. Before joining AWS, he worked extensively with relational databases, NoSQL databases, and business intelligence technologies for over 15 years.

Muthu Pitchaimani is a Search Specialist with Amazon OpenSearch Service. He builds large-scale search applications and solutions. Muthu is interested in the topics of networking and security, and is based out of Austin, Texas.

Safely remove Kafka brokers from Amazon MSK provisioned clusters

Post Syndicated from Vidhi Taneja original https://aws.amazon.com/blogs/big-data/safely-remove-kafka-brokers-from-amazon-msk-provisioned-clusters/

Today, we are announcing broker removal capability for Amazon Managed Streaming for Apache Kafka (Amazon MSK) provisioned clusters, which lets you remove multiple brokers from your provisioned clusters. You can now reduce your cluster’s storage and compute capacity by removing sets of brokers, with no availability impact, data durability risk, or disruption to your data streaming applications. Amazon MSK is a fully managed Apache Kafka service that makes it easy for developers to build and run highly available, secure, and scalable streaming applications. Administrators can optimize the costs of their Amazon MSK clusters by reducing broker count and adapting the cluster capacity to the changes in the streaming data demand, without affecting their clusters’ performance, availability, or data durability.

You can use Amazon MSK as a core foundation to build a variety of real-time streaming applications and high-performance event-driven architectures. As business needs and traffic patterns change, cluster capacity is often adjusted to optimize costs. Amazon MSK provides flexibility and elasticity for administrators to right-size MSK clusters. You can increase the broker count or broker size to manage a surge in traffic during peak events, or decrease the instance size of the cluster's brokers to reduce capacity. Previously, however, reducing the broker count required an effort-intensive migration to another cluster.

With the broker removal capability, you can now remove multiple brokers from your provisioned clusters to meet the varying needs of your streaming workloads. During and post broker removal, the cluster continues to handle read and write requests from the client applications. MSK performs the necessary validations to safeguard against data durability risks and gracefully removes the brokers from the cluster. By using broker removal capability, you can precisely adjust MSK cluster capacity, eliminating the need to change the instance size of every broker in the cluster or having to migrate to another cluster to reduce broker count.

How the broker removal feature works

Before you execute the broker removal operation, you must make some brokers eligible for removal by moving all partitions off of them. You can use Kafka admin APIs or Cruise Control to move partitions to other brokers that you intend to retain in the cluster.

Choose which brokers to remove, and move the partitions from those brokers to other brokers using Kafka tools; alternatively, you may already have brokers that are not hosting any partitions. Then use the Edit number of brokers feature on the AWS Management Console, or the Amazon MSK UpdateBrokerCount API (see the boto3 sketch after the following list). Here are details on how you can use this new feature:

  • You can remove a maximum of one broker per Availability Zone (AZ) in a single broker removal operation. To remove more brokers, you can call multiple broker removal operations consecutively after the prior operation has been completed. You must retain at least one broker per AZ in your MSK cluster.
  • The target number of broker nodes in the cluster must be a multiple of the number of availability zones (AZs) in the client subnets parameter. For example, a cluster with subnets in two AZs must have a target number of nodes that is a multiple of two.
  • If the brokers you removed were present in the bootstrap broker string, MSK will perform the necessary routing so that the client’s connectivity to the cluster is not disrupted. You don’t need to make any client changes to change your bootstrap strings.
  • You can add brokers back to your cluster anytime using AWS Console, or the UpdateBrokerCount API.
  • Broker removal is supported on Kafka versions 2.8.1 and above. If you have clusters in lower versions, you must first upgrade to version 2.8.1 or above and then remove brokers.
  • Broker removal doesn’t support the t3.small instance type.
  • You will stop incurring costs for the removed brokers once the broker removal operation is completed successfully.
  • When brokers are removed from a cluster, their associated local storage is removed as well.
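The following is a minimal boto3 sketch of the UpdateBrokerCount call referenced above; the cluster ARN is a placeholder, and the target broker count must follow the per-AZ rules in the preceding list.

import boto3

msk = boto3.client("kafka", region_name="us-east-1")  # your cluster's Region
cluster_arn = "<your-cluster-arn>"  # placeholder

# UpdateBrokerCount requires the cluster's current version string
current_version = msk.describe_cluster(ClusterArn=cluster_arn)["ClusterInfo"]["CurrentVersion"]

# Set the target broker count; it must be a multiple of the number of AZs
# and leave at least one broker per AZ
msk.update_broker_count(
    ClusterArn=cluster_arn,
    CurrentVersion=current_version,
    TargetNumberOfBrokerNodes=6,
)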

Considerations before removing brokers

Removing brokers from an existing Apache Kafka cluster is a critical operation that needs careful planning to avoid service disruption. When deciding how many brokers you should remove from the cluster, determine your cluster’s minimum broker count by considering your requirements around availability, durability, local data retention, and partition count. Here are a few things you should consider:

  • Check the Amazon CloudWatch BytesInPerSec and BytesOutPerSec metrics for your cluster and look for the peak load over a period of 1 month (see the CloudWatch sketch after this list). Use this data with the MSK sizing Excel file to identify how many brokers you need to handle your peak load. If the number of brokers listed in the Excel file is higher than the number of brokers that would remain after removal, do not proceed with this operation; it indicates that removing brokers would leave too few brokers in the cluster, which can lead to availability impact for your cluster or applications.
  • Check UserPartitionExists metrics to verify that you have at least 1 empty broker per AZ in your cluster. If not, make sure to remove partitions from at least one broker per AZ before invoking the operation.
  • If you have more than one broker per AZ with no user partitions on them, MSK will randomly pick one of those during the removal operation.
  • Check the PartitionCount metrics to know the number of partitions that exist on your cluster. Check per broker partition limit. The broker removal feature will not allow the removal of brokers if the service detects that any brokers in the cluster have breached the partition limit. In that case, check if any unused topics could be removed instead to free up broker resources.
  • Check if the estimated storage in the Excel file exceeds the currently provisioned storage for the cluster. In that case, first provision additional storage on that cluster. If you are hitting per-broker storage limits, consider approaches like using MSK tiered storage or removing unused topics. Otherwise, avoid moving partitions to just a few brokers as that may lead to a disk full issue.
  • If the brokers you are planning to remove host partitions, make sure those partitions are reassigned to other brokers in the cluster. Use the kafka-reassign-partitions.sh tool or Cruise Control to initiate partition reassignment. Monitor the progress of reassignment to completion. Disregard the __amazon_msk_canary, __amazon_msk_canary_state internal topics, because they are managed by the service and will be automatically removed by MSK while executing the operation.
  • Verify that the cluster status is Active before starting the removal process.
  • Check the performance of the workload on your production environment after you move those partitions. We recommend monitoring this for a week before you remove the brokers to make sure that the other brokers in your cluster can safely handle your traffic patterns.
  • If you experience any impact on your applications or cluster availability after removing brokers, you can add the same number of brokers that you removed earlier by using the UpdateBrokerCount API, and then reassign partitions to the newly added brokers.
  • We recommend you test the entire process in a non-production environment, to identify and resolve any issues before making changes in the production environment.
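For the sizing check in the first item of the preceding list, a boto3 sketch along the following lines can pull the peak BytesInPerSec for a broker over the last month; the cluster name and broker ID are placeholders, and the dimension names assume the standard per-broker MSK metrics in the AWS/Kafka namespace. Repeat the call (or use CloudWatch metric math) for each broker in the cluster.

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # cluster Region

# Placeholder cluster name and broker ID
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Kafka",
    MetricName="BytesInPerSec",
    Dimensions=[
        {"Name": "Cluster Name", "Value": "my-msk-cluster"},
        {"Name": "Broker ID", "Value": "1"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=30),
    EndTime=datetime.now(timezone.utc),
    Period=3600,             # hourly data points over the month
    Statistics=["Maximum"],
)

peak = max((dp["Maximum"] for dp in response["Datapoints"]), default=0)
print(f"Peak BytesInPerSec for broker 1 over the last 30 days: {peak:.0f}")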

Conclusion

Amazon MSK’s new broker removal capability provides a safe way to reduce the capacity of your provisioned Apache Kafka clusters. By allowing you to remove brokers without impacting availability, data durability, or disrupting your streaming applications, this feature enables you to optimize costs and right-size your MSK clusters based on changing business needs and traffic patterns. With careful planning and by following the recommended best practices, you can confidently use this capability to manage your MSK resources more efficiently.

Start taking advantage of the broker removal feature in Amazon MSK today. Review the documentation and follow the step-by-step guide to test the process in a non-production environment. Once you are comfortable with the workflow, plan and execute broker removal in your production MSK clusters to optimize costs and align your streaming infrastructure with your evolving workload requirements.


About the Authors


Vidhi Taneja is a Principal Product Manager for Amazon Managed Streaming for Apache Kafka (Amazon MSK) at AWS. She is passionate about helping customers build streaming applications at scale and derive value from real-time data. Before joining AWS, Vidhi worked at Apple, Goldman Sachs and Nutanix in product management and engineering roles. She holds an MS degree from Carnegie Mellon University.


Anusha Dasarakothapalli is a Principal Software Engineer for Amazon Managed Streaming for Apache Kafka (Amazon MSK) at AWS. She started her software engineering career with Amazon in 2015 and worked on products such as S3-Glacier and S3 Glacier Deep Archive, before transitioning to MSK in 2022. Her primary areas of focus lie in streaming technology, distributed systems, and storage.


Masudur Rahaman Sayem is a Streaming Data Architect at AWS. He works with AWS customers globally to design and build data streaming architectures to solve real-world business problems. He specializes in optimizing solutions that use streaming data services and NoSQL. Sayem is very passionate about distributed computing.

Introducing Amazon MWAA support for the Airflow REST API and web server auto scaling

Post Syndicated from Mansi Bhutada original https://aws.amazon.com/blogs/big-data/introducing-amazon-mwaa-support-for-the-airflow-rest-api-and-web-server-auto-scaling/

Apache Airflow is a popular platform for enterprises looking to orchestrate complex data pipelines and workflows. Amazon Managed Workflows for Apache Airflow (Amazon MWAA) is a managed service that streamlines the setup and operation of secure and highly available Airflow environments in the cloud.

In this post, we’re excited to introduce two new features that address common customer challenges and unlock new possibilities for building robust, scalable, and flexible data orchestration solutions using Amazon MWAA. First, the Airflow REST API support enables programmatic interaction with Airflow resources like connections, Directed Acyclic Graphs (DAGs), DAGRuns, and Task instances. Second, the option to horizontally scale web server capacity helps you handle increased demand, whether from REST API requests, command line interface (CLI) usage, or more concurrent Airflow UI users. Both features are available for all actively supported Amazon MWAA versions, including version 2.4.3 and newer.

Airflow REST API support

A frequently requested feature from Amazon MWAA customers has been the ability to interact with their workflows programmatically using Airflow’s APIs. The introduction of REST API support in Amazon MWAA addresses this need, providing a standardized way to access and manage your Airflow environment. With the new REST API, you can now invoke DAG runs, manage datasets, or get the status of Airflow’s metadata database, triggerer, and scheduler—all without relying on the Airflow web UI or CLI.

Other examples include building monitoring dashboards that aggregate the status of your DAGs across multiple Amazon MWAA environments, or invoking workflows in response to events from external systems, such as completed database jobs or new user signups.

This feature opens up a world of possibilities for integrating your Amazon MWAA environments with other systems and building custom solutions that use the power of your data orchestration pipelines.

To demonstrate this new capability, we use the REST API to invoke a new DAG run. Follow the process detailed in the following sections.

Authenticate with the Airflow REST API

For a user to authenticate with the REST API, they need the necessary permissions to create a web login token, similar to how it works with the Airflow UI. Refer to Creating an Apache Airflow web login token for more details. The user’s AWS Identity and Access Management (IAM) role or policy must include the CreateWebLoginToken permission to generate a token for authenticating. Furthermore, the user’s permissions for interacting with the REST API are determined by the Airflow role assigned to them within Amazon MWAA. The Airflow roles govern the user’s ability to perform various operations, such as invoking DAG runs, checking statuses, or modifying configurations, through the REST API endpoints.

The following is an example of the authentication process:

def get_session_info(region, env_name):
    """
    Retrieves the web server hostname and session cookie for an MWAA environment.
    
    Args:
        region (str): The AWS region where the MWAA environment is located.
        env_name (str): The name of the MWAA environment.

    Returns:
        tuple: A tuple containing the web server hostname and session cookie, or (None, None) on failure.
    """

    logging.basicConfig(level=logging.INFO)

    try:
        # Initialize MWAA client and request a web login token
        mwaa = boto3.client('mwaa', region_name=region)
        response = mwaa.create_web_login_token(Name=env_name)
        
        # Extract the web server hostname and login token
        web_server_host_name = response["WebServerHostname"]
        web_token = response["WebToken"]
        
        # Construct the URL needed for authentication 
        login_url = f"https://{web_server_host_name}/aws_mwaa/login"
        login_payload = {"token": web_token}

        # Make a POST request to the MWAA login url using the login payload
        response = requests.post(
            login_url,
            data=login_payload,
            timeout=10
        )

        # Check if login was successful
        if response.status_code == 200:
        
            # Return the hostname and the session cookie 
            return (
                web_server_host_name,
                response.cookies["session"]
            )
        else:
            # Log an error
            logging.error("Failed to log in: HTTP %d", response.status_code)
            return None, None
    except requests.RequestException as e:
         # Log any exceptions raised during the request to the MWAA login endpoint
        logging.error("Request failed: %s", str(e))
        return None, None
    except Exception as e:
        # Log any other unexpected exceptions
        logging.error("An unexpected error occurred: %s", str(e))
        return None, None

The get_session_info function uses the AWS SDK for Python (Boto3) and the Python requests library to perform the initial steps required for authentication: retrieving a web login token and exchanging it for a session cookie, which is valid for 12 hours. The session cookie is used for subsequent REST API requests.

Invoke the Airflow REST API endpoint

When authentication is complete, you have the credentials to start sending requests to the API endpoints. In the following example, we use the endpoint /dags/{dag_id}/dagRuns to initiate a DAG run:

def trigger_dag(region, env_name, dag_name):
    """
    Triggers a DAG in a specified MWAA environment using the Airflow REST API.

    Args:
    region (str): AWS region where the MWAA environment is hosted.
    env_name (str): Name of the MWAA environment.
    dag_name (str): Name of the DAG to trigger.
    """

    logging.info(f"Attempting to trigger DAG {dag_name} in environment {env_name} at region {region}")

    # Retrieve the web server hostname and session cookie for authentication
    try:
        web_server_host_name, session_cookie = get_session_info(region, env_name)
        if not session_cookie:
            logging.error("Authentication failed, no session cookie retrieved.")
            return
    except Exception as e:
        logging.error(f"Error retrieving session info: {str(e)}")
        return

    # Prepare headers and payload for the request
    cookies = {"session": session_cookie}
    json_body = {"conf": {}}

    # Construct the URL for triggering the DAG
    url = f"https://{web_server_host_name}/api/v1/dags/{dag_name}/dagRuns"

    # Send the POST request to trigger the DAG
    try:
        response = requests.post(url, cookies=cookies, json=json_body)
        # Check the response status code to determine if the DAG was triggered successfully
        if response.status_code == 200:
            logging.info("DAG triggered successfully.")
        else:
            logging.error(f"Failed to trigger DAG: HTTP {response.status_code} - {response.text}")
    except requests.RequestException as e:
        logging.error(f"Request to trigger DAG failed: {str(e)}")

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    # Check if the correct number of arguments is provided
    if len(sys.argv) != 4:
        logging.error("Incorrect usage. Proper format: python script_name.py <region> <env_name> <dag_name>")
        sys.exit(1)

    region = sys.argv[1]
    env_name = sys.argv[2]
    dag_name = sys.argv[3]

    # Trigger the DAG with the provided arguments
    trigger_dag(region, env_name, dag_name)

The following is the complete code of trigger_dag.py:

import sys
import boto3
import requests
import logging

def get_session_info(region, env_name):

    """
    Retrieves the web server hostname and session cookie for an MWAA environment.
    
    Args:
        region (str): The AWS region where the MWAA environment is located.
        env_name (str): The name of the MWAA environment.

    Returns:
        tuple: A tuple containing the web server hostname and session cookie, or (None, None) on failure.
    """

    logging.basicConfig(level=logging.INFO)

    try:
        # Initialize MWAA client and request a web login token
        mwaa = boto3.client('mwaa', region_name=region)
        response = mwaa.create_web_login_token(Name=env_name)
        
        # Extract the web server hostname and login token
        web_server_host_name = response["WebServerHostname"]
        web_token = response["WebToken"]
        
        # Construct the URL needed for authentication 
        login_url = f"https://{web_server_host_name}/aws_mwaa/login"
        login_payload = {"token": web_token}

        # Make a POST request to the MWAA login url using the login payload
        response = requests.post(
            login_url,
            data=login_payload,
            timeout=10
        )

        # Check if login was successful
        if response.status_code == 200:
        
            # Return the hostname and the session cookie 
            return (
                web_server_host_name,
                response.cookies["session"]
            )
        else:
            # Log an error
            logging.error("Failed to log in: HTTP %d", response.status_code)
            return None, None
    except requests.RequestException as e:
         # Log any exceptions raised during the request to the MWAA login endpoint
        logging.error("Request failed: %s", str(e))
        return None, None
    except Exception as e:
        # Log any other unexpected exceptions
        logging.error("An unexpected error occurred: %s", str(e))
        return None, None

def trigger_dag(region, env_name, dag_name):
    """
    Triggers a DAG in a specified MWAA environment using the Airflow REST API.

    Args:
    region (str): AWS region where the MWAA environment is hosted.
    env_name (str): Name of the MWAA environment.
    dag_name (str): Name of the DAG to trigger.
    """

    logging.info(f"Attempting to trigger DAG {dag_name} in environment {env_name} at region {region}")

    # Retrieve the web server hostname and session cookie for authentication
    try:
        web_server_host_name, session_cookie = get_session_info(region, env_name)
        if not session_cookie:
            logging.error("Authentication failed, no session cookie retrieved.")
            return
    except Exception as e:
        logging.error(f"Error retrieving session info: {str(e)}")
        return

    # Prepare headers and payload for the request
    cookies = {"session": session_cookie}
    json_body = {"conf": {}}

    # Construct the URL for triggering the DAG
    url = f"https://{web_server_host_name}/api/v1/dags/{dag_name}/dagRuns"

    # Send the POST request to trigger the DAG
    try:
        response = requests.post(url, cookies=cookies, json=json_body)
        # Check the response status code to determine if the DAG was triggered successfully
        if response.status_code == 200:
            logging.info("DAG triggered successfully.")
        else:
            logging.error(f"Failed to trigger DAG: HTTP {response.status_code} - {response.text}")
    except requests.RequestException as e:
        logging.error(f"Request to trigger DAG failed: {str(e)}")

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    # Check if the correct number of arguments is provided
    if len(sys.argv) != 4:
        logging.error("Incorrect usage. Proper format: python script_name.py <region> <env_name> <dag_name>")
        sys.exit(1)

    region = sys.argv[1]
    env_name = sys.argv[2]
    dag_name = sys.argv[3]

    # Trigger the DAG with the provided arguments
    trigger_dag(region, env_name, dag_name)

Run the request script

Run the request script with the following code, providing your AWS Region, Amazon MWAA environment name, and DAG name:

python3 trigger_dag.py <region> <env_name> <dag_name>

Validate the API result

The following screenshot shows the result in the CLI.

Check the DAG run in the Airflow UI

The following screenshot shows the DAG run status in the Airflow UI.

You can use any other endpoint in the REST API to enable programmatic control, automation, integration, and management of Airflow workflows and resources. To learn more about the Airflow REST API and its various endpoints, refer to the Airflow documentation.

Web server auto scaling

Another key request from Amazon MWAA customers has been the ability to dynamically scale their web servers to handle fluctuating workloads. Previously, you were constrained to the two web servers provided with an Airflow environment on Amazon MWAA and had no way to horizontally scale web server capacity, which could lead to performance issues during peak loads. The new web server auto scaling feature in Amazon MWAA solves this problem. By automatically scaling the number of web servers based on CPU utilization and active connection count, Amazon MWAA makes sure your Airflow environment can seamlessly accommodate increased demand, whether from REST API requests, CLI usage, or more concurrent Airflow UI users.

Set up web server auto scaling

To set up auto scaling for your Amazon MWAA environment web servers, follow these steps:

  1. On the Amazon MWAA console, navigate to the environment you want to configure auto scaling for.
  2. Choose Edit.
  3. Choose Next.
  4. On the Configure advanced settings page, in the Environment class section, add the maximum and minimum web server count. For this example, we set the upper limit to 5 and lower limit to 2.

These settings allow Amazon MWAA to automatically scale up the Airflow web server when demand increases and scale down conservatively when demand decreases, optimizing resource usage and cost.
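If you manage your environments with the AWS SDK instead of the console, a sketch along the following lines should apply the same limits; the environment name is a placeholder, and the MinWebservers and MaxWebservers parameters assume a boto3 release recent enough to include the web server auto scaling settings.

import boto3

mwaa = boto3.client("mwaa", region_name="us-west-2")  # your environment's Region

# Placeholder environment name; MinWebservers/MaxWebservers require a recent
# boto3/botocore version that includes the web server auto scaling parameters
mwaa.update_environment(
    Name="MyMWAAEnvironment",
    MinWebservers=2,
    MaxWebservers=5,
)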

Trigger auto scaling programmatically

After you configure auto scaling, you might want to test how it behaves under simulated conditions. Using the Python code structure we discussed earlier for invoking a DAG, you can also use the Airflow REST API to simulate a load test and see how well your auto scaling setup responds. For the purpose of load testing, we have configured our Amazon MWAA environment with an mw1.small instance class. The following is an example implementation using load_test.py:

import sys
import time
import boto3
import requests
import logging
import concurrent.futures

def get_session_info(region, env_name):
    """
    Retrieves the web server hostname and session cookie for an MWAA environment.
    
    Args:
        region (str): The AWS region where the MWAA environment is located.
        env_name (str): The name of the MWAA environment.

    Returns:
        tuple: A tuple containing the web server hostname and session cookie, or (None, None) on failure.
    """
    try:
        # Create an MWAA client in the specified region
        mwaa = boto3.client('mwaa', region_name=region)
        # Request a web login token for the specified environment
        response = mwaa.create_web_login_token(Name=env_name)
        web_server_host_name = response["WebServerHostname"]
        web_token = response["WebToken"]

        # Construct the login URL and payload for authentication
        login_url = f"https://{web_server_host_name}/aws_mwaa/login"
        login_payload = {"token": web_token}

        # Authenticate and obtain the session cookie
        response = requests.post(login_url, data=login_payload, timeout=10)
        if response.status_code == 200:
            return web_server_host_name, response.cookies["session"]
        else:
            logging.error(f"Failed to log in: HTTP {response.status_code}")
            return None, None
    except requests.RequestException as e:
        logging.error(f"Request failed: {e}")
        return None, None
    except Exception as e:
        logging.error(f"An unexpected error occurred: {e}")
        return None, None
    
def call_rest_api(web_server_host_name, session_cookie):
    """
    Calls the Airflow web server API to fetch details of all DAGs and measures the time taken for the call.

    Args:
        web_server_host_name (str): The hostname of the MWAA web server.
        session_cookie (str): The session cookie for authentication.
    """
    # Define the endpoint for fetching all DAGs
    url = f"https://{web_server_host_name}/api/v1/dags"
    headers = {'Content-Type': 'application/json', 'Cookie': f'session={session_cookie}'}

    try:
        start_time = time.time()
        response = requests.get(url, headers=headers)
        elapsed_time = time.time() - start_time

        if response.status_code == 200:
            logging.info(f"API call successful, fetched {len(response.json()['dags'])} DAGs, took {elapsed_time:.2f} seconds")
        else:
            logging.error(f"API call failed with status code: {response.status_code}, took {elapsed_time:.2f} seconds")
    except requests.RequestException as e:
        logging.error(f"Error during API call: {e}")

def run_load_test(web_server_host_name, session_cookie, qps, duration):
    """
    Performs a load test by sending concurrent requests at a specified rate over a given duration.

    Args:
        web_server_host_name (str): The hostname of the MWAA web server.
        session_cookie (str): The session cookie for authentication.
        qps (int): Queries per second.
        duration (int): Duration of the test in seconds.
    """
    interval = 1.0 / qps
    start_time = time.time()

    with concurrent.futures.ThreadPoolExecutor(max_workers=qps) as executor:
        while time.time() - start_time < duration:
            futures = [executor.submit(call_rest_api, web_server_host_name, session_cookie) for _ in range(qps)]
            concurrent.futures.wait(futures)
            time.sleep(interval)
    
    logging.info("Load test completed.")

def main(region, env_name, qps, duration):
    """
    Main function to execute the load testing script.

    Args:
        region (str): AWS region where the MWAA environment is hosted.
        env_name (str): Name of the MWAA environment.
        qps (int): Queries per second.
        duration (int): Duration in seconds.
    """
    web_server_host_name, session_cookie = get_session_info(region, env_name)
    if not web_server_host_name or not session_cookie:
        logging.error("Failed to retrieve session information")
        return

    run_load_test(web_server_host_name, session_cookie, qps, duration)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    if len(sys.argv) != 5:
        logging.error("Incorrect usage. Proper format: python load_test.py <region> <env_name> <qps> <duration>")
        sys.exit(1)

    region = sys.argv[1]
    env_name = sys.argv[2]
    qps = int(sys.argv[3])
    duration = int(sys.argv[4])

    main(region, env_name, qps, duration)

The Python code uses thread pooling and concurrency concepts to help test the auto scaling performance of your web server by simulating traffic. This script automates the process of sending a specific number of requests per second to your web server, enabling you to trigger an auto scaling event.

You can use the following command to run the script. You have to provide the Region, the Amazon MWAA environment name, how many queries per second you want to run against the web server, and the duration for which you want the load test to run.

python load_test.py <region> <env_name> <qps> <duration>

For example:

python load_test.py us-west-2 MyMWAAEnvironment 10 1080

The preceding command will run 10 queries per second for 18 minutes.

When the script is running, you will start seeing rows that show how long (in seconds) it took for the web server to process the request.

This time will gradually start to increase. As active connection count or CPU usage increase, Amazon MWAA will dynamically scale the web servers to accommodate the load.

As new web servers come online, your environment will be able to handle increased load, and the response time will drop. Amazon MWAA provides web server container metrics in the AWS/MWAA service namespace in Amazon CloudWatch, allowing you to monitor the web server performance. The following screenshots show an example of the auto scaling event.

Recommendation

Determining the appropriate minimum and maximum web server count involves carefully considering your typical workload patterns, performance requirements, and cost constraints. To set these values, consider metrics like the required REST API throughput at peak times and the maximum number of concurrent UI users you expect to have. It’s important to note that Amazon MWAA can support up to 10 queries per second (QPS) for the Airflow REST API at full scale for any environment size, provided you follow the recommended number of DAGs.

Amazon MWAA integration with CloudWatch provides granular metrics and monitoring capabilities to help you find the optimal configuration for your specific use case. If you anticipate periods of consistently high demand or increased workloads for an extended duration, you can configure your Amazon MWAA environment to maintain a higher minimum number of web servers. By setting the minimum web server setting to 2 or more, you can make sure your environment always has sufficient capacity to handle load peaks without needing to wait for auto scaling to provision additional resources. This comes at the cost of running more web server instances, which is a trade-off between cost-optimization and responsiveness.

Conclusion

Today, we are announcing the availability of the Airflow REST API and web server auto scaling in Amazon MWAA. The REST API provides a standardized way to programmatically interact with and manage resources in your Amazon MWAA environments. This enables seamless integration, automation, and extensibility of Amazon MWAA within your organization’s existing data and application landscape. With web server auto scaling, you can automatically increase the number of web server instances based on resource utilization, and Amazon MWAA makes sure your Airflow workflows can handle fluctuating workloads without manual intervention.

These features lay the foundation for you to build more robust, scalable, and flexible data orchestration pipelines. We encourage you to use them to streamline your data engineering operations and unlock new possibilities for your business.

To start building with Amazon MWAA, see Get started with Amazon Managed Workflows for Apache Airflow.

Stay tuned for future updates and enhancements to Amazon MWAA that will continue to enhance the developer experience and unlock new opportunities for data-driven organizations.


About the Authors

Mansi Bhutada is an ISV Solutions Architect based in the Netherlands. She helps customers design and implement well-architected solutions in AWS that address their business problems. She is passionate about data analytics and networking. Beyond work, she enjoys experimenting with food, playing pickleball, and diving into fun board games.

Kartikay Khator is a Solutions Architect within Global Life Sciences at AWS, where he dedicates his efforts to developing innovative and scalable solutions that cater to the evolving needs of customers. His expertise lies in harnessing the capabilities of AWS Analytics services. Extending beyond his professional pursuits, he finds joy and fulfillment in the world of running and hiking. Having already completed two marathons, he is currently preparing for his next marathon challenge.

Kamen Sharlandjiev is a Sr. Big Data and ETL Solutions Architect, MWAA and AWS Glue ETL expert. He’s on a mission to make life easier for customers who are facing complex data integration and orchestration challenges. His secret weapon? Fully managed AWS services that can get the job done with minimal effort. Follow Kamen on LinkedIn to keep up to date with the latest MWAA and AWS Glue features and news!

A sneak peek at the data protection sessions for re:Inforce 2024

Post Syndicated from Katie Collins original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-data-protection-sessions-for-reinforce-2024/

Join us in Philadelphia, Pennsylvania on June 10–12, 2024 for AWS re:Inforce, a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As an attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring Amazon Web Services (AWS) experts and AWS Security Competency Partners, and keynote and leadership sessions featuring AWS Security leadership.

AWS re:Inforce features content in the following six areas:

  • Data Protection
  • Governance, Risk, and Compliance
  • Identity and Access Management
  • Network and Infrastructure Security
  • Threat Detection and Incident Response
  • Application Security

This post will highlight some of the Data Protection sessions that you can add to your agenda. The data protection content showcases best practices for data in transit, at rest, and in use. Learn how AWS, customers, and AWS Partners work together to protect data across industries like financial services, healthcare, and the public sector. You will learn from AWS leaders about how customers innovate in the cloud, use the latest generative AI tools, and raise the bar on data security, resilience, and privacy.

Breakout sessions, chalk talks, and lightning talks

DAP221: Secure your healthcare generative AI workloads on Amazon EKS
Many healthcare organizations have been modernizing their applications using containers on Amazon EKS. Today, they are increasingly adopting generative AI models to innovate in areas like patient care, drug discovery, and medical imaging analysis. In addition, these organizations must comply with healthcare security and privacy regulations. In this lightning talk, learn how you can work backwards from expected healthcare data protection outcomes. This talk offers guidance on extending healthcare organizations’ standardization of containerized applications on Amazon EKS to build more secure and resilient generative AI workloads.

DAP232: Innovate responsibly: Deep dive into data protection for generative AI
AWS solutions such as Amazon Bedrock and Amazon Q are helping organizations across industries boost productivity and create new ways of operating. Despite all of the excitement, organizations often pause to ask, “How do these new services handle and manage our data?” AWS has designed these services with data privacy in mind and many security controls enabled by default, such as encryption of data at rest and in transit. In this chalk talk, dive into the data flows of these new generative AI services to learn how AWS prioritizes security and privacy for your sensitive data requirements.

DAP301: Building resilient event-driven architectures, feat. United Airlines
United Airlines plans to accept a delivery of 700 new planes by 2032. With this growing fleet comes more destinations, passengers, employees, and baggage—and a big increase in data, the lifeblood of airline operations. United Airlines is using event-driven architecture (EDA) to build a system that scales with their operations and evolves with their hybrid cloud throughout this journey. In this session, learn how United Airlines built a hybrid operations management system by modernizing from mainframes to AWS. Using Amazon MSK, Amazon DynamoDB, AWS KMS, and event mesh AWS ISV Partner Solace, they were able to design a well-crafted EDA to address their needs.

DAP302: Capital One’s approach for secure and resilient applications
Join this session to learn about Capital One’s strategic AWS Secrets Manager implementation that has helped ensure unified security across environments. Discover the key principles that can guide consistent use, with real-world examples to showcase the benefits and challenges faced. Gain insights into achieving reliability and resilience in financial services applications on AWS, including methods for maintaining system functionality amidst failures and scaling operations safely. Find out how you can implement chaos engineering and site reliability engineering using multi-Region services such as Amazon Route 53, AWS Auto Scaling, and Amazon DynamoDB.

DAP321: Securing workloads using data protection services, feat. Fannie Mae
Join this lightning talk to discover how Fannie Mae employs a comprehensive suite of AWS data protection services to securely manage their own keys, certificates, and application secrets. Fannie Mae demonstrates how they utilized services such as AWS Secrets Manager, AWS KMS, and AWS Private Certificate Authority to empower application teams to build securely and align with their organizational and compliance expectations.

DAP331: Encrypt everything: How different AWS services help you protect data
Encryption is supported by every AWS service that stores data. However, not every service implements encryption and key management identically. In this chalk talk, learn in detail how different AWS services such as Amazon S3 or Amazon Bedrock use encryption and manage keys. These insights can help you model threats to your applications and be better prepared to respond to questions about adherence to security standards and compliance requirements. Also, find out about some of the methodologies AWS uses when designing for encryption and key management at scale in a diverse set of services.

Hands-on sessions (builders’ sessions, code talks, and workshops)

DAP251: Build a privacy-enhancing healthcare data collaboration solution
In this builders’ session, learn how to build a privacy-enhanced environment to analyze datasets from multiple sources using AWS Clean Rooms. Build a solution for a fictional life sciences company that is researching a new drug and needs to perform analyses with a hospital system. Find out how you can help protect sensitive data using SQL query controls to limit how the data can be queried, Cryptographic Computing for Clean Rooms (C3R) to keep the data encrypted at all times, and differential privacy to quantifiably safeguard patients’ personal information in the datasets. You must bring your laptop to participate.

DAP341: Data protection controls for your generative AI applications on AWS
Generative AI is one of the most disruptive technologies of our generation and has the potential to revolutionize all industries. Cloud security data protection strategies need to evolve to meet the changing needs of businesses as they adopt generative AI. In this code talk, learn how you can implement various data protection security controls for your generative AI applications using Amazon Bedrock and AWS data protection services. Discover best practices and reference architectures that can help you enforce fine-grained data protection controls to scale your generative AI applications on AWS.

DAP342: Leveraging developer platforms to improve secrets management at scale
In this code talk, learn how you can leverage AWS Secrets Manager and Backstage.io to give developers the freedom to deploy secrets close to their applications while maintaining organizational standards. Explore how using a developer portal can remove the undifferentiated heavy lifting of creating secrets that have consistent naming, tagging, access controls, and encryption. This talk touches on cross-Region replication, cross-account IAM permissions and policies, and access controls and integration with AWS KMS. Also find out about secrets rotation as well as new AWS Secrets Manager features such as BatchGetSecretValue and managed rotation.

DAP371: Encryption in transit
Encryption in transit is a fundamental aspect of data protection. In this workshop, walk through multiple ways to accomplish encryption in transit on AWS. Find out how to enable HTTPS connections between microservices on Amazon ECS and AWS Lambda via Amazon VPC Lattice, enforce end-to-end encryption in Amazon EKS, and use AWS Private Certificate Authority to issue TLS certificates for private applications. You must bring your laptop to participate.

If these sessions look interesting to you, join us in Philadelphia by registering for re:Inforce 2024. We look forward to seeing you there!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.

Katie Collins

Katie is a Senior Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

AWS plans to invest €7.8B into the AWS European Sovereign Cloud, set to launch by the end of 2025

Post Syndicated from Max Peterson original https://aws.amazon.com/blogs/security/aws-plans-to-invest-e7-8b-into-the-aws-european-sovereign-cloud-set-to-launch-by-the-end-of-2025/


Amazon Web Services (AWS) continues to believe it’s essential that our customers have control over their data and choices for how they secure and manage that data in the cloud. AWS gives customers the flexibility to choose how and where they want to run their workloads, including a proven track record of innovation to support specialized workloads around the world. While many customers are able to meet their stringent security, sovereignty, and privacy requirements using our existing sovereign-by-design AWS Regions, we know there’s not a one-size-fits-all solution. AWS continues to innovate based on the criteria we know are most important to our customers to give them more choice and more control. Last year we announced the AWS European Sovereign Cloud, a new independent cloud for Europe, designed to give public sector organizations and customers in highly regulated industries further choice to meet their unique sovereignty needs. Today, we’re excited to share more details about the AWS European Sovereign Cloud roadmap so that customers and partners can start planning. The AWS European Sovereign Cloud is planning to launch its first AWS Region in the State of Brandenburg, Germany by the end of 2025. Available to all AWS customers, this effort is backed by a €7.8B investment in infrastructure, jobs creation, and skills development.

The AWS European Sovereign Cloud will utilize the full power of AWS with the same familiar architecture, expansive service portfolio, and APIs that customers use today. This means that customers using the AWS European Sovereign Cloud will get the benefits of AWS infrastructure including industry-leading security, availability, performance, and resilience. We offer a broad set of services, including a full suite of databases, compute, storage, analytics, machine learning and AI, networking, mobile, developer tools, IoT, security, and enterprise applications. Today, customers can start building applications in any existing Region and simply move them to the AWS European Sovereign Cloud when the first Region launches in 2025. Partners in the AWS Partner Network, which features more than 130,000 partners, already provide a range of offerings in our existing AWS Regions to help customers meet requirements and will now be able to seamlessly deploy applications on the AWS European Sovereign Cloud.

More control, more choice

Like our existing Regions, the AWS European Sovereign Cloud will be powered by the AWS Nitro System. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access customer workloads or data running on Amazon Elastic Compute Cloud (Amazon EC2) Nitro-based instances. The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. The controls that help prevent operator access are so fundamental to the Nitro System that we’ve added them to our AWS Service Terms to provide an additional contractual assurance to all of our customers.

To date, we have launched 33 Regions around the globe with our secure and sovereign-by-design approach. Customers come to AWS because they want to migrate to and build on a secure cloud foundation. Customers who need to comply with European data residency requirements have the choice to deploy their data to any of our eight existing Regions in Europe (Ireland, Frankfurt, London, Paris, Stockholm, Milan, Zurich, and Spain) to keep their data securely in Europe.

For customers who need to meet additional stringent operational autonomy and data residency requirements within the European Union (EU), the AWS European Sovereign Cloud will be available as another option, with infrastructure wholly located within the EU and operated independently from existing Regions. The AWS European Sovereign Cloud will allow customers to keep all customer data and the metadata they create (such as the roles, permissions, resource labels, and configurations they use to run AWS) in the EU. Customers who need options to address stringent isolation and in-country data residency needs will be able to use AWS Dedicated Local Zones or AWS Outposts to deploy AWS European Sovereign Cloud infrastructure in locations they select. We continue to work with our customers and partners to shape the AWS European Sovereign Cloud, applying learnings from our engagements with European regulators and national cybersecurity authorities.

Continued investment in Europe

Over the last 25 years, we’ve driven economic development through our investment in infrastructure, jobs, and skills in communities and countries across Europe. Since 2010, Amazon has invested more than €150 billion in the EU, and we’re proud to employ more than 150,000 people in permanent roles across the European Single Market.

AWS now plans to invest €7.8 billion in the AWS European Sovereign Cloud by 2040, building on our long-term commitment to Europe and ongoing support of the region’s sovereignty needs. This long-term investment is expected to lead to a ripple effect in the local cloud community through accelerating productivity gains, empowering the digital transformation of businesses, strengthening the AWS Partner Network (APN), upskilling the cloud and digital workforce, developing renewable energy projects, and creating a positive impact in the communities where AWS operates. In total, the AWS planned investment is estimated to contribute €17.2 billion to Germany’s total Gross Domestic Product (GDP) through 2040, and support an average of 2,800 full-time equivalent jobs in local German businesses each year. These positions, including construction, facility maintenance, engineering, telecommunications, and other jobs within the broader local economy, are part of the AWS data center supply chain.

In addition, AWS is also creating new highly skilled permanent roles to build and operate the AWS European Sovereign Cloud. These jobs will include software engineers, systems developers, and solutions architects. This is part of our commitment that all day-to-day operations of the AWS European Sovereign Cloud will be controlled exclusively by personnel located in the EU, including access to data centers, technical support, and customer service.

In Germany, we also collaborate with local communities on long-term, innovative programs that will have a lasting impact in the areas where our infrastructure is located. This includes developing cloud workforce and education initiatives for learners of all ages, helping to close the skills gap and prepare for the tech jobs of the future. For example, last year AWS partnered with Siemens AG to design the first apprenticeship program for AWS data centers in Germany, launched the first national cloud computing certification with the German Chamber of Commerce (DIHK), and established the AWS Skills to Jobs Tech Alliance in Germany. We will work closely with local partners to roll out these skills programs and make sure they are tailored to regional needs.

“High performing, reliable, and secure infrastructure is the most important prerequisite for an increasingly digitalized economy and society. Brandenburg is making progress here. In recent years, we have set on a course to invest in modern and sustainable data center infrastructure in our state, strengthening Brandenburg as a business location. State-of-the-art data centers for secure cloud computing are the basis for a strong digital economy. I am pleased Amazon Web Services (AWS) has chosen Brandenburg for a long-term investment in its cloud computing infrastructure for the AWS European Sovereign Cloud.”

Brandenburg’s Minister of Economic Affairs, Prof. Dr. Jörg Steinbach

Build confidently with AWS

For customers that are early in their cloud adoption journey and are considering the AWS European Sovereign Cloud, we provide a wide range of resources to help adopt the cloud effectively. From lifting and shifting workloads to migrating entire data centers, customers get the organizational, operational, and technical capabilities needed for a successful migration to AWS. For example, we offer the AWS Cloud Adoption Framework (AWS CAF) to provide best practices for organizations to develop an efficient and effective plan for cloud adoption, and AWS Migration Hub to help assess migration needs, define migration and modernization strategy, and leverage automation. We frequently host AWS events, webinars, and workshops focused on cloud adoption and migration strategies, where customers can learn from AWS experts and connect with other customers and partners.

We’re committed to giving customers more control and more choice to help meet their unique digital sovereignty needs, without compromising on the full power of AWS. The AWS European Sovereign Cloud is a testament to this. To help customers and partners continue to plan and build, we will share additional updates as we drive towards launch. You can discover more about the AWS European Sovereign Cloud on our European Digital Sovereignty website.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.
 


German version

AWS European Sovereign Cloud bis Ende 2025: AWS plant Investitionen in Höhe von 7,8 Milliarden Euro

Amazon Web Services (AWS) ist davon überzeugt, dass es für Kunden von essentieller Bedeutung ist, die Kontrolle über ihre Daten und Auswahlmöglichkeiten zu haben, wie sie diese Daten in der Cloud sichern und verwalten. Daher können Kunden flexibel wählen, wie und wo sie ihre Workloads ausführen. Dazu gehört auch eine langjährige Erfolgsbilanz von Innovationen zur Unterstützung spezialisierter Workloads auf der ganzen Welt. Viele Kunden können bereits ihre strengen Sicherheits-, Souveränitäts- und Datenschutzanforderungen mit unseren AWS-Regionen unter dem „sovereign-by-design“-Ansatz erfüllen. Aber wir wissen ebenso: Es gibt keine Einheitslösung für alle. Daher arbeitet AWS kontinuierlich an Innovationen, die auf jenen Kriterien basieren, die für unsere Kunden am wichtigsten sind und ihnen mehr Auswahl sowie Kontrolle bieten. Vor diesem Hintergrund haben wir letztes Jahr die AWS European Sovereign Cloud angekündigt. Mit ihr entsteht eine neue, unabhängige Cloud für Europa. Sie soll Organisationen des öffentlichen Sektors und Kunden in stark regulierten Branchen dabei helfen, die sich wandelnden Anforderungen an die digitale Souveränität zu erfüllen.

Heute freuen wir uns, dass wir weitere Details über die Roadmap der AWS European Sovereign Cloud bekanntgeben können. So können unsere Kunden und Partner mit ihren weiteren Planungen beginnen. Der Start der ersten Region der AWS European Sovereign Cloud ist in Brandenburg bis zum Jahresende 2025 geplant. Dieses Angebot steht allen AWS-Kunden zur Verfügung und wird von einer Investition in Höhe von 7,8 Milliarden Euro in die Infrastruktur, Arbeitsplatzschaffung und Kompetenzentwicklung unterstützt.

Die AWS European Sovereign Cloud in Brandenburg bietet die volle Leistungsfähigkeit von AWS, mit der bekannten Architektur, dem umfangreichen Angebot an Services und denselben APIs, die Millionen von Kunden bereits kennen. Das bedeutet: Kunden der AWS European Sovereign Cloud profitieren somit bei voller Unabhängigkeit von den bekannten Vorteilen der AWS-Infrastruktur, einschließlich der branchenführenden Sicherheit, Verfügbarkeit, Leistung und Resilienz.

AWS-Kunden haben Zugriff auf ein breites Spektrum an Services – darunter ein umfangreiches Angebot bestehend aus Datenbanken, Datenverarbeitung, Datenspeicherung, Analytics, maschinellem Lernen (ML) und künstlicher Intelligenz (KI), Netzwerken, mobilen Applikationen, Entwickler-Tools, Internet of Things (IoT), Sicherheit und Unternehmensanwendungen. Bereits heute können Kunden Anwendungen in jeder bestehenden Region entwickeln und diese einfach in die AWS European Sovereign Cloud auslagern, sobald die erste AWS-Region 2025 startet. Die Partner im AWS-Partnernetzwerk (APN), das mehr als 130.000 Partner umfasst, bieten bereits eine Reihe von Angeboten in den bestehenden AWS-Regionen an. Dadurch unterstützen sie Kunden dabei, ihre Anforderungen zu erfüllen und Anwendungen einfach in der AWS European Sovereign Cloud bereitzustellen.

Mehr Kontrolle, größere Auswahl

Die AWS European Sovereign Cloud nutzt wie auch unsere bestehenden Regionen das AWS Nitro System. Dabei handelt es sich um einen Computing-Backbone für AWS, bei dem Sicherheit und Leistung im Mittelpunkt stehen. Die spezialisierte Hardware und zugehörige Firmware sind so konzipiert, dass strikte Beschränkungen gelten und niemand, auch nicht AWS selbst, auf die Workloads oder Daten von Kunden zugreifen kann, die auf Amazon Elastic Compute Cloud (Amazon EC2) Nitro-basierten Instanzen laufen. Dieses Design wurde von der NCC Group validiert, einem unabhängigen Unternehmen für Cybersicherheit. Die Kontrollen, die den Zugriff durch Betreiber verhindern, sind grundlegend für das Nitro System. Daher haben wir sie in unsere AWS Service Terms aufgenommen, um allen unseren Kunden diese zusätzliche vertragliche Zusicherung zu geben.

Bis heute haben wir 33 Regionen rund um den Globus mit unserem sicheren und „sovereign-by-design“-Ansatz gestartet. Unsere Kunden nutzen AWS, weil sie auf einer sicheren Cloud-Umgebung migrieren und aufbauen möchten. Für Kunden, die europäische Anforderungen an den Ort der Datenverarbeitung erfüllen müssen, bietet AWS die Möglichkeit, ihre Daten in einer unserer acht bestehenden Regionen in Europa zu verarbeiten: Irland, Frankfurt, London, Paris, Stockholm, Mailand, Zürich und Spanien. So können sie ihre Daten sicher innerhalb Europas halten.

Müssen Kunden zusätzliche Anforderungen an die betriebliche Autonomie und den Ort der Datenverarbeitung innerhalb der Europäischen Union erfüllen, steht die AWS European Sovereign Cloud als weitere Option zur Verfügung. Die Infrastruktur hierfür ist vollständig in der EU angesiedelt und wird unabhängig von den bestehenden Regionen betrieben. Sie ermöglicht es AWS-Kunden, ihre Kundeninhalte und von ihnen erstellten Metadaten in der EU zu behalten – etwa Rollen, Berechtigungen, Ressourcenbezeichnungen und Konfigurationen für den Betrieb von AWS.

Sollten Kunden weitere Optionen benötigen, um eine Isolierung zu ermöglichen und strenge Anforderungen an den Ort der Datenverarbeitung in einem bestimmten Land zu erfüllen, können sie auf AWS Dedicated Local Zones oder AWS Outposts zurückgreifen. Auf diese Weise können sie die Infrastruktur der AWS European Sovereign Cloud am Ort ihrer Wahl einsetzen. Wir arbeiten mit unseren Kunden und Partnern kontinuierlich daran, die AWS European Sovereign Cloud so zu gestalten, dass sie den benötigten Anforderungen entspricht. Dabei nutzen wir auch Feedback aus unseren Gesprächen mit europäischen Regulierungsbehörden und nationalen Cybersicherheitsbehörden.

„Eine funktionierende, verlässliche und sichere Infrastruktur ist die wichtigste Voraussetzung für eine zunehmend digitalisierte Wirtschaft und Gesellschaft. Brandenburg schreitet hier voran. Wir haben in den vergangenen Jahren entscheidende Weichen gestellt, um Investitionen in eine moderne und nachhaltige Rechenzentruminfrastruktur in unserem Land auszubauen und so den Wirtschaftsstandort Brandenburg zu stärken. Hochmoderne Rechenzentren für sicheres Cloud-Computing sind die Basis für eine digitale Wirtschaft. Für unsere digitale Souveränität ist es wichtig, dass Rechenleistungen vor Ort in Deutschland erbracht werden. Ich freue mich, dass Amazon Web Services Brandenburg für ein langfristiges Investment in ihre Cloud-Computing-Infrastruktur für die AWS European Sovereign Cloud ausgewählt hat.“

sagt Brandenburgs Wirtschaftsminister Prof. Dr.-Ing. Jörg Steinbach

Kontinuierliche Investitionen in Europa

Im Laufe der vergangenen 25 Jahre haben wir die wirtschaftliche Entwicklung in europäischen Ländern und Gemeinden vorangetrieben und in Infrastruktur, Arbeitsplätze sowie den Ausbau von Kompetenzen investiert. Seit 2010 hat Amazon über 150 Milliarden Euro in der Europäischen Union investiert und wir sind stolz darauf, im gesamten europäischen Binnenmarkt mehr als 150.000 Menschen in Festanstellung zu beschäftigen.

AWS plant bis zum Jahr 2040 7,8 Milliarden Euro in die AWS European Sovereign Cloud zu investieren. Diese Investition ist Teil der langfristigen Bestrebungen von AWS, das europäische Bedürfnis nach digitaler Souveränität zu unterstützen. Mit dieser langfristigen Investition löst AWS einen Multiplikatoreffekt für Cloud-Computing in Europa aus. Sie wird die digitale Transformation der Verwaltung und von Unternehmen vorantreiben, das AWS Partner Network (APN) stärken, die Zahl der Cloud- und Digitalfachkräfte erhöhen, erneuerbare Energieprojekte vorantreiben und eine positive Wirkung in den Gemeinden erzielen, in denen AWS präsent ist. Insgesamt wird die geplante AWS-Investition bis 2040 voraussichtlich 17,2 Milliarden Euro zum deutschen Bruttoinlandsprodukt und zur Schaffung von 2.800 Vollzeitstellen bei regionalen Unternehmen beitragen. Diese Arbeitsplätze in den Bereichen Bau, Instandhaltung, Ingenieurwesen, Telekommunikation und der breiteren regionalen Wirtschaft sind Teil der Lieferkette für AWS-Rechenzentren.

Darüber hinaus wird AWS neue Stellen für hochqualifizierte festangestellte Fachkräfte wie Softwareentwickler, Systemingenieure und Lösungsarchitekten schaffen, um die AWS European Sovereign Cloud aufzubauen und zu betreiben. Die Investition in zusätzliches Personal unterstreicht unser Commitment, dass der gesamte Betrieb dieser souveränen Cloud-Umgebung – angefangen bei der Zugangskontrolle zu den Rechenzentren über den technischen Support bis hin zum Kundendienst – ausnahmslos durch Fachkräfte innerhalb der Europäischen Union kontrolliert und gesteuert wird.

In Deutschland arbeitet AWS mit den Beteiligten vor Ort auch an langfristigen und innovativen Programmen zusammen. Diese sollen einen nachhaltigen positiven Einfluss auf die Gemeinden haben, in denen sich die Infrastruktur des Unternehmens befindet. AWS konzentriert sich auf die Entwicklung von Cloud-Fachkräften und Schulungsinitiativen für Lernende aller Altersgruppen. Diese Maßnahmen tragen dazu bei, den Fachkräftemangel zu beheben und sich auf die technischen Berufe der Zukunft vorzubereiten. Im vergangenen Jahr hat AWS beispielsweise gemeinsam mit der Siemens AG das erste Ausbildungsprogramm für AWS-Rechenzentren in Deutschland entwickelt. Ebenso hat das Unternehmen in Kooperation mit dem Deutschen Industrie und Handelstag (DIHK) den bundeseinheitlichen Zertifikatslehrgang zum „Cloud Business Expert“ entwickelt sowie die AWS Skills to Jobs Tech Alliance in Deutschland ins Leben gerufen. AWS wird gemeinsam mit lokalen Partnern daran arbeiten, Ausbildungsprogramme und Fortbildungen anzubieten, die auf die Bedürfnisse vor Ort zugeschnitten sind.

Vertrauensvoll bauen mit AWS

Für Kunden, die sich noch am Anfang ihrer Cloud-Reise befinden und die AWS European Sovereign Cloud in Betracht ziehen, bieten wir eine Vielzahl von Ressourcen an, um den Wechsel in die Cloud effektiv zu gestalten. Egal ob einzelne Workloads verlagert oder ganze Rechenzentren migriert werden sollen – Kunden erhalten von uns die nötigen organisatorischen, operativen und technischen Fähigkeiten für eine erfolgreiche Migration zu AWS. Beispielsweise bieten wir das AWS Cloud Adoption Framework (AWS CAF) an, das Unternehmen bei der Entwicklung eines effizienten und effektiven Cloud-Adoptionsplans mit Best Practices unterstützt. Auch der AWS Migration Hub hilft bei der Bewertung des Migrationsbedarfs, der Definition der Migrations- und Modernisierungsstrategie und der Nutzung von Automatisierung. Darüber hinaus veranstalten wir regelmäßig AWS-Events, Webinare und Workshops rund um die Themen Cloud-Adoption und Migrationsstrategie. Dabei können Kunden von AWS-Experten lernen und sich mit anderen Kunden und Partnern vernetzen.

Wir sind bestrebt, unseren Kunden mehr Kontrolle und weitere Optionen anzubieten, damit diese ihre ganz individuellen Anforderungen an die digitale Souveränität erfüllen können, ohne dabei auf die volle Leistungsfähigkeit von AWS verzichten zu müssen.

Um Kunden und Partnern bei der weiteren Planung und Entwicklung zu unterstützen, werden wir laufend zusätzliche Updates bereitstellen, während wir auf den Start der AWS European Sovereign Cloud hinarbeiten. Mehr über die AWS European Sovereign Cloud erfahren Sie auf unserer Website zur European Digital Sovereignty.

 

Max Peterson

Max is the Vice President of AWS Sovereign Cloud. He leads efforts to ensure that all AWS customers around the world have the most advanced set of sovereignty controls, privacy safeguards, and security features available in the cloud. Before his current role, Max served as the VP of AWS Worldwide Public Sector (WWPS) and created and led the WWPS International Sales division, with a focus on empowering government, education, healthcare, aerospace and satellite, and nonprofit organizations to drive rapid innovation while meeting evolving compliance, security, and policy requirements. Max has over 30 years of public sector experience and served in other technology leadership roles before joining Amazon. Max has earned both a Bachelor of Arts in Finance and Master of Business Administration in Management Information Systems from the University of Maryland.

New compute-optimized (C7i-flex) Amazon EC2 Flex instances

Post Syndicated from Matheus Guimaraes original https://aws.amazon.com/blogs/aws/new-compute-optimized-c7i-flex-amazon-ec2-flex-instances/

The vast majority of applications don’t run the CPU flat-out at 100% utilization continuously. Take a web application, for instance. It typically fluctuates between periods of high and low demand, but hardly ever uses a server’s compute at full capacity.

CPU utilization for many common workloads that customers run in the AWS Cloud today: low to moderate most of the time, with occasional peaks. (Source: AWS documentation)

One easy and cost-effective way to run such workloads is to use the Amazon EC2 M7i-flex instances, which we introduced last August. These are lower-priced variants of the Amazon EC2 M7i instances that offer the same next-generation specifications for general-purpose compute in the most popular sizes, with the added benefit of better price/performance if you don’t need full compute power 100 percent of the time. This makes them a great first choice if you are looking to reduce your running costs while meeting the same performance benchmarks.

This flexibility resonated really well with customers, so today we are expanding our Flex portfolio by launching Amazon EC2 C7i-flex instances, which offer similar price/performance benefits and lower costs for compute-intensive workloads. These are lower-priced variants of the Amazon EC2 C7i instances that offer a baseline level of CPU performance with the ability to scale up to the full compute performance 95 percent of the time.

C7i-flex instances
C7i-flex offers five of the most common sizes from large to 8xlarge, delivering 19 percent better price performance than Amazon EC2 C6i instances.

Instance name | vCPU | Memory (GiB) | Instance storage (GB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps)
c7i-flex.large | 2 | 4 | EBS-only | up to 12.5 | up to 10
c7i-flex.xlarge | 4 | 8 | EBS-only | up to 12.5 | up to 10
c7i-flex.2xlarge | 8 | 16 | EBS-only | up to 12.5 | up to 10
c7i-flex.4xlarge | 16 | 32 | EBS-only | up to 12.5 | up to 10
c7i-flex.8xlarge | 32 | 64 | EBS-only | up to 12.5 | up to 10
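
As a quick illustration, launching one of these sizes works like launching any other EC2 instance type. The following is a minimal sketch using the AWS SDK for Python (Boto3); the AMI ID, key pair, and subnet are placeholders you would replace with your own values:

import boto3

ec2 = boto3.client("ec2")

# Launch a single c7i-flex.large instance.
# The AMI ID, key pair, and subnet below are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c7i-flex.large",
    KeyName="my-key-pair",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])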

Should I use C7i-flex or C7i?
Both C7i-flex and C7i are compute-optimized instances powered by custom 4th Generation Intel Xeon Scalable processors which are only available at Amazon Web Services (AWS). They offer up to 15 percent better performance over comparable x86-based Intel processors used by other cloud providers.

They both also use DDR5 memory, feature a 2:1 ratio of memory to vCPU, and are ideal for running applications such as web and application servers, databases, caches, Apache Kafka, and Elasticsearch.

So why would you use one over the other? Here are three things to consider when deciding which one is right for you.

Usage pattern
EC2 flex instances are a great fit for when you don’t need to fully utilize all compute resources.

You can achieve 5 percent better price performance and 5 percent lower prices due to efficient use of compute resources. Typically, this is a great fit for most applications, so C7i-flex instances should be the first choice for compute-intensive workloads.

However, if your application requires continuous high CPU usage, then you should use C7i instances instead. They are likely more suitable for workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

Instance sizes
C7i-flex instances offer the most common sizes used by a majority of workloads, going up to a maximum size of 8xlarge.

If you need higher specs, then you should look into the larger C7i instances, which include 12xlarge, 16xlarge, 24xlarge, and 48xlarge sizes, plus two bare metal options: metal-24xl and metal-48xl.

Network bandwidth
C7i-flex instances offer up to 12.5 Gbps of network bandwidth and up to 10 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth, which should be suitable for most applications. Larger C7i sizes offer higher network and EBS bandwidths, so you may need to use one of the larger C7i instances depending on your requirements.

Things to know
Regions – Visit AWS Services by Region to check whether C7i-flex instances are available in your preferred regions.
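
You can also check availability programmatically. Here is a minimal sketch using the AWS SDK for Python (Boto3) that lists the c7i-flex sizes offered in a Region; the Region name is just an example:

import boto3

# List c7i-flex instance types offered in a given Region (example Region shown).
ec2 = boto3.client("ec2", region_name="us-east-1")

offerings = ec2.describe_instance_type_offerings(
    LocationType="region",
    Filters=[{"Name": "instance-type", "Values": ["c7i-flex.*"]}],
)

for offering in offerings["InstanceTypeOfferings"]:
    print(offering["InstanceType"])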

Purchasing options – C7i-flex and C7i instances are available in On-Demand, Savings Plans, Reserved Instance, and Spot form. C7i instances are also available in Dedicated Host and Dedicated Instance form.

To learn more, visit the Amazon EC2 C7i and C7i-flex instances page.

Matheus Guimaraes

Build RAG and agent-based generative AI applications with new Amazon Titan Text Premier model, available in Amazon Bedrock

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/build-rag-and-agent-based-generative-ai-applications-with-new-amazon-titan-text-premier-model-available-in-amazon-bedrock/

Today, we’re happy to welcome a new member of the Amazon Titan family of models: Amazon Titan Text Premier, now available in Amazon Bedrock.

Following Amazon Titan Text Lite and Titan Text Express, Titan Text Premier is the latest large language model (LLM) in the Amazon Titan family of models, further increasing your model choice within Amazon Bedrock. You can now choose between the following Titan Text models in Bedrock:

  • Titan Text Premier is the most advanced Titan LLM for text-based enterprise applications. With a maximum context length of 32K tokens, it has been specifically optimized for enterprise use cases, such as building Retrieval Augmented Generation (RAG) and agent-based applications with Knowledge Bases and Agents for Amazon Bedrock. As with all Titan LLMs, Titan Text Premier has been pre-trained on multilingual text data but is best suited for English-language tasks. You can further custom fine-tune (preview) Titan Text Premier with your own data in Amazon Bedrock to build applications that are specific to your domain, organization, brand style, and use case. I’ll dive deeper into model highlights and performance in the following sections of this post.
  • Titan Text Express is ideal for a wide range of tasks, such as open-ended text generation and conversational chat. The model has a maximum context length of 8K tokens.
  • Titan Text Lite is optimized for speed, is highly customizable, and is ideal to be fine-tuned for tasks such as article summarization and copywriting. The model has a maximum context length of 4K tokens.
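
If you want to check programmatically which Amazon-provided text models are available in your account and Region, a minimal sketch using the Amazon Bedrock control-plane API with the AWS SDK for Python (Boto3) could look like this; the filter values are just examples:

import boto3

# List Amazon-provided text models available in Amazon Bedrock
# for the current account and Region.
bedrock = boto3.client("bedrock")

models = bedrock.list_foundation_models(byProvider="Amazon", byOutputModality="TEXT")

for model in models["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])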

Now, let’s discuss Titan Text Premier in more detail.

Amazon Titan Text Premier model highlights
Titan Text Premier has been optimized for high-quality RAG and agent-based applications and customization through fine-tuning while incorporating responsible artificial intelligence (AI) practices.

Optimized for RAG and agent-based applications – Titan Text Premier has been specifically optimized for RAG and agent-based applications in response to customer feedback, where respondents named RAG as one of their key components in building generative AI applications. The model training data includes examples for tasks like summarization, Q&A, and conversational chat and has been optimized for integration with Knowledge Bases and Agents for Amazon Bedrock. The optimization includes training the model to handle the nuances of these features, such as their specific prompt formats.

  • High-quality RAG through integration with Knowledge Bases for Amazon Bedrock – With a knowledge base, you can securely connect foundation models (FMs) in Amazon Bedrock to your company data for RAG. You can now choose Titan Text Premier with Knowledge Bases to implement question-answering and summarization tasks over your company’s proprietary data.
    Amazon Titan Text Premier support in Knowledge Bases
  • Automating tasks through integration with Agents for Amazon Bedrock – You can also create custom agents that can perform multistep tasks across different company systems and data sources using Titan Text Premier with Agents for Amazon Bedrock. Using agents, you can automate tasks for your internal or external customers, such as managing retail orders or processing insurance claims.
    Amazon Titan Text Premier with Agents for Amazon Bedrock

We already see customers exploring Titan Text Premier to implement interactive AI assistants that create summaries from unstructured data such as emails. They’re also exploring the model to extract relevant information across company systems and data sources to create more meaningful product summaries.

Here’s a demo video created by my colleague Brooke Jamieson that shows an example of how you can put Titan Text Premier to work for your business.

Custom fine-tuning of Amazon Titan Text Premier (preview) – You can fine-tune Titan Text Premier with your own data in Amazon Bedrock to increase model accuracy by providing your own task-specific labeled training dataset. Customizing Titan Text Premier helps to further specialize your model and create unique user experiences that reflect your company’s brand, style, voice, and services.
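
As a rough sketch of what starting such a customization job could look like with the AWS SDK for Python (Boto3): the role ARN, S3 locations, and hyperparameter values below are placeholders, and you should check the documentation for the exact base model identifier and supported hyperparameters.

import boto3

bedrock = boto3.client("bedrock")

# Start a fine-tuning (customization) job for Titan Text Premier.
# The role ARN, S3 URIs, base model identifier, and hyperparameters are placeholders.
response = bedrock.create_model_customization_job(
    jobName="titan-premier-finetune-demo",
    customModelName="my-titan-premier-custom",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-premier-v1:0",
    trainingDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/train/data.jsonl"},
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/output/"},
    hyperParameters={"epochCount": "1", "batchSize": "1", "learningRate": "0.00001"},
)

print(response["jobArn"])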

Built responsibly – Amazon Titan Text Premier incorporates safe, secure, and trustworthy practices. The AWS AI Service Card for Amazon Titan Text Premier documents the model’s performance across key responsible AI benchmarks from safety and fairness to veracity and robustness. The model also integrates with Guardrails for Amazon Bedrock so you can implement additional safeguards customized to your application requirements and responsible AI policies. Amazon indemnifies customers who responsibly use Amazon Titan models against claims that generally available Amazon Titan models or their outputs infringe on third-party copyrights.

Amazon Titan Text Premier model performance
Titan Text Premier has been built to deliver broad intelligence and utility relevant for enterprises. The following table shows evaluation results on public benchmarks that assess critical capabilities, such as instruction following, reading comprehension, and multistep reasoning against price-comparable models. The strong performance across these diverse and challenging benchmarks highlights that Titan Text Premier is built to handle a wide range of use cases in enterprise applications, offering great price performance. For all benchmarks listed below, a higher score is a better score.

Capability | Benchmark | Description | Amazon Titan Text Premier | Google Gemini Pro 1.0 | OpenAI GPT-3.5
General | MMLU (Paper) | Representation of questions in 57 subjects | 70.4% (5-shot) | 71.8% (5-shot) | 70.0% (5-shot)
Instruction following | IFEval (Paper) | Instruction-following evaluation for large language models | 64.6% (0-shot) | not published | not published
Reading comprehension | RACE-H (Paper) | Large-scale reading comprehension | 89.7% (5-shot) | not published | not published
Reasoning | HellaSwag (Paper) | Common-sense reasoning | 92.6% (10-shot) | 84.7% (10-shot) | 85.5% (10-shot)
Reasoning | DROP, F1 score (Paper) | Reasoning over text | 77.9 (3-shot) | 74.1 (variable shots) | 64.1 (3-shot)
Reasoning | BIG-Bench Hard (Paper) | Challenging tasks requiring multistep reasoning | 73.7% (3-shot CoT) | 75.0% (3-shot CoT) | not published
Reasoning | ARC-Challenge (Paper) | Common-sense reasoning | 85.8% (5-shot) | not published | 85.2% (25-shot)

Note: Benchmarks evaluate model performance using a variation of few-shot and zero-shot prompting. With few-shot prompting, you provide the model with a number of concrete examples (three for 3-shot, five for 5-shot, etc.) of how to solve a specific task. This demonstrates the model’s ability to learn from example, called in-context learning. With zero-shot prompting on the other hand, you evaluate a model’s ability to perform tasks by relying only on its preexisting knowledge and general language understanding without providing any examples.

Get started with Amazon Titan Text Premier
To enable access to Amazon Titan Text Premier, navigate to the Amazon Bedrock console and choose Model access on the bottom left pane. On the Model access overview page, choose the Manage model access button in the upper right corner and enable access to Amazon Titan Text Premier.

Select Amazon Titan Text Premier in Amazon Bedrock model access page

To use Amazon Titan Text Premier in the Bedrock console, choose Text or Chat under Playgrounds in the left menu pane. Then choose Select model and select Amazon as the category and Titan Text Premier as the model. To explore the model, you can load examples. The following screenshot shows one of those examples that demonstrates the model’s chain of thought (CoT) and reasoning capabilities.

Amazon Titan Text Premier in the Amazon Bedrock chat playground

By choosing View API request, you can get a code example of how to invoke the model using the AWS Command Line Interface (AWS CLI) with the current example prompt. You can also access Amazon Bedrock and available models using the AWS SDKs. In the following example, I will use the AWS SDK for Python (Boto3).

Amazon Titan Text Premier in action
For this demo, I ask Amazon Titan Text Premier to summarize one of my previous AWS News Blog posts that announced the availability of Amazon Titan Image Generator and the watermark detection feature.

For summarization tasks, a recommended prompt template looks like this:

The following is text from a {{Text Category}}:
{{Text}}
Summarize the {{Text Category}} in {{length of summary}}

For more prompting best practices, check out the Amazon Titan Text Prompt Engineering Guidelines.

I adapt this template to my example and define the prompt. In preparation, I saved my News Blog post as a text file and read it into the post string variable.

prompt = """
The following is text from an AWS News Blog post:

<text>
%s
</text>

Summarize the above AWS News Blog post in a short paragraph.
""" % post

Similar to previous Amazon Titan Text models, Amazon Titan Text Premier supports temperature and topP inference parameters to control the randomness and diversity of the response, as well as maxTokenCount and stopSequences to control the length of the response.

import boto3
import json

bedrock_runtime = boto3.client(service_name="bedrock-runtime")

body = json.dumps({
    "inputText": prompt, 
    "textGenerationConfig":{  
        "maxTokenCount":256,
        "stopSequences":[],
        "temperature":0,
        "topP":0.9
    }
})

Then, I use the InvokeModel API to send the inference request.

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="amazon.titan-text-premier-v1:0",
    accept="application/json",
    contentType="application/json"
)

response_body = json.loads(response.get('body').read())
print(response_body.get('results')[0].get('outputText'))

And here’s the response:

Amazon Titan Image Generator is now generally available in Amazon Bedrock, giving you an easy way to build and scale generative AI applications with new image generation and image editing capabilities, including instant customization of images. Watermark detection for Titan Image Generator is now generally available in the Amazon Bedrock console. Today, we’re also introducing a new DetectGeneratedContent API (preview) in Amazon Bedrock that checks for the existence of this watermark and helps you confirm whether an image was generated by Titan Image Generator.

For more examples in different programming languages, check out the code examples section in the Amazon Bedrock User Guide.

More resources
Here are some additional resources that you might find helpful:

Intended use cases and more — Check out the AWS AI Service Card for Amazon Titan Text Premier to learn more about the models’ intended use cases, design, and deployment, as well as performance optimization best practices.

AWS Generative AI CDK Constructs — Amazon Titan Text Premier is supported by the AWS Generative AI CDK Constructs, an open source extension of the AWS Cloud Development Kit (AWS CDK), providing sample implementations of AWS CDK for common generative AI patterns.

Amazon Titan models — If you’re curious to learn more about Amazon Titan models in general, check out the following video. Dr. Sherry Marcus, Director of Applied Science for Amazon Bedrock, shares how the Amazon Titan family of models incorporates the 25 years of experience Amazon has innovating with AI and machine learning (ML) across its business.

Now available
Amazon Titan Text Premier is available today in the AWS US East (N. Virginia) Region. Custom fine-tuning for Amazon Titan Text Premier is available today in preview in the AWS US East (N. Virginia) Region. Check the full Region list for future updates. To learn more about the Amazon Titan family of models, visit the Amazon Titan product page. For pricing details, review the Amazon Bedrock pricing page.

Give Amazon Titan Text Premier a try in the Amazon Bedrock console today, send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS contacts, and engage with the generative AI builder community at community.aws.

— Antje

How Amazon Security Lake is helping customers simplify security data management for proactive threat analysis

Post Syndicated from Nisha Amthul original https://aws.amazon.com/blogs/security/how-amazon-security-lake-is-helping-customers-simplify-security-data-management-for-proactive-threat-analysis/

In this post, we explore how Amazon Web Services (AWS) customers can use Amazon Security Lake to efficiently collect, query, and centralize logs on AWS. We also discuss new use cases for Security Lake, such as applying generative AI to Security Lake data for threat hunting and incident response, and we share the latest service enhancements and developments from our growing landscape. Security Lake centralizes security data from AWS environments, software as a service (SaaS) providers, and on-premises and cloud sources into a purpose-built data lake that is stored in your AWS account. Using Open Cybersecurity Schema Framework (OCSF) support, Security Lake normalizes and combines security data from AWS and a broad range of third-party data sources. This helps provide your security team with the ability to investigate and respond to security events and analyze possible threats within your environment, which can facilitate timely responses and help to improve your security across multicloud and hybrid environments.

One year ago, AWS embarked on a mission driven by the growing customer need to revolutionize how security professionals centralize, optimize, normalize, and analyze their security data. As we celebrate the one-year general availability milestone of Amazon Security Lake, we’re excited to reflect on the journey and showcase how customers are using the service, yielding both productivity and cost benefits, while maintaining ownership of their data.

Figure 1 shows how Security Lake works, step by step. For more details, see What is Amazon Security Lake?

Figure 1: How Security Lake works

Customer use cases

In this section, we highlight how some of our customers have found the most value with Security Lake and how you can use Security Lake in your organization.

Simplify the centralization of security data management across hybrid environments to enhance security analytics

Many customers use Security Lake to gather and analyze security data from various sources, including AWS, multicloud, and on-premises systems. By centralizing this data in a single location, organizations can streamline data collection and analysis, help eliminate data silos, and improve cross-environment analysis. This enhanced visibility and efficiency allows security teams to respond more effectively to security events. With Security Lake, customers simplify data gathering and reduce the burden of data retention and extract, transform, and load (ETL) processes with key AWS data sources.
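
To illustrate what that looks like in practice, here is a minimal sketch using the AWS SDK for Python (Boto3) that adds CloudTrail management events as a natively supported source once Security Lake is enabled; the account ID and Region are placeholders:

import boto3

securitylake = boto3.client("securitylake")

# Add CloudTrail management events as a Security Lake source
# for one account and Region (placeholder values shown).
response = securitylake.create_aws_log_source(
    sources=[
        {
            "accounts": ["111122223333"],
            "regions": ["us-east-1"],
            "sourceName": "CLOUD_TRAIL_MGMT",
        }
    ]
)

# Any accounts that could not be configured are reported back.
print(response.get("failed", []))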

For example, Interpublic Group (IPG), an advertising company, uses Security Lake to gain a comprehensive, organization-wide grasp of their security posture across hybrid environments. Watch this video from re:Inforce 2023 to understand how IPG streamlined their security operations.

Before adopting Security Lake, the IPG team had to work through the challenge of managing diverse log data sources. This involved translating and transforming the data, as well as reconciling complex elements like IP addresses. However, with the implementation of Security Lake, IPG was able to access previously unavailable log sources. This enabled them to consolidate and effectively analyze security-related data. The use of Security Lake has empowered IPG to gain comprehensive insight into their security landscape, resulting in a significant improvement in their overall security posture.

“We can achieve a more complete, organization-wide understanding of our security posture across hybrid environments. We could quickly create a security data lake that centralized security-related data from AWS and third-party sources,” — Troy Wilkinson, Global CISO at Interpublic Group

Streamline incident investigation and reduce mean time to respond

As organizations expand their cloud presence, they need to gather information from diverse sources. Security Lake automates the centralization of security data from AWS environments and third-party logging sources, including firewall logs, storage security, and threat intelligence signals. This removes the need for custom, one-off security consolidation pipelines. The centralized data in Security Lake streamlines security investigations, making security management and investigations simpler. Whether you’re in operations or a security analyst, dealing with multiple applications and scattered data can be tough. Security Lake makes it simpler by centralizing the data, reducing the need for scattered systems. Plus, with data stored in your Amazon Simple Storage Service (Amazon S3) account, Security Lake lets you store a lot of historical data, which helps when looking back for patterns or anomalies indicative of security issues.

For example, SEEK, an Australian online employment marketplace, uses Security Lake to streamline incident investigations and reduce mean time to respond (MTTR). Watch this video from re:Invent 2023 to understand how SEEK improved its incident response. Prior to using Security Lake, SEEK had to rely on a legacy VPC Flow Logs system to identify possible indicators of compromise, which took a long time. With Security Lake, the company was able to more quickly identify a host in one of their accounts that was communicating with a malicious IP. Now, Security Lake has provided SEEK with swift access to comprehensive data across diverse environments, facilitating forensic investigations and enabling them to scale their security operations better.

Optimize your log retention strategy

Security Lake simplifies data management for customers who need to store large volumes of security logs to meet compliance requirements. It provides customizable retention settings and automated storage tiering to optimize storage costs and security analytics. It automatically partitions and converts incoming security data into a storage and query-efficient Apache Parquet format. Security Lake uses the Apache Iceberg open table format to enhance query performance for your security analytics. Customers can now choose which logs to keep for compliance reasons, which logs to send to their analytics solutions for further analysis, and which logs to query in place for incident investigation use cases. Security Lake helps customers to retain logs that were previously unfeasible to store and to extend storage beyond their typical retention policy, within their security information and event management (SIEM) system.
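
When you query logs in place, Security Lake tables are cataloged in AWS Glue and can be queried with Amazon Athena. The following is a minimal sketch using the AWS SDK for Python (Boto3); the database name, table name, and results location are placeholders that follow the typical Security Lake naming pattern:

import boto3

athena = boto3.client("athena")

# Query CloudTrail management events stored in Security Lake through Athena.
# The database, table, and output location are placeholders.
query = """
    SELECT "time", api.operation, actor.user.uid
    FROM amazon_security_lake_table_us_east_1_cloud_trail_mgmt_2_0
    WHERE api.operation IN ('CreateUser', 'UpdateUser')
    LIMIT 25
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_east_1"},
    ResultConfiguration={"OutputLocation": "s3://amzn-s3-demo-bucket/athena-results/"},
)

print(execution["QueryExecutionId"])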

Figure 2 shows a section of the Security Lake activation page, which presents users with options to set rollup AWS Regions and storage classes.

Figure 2: Security Lake activation page with options to select a roll-up Region and set storage classes

For example, Carrier, an intelligent climate and energy solutions company, relies on Security Lake to strengthen its security and governance practices. By using Security Lake, Carrier can adhere to compliance with industry standards and regulations, safeguarding its operations and optimizing its log retention.

“Amazon Security Lake simplifies the process of ingesting and analyzing all our security-related log and findings data into a single data lake, strengthening our enterprise-level security and governance practices. This streamlined approach has enhanced our ability to identify and address potential issues within our environment more effectively, enabling us to proactively implement mitigations based on insights derived from the service,” — Justin McDowell, Associate Director for Enterprise Cloud Services at Carrier

Proactive threat and vulnerability detection

Security Lake helps customers identify potential threats and vulnerabilities earlier in the development process by facilitating the correlation of security events across different data sources, allowing security analysts to identify complex attack patterns more effectively. This proactive approach shifts the responsibility for maintaining secure coding and infrastructure to earlier phases, enhancing overall security posture. Security Lake can also optimize security operations by automating safeguard protocols and involving engineering teams in decision-making, validation, and remediation processes. Also, by seamlessly integrating with security orchestration tools, Security Lake can help expedite responses to security incidents and preemptively help mitigate vulnerabilities before exploitation occurs.

For example, SPH Media, a Singapore media company, uses Security Lake to get better visibility into security activity across their organization, enabling proactive identification of potential threats and vulnerabilities.

Security Lake and generative AI for threat hunting and incident response practices

Centralizing security-related data allows organizations to efficiently analyze behaviors from systems and users across their environments, providing deeper insight into potential risks. With Security Lake, customers can centralize their security logs and have them stored in a standardized format using OCSF. OCSF implements a well-documented schema maintained at schema.ocsf.io, which generative AI can use to contextualize the structured data within Security Lake tables. Security personnel can then ask questions of the data using familiar security investigation terminology rather than needing data analytics skills to translate an otherwise simple question into complex SQL (Structured Query Language) queries. Building upon the previous processes, Amazon Bedrock then takes an organization’s natural language incident response playbooks and converts them into a sequence of queries capable of automating what would otherwise be manual investigation tasks.
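
One way to prototype this pattern yourself (a rough sketch, not a built-in service workflow; the model choice, table description, and prompt wording are assumptions) is to ask a model in Amazon Bedrock to translate an analyst question into SQL against an OCSF table:

import boto3
import json

bedrock_runtime = boto3.client("bedrock-runtime")

# Ask a Bedrock model to translate an analyst question into SQL for an OCSF table.
# The table name, column summary, and question are illustrative placeholders.
prompt = (
    "You are a security analyst assistant. The OCSF table cloud_trail_mgmt has "
    "columns such as time, api, actor, and src_endpoint. Write a SQL query that "
    "answers: which accounts ran create or update API calls in the last week?"
)

body = json.dumps({
    "inputText": prompt,
    "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0}
})

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="amazon.titan-text-premier-v1:0",
    accept="application/json",
    contentType="application/json",
)

print(json.loads(response["body"].read())["results"][0]["outputText"])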

“Organizations can benefit from using Amazon Security Lake to centralize and manage security data with the OCSF and Apache Parquet format. It simplifies the integration of disparate security log sources and findings, reducing complexity and cost. By applying generative AI to Security Lake data, SecOps can streamline security investigations, facilitate timely responses, and enhance their overall security,” — Phil Bues, Research Manager, Cloud Security at IDC

Enhance incident response workflow with contextual alerts

Incident responders commonly deal with a high volume of automatically generated alerts from their IT environment. Initially, the response team members sift through numerous log sources to uncover relevant details about affected resources and the causes of alerts, which adds time to the remediation process. This manual process also consumes valuable analyst time, particularly if the alert turns out to be a false positive.

To reduce this burden on incident responders, customers can use generative AI to perform initial investigations and generate human-readable context from alerts. Amazon QuickSight Q is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Incident responders can now combine these capabilities with data in Amazon Security Lake to begin their investigation and build detailed dashboards in minutes. This allows them to visually identify resources and activities that could potentially trigger alerts, giving incident responders a head start in their response efforts. The approach can also be used to validate alert information against alternative sources, enabling faster identification of false positives and allowing responders to focus on genuine security incidents.

As natural language processing (NLP) models evolve, they can also interpret incident response playbooks, often written in a human-readable, step-by-step format. Amazon Q Apps will enable the incident responder to use natural language to quickly and securely build their own generative AI playbooks to automate their tasks. This automation can streamline incident response tasks such as creating tickets in the relevant ticketing system, logging bugs in the version control system that hosts application code, and tagging resources as non-compliant if they fail to adhere to the organization’s IT security policy.

Incident investigation with automatic dashboard generation

By using Amazon QuickSight Q and data in Security Lake, incident responders can now generate customized visuals based on the specific characteristics and context of the incident they are investigating. The incident response team can start with a generic investigative question and automatically produce dashboards without writing SQL queries or learning the complexities of how to build a dashboard.

After building a topic in Amazon QuickSight Q, the investigator can either ask their own question or use the AI-generated questions that Q provides. In the example shown in Figure 3, the incident responder suspects a potential security incident and is seeking more information about Create or Update API calls in their environment, asking: “I’m looking for what accounts ran an update or create API call in the last week; can you display the API operation?”

Figure 3: AI Generated visual to begin the investigation

This automatically generated dashboard can be saved for later. From the dashboard, the investigator can focus on a single account with a simple right-click and get more information about the API calls that account performed, as shown in Figure 4.

Figure 4: Investigating API calls for a single account

Using the centralized security data in Amazon Security Lake and Amazon QuickSight Q, incident responders can jump-start investigations and rapidly validate potential threats without having to write complex queries or manually construct dashboards.

Updates since GA launch

Security Lake makes it simpler to analyze security data, gain a more comprehensive understanding of security across your entire organization, and improve the protection of your workloads, applications, and data. Since the general availability release in 2023, we have made various updates to the service. It’s now available in 17 AWS Regions globally. To assist you in evaluating your current and future Security Lake usage and cost estimates, we’ve introduced a new usage page. If you’re currently in a 15-day free trial, your trial usage can serve as a reference for estimating post-trial costs. Accessing Security Lake usage and projected costs is as simple as logging into the Security Lake console.

We have released an integration with Amazon Detective that enables querying and retrieving logs stored in Security Lake. Detective begins pulling raw logs from Security Lake related to AWS CloudTrail management events and Amazon VPC Flow Logs. Security Lake enhances analytics performance with support for OCSF 1.1.0 and Apache Iceberg. Also, Security Lake has integrated several OCSF mapping enhancements, including OCSF Observables, and has adopted the latest version of the OCSF datetime profile for improved usability. Security Lake seamlessly centralizes and normalizes Amazon Elastic Kubernetes Service (Amazon EKS) audit logs, simplifying monitoring and investigation of potential suspicious activities in your Amazon EKS clusters, without requiring additional setup or affecting existing configurations.

Developments in Partner integrations

Over 100 sources of data are available in Security Lake from AWS and third-party sources. Customers can ingest logs into Security Lake directly from their sources. Security findings from over 50 partners are available through AWS Security Hub, and software as a service (SaaS) application logs from popular platforms such as Salesforce, Slack, and Smartsheet can be sent into Security Lake through AWS AppFabric. Since the GA launch, Security Lake has added 18 partner integrations, and we now offer over 70 direct partner integrations. New source integrations include AIShield by Bosch, Contrast Security, DataBahn, Monad, SailPoint, Sysdig, and Talon. New subscribers who have built integrations include Cyber Security Cloud, Devo, Elastic, Palo Alto Networks, Panther, Query.AI, Securonix, and Tego Cyber, and there are new service partner offerings from Accenture, CMD Solutions (part of Mantel Group), Deloitte, Eviden, HOOP Cyber, Kyndryl, Kudelski Security, Infosys, Leidos, Megazone, PwC, and Wipro.

Developments in the Open Cybersecurity Schema Framework

The OCSF community has grown over the last year to almost 200 participants across security-focused independent software vendors (ISVs), government agencies, educational institutions, and enterprises. Leading cybersecurity organizations, such as Tanium and Cisco Duo, have contributed to the OCSF schema. Many existing Security Lake partners are adding support for OCSF version 1.1, including Confluent, Cribl, CyberArk, DataBahn, Datadog, Elastic, IBM, Netskope, Orca Security, Palo Alto Networks, Panther, Ripjar, Securonix, SentinelOne, SOC Prime, Sumo Logic, Splunk, Tanium, Tego Cyber, Torq, Trellix, Stellar Cyber, Query.AI, Swimlane, and more. Members of the OCSF community who want to validate their OCSF schema for Security Lake have access to a validation script on GitHub.

Get help from AWS Professional Services

The global team of experts in the AWS Professional Services organization can help customers realize their desired business outcomes with AWS. Our teams of data architects and security engineers collaborate with customer security, IT, and business leaders to develop enterprise solutions.

The AWS ProServe Amazon Security Lake Assessment offering is a complimentary workshop for customers. It entails a two-week interactive assessment, delving deep into customer use cases and creating a program roadmap to implement Security Lake alongside a suite of analytics solutions. Through a series of technical and strategic discussions, the AWS ProServe team analyzes use cases for data storage, security, search, visualization, analytics, AI/ML, and generative AI. The team then recommends a target future state architecture to achieve the customer’s security operations goals. At the end of the workshop, customers receive a draft architecture and implementation plan, along with infrastructure cost estimates, training, and other technical recommendations.

Summary

In this post, we showcased how customers use Security Lake to collect, query, and centralize logs on AWS. We discussed new applications for Security Lake and generative AI for threat hunting and incident response. We invite you to discover the benefits of using Security Lake through our 15-day free trial and share your feedback. To help you in getting started and building your first security data lake, we offer a range of resources including an infographic, eBook, demo videos, and webinars. There are many different use cases for Security Lake that can be tailored to suit your AWS environment. Join us at AWS re:Inforce 2024, our annual cloud security event, for more insights and firsthand experiences of how Security Lake can help you centralize, normalize, and optimize your security data.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.

Nisha Amthul

Nisha is a Senior Product Marketing Manager at AWS Security, specializing in detection and response solutions. She has a strong foundation in product management and product marketing within the domains of information security and data protection. When not at work, you’ll find her cake decorating, strength training, and chasing after her two energetic kiddos, embracing the joys of motherhood.

Ross Warren

Ross is a Senior Product SA at AWS for Amazon Security Lake based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Outside of work, he likes to spend time with his family, bake bread, make sawdust and enjoy time outside.

Jonathan Garzon

Jonathan is a Principal Product Manager at AWS with a passion for building products with delightful customer experiences and solving complex problems. He has launched and managed products in various domains, including networking, cybersecurity, and data analytics. Outside of work, Jonathan enjoys spending time with friends and family, playing soccer, mountain biking, hiking, and playing the guitar.

Build generative AI applications with Amazon Bedrock Studio (preview)

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/build-generative-ai-applications-with-amazon-bedrock-studio-preview/

Today, we’re introducing Amazon Bedrock Studio, a new web-based generative artificial intelligence (generative AI) development experience, in public preview. Amazon Bedrock Studio accelerates the development of generative AI applications by providing a rapid prototyping environment with key Amazon Bedrock features, including Knowledge Bases, Agents, and Guardrails.

As a developer, you can now use your company’s single sign-on credentials to sign in to Bedrock Studio and start experimenting. You can build applications using a wide array of top-performing models, and evaluate and share your generative AI apps within Bedrock Studio. The user interface guides you through various steps to help improve a model’s responses. You can experiment with model settings, securely integrate your company data sources, tools, and APIs, and set guardrails. You can collaborate with team members to ideate, experiment, and refine your generative AI applications, all without requiring advanced machine learning (ML) expertise or AWS Management Console access.

As an Amazon Web Services (AWS) administrator, you can be confident that developers will only have access to the features provided by Bedrock Studio, and won’t have broader access to AWS infrastructure and services.

Amazon Bedrock Studio

Now, let me show you how to get started with Amazon Bedrock Studio.

Get started with Amazon Bedrock Studio
As an AWS administrator, you first need to create an Amazon Bedrock Studio workspace, then select and add users you want to give access to the workspace. Once the workspace is created, you can share the workspace URL with the respective users. Users with access privileges can sign in to the workspace using single sign-on, create projects within their workspace, and start building generative AI applications.

Create Amazon Bedrock Studio workspace
Navigate to the Amazon Bedrock console and choose Bedrock Studio on the bottom left pane.

Amazon Bedrock Studio in the Bedrock console

Before creating a workspace, you need to configure and secure the single sign-on integration with your identity provider (IdP) using the AWS IAM Identity Center. For detailed instructions on how to configure various IdPs, such as AWS Directory Service for Microsoft Active Directory, Microsoft Entra ID, or Okta, check out the AWS IAM Identity Center User Guide. For this demo, I configured user access with the default IAM Identity Center directory.

Next, choose Create workspace, enter your workspace details, and create any required AWS Identity and Access Management (IAM) roles.

If you want, you can also select default generative AI models and embedding models for the workspace. Once you’re done, choose Create.

Next, select the created workspace.

Amazon Bedrock Studio, workspace created

Then, choose User management and Add users or groups to select the users you want to give access to this workspace.

Add users to your Amazon Bedrock Studio workspace

Back in the Overview tab, you can now copy the Bedrock Studio URL and share it with your users.

Amazon Bedrock Studio, share workspace URL

Build generative AI applications using Amazon Bedrock Studio
As a builder, you can now navigate to the provided Bedrock Studio URL and sign in with your single sign-on user credentials. Welcome to Amazon Bedrock Studio! Let me show you how to choose from industry leading FMs, bring your own data, use functions to make API calls, and safeguard your applications using guardrails.

Choose from multiple industry leading FMs
By choosing Explore, you can start selecting available FMs and explore the models using natural language prompts.

Amazon Bedrock Studio UI

If you choose Build, you can start building generative AI applications in a playground mode, experiment with model configurations, iterate on system prompts to define the behavior of your application, and prototype new features.

Amazon Bedrock Studio - start building applications

Bring your own data
With Bedrock Studio, you can securely bring your own data to customize your application by providing a single file or by selecting a knowledge base created in Amazon Bedrock.

Amazon Bedrock Studio - start building applications

Use functions to make API calls and make model responses more relevant
A function call allows the FM to dynamically access and incorporate external data or capabilities when responding to a prompt. The model determines which function it needs to call based on an OpenAPI schema that you provide.

Functions enable a model to include information in its response that it doesn’t have direct access to or prior knowledge of. For example, a function could allow the model to retrieve and include the current weather conditions in its response, even though the model itself doesn’t have that information stored.
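To make this concrete, here is a minimal sketch of the kind of OpenAPI schema you could provide for such a weather function. The path, operation, and parameter names are hypothetical; use whatever matches the API you actually want the model to call.

{
  "openapi": "3.0.0",
  "info": {
    "title": "Weather API",
    "version": "1.0.0"
  },
  "paths": {
    "/current-weather": {
      "get": {
        "operationId": "getCurrentWeather",
        "description": "Returns the current weather conditions for a given city",
        "parameters": [
          {
            "name": "city",
            "in": "query",
            "required": true,
            "schema": { "type": "string" }
          }
        ],
        "responses": {
          "200": {
            "description": "Current weather conditions",
            "content": {
              "application/json": {
                "schema": { "type": "object" }
              }
            }
          }
        }
      }
    }
  }
}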

Amazon Bedrock Studio - Add functions

Safeguard your applications using Guardrails for Amazon Bedrock
You can create guardrails to promote safe interactions between users and your generative AI applications by implementing safeguards customized to your use cases and responsible AI policies.

Amazon Bedrock Studio - Add Guardrails

When you create applications in Amazon Bedrock Studio, the corresponding managed resources such as knowledge bases, agents, and guardrails are automatically deployed in your AWS account. You can use the Amazon Bedrock API to access those resources in downstream applications.

Here’s a short demo video of Amazon Bedrock Studio created by my colleague Banjo Obayomi.

Join the preview
Amazon Bedrock Studio is available today in public preview in AWS Regions US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Bedrock Studio page and User Guide.

Give Amazon Bedrock Studio a try today and let us know what you think! Send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS contacts, and engage with the generative AI builder community at community.aws.

— Antje

AWS Weekly Roundup: Amazon Q, Amazon QuickSight, AWS CodeArtifact, Amazon Bedrock, and more (May 6, 2024)

Post Syndicated from Matheus Guimaraes original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-q-amazon-quicksight-aws-codeartifact-amazon-bedrock-and-more-may-6-2024/

April has been packed with new releases! Last week continued that trend with many new releases supporting a variety of domains such as security, analytics, devops, and many more, as well as more exciting new capabilities within generative AI.

If you missed the AWS Summit London 2024, you can now watch the sessions on demand, including the keynote by Tanuja Randery, VP & Marketing Director, EMEA, and many of the break-out sessions which will continue to be released over the coming weeks.

Last week’s launches
Here are some of the highlights that caught my attention this week:

Manual and automatic rollback from any stage in AWS CodePipeline – You can now roll back any stage, other than Source, to any previously known good state if you use a V2 pipeline in AWS CodePipeline. You can configure automatic rollback, which uses the source changes from the most recent successful pipeline execution in the case of a failure, or you can initiate a manual rollback for any stage from the console, API, or SDK and choose which pipeline execution you want to use for the rollback.
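As an illustration, a manual rollback from the AWS CLI might look like the following sketch; the pipeline name, stage name, and execution ID are placeholders.

aws codepipeline rollback-stage \
    --pipeline-name MyV2Pipeline \
    --stage-name Deploy \
    --target-pipeline-execution-id 0123abcd-0123-0123-0123-0123456789ab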

AWS CodeArtifact now supports RubyGems – Ruby community, rejoice, you can now store your gems in AWS CodeArtifact! You can integrate it with RubyGems.org, and CodeArtifact will automatically fetch any gems requested by the client and store them locally in your CodeArtifact repository. That means that you can have a centralized place for both your first-party and public gems so developers can access their dependencies from a single source.

Ruby-repo screenshot

Create a repository in AWS CodeArtifact and choose “rubygems-store” to connect your repository to RubyGems.org on the “Public upstream repositories” dropdown.

Amazon EventBridge Pipes now supports event delivery through AWS PrivateLink – You can now deliver events to an Amazon EventBridge Pipes target without traversing the public internet by using AWS PrivateLink. You can poll for events in a private subnet in your Amazon Virtual Private Cloud (VPC) without having to deploy any additional infrastructure to keep your traffic private.

Amazon Bedrock launches continue. You can now run scalable, enterprise-grade generative AI workloads with Cohere Command R & R+. And Amazon Titan Text V2 is now optimized for improving Retrieval-Augmented Generation (RAG).

AWS Trusted Advisor – last year we launched Trusted Advisor APIs enabling you to programmatically consume recommendations. A new API is available now that you can use to exclude resources from recommendations.

Amazon EC2 – there have been two great new launches for EC2 users this week. You can now mark your AMIs as “protected” to avoid them being deregistered by accident. You can also now easily discover your active AMIs by simply describing them.

Amazon CodeCatalyst – you can now view your git commit history in the CodeCatalyst console.

General Availability
Many new services and capabilities became generally available this week.

Amazon Q in QuickSight – Amazon Q brings generative BI to Amazon QuickSight, giving you the ability to build dashboards automatically simply by using natural language, and it’s now generally available. To get started, head to the QuickSight Pricing page to explore all options or start a 30-day free trial, which allows up to 4 users per QuickSight account to use all the new generative AI features.

With the new generative AI features enabled by Amazon Q in Amazon QuickSight you can use natural language queries to build, sort and filter dashboards. (source: AWS Documentation)

Amazon Q Business (GA) and Amazon Q Apps (Preview) – Also generally available now is Amazon Q Business which we launched last year at AWS re:Invent 2023 with the ability to connect seamlessly with over 40 popular enterprise systems, including Microsoft 365, Salesforce, Amazon Simple Storage Service (Amazon S3), Gmail, and so many more. This allows Amazon Q Business to know about your business so your employees can generate content, solve problems, and take actions that are specific to your business.

We have also launched support for custom plug-ins, so now you can create your own integrations with any third-party application.

Q-business screenshot

With general availability of Amazon Q Business we have also launched the ability to create your own custom plugins to connect to any third-party API.

Another highlight of this release is the launch of Amazon Q Apps, which enables you to quickly generate an app from your conversation with Amazon Q Business, or by describing what you would like it to generate for you. All guardrails from Amazon Q Business apply, and it’s easy to share your apps with colleagues through an admin-managed library. Amazon Q Apps is in preview now.

Check out Channy Yun’s post for a deeper dive into Amazon Q Business and Amazon Q Apps, which guides you through these new features.

Amazon Q Developer – you can use Q Developer to completely change your developer flow. It has all the capabilities of what was previously known as Amazon CodeWhisperer, such as Q&A, diagnosing common errors, generating code including tests, and many more. Now it has expanded, so you can use it to generate SQL, and build data integration pipelines using natural language. In preview, it can describe resources in your AWS account and help you retrieve and analyze cost data from AWS Cost Explorer.

For a full list of AWS announcements, be sure to keep an eye on the ‘What’s New with AWS?‘ page.

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community.

Discover Claude 3 – If you’re a developer looking for a good source to get started with Claude 3, then I recommend this great post from my colleague Haowen Huang: Mastering Amazon Bedrock with Claude 3: Developer’s Guide with Demos.

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Singapore (May 7), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).

GOTO EDA Day London – Join us in London on May 14 to learn about event-driven architectures (EDA) for building highly scalable, fault tolerant, and extensible applications. This conference is organized by GOTO, AWS, and partners.

Browse all upcoming AWS led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Matheus Guimaraes

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

AWS achieves Spain’s ENS High 311/2022 certification across 172 services

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-311-2022-certification-across-172-services/

Amazon Web Services (AWS) has recently renewed the Esquema Nacional de Seguridad (ENS) High certification, upgrading to the latest version regulated under Royal Decree 311/2022. The ENS establishes security standards that apply to government agencies and public organizations in Spain and service providers on which Spanish public services depend.

This security framework has gone through significant updates since the Royal Decree 3/2010 to the latest Royal Decree 311/2022 to adapt to evolving cybersecurity threats and technologies. The current scheme defines basic requirements and lists additional security reinforcements to meet the bar of the different security levels (Low, Medium, High).

Achieving the ENS High certification for its 311/2022 version underscores the AWS commitment to maintaining robust cybersecurity controls and highlights our proactive approach to cybersecurity.

We are happy to announce the addition of 14 services to the scope of our ENS certification, for a new total of 172 services in scope. The certification now covers 31 Regions. Some of the additional services in scope for ENS High include the following:

  • Amazon Bedrock – This fully managed service offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
  • Amazon EventBridge – Use this service to easily build loosely coupled, event-driven architectures. It creates point-to-point integrations between event producers and consumers without needing to write custom code or manage and provision servers.
  • AWS HealthOmics – This service helps healthcare and life science organizations and their software partners store, query, and analyze genomic, transcriptomic, and other omics data and then uses that data to generate insights to improve health.
  • AWS Signer – This is a fully managed code-signing service to ensure the trust and integrity of your code. AWS Signer manages the code-signing certificate’s public and private keys and enables central management of the code-signing lifecycle.
  • AWS Wickr – This service encrypts messages, calls, and files with a 256-bit end-to-end encryption protocol. Only the intended recipients and the customer organization can decrypt these communications, reducing the risk of adversary-in-the-middle attacks.

AWS achievement of the ENS High certification is verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at its highest level as described in Royal Decree 311/2022.

AWS has also updated the existing eight Security configuration guidelines that map the ENS controls to the AWS Well-Architected Framework and provide guidance on the following topics: compliance profile, secure configuration, Prowler quick guide, hybrid connectivity, multi-account environments, Amazon WorkSpaces, incident response, monitoring, and governance. AWS has also supported Prowler in offering new functionality and including the latest ENS controls.

For more information about ENS High and the AWS Security configuration guidelines, see the AWS Compliance page Esquema Nacional de Seguridad High. To view the complete list of services included in the scope, see the AWS Services in Scope by Compliance Program – Esquema Nacional de Seguridad (ENS) page. You can download the ENS High Certificate from AWS Artifact in the AWS Management Console or from Esquema Nacional de Seguridad High.

As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Daniel Fuertes

Daniel is a security audit program manager at AWS based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain and other EMEA countries. Daniel has ten years of experience in security assurance and compliance, including previous experience as an auditor for the PCI DSS security framework. He also holds the CISSP, PCIP, and ISO 27001 Lead Auditor certifications.

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. He received a bachelor’s degree in Computer Science from Boston University (USA). Since then, he has worked at companies such as Microsoft and BBVA. Borja is a seasoned security assurance practitioner with many years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

AWS is issued a renewed certificate for the BIO Thema-uitwerking Clouddiensten with increased scope

Post Syndicated from Ka Yie Lee original https://aws.amazon.com/blogs/security/aws-is-issued-a-renewed-certificate-for-the-bio-thema-uitwerking-clouddiensten-with-increased-scope/

We’re pleased to announce that Amazon Web Services (AWS) demonstrated continuous compliance with the Baseline Informatiebeveiliging Overheid (BIO) Thema-uitwerking Clouddiensten while increasing the AWS services and AWS Regions in scope. This alignment with the BIO Thema-uitwerking Clouddiensten requirements demonstrates our commitment to adhere to the heightened expectations for cloud service providers.

AWS customers across the Dutch public sector can use AWS certified services with confidence, knowing that the AWS services listed in the certificate adhere to the strict requirements imposed on the consumption of cloud-based services.

Baseline Informatiebeveiliging Overheid (BIO)

The BIO framework is an information security framework that the four layers of the Dutch public sector are required to adhere to. This means that it’s mandatory for the Dutch central government, all provinces, municipalities, and regional water authorities to be compliant with the BIO framework.

To support AWS customers in demonstrating their compliance with the BIO framework, AWS developed a Landing Zone for the BIO framework. This Landing Zone for the BIO framework is a pre-configured AWS environment that includes a subset of the technical requirements of the BIO framework. It’s a helpful tool that provides a starting point from which customers can further build their own AWS environment.

For more information regarding the Landing Zone for the BIO framework, see the AWS Reference Guide for Dutch BIO Framework and BIO Theme-elaboration Cloud Services in AWS Artifact. You can also reach out to your AWS account team or contact AWS through the Contact Us page.

Baseline Informatiebeveiliging Overheid Thema-uitwerking Clouddiensten

In addition to the BIO framework, there’s another information security framework designed specifically for the use of cloud services, called BIO Thema-uitwerking Clouddiensten. The BIO Thema-uitwerking Clouddiensten is a guidance document for Dutch cloud service consumers to help them formulate controls and objectives when using cloud services. Consumers can consider it to be an additional control framework on top of the BIO framework.

AWS was evaluated by the monitoring body, EY CertifyPoint, in March 2024, and it was determined that AWS successfully demonstrated compliance for eight AWS services in total. In addition to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS), which were assessed as compliant in March 2023, Amazon Virtual Private Cloud (Amazon VPC), Amazon GuardDuty, AWS Lambda, AWS Config, and AWS CloudTrail have been added to the scope. The renewed Certificate of Compliance illustrating the compliance status of AWS and the assessment summary report from EY CertifyPoint are available on AWS Artifact. The certificate is available in Dutch and English.

For more information regarding the BIO Thema-uitwerking Clouddiensten, see the AWS Reference Guide for Dutch BIO Framework and BIO Theme-elaboration Cloud Services in AWS Artifact. You can also reach out to your AWS account team or contact AWS through the Contact Us page.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Ka Yie Lee

Ka Yie is a Security Assurance Specialist for the Benelux region at AWS, based in Amsterdam. She engages with regulators and industry groups in the Netherlands and Belgium. She also helps ensure that AWS addresses local information security frameworks. Ka Yie holds master’s degrees in Accounting, Auditing and Control, and Commercial and Company Law. She also holds professional certifications such as CISSP.

Gokhan Akyuz

Gokhan is a Security Audit Program Manager at AWS, based in Amsterdam. He leads security audits, attestations, and certification programs across Europe and the Middle East. He has 17 years of experience in IT and cybersecurity audits, IT risk management, and controls implementation in a wide range of industries.

Build RAG applications with MongoDB Atlas, now available in Knowledge Bases for Amazon Bedrock

Post Syndicated from Abhishek Gupta original https://aws.amazon.com/blogs/aws/build-rag-applications-with-mongodb-atlas-now-available-in-knowledge-bases-for-amazon-bedrock/

Foundation models (FMs) are trained on large volumes of data and use billions of parameters. However, to answer customers’ questions related to domain-specific private data, they need to reference an authoritative knowledge base outside of the model’s training data sources. This is commonly achieved using a technique known as Retrieval Augmented Generation (RAG). By fetching data from the organization’s internal or proprietary sources, RAG extends the capabilities of FMs to specific domains, without needing to retrain the model. It is a cost-effective approach to improving model output so it remains relevant, accurate, and useful in various contexts.

Knowledge Bases for Amazon Bedrock is a fully managed capability that helps you implement the entire RAG workflow from ingestion to retrieval and prompt augmentation without having to build custom integrations to data sources and manage data flows.

Today, we are announcing the availability of MongoDB Atlas as a vector store in Knowledge Bases for Amazon Bedrock. With MongoDB Atlas vector store integration, you can build RAG solutions to securely connect your organization’s private data sources to FMs in Amazon Bedrock. This integration adds to the list of vector stores supported by Knowledge Bases for Amazon Bedrock, including Amazon Aurora PostgreSQL-Compatible Edition, vector engine for Amazon OpenSearch Serverless, Pinecone, and Redis Enterprise Cloud.

Build RAG applications with MongoDB Atlas and Knowledge Bases for Amazon Bedrock
Vector Search in MongoDB Atlas is powered by the vectorSearch index type. In the index definition, you must specify the field that contains the vector data as the vector type. Before using MongoDB Atlas vector search in your application, you will need to create an index, ingest source data, create vector embeddings and store them in a MongoDB Atlas collection. To perform queries, you will need to convert the input text into a vector embedding, and then use an aggregation pipeline stage to perform vector search queries against fields indexed as the vector type in a vectorSearch type index.
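If you are curious what a manual query against such an index looks like outside of the zero-code integration, here is a minimal sketch using PyMongo; the connection string, database, collection, and index names are placeholders, and the query vector would come from an embeddings model.

from pymongo import MongoClient

# Placeholders: replace with your Atlas connection string, database, and collection
client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net")
collection = client["bedrock_db"]["bedrock_collection"]

# In practice, generate this with an embeddings model; truncated here for brevity
query_vector = [0.01, -0.02, 0.03]

results = collection.aggregate([
    {
        "$vectorSearch": {
            "index": "bedrock_vector_index",
            "path": "AMAZON_BEDROCK_CHUNK_VECTOR",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {
        "$project": {
            "AMAZON_BEDROCK_TEXT_CHUNK": 1,
            "score": {"$meta": "vectorSearchScore"},
        }
    },
])
for doc in results:
    print(doc)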

Thanks to the MongoDB Atlas integration with Knowledge Bases for Amazon Bedrock, most of the heavy lifting is taken care of. Once the vector search index and knowledge base are configured, you can incorporate RAG into your applications. Behind the scenes, Amazon Bedrock will convert your input (prompt) into embeddings, query the knowledge base, augment the FM prompt with the search results as contextual information and return the generated response.

Let me walk you through the process of setting up MongoDB Atlas as a vector store in Knowledge Bases for Amazon Bedrock.

Configure MongoDB Atlas
Start by creating a MongoDB Atlas cluster on AWS. Choose an M10 dedicated cluster tier. Once the cluster is provisioned, create a database and collection. Next, create a database user and grant it the Read and write to any database role. Select Password as the Authentication Method. Finally, configure network access to modify the IP Access List – add IP address 0.0.0.0/0 to allow access from anywhere.

Use the following index definition to create the Vector Search index:

{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "AMAZON_BEDROCK_CHUNK_VECTOR",
      "similarity": "cosine",
      "type": "vector"
    },
    {
      "path": "AMAZON_BEDROCK_METADATA",
      "type": "filter"
    },
    {
      "path": "AMAZON_BEDROCK_TEXT_CHUNK",
      "type": "filter"
    }
  ]
}

Configure the knowledge base
Create an AWS Secrets Manager secret to securely store the MongoDB Atlas database user credentials. Choose Other as the Secret type. Create an Amazon Simple Storage Service (Amazon S3) storage bucket and upload the Amazon Bedrock documentation user guide PDF. Later, you will use the knowledge base to ask questions about Amazon Bedrock.

You can also use another document of your choice because Knowledge Base supports multiple file formats (including text, HTML, and CSV).
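If you prefer the AWS CLI over the console for the secret, the call might look like the following sketch; the secret name and credential values are placeholders.

aws secretsmanager create-secret \
    --name mongodb-atlas-bedrock-kb \
    --secret-string '{"username":"bedrock_user","password":"REPLACE_WITH_PASSWORD"}'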

Navigate to the Amazon Bedrock console and refer to the Amazon Bedrock User Guide to configure the knowledge base. In the Select embeddings model and configure vector store step, choose Titan Embeddings G1 – Text as the embedding model. From the list of databases, choose MongoDB Atlas.

Enter the basic information for the MongoDB Atlas cluster (Hostname, Database name, etc.) as well as the ARN of the AWS Secrets Manager secret you had created earlier. In the Metadata field mapping attributes, enter the vector store specific details. They should match the vector search index definition you used earlier.

Initiate the knowledge base creation. Once complete, synchronize the data source (S3 bucket data) with the MongoDB Atlas vector search index.

Once the synchronization is complete, navigate to MongoDB Atlas to confirm that the data has been ingested into the collection you created.

Notice the following attributes in each of the MongoDB Atlas documents:

  • AMAZON_BEDROCK_TEXT_CHUNK – Contains the raw text for each data chunk.
  • AMAZON_BEDROCK_CHUNK_VECTOR – Contains the vector embedding for the data chunk.
  • AMAZON_BEDROCK_METADATA – Contains additional data for source attribution and rich query capabilities.

Test the knowledge base
It’s time to ask questions about Amazon Bedrock by querying the knowledge base. You will need to choose a foundation model. I picked Claude v2 in this case and used “What is Amazon Bedrock” as my input (query).

If you are using a different source document, adjust the questions accordingly.

You can also change the foundation model. For example, I switched to Claude 3 Sonnet. Notice the difference in the output and select Show source details to see the chunks cited for each footnote.

Integrate knowledge base with applications
To build RAG applications on top of Knowledge Bases for Amazon Bedrock, you can use the RetrieveAndGenerate API which allows you to query the knowledge base and get a response.

Here is an example using the AWS SDK for Python (Boto3):

import boto3

# Client for the Amazon Bedrock Agent runtime APIs (RetrieveAndGenerate, Retrieve)
bedrock_agent_runtime = boto3.client(
    service_name = "bedrock-agent-runtime"
)

def retrieveAndGenerate(input, kbId):
    # Query the knowledge base and let the selected model generate a grounded answer
    return bedrock_agent_runtime.retrieve_and_generate(
        input={
            'text': input
        },
        retrieveAndGenerateConfiguration={
            'type': 'KNOWLEDGE_BASE',
            'knowledgeBaseConfiguration': {
                'knowledgeBaseId': kbId,
                'modelArn': 'arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0'
            }
        }
    )

response = retrieveAndGenerate("What is Amazon Bedrock?", "BFT0P4NR1U")["output"]["text"]
print(response)

If you want to further customize your RAG solutions, consider using the Retrieve API, which returns the semantic search responses that you can use for the remaining part of the RAG workflow.

import boto3

# Client for the Amazon Bedrock Agent runtime APIs
bedrock_agent_runtime = boto3.client(
    service_name = "bedrock-agent-runtime"
)

def retrieve(query, kbId, numberOfResults=5):
    # Return the top semantic search results from the knowledge base
    return bedrock_agent_runtime.retrieve(
        retrievalQuery= {
            'text': query
        },
        knowledgeBaseId=kbId,
        retrievalConfiguration= {
            'vectorSearchConfiguration': {
                'numberOfResults': numberOfResults
            }
        }
    )

response = retrieve("What is Amazon Bedrock?", "BGU0Q4NU0U")["retrievalResults"]
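One way to complete the RAG workflow yourself from these results is to build a prompt from the retrieved chunks and invoke a model directly. The following is a minimal sketch; the model ID and prompt wording are illustrative.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Build a prompt from the retrieved chunks (the "response" list from the Retrieve call above)
chunks = [result["content"]["text"] for result in response]
prompt = (
    "Answer using only the following context:\n\n"
    + "\n\n".join(chunks)
    + "\n\nQuestion: What is Amazon Bedrock?"
)

completion = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }),
)
print(json.loads(completion["body"].read())["content"][0]["text"])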

Things to know

  • MongoDB Atlas cluster tier – This integration requires an Atlas cluster tier of at least M10.
  • AWS PrivateLink – For the purposes of this demo, MongoDB Atlas database IP Access List was configured to allow access from anywhere. For production deployments, AWS PrivateLink is the recommended way to have Amazon Bedrock establish a secure connection to your MongoDB Atlas cluster. Refer to the Amazon Bedrock User guide (under MongoDB Atlas) for details.
  • Vector embedding size – The dimension size of the vector index and the embedding model should be the same. For example, if you plan to use Cohere Embed (which has a dimension size of 1024) as the embedding model for the knowledge base, make sure to configure the vector search index accordingly.
  • Metadata filters – You can add metadata for your source files to retrieve a well-defined subset of the semantically relevant chunks based on applied metadata filters. Refer to the documentation to learn more about how to use metadata filters.

Now available
MongoDB Atlas vector store in Knowledge Bases for Amazon Bedrock is available in the US East (N. Virginia) and US West (Oregon) Regions. Be sure to check the full Region list for future updates.

Learn more

Try out the MongoDB Atlas integration with Knowledge Bases for Amazon Bedrock! Send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS contacts and engage with the generative AI builder community at community.aws.

Abhishek

Stop the CNAME chain struggle: Simplified management with Route 53 Resolver DNS Firewall

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/stop-the-cname-chain-struggle-simplified-management-with-route-53-resolver-dns-firewall/

Starting today, you can configure your DNS Firewall to automatically trust all domains in a resolution chain (such as a CNAME, DNAME, or Alias chain).

Let’s walk through this in nontechnical terms for those unfamiliar with DNS.

Why use DNS Firewall?
DNS Firewall provides protection for outbound DNS requests from your private network in the cloud (Amazon Virtual Private Cloud (Amazon VPC)). These requests route through Amazon Route 53 Resolver for domain name resolution. Firewall administrators can configure rules to filter and regulate the outbound DNS traffic.

DNS Firewall helps to protect against multiple security risks.

Let’s imagine a malicious actor managed to install and run some code on your Amazon Elastic Compute Cloud (Amazon EC2) instances or containers running inside one of your virtual private clouds (VPCs). The malicious code is likely to initiate outgoing network connections. It might do so to connect to a command server and receive commands to execute on your machine. Or it might initiate connections to a third-party service in a coordinated distributed denial of service (DDoS) attack. It might also try to exfiltrate data it managed to collect on your network.

Fortunately, your network and security groups are correctly configured. They block all outgoing traffic except the one to well-known API endpoints used by your app. So far so good—the malicious code cannot dial back home using regular TCP or UDP connections.

But what about DNS traffic? The malicious code may send DNS requests to an authoritative DNS server they control to either send control commands or encoded data, and it can receive data back in the response. I’ve illustrated the process in the following diagram.

DNS exfiltration illustrated

To prevent these scenarios, you can use a DNS Firewall to monitor and control the domains that your applications can query. You can deny access to the domains that you know to be bad and allow all other queries to pass through. Alternatively, you can deny access to all domains except those you explicitly trust.

What is the challenge with CNAME, DNAME, and Alias records?
Imagine you configured your DNS Firewall to allow DNS queries only to specific well-known domains and blocked all others. Your application communicates with alexa.amazon.com; therefore, you created a rule allowing DNS traffic to resolve that hostname.

However, the DNS system has multiple types of records. The ones of interest in this article are

  • A records that map a DNS name to an IP address,
  • CNAME records that are synonyms for other DNS names,
  • DNAME records that provide redirection from a part of the DNS name tree to another part of the DNS name tree, and
  • Alias records that provide a Route 53 specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as Amazon CloudFront distributions and Amazon S3 buckets

When querying alexa.amazon.com, I see it’s actually a CNAME record that points to pitangui.amazon.com, which is another CNAME record that points to tp.5fd53c725-frontier.amazon.com, which, in turn, is a CNAME to d1wg1w6p5q8555.cloudfront.net. Only the last name (d1wg1w6p5q8555.cloudfront.net) has an A record associated with an IP address 3.162.42.28. The IP address is likely to be different for you. It points to the closest Amazon CloudFront edge location, likely the one from Paris (CDG52) for me.
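You can observe this chain yourself with a standard DNS tool such as dig; the exact names and IP address in the output will vary by location and over time.

$ dig +short alexa.amazon.com
pitangui.amazon.com.
tp.5fd53c725-frontier.amazon.com.
d1wg1w6p5q8555.cloudfront.net.
3.162.42.28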

A similar redirection mechanism happens when resolving DNAME or Alias records.

DNS resolution for alexa.amazon.com

To allow the complete resolution of such a CNAME chain, you could be tempted to configure your DNS Firewall rule to allow all names under amazon.com (*.amazon.com), but that would fail to resolve the last CNAME that goes to cloudfront.net.

Worse, the DNS CNAME chain is controlled by the service your application connects to. The chain might change at any time, forcing you to manually maintain the list of rules and authorized domains inside your DNS Firewall rules.

Introducing DNS Firewall redirection chain authorization
Based on this explanation, you’re now equipped to understand the new capability we launch today. We added a parameter to the UpdateFirewallRule API (also available on the AWS Command Line Interface (AWS CLI) and AWS Management Console) to configure the DNS Firewall so that it follows and automatically trusts all the domains in a CNAME, DNAME, or Alias chain.

This parameter allows firewall administrators to only allow the domain your applications query. The firewall will automatically trust all intermediate domains in the chain until it reaches the A record with the IP address.

Let’s see it in action
I start with a DNS Firewall already configured with a domain list, a rule group, and a rule that ALLOW queries for the domain alexa.amazon.com. The rule group is attached to a VPC where I have an EC2 instance started.

When I connect to that EC2 instance and issue a DNS query to resolve alexa.amazon.com, it only returns the first name in the domain chain (pitangui.amazon.com) and stops there. This is expected because pitangui.amazon.com is not authorized to be resolved.

DNS query for alexa.amazon.com is blocked at first CNAME

To solve this, I update the firewall rule to trust the entire redirection chain. I use the AWS CLI to call the update-firewall-rule API with a new parameter firewall-domain-redirection-action set to TRUST_REDIRECTION_DOMAIN.

AWS CLI to update the DNS firewall rule
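A sketch of what that command might look like follows; the rule group and domain list IDs are placeholders for the identifiers of your own firewall rule.

aws route53resolver update-firewall-rule \
    --firewall-rule-group-id rslvr-frg-0123456789abcdef0 \
    --firewall-domain-list-id rslvr-fdl-0123456789abcdef0 \
    --firewall-domain-redirection-action TRUST_REDIRECTION_DOMAIN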

The following diagram illustrates the setup at this stage.

DNS Firewall rule diagram

Back to the EC2 instance, I try the DNS query again. This time, it works. It resolves the entire redirection chain, down to the IP address 🎉.

DNS resolution for the full CNAME chain

Thanks to the trusted chain redirection, network administrators now have an easy way to implement a strategy to block all domains and authorize only known domains in their DNS Firewall without having to care about CNAME, DNAME, or Alias chains.

This capability is available at no additional cost in all AWS Regions. Try it out today!

— seb

Add your Ruby gems to AWS CodeArtifact

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/add-your-ruby-gems-to-aws-codeartifact/

Ruby developers can now use AWS CodeArtifact to securely store and retrieve their gems. CodeArtifact integrates with standard developer tools like gem and bundler.

Applications often use numerous packages to speed up development by providing reusable code for common tasks like network access, cryptography, or data manipulation. Developers also embed SDKs, such as the AWS SDKs, to access remote services. These packages may come from within your organization or from third parties like open source projects. Managing packages and dependencies is integral to software development. Languages like Java, C#, JavaScript, Swift, and Python have tools for downloading and resolving dependencies, and Ruby developers typically use gem and bundler.

However, using third-party packages presents legal and security challenges. Organizations must ensure package licenses are compatible with their projects and don’t violate intellectual property. They must also verify that the included code is safe and doesn’t introduce vulnerabilities, a tactic known as a supply chain attack. To address these challenges, organizations typically use private package servers. Developers can only use packages vetted by security and legal teams made available through private repositories.

CodeArtifact is a managed service that allows the safe distribution of packages to internal developer teams without managing the underlying infrastructure. CodeArtifact now supports Ruby gems in addition to npm, PyPI, Maven, NuGet, SwiftPM, and generic formats.

You can publish and download Ruby gem dependencies from your CodeArtifact repository in the AWS Cloud, working with existing tools such as gem and bundler. After storing packages in CodeArtifact, you can reference them in your Gemfile. Your build system will then download approved packages from the CodeArtifact repository during the build process.

How to get started
Imagine I’m working on a package to be shared with other development teams in my organization.

In this demo, I show you how I prepare my environment, upload the package to the repository, and use this specific package build as a dependency for my project. I focus on the steps specific to Ruby packages. You can read the tutorial written by my colleague Steven to get started with CodeArtifact.

I use an AWS account that has a package repository (MyGemsRepo) and domain (stormacq-test) already configured.

CodeArtifact - Ruby repository

To let the Ruby tools access my CodeArtifact repository, I start by collecting an authentication token from CodeArtifact.

export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token \
                                     --domain stormacq-test              \
                                     --domain-owner 012345678912         \
                                     --query authorizationToken          \
                                     --output text`

export GEM_HOST_API_KEY="Bearer $CODEARTIFACT_AUTH_TOKEN"

Note that the authentication token expires after 12 hours. I must repeat this command after 12 hours to obtain a fresh token.

Then, I request the repository endpoint. I pass the domain name and domain owner (the AWS account ID). Notice the --format ruby option.

export RUBYGEMS_HOST=`aws codeartifact get-repository-endpoint  \
                           --domain stormacq-test               \
                           --domain-owner 012345678912          \
                           --format ruby                        \
                           --repository MyGemsRepo              \
                           --query repositoryEndpoint           \
                           --output text`

Now that I have the repository endpoint and an authentication token, gem will use these environment variable values to connect to my private package repository.

I create a very simple project, build it, and send it to the package repository.

CodeArtifact - building and pushing a custom package

$ gem build hola.gemspec 

Successfully built RubyGem
  Name: hola-codeartifact
  Version: 0.0.0
  File: hola-codeartifact-0.0.0.gem
  
$ gem push hola-codeartifact-0.0.0.gem 
Pushing gem to https://stormacq-test-486652066693.d.codeartifact.us-west-2.amazonaws.com/ruby/MyGemsRepo...

I verify in the console that the package is available.

CodeArtifact - Hola package is present

Now that the package is available, I can use it in my projects as usual. This involves configuring the local ~/.gemrc file on my machine. I follow the instructions provided by the console, and I make sure I replace ${CODEARTIFACT_AUTH_TOKEN} with its actual value.

CodeArtifact - console instructions to connect to the repo
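As an assumption based on how CodeArtifact endpoints work for other package formats, the resulting ~/.gemrc entry might look roughly like the following; follow the console instructions for the authoritative format and values.

# ~/.gemrc (illustrative; the basic-auth source URL format is an assumption)
# Replace ${CODEARTIFACT_AUTH_TOKEN} with the actual token value
:sources:
- https://aws:${CODEARTIFACT_AUTH_TOKEN}@stormacq-test-486652066693.d.codeartifact.us-west-2.amazonaws.com/ruby/MyGemsRepo/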

Once ~/.gemrc is correctly configured, I can install gems as usual. They will be downloaded from my private gem repository.

$ gem install hola-codeartifact

Fetching hola-codeartifact-0.0.0.gem
Successfully installed hola-codeartifact-0.0.0
Parsing documentation for hola-codeartifact-0.0.0
Installing ri documentation for hola-codeartifact-0.0.0
Done installing documentation for hola-codeartifact after 0 seconds
1 gem installed

Install from upstream
I can also associate my repository with an upstream source. It will automatically fetch gems from upstream when I request one.

To associate the repository with rubygems.org, I use the console, or I type

aws codeartifact  associate-external-connection \
                   --domain stormacq-test       \
                   --repository MyGemsRepo      \
                   --external-connection public:ruby-gems-org

{
    "repository": {
        "name": "MyGemsRepo",
        "administratorAccount": "012345678912",
        "domainName": "stormacq-test",
        "domainOwner": "012345678912",
        "arn": "arn:aws:codeartifact:us-west-2:012345678912:repository/stormacq-test/MyGemsRepo",
        "upstreams": [],
        "externalConnections": [
            {
                "externalConnectionName": "public:ruby-gems-org",
                "packageFormat": "ruby",
                "status": "AVAILABLE"
            }
        ],
        "createdTime": "2024-04-12T12:58:44.101000+02:00"
    }
}

Once associated, I can pull any gems through CodeArtifact. It will automatically fetch packages from upstream when not locally available.

$ gem install rake 

Fetching rake-13.2.1.gem
Successfully installed rake-13.2.1
Parsing documentation for rake-13.2.1
Installing ri documentation for rake-13.2.1
Done installing documentation for rake after 0 seconds
1 gem installed

I use the console to verify the rake package is now available in my repo.

Things to know
There are some things to keep in mind before uploading your first Ruby packages.

Pricing and availability
CodeArtifact costs for Ruby packages are the same as for the other package formats already supported. CodeArtifact billing depends on three metrics: the storage (measured in GB per month), the number of requests, and the data transfer out to the internet or to other AWS Regions. Data transfer to AWS services in the same Region is not charged, meaning you can run your continuous integration and delivery (CI/CD) jobs on Amazon Elastic Compute Cloud (Amazon EC2) or AWS CodeBuild, for example, without incurring a charge for the CodeArtifact data transfer. As usual, the pricing page has the details.

CodeArtifact for Ruby packages is available in all 13 Regions where CodeArtifact is available.

Now, go build your Ruby applications and upload your private packages to CodeArtifact!

— seb

Amazon Titan Text V2 now available in Amazon Bedrock, optimized for improving RAG

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-titan-text-v2-now-available-in-amazon-bedrock-optimized-for-improving-rag/

The Amazon Titan family of models, available exclusively in Amazon Bedrock, is built on top of 25 years of Amazon expertise in artificial intelligence (AI) and machine learning (ML) advancements. Amazon Titan foundation models (FMs) offer a comprehensive suite of pre-trained image, multimodal, and text models accessible through a fully managed API. Trained on extensive datasets, Amazon Titan models are powerful and versatile, designed for a range of applications while adhering to responsible AI practices.

The latest addition to the Amazon Titan family is Amazon Titan Text Embeddings V2, the second-generation text embeddings model from Amazon now available within Amazon Bedrock. This new text embeddings model is optimized for Retrieval-Augmented Generation (RAG). It is pre-trained on 100+ languages and on code.

Amazon Titan Text Embeddings V2 now lets you choose the size of the output vector (either 256, 512, or 1024). Larger vector sizes create more detailed responses, but will also increase the computational time. Shorter vector lengths are less detailed but will improve the response time. Using smaller vectors helps to reduce your storage costs and the latency to search and retrieve document extracts from a vector database. We measured the accuracy of the vectors generated by Amazon Titan Text Embeddings V2 and we observed that vectors with 512 dimensions keep approximately 99 percent of the accuracy provided by vectors with 1024 dimensions. Vectors with 256 dimensions keep 97 percent of the accuracy. This means that you can save 75 percent in vector storage (from 1024 down to 256 dimensions) and keep approximately 97 percent of the accuracy provided by larger vectors.

Amazon Titan Text Embeddings V2 also proposes an improved unit vector normalization that helps improve the accuracy when measuring vector similarity. You can choose between normalized or unnormalized versions of the embeddings based on your use case (normalized is more accurate for RAG use cases). Normalization of a vector is the process of scaling it to have a unit length or magnitude of 1. It is useful to ensure that all vectors have the same scale and contribute equally during vector operations, preventing some vectors from dominating others due to their larger magnitudes.

This new text embeddings model is well-suited for a variety of use cases. It can help you perform semantic searches on documents, for example, to detect plagiarism. It can classify labels into data-based learned representations, for example, to categorize movies into genres. It can also improve the quality and relevance of retrieved or generated search results, for example, recommending content based on interest using RAG.

How embeddings help to improve accuracy of RAG
Imagine you’re a superpowered research assistant for a large language model (LLM). LLMs are like those brainiacs who can write different creative text formats, but their knowledge comes from the massive datasets they were trained on. This training data might be a bit outdated or lack specific details for your needs.

This is where RAG comes in. RAG acts like your assistant, fetching relevant information from a custom source, like a company knowledge base. When the LLM needs to answer a question, RAG provides the most up-to-date information to help it generate the best possible response.

To find the most up-to-date information, RAG uses embeddings. Imagine these embeddings (or vectors) as super-condensed summaries that capture the key idea of a piece of text. A high-quality embeddings model, such as Amazon Titan Text Embeddings V2, can create these summaries accurately, like a great assistant who can quickly grasp the important points of each document. This ensures RAG retrieves the most relevant information for the LLM, leading to more accurate and on-point answers.

Think of it like searching a library. Each page of the book is indexed and represented by a vector. With a bad search system, you might end up with a pile of books that aren’t quite what you need. But with a great search system that understands the content (like a high-quality embeddings model), you’ll get exactly what you’re looking for, making the LLM’s job of generating the answer much easier.

Amazon Titan Text Embeddings V2 overview
Amazon Titan Text Embeddings V2 is optimized for high accuracy and retrieval performance at smaller dimensions for reduced storage and latency. We measured that vectors with 512 dimensions maintain approximately 99 percent of the accuracy provided by vectors with 1024 dimensions. Those with 256 dimensions offer 97 percent of the accuracy.

  • Max tokens: 8,192
  • Languages: 100+ in pre-training
  • Fine-tuning supported: No
  • Normalization supported: Yes
  • Vector size: 256, 512, 1,024 (default)

How to use Amazon Titan Text Embeddings V2
It’s very likely you will interact with Amazon Titan Text Embeddings V2 indirectly through Knowledge Bases for Amazon Bedrock. Knowledge Bases takes care of the heavy lifting to create a RAG-based application. However, you can also use the Amazon Bedrock Runtime API to directly invoke the model from your code. Here is a simple example in the Swift programming language (just to show you you can use any programming language, not just Python):

import Foundation
import AWSBedrockRuntime 

let text = "This is the text to transform in a vector"

// create an API client
let client = try BedrockRuntimeClient(region: "us-east-1")

// create the request 
let request = InvokeModelInput(
   accept: "application/json",
   body: """
   {
      "inputText": "\(text)",
      "dimensions": 256,
      "normalize": true
   }
   """.data(using: .utf8), 
   contentType: "application/json",
   modelId: "amazon.titan-embed-text-v2:0")

// send the request
let response = try await client.invokeModel(input: request)

// decode the response body
let output = String(data: response.body!, encoding: .utf8)

print(output ?? "")

The model takes three parameters in its payload:

  • inputText – The text to convert to embeddings.
  • normalize – A flag indicating whether or not to normalize the output embeddings. It defaults to true, which is optimal for RAG use cases.
  • dimensions – The number of dimensions the output embeddings should have. Three values are accepted: 256, 512, and 1024 (the default value).

I added the dependency on the AWS SDK for Swift in my Package.swift. I type swift run to build and run this code. It prints the following output (truncated to keep it brief):

{"embedding":[-0.26757812,0.15332031,-0.015991211...-0.8203125,0.94921875],
"inputTextTokenCount":9}

As usual, do not forget to enable access to the new model in the Amazon Bedrock console before using the API.

Amazon Titan Text Embeddings V2 will soon be the default embeddings model proposed by Knowledge Bases for Amazon Bedrock. Your existing knowledge bases created with the original Amazon Titan Text Embeddings model will continue to work without changes.

To learn more about the Amazon Titan family of models, view the following video:

The new Amazon Titan Text Embeddings V2 model is available today in Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) AWS Regions. Check the full Region list for future updates.

To learn more, check out the Amazon Titan in Amazon Bedrock product page and pricing page. Also, do not miss this blog post to learn how to use Amazon Titan Text Embeddings models. You can also visit our community.aws site to find deep-dive technical content and to discover how our Builder communities are using Amazon Bedrock in their solutions.

Give Amazon Titan Text Embeddings V2 a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

— seb

Introducing Amazon Q data integration in AWS Glue

Post Syndicated from Noritaka Sekiyama original https://aws.amazon.com/blogs/big-data/introducing-amazon-q-data-integration-in-aws-glue/

Today, we’re excited to announce general availability of Amazon Q data integration in AWS Glue. Amazon Q data integration, a new generative AI-powered capability of Amazon Q Developer, enables you to build data integration pipelines using natural language. This reduces the time and effort you need to learn, build, and run data integration jobs using AWS Glue data integration engines.

Tell Amazon Q Developer what you need in English, and it will return a complete job for you. For example, you can ask Amazon Q Developer to generate a complete extract, transform, and load (ETL) script or code snippet for individual ETL operations. You can troubleshoot your jobs by asking Amazon Q Developer to explain errors and propose solutions. Amazon Q Developer provides detailed guidance throughout the entire data integration workflow. It helps you learn and build data integration jobs using AWS Glue efficiently by generating the required AWS Glue code based on your natural language descriptions. You can create jobs that extract, transform, and load data that is stored in Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon DynamoDB. Amazon Q Developer can also help you connect to third-party, software as a service (SaaS), and custom sources.

With general availability, we added new capabilities for you to author jobs using natural language. Amazon Q Developer can now generate complex data integration jobs with multiple sources, destinations, and data transformations. It can generate data integration jobs for extracts and loads to S3 data lakes including file formats like CSV, JSON, and Parquet, and ingestion into open table formats like Apache Hudi, Delta, and Apache Iceberg. It generates jobs for connecting to over 20 data sources, including relational databases like PostgreSQL, MySQL and Oracle; data warehouses like Amazon Redshift, Snowflake, and Google BigQuery; NoSQL databases like DynamoDB, MongoDB and OpenSearch; tables defined in the AWS Glue Data Catalog; and custom user-supplied JDBC and Spark connectors. Generated jobs can use a variety of data transformations, including filter, project, union, join, and custom user-supplied SQL.

Amazon Q data integration in AWS Glue helps you through two different experiences: the Amazon Q chat experience, and AWS Glue Studio notebook experience. This post describes the end-to-end user experiences to demonstrate how Amazon Q data integration in AWS Glue simplifies your data integration and data engineering tasks.

Amazon Q chat experience

Amazon Q Developer provides a conversational Q&A capability and a code generation capability for data integration. To start using the conversational Q&A capability, choose the Amazon Q icon on the right side of the AWS Management Console.

For example, you can ask, “How do I use AWS Glue for my ETL workloads?” and Amazon Q provides concise explanations along with references you can use to follow up on your questions and validate the guidance.

To start using the AWS Glue code generation capability, use the same window. On the AWS Glue console, start authoring a new job, and ask Amazon Q, “Please provide a Glue script that reads from Snowflake, renames the fields, and writes to Redshift.”

You will notice that the code is generated. With this response, you can learn and understand how you can author AWS Glue code for your purpose. You can copy/paste the generated code to the script editor and configure placeholders. After you configure an AWS Identity and Access Management (IAM) role and AWS Glue connections on the job, save and run the job. When the job is complete, you can start querying the table exported from Snowflake in Amazon Redshift.

Let’s try another prompt that reads data from two different sources, filters and projects them individually, combines the two datasets, and writes the output to a third target. Ask Amazon Q: “I want to read data from S3 in Parquet format, and select some fields. I also want to read data from DynamoDB, select some fields, and filter some rows. I want to union these two datasets and write the results to OpenSearch.”

The code is generated. When the job is complete, your index is available in OpenSearch and can be used by your downstream workloads.
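
The generated code isn’t reproduced in this post, but a job for this kind of prompt typically has the shape of the following sketch: two reads, field selection and filtering, a union, and a final write. The bucket, table, and field names here are placeholders, and the OpenSearch write is left as a comment because its options depend on the AWS Glue connection configured for your OpenSearch Service domain.

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.transforms import Filter, SelectFields
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Read Parquet data from S3 and select some fields (placeholder path and field names)
s3_orders = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/orders/"]},
    format="parquet")
s3_selected = SelectFields.apply(frame=s3_orders, paths=["order_id", "customer_id", "amount"])

# Read from DynamoDB, select the same fields, and filter some rows (placeholder table and condition)
ddb_orders = glueContext.create_dynamic_frame.from_options(
    connection_type="dynamodb",
    connection_options={"dynamodb.input.tableName": "orders_archive"})
ddb_selected = SelectFields.apply(frame=ddb_orders, paths=["order_id", "customer_id", "amount"])
ddb_filtered = Filter.apply(frame=ddb_selected, f=lambda row: row["amount"] > 0)

# Union the two datasets by converting to Spark DataFrames and back
union_df = s3_selected.toDF().unionByName(ddb_filtered.toDF())
union_frame = DynamicFrame.fromDF(union_df, glueContext, "union_frame")

# Write union_frame to your OpenSearch index using the AWS Glue connection
# configured for your OpenSearch Service domain (connection options omitted here).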

AWS Glue Studio notebook experience

Amazon Q data integration in AWS Glue helps you author code in an AWS Glue notebook to speed up development of new data integration applications. In this section, we walk you through how to set up the notebook and run a notebook job.

Prerequisites

Before going forward with this tutorial, complete the following prerequisites:

  1. Set up AWS Glue Studio.
  2. Configure an IAM role to interact with Amazon Q. Attach the following policy to your IAM role for the AWS Glue Studio notebook:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CodeWhispererPermissions",
                "Effect": "Allow",
                "Action": [
                    "codewhisperer:GenerateRecommendations"
                ],
                "Resource": "*"
            }
        ]
    }

Create a new AWS Glue Studio notebook job

Create a new AWS Glue Studio notebook job by completing the following steps:

  1. On the AWS Glue console, choose Notebooks under ETL jobs in the navigation pane.
  2. Under Create job, choose Notebook.
  3. For Engine, select Spark (Python).
  4. For Options, select Start fresh.
  5. For IAM role, choose the IAM role you configured as a prerequisite.
  6. Choose Create notebook.

A new notebook is created with sample cells. Let’s try recommendations using Amazon Q data integration in AWS Glue to auto-generate code based on your intent. Amazon Q will help you with each step as you express an intent in a notebook cell.

Add a new cell and enter your comment to describe what you want to achieve. After you press Tab and Enter, the recommended code is shown. The first intent is to extract the data: “Give me code that reads a Glue Data Catalog table”, followed by “Give me code to apply a filter transform with star_rating>3” and “Give me code that writes the frame into S3 as Parquet”.

Similar to the Amazon Q chat experience, the code is recommended. If you press Tab, then the recommended code is chosen. You can learn more in User actions.

You can run each cell by filling in the appropriate options for your sources in the generated code. At any point in the runs, you can also preview a sample of your dataset by using the show() method.
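
The exact code Amazon Q recommends can vary, but a script assembled from these three intents typically resembles the following sketch. The database, table, and bucket names here are placeholders for illustration, not values from this post.

from awsglue.context import GlueContext
from awsglue.transforms import Filter
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Intent 1: read a Glue Data Catalog table (placeholder database and table names)
reviews = glueContext.create_dynamic_frame.from_catalog(
    database="reviews_db", table_name="product_reviews")

# Intent 2: apply a filter transform with star_rating > 3
filtered = Filter.apply(frame=reviews, f=lambda row: row["star_rating"] > 3)

# Intent 3: write the frame into S3 as Parquet (placeholder bucket and prefix)
glueContext.write_dynamic_frame.from_options(
    frame=filtered,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/filtered-reviews/"},
    format="parquet")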

Let’s now try to generate a full script with a single complex prompt: “I have JSON data in S3 and data in Oracle that needs combining. Please provide a Glue script that reads from both sources, does a join, and then writes results to Redshift.”

You may notice that, on the notebook, the Amazon Q data integration in AWS Glue generated the same code snippet that was generated in the Amazon Q chat.

You can also run the notebook as a job, either by choosing Run or programmatically.

Conclusion

With Amazon Q data integration, you have an artificial intelligence (AI) expert by your side to integrate data efficiently without deep data engineering expertise. These capabilities simplify and accelerate data processing and integration on AWS. Amazon Q data integration in AWS Glue is available in every AWS Region where Amazon Q is available. To learn more, visit the product page, our documentation, and the Amazon Q pricing page.

A special thanks to everyone who contributed to the launch of Amazon Q data integration in AWS Glue: Alexandra Tello, Divya Gaitonde, Andrew Kim, Andrew King, Anshul Sharma, Anshi Shrivastava, Chuhan Liu, Daniel Obi, Hirva Patel, Henry Caballero Corzo, Jake Zych, Jeremy Samuel, Jessica Cheng, Keerthi Chadalavada, Layth Yassin, Maheedhar Reddy Chappidi, Maya Patwardhan, Neil Gupta, Raghavendhar Vidyasagar Thiruvoipadi, Rajendra Gujja, Rupak Ravi, Shaoying Dong, Vaibhav Naik, Wei Tang, William Jones, Daiyan Alamgir, Japson Jeyasekaran, Matt Sampson, Kartik Panjabi, Ranu Shah, Chuan Lei, Huzefa Rangwala, Jiani Zhang, Xiao Qin, Mukul Prasad, Alon Halevy, Brian Ross, Alona Nadler, Omer Zaki, Rick Sears, Bratin Saha, G2 Krishnamoorthy, Kinshuk Pahare, Nitin Bahadur, and Santosh Chandrachood.


About the Authors

Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. He is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling with his road bike.


Matt Su is a Senior Product Manager on the AWS Glue team. He enjoys helping customers uncover insights and make better decisions using their data with AWS Analytics services. In his spare time, he enjoys skiing and gardening.

Vishal Kajjam is a Software Development Engineer on the AWS Glue team. He is passionate about distributed computing and using ML/AI for designing and building end-to-end solutions to address customers’ data integration needs. In his spare time, he enjoys spending time with family and friends.


Bo Li is a Senior Software Development Engineer on the AWS Glue team. He is devoted to designing and building end-to-end solutions to address customers’ data analytic and processing needs with cloud-based, data-intensive technologies.


XiaoRun Yu is a Software Development Engineer on the AWS Glue team. He is working on building new features for AWS Glue to help customers. Outside of work, Xiaorun enjoys exploring new places in the Bay Area.


Savio Dsouza is a Software Development Manager on the AWS Glue team. His team works on distributed systems & new interfaces for data integration and efficiently managing data lakes on AWS.


Mohit Saxena is a Senior Software Development Manager on the AWS Glue team. His team focuses on building distributed systems to enable customers with interactive and simple-to-use interfaces to efficiently manage and transform petabytes of data across data lakes on Amazon S3, and databases and data warehouses on the cloud.

Amazon Q Business, now generally available, helps boost workforce productivity with generative AI

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-q-business-now-generally-available-helps-boost-workforce-productivity-with-generative-ai/

At AWS re:Invent 2023, we previewed Amazon Q Business, a generative artificial intelligence (generative AI)–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems.

With Amazon Q Business, you can deploy a secure, private, generative AI assistant that empowers your organization’s users to be more creative, data-driven, efficient, prepared, and productive. During the preview, we heard lots of customer feedback and used that feedback to prioritize our enhancements to the service.

Today, we are announcing the general availability of Amazon Q Business with many new features, including custom plugins, and a preview of Amazon Q Apps, which lets your organization create generative AI–powered, customized, and shareable applications from natural language in a single step.

In this blog post, I will briefly introduce the key features of Amazon Q Business with the new features now available and take a look at the features of Amazon Q Apps. Let’s get started!

Introducing Amazon Q Business
Amazon Q Business connects seamlessly to over 40 popular enterprise data sources, including Amazon Simple Storage Service (Amazon S3), Microsoft 365, and Salesforce, and stores document and permission information. It ensures that you access content securely with existing credentials using single sign-on, according to your permissions, and also includes enterprise-level access controls.

Amazon Q Business makes it easy for users to get answers to questions about things like company policies, products, business results, or code, using its web-based chat assistant. You can point Amazon Q Business at your enterprise data repositories, and it’ll search across all your data, summarize logically, analyze trends, and engage in dialog with users.

With Amazon Q Business, you can build secure and private generative AI assistants with enterprise-grade access controls at scale. You can also use administrative guardrails, document enrichment, and relevance tuning to customize and control responses that are consistent with your company’s guidelines.

Here are the key features of Amazon Q Business with new features now available:

End-user web experience
With the built-in web experience, you can ask a question, receive a response, and then ask follow-up questions and add new information with in-text source citations while keeping the context from the previous answer. You can only get a response from data sources that you have access to.

With general availability, we’re introducing a new content creation mode in the web experience. In this mode, Amazon Q Business does not use or access the enterprise content but instead uses generative AI models built into Amazon Q Business for creative use cases such as summarization of responses and crafting personalized emails. To use the content creation mode, you can turn off Respond from approved sources in the conversation settings.

To learn more, visit Using an Amazon Q Business web experience and Customizing an Amazon Q Business web experience in the AWS documentation.

Pre-built data connectors and plugins
You can connect, index, and sync your enterprise data using over 40 pre-built data connectors or an Amazon Kendra retriever, as well as web crawling or uploading your documents directly.

Amazon Q Business ingests content using a built-in semantic document retriever. It also retrieves and respects permission information, such as access control lists (ACLs), so it can manage access to the data after retrieval. When the data is ingested, it is secured with a service-managed AWS Key Management Service (AWS KMS) key.

You can configure plugins to perform actions in enterprise systems, including Jira, Salesforce, ServiceNow, and Zendesk. Users can create a Jira issue or a Salesforce case while chatting in the chat assistant. You can also deploy a Microsoft Teams gateway or a Slack gateway to use an Amazon Q Business assistant in your teams or channels.

With general availability, you can build custom plugins to connect to any third-party application through APIs so that users can use natural language prompts to perform actions such as submitting time-off requests or sending meeting invites directly through Amazon Q Business assistant. Users can also search real-time data, such as time-off balances, scheduled meetings, and more.

When you choose Custom plugin, you can define an OpenAPI schema to connect your third-party application. You can upload the OpenAPI schema to Amazon S3 or paste it into the in-line schema editor in the Amazon Q Business console, which is compatible with the Swagger OpenAPI specification.

To learn more, visit Data source connectors and Configure plugins in the AWS documentation.

Admin control and guardrails
You can configure global controls to give users the option to either generate large language model (LLM)-only responses or generate responses from connected data sources. You can specify whether all chat responses will be generated using only enterprise data or whether your application can also use its underlying LLM to generate responses when it can’t find answers in your enterprise data. You can also block specific words.

With topic-level controls, you can specify restricted topics and configure behavior rules in response to the topics, such as answering using enterprise data or blocking completely.

To learn more, visit Admin control and guardrails in the AWS documentation.

You can alter document metadata, attributes, and content during the document ingestion process by configuring basic logic that specifies a metadata field name, a condition, a value, and a target action, such as update or delete. You can also use AWS Lambda functions to manipulate document fields and content, such as using optical character recognition (OCR) to extract text from images.

To learn more, visit Document attributes and types in Amazon Q Business and Document enrichment in Amazon Q Business in the AWS documentation.

Enhanced enterprise-grade security and management
Starting April 30, you will need to use AWS IAM Identity Center for user identity management of all new applications rather than using the legacy identity management. You can securely connect your workforce to Amazon Q Business applications either in the web experience or your own interface.

You can also centrally manage workforce access using IAM Identity Center alongside your existing IAM roles and policies. As the number of your accounts scales, IAM Identity Center gives you the option to use it as a single place to manage user access to all your applications. To learn more, visit Setting up Amazon Q Business with IAM Identity Center in the AWS documentation.

At general availability, Amazon Q Business is integrated with various AWS services so you can securely connect to and store your data, deploy the service easily, and track access logs.

You can use AWS PrivateLink to access Amazon Q Business securely in your Amazon Virtual Private Cloud (Amazon VPC) environment using a VPC endpoint. You can use the Amazon Q Business template for AWS CloudFormation to easily automate the creation and provisioning of infrastructure resources. You can also use AWS CloudTrail to record actions taken by a user, role, or AWS service in Amazon Q Business.

Also, we support Federal Information Processing Standards (FIPS) endpoints, based on the United States and Canadian government standards and security requirements for cryptographic modules that protect sensitive information.

To learn more, visit Security in Amazon Q Business and Monitoring Amazon Q Business in the AWS documentation.

Build and share apps with new Amazon Q Apps (preview)
Today we are announcing the preview of Amazon Q Apps, a new capability within Amazon Q Business for your organization’s users to easily and quickly create generative AI-powered apps based on company data, without requiring any prior coding experience.

With Amazon Q Apps, users simply describe the app they want in natural language, or they can take an existing conversation where Amazon Q Business helped them solve a problem. With a few clicks, Amazon Q Business will instantly generate an app that accomplishes the desired task and can be easily shared across the organization.

If you are familiar with PartyRock, you can easily use this code-free builder, with the added benefit that it is already connected to your enterprise data through Amazon Q Business.

To create a new Amazon Q App, choose Apps in your web experience and enter a simple text expression for a task in the input box. You can try out samples, such as a content creator, interview question generator, meeting note summarizer, and grammar checker.

I will make a document assistant to review and correct a document using the following prompt:

You are a professional editor tasked with reviewing and correcting a document for grammatical errors, spelling mistakes, and inconsistencies in style and tone. Given a file, your goal is to recommend changes to ensure that the document adheres to the highest standards of writing while preserving the author’s original intent and meaning. You should provide a numbered list for all suggested revisions and the supporting reason.

When you choose the Generate button, a document editing assistant app is automatically generated with two cards: a file upload card to provide the document as input, and a text output card that gives edit suggestions.

When you choose the Add card button, you can add more cards, such as a user input, text output, file upload, or pre-configured plugin by your administrator. If you want to create a Jira ticket to request publishing a post in the corporate blog channel as an author, you can add a Jira Plugin with the result of edited suggestions from the uploaded file.

Once you are ready to share the app, choose the Publish button. You can securely share this app to your organization’s catalog for others to use, enhancing productivity. Your colleagues can choose shared apps, modify them, and publish their own versions to the organizational catalog instead of starting from scratch.

Choose Library to see all of the published Amazon Q Apps. You can search the catalog by labels and open your favorite apps.

Amazon Q Apps inherit robust security and governance controls from Amazon Q Business, including user authentication and access controls, which empower organizations to safely share apps across functions that warrant governed collaboration and innovation.

In the administrator console, you can see your Amazon Q Apps and control or remove them from the library.

To learn more, visit Amazon Q Apps in the AWS documentation.

Now available
Amazon Q Business is generally available today in the US East (N. Virginia) and US West (Oregon) Regions. We are launching two pricing subscription options.

The Amazon Q Business Lite ($3/user/month) subscription provides users access to the basic functionality of Amazon Q Business.

The Amazon Q Business Pro ($20/user/month) subscription gives users access to all features of Amazon Q Business, as well as Amazon Q Apps (preview) and Amazon Q in QuickSight (Reader Pro), which enhances business analyst and business user productivity using generative business intelligence capabilities.

You can use the free trial (50 users for 60 days) to experiment with Amazon Q Business. For more information about pricing options, visit the Amazon Q Business Plan page.

To learn more about Amazon Q Business, you can take Amazon Q Business Getting Started, a free, self-paced digital course on AWS Skill Builder, and visit the Amazon Q Developer Center for more sample code.

Give it a try in the Amazon Q Business console today! For more information, visit the Amazon Q Business product page and the User Guide in the AWS documentation. Provide feedback to AWS re:Post for Amazon Q or through your usual AWS support contacts.

Channy

Amazon Q Developer, now generally available, includes new capabilities to reimagine developer experience

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/amazon-q-developer-now-generally-available-includes-new-capabilities-to-reimagine-developer-experience/

When Amazon Web Services (AWS) launched Amazon Q Developer as a preview last year, it changed my experience of interacting with AWS services and, at the same time, maximizing the potential of AWS services on a daily basis. Trained on 17 years of AWS knowledge and experience, this generative artificial intelligence (generative AI)–powered assistant helps me build applications on AWS, research best practices, perform troubleshooting, and resolve errors.

Today, we are announcing the general availability of Amazon Q Developer. In this announcement, we have a few updates, including new capabilities. Let’s get started.

New: Amazon Q Developer has knowledge of your AWS account resources
This new capability helps you understand and manage your cloud infrastructure on AWS. With this capability, you can list and describe your AWS resources using natural language prompts, minimizing friction in navigating the AWS Management Console and compiling all information from documentation pages.

To get started, you can navigate to the AWS Management Console and select the Amazon Q Developer icon.

With this new capability, I can ask Amazon Q Developer to list all of my AWS resources. For example, if I ask Amazon Q Developer, “List all of my Lambda functions,” Amazon Q Developer returns the response with a set of my AWS Lambda functions as requested, as well as deep links so I can navigate to each resource easily.

Prompt for you to try: List all of my Lambda functions.

I can also list my resources residing in other AWS Regions without having to navigate through the AWS Management Console.

Prompt for you to try: List my Lambda functions in the Singapore Region.

Not only that, this capability can also generate AWS Command Line Interface (AWS CLI) commands so I can make changes immediately. Here, I ask Amazon Q Developer to change the timeout configuration for my Lambda function.

Prompt for you to try: Change the timeout for Lambda function <NAME of AWS LAMBDA FUNCTION> in the Singapore Region to 10 seconds.

I can see Amazon Q Developer generated an AWS CLI command for me to perform the action. Next, I can copy and paste the command into my terminal to perform the change.

$> aws lambda update-function-configuration --function-name <AWS_LAMBDA_FUNCTION_NAME> --region ap-southeast-1 --timeout 10
{
    "FunctionName": "<AWS_LAMBDA_FUNCTION_NAME>",
    "FunctionArn": "arn:aws:lambda:ap-southeast-1:<ACCOUNT_ID>:function:<AWS_LAMBDA_FUNCTION_NAME>",
    "Runtime": "python3.8",
    "Role": "arn:aws:iam::<ACCOUNT_ID>:role/service-role/-role-1o58f7qb",
    "Handler": "lambda_function.lambda_handler",
    "CodeSize": 399,
    "Description": "",
    "Timeout": 10,
...
<truncated for brevity> }

What I really like about this capability is that it minimizes the time and effort needed to get my account information in the AWS Management Console and generate AWS CLI commands so I can immediately implement any changes that I need. This helps me focus on my workflow to manage my AWS resources.

Amazon Q Developer can now help you understand your costs (preview)
To maximize the value of my cloud spend, I need to have a thorough understanding of my cloud costs. With this capability, I can get answers to AWS cost-related questions using natural language. This capability works by retrieving and analyzing cost data from AWS Cost Explorer.

Recently, I’ve been building a generative AI demo using Amazon SageMaker JumpStart, and this came at the right time because I need to know the total spend. So, I ask Amazon Q Developer the following prompt to find out my spend for Q1 this year.

Prompt for you to try: What were the top three highest-cost services in Q1?

From the Amazon Q response, I can further investigate this result by selecting the Cost Explorer URL, which will bring me to the AWS Cost Explorer dashboard. Then, I can follow up with this prompt:

Prompt for you to try: List services in my account which have the most increment month over month. Provide details and analysis.

In short, this capability makes it easier for me to develop a deep understanding and get valuable insights into my cloud spending.
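
Because the answers are based on AWS Cost Explorer data, the same question maps to a query you could run yourself with the AWS SDK for Python (Boto3). The following is a minimal sketch that retrieves quarterly costs grouped by service and prints the top three; the date range is a placeholder.

import boto3

ce = boto3.client("ce")

# Placeholder date range for the quarter in question
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Sum the cost per service across the quarter and print the top three
totals = {}
for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        totals[service] = totals.get(service, 0.0) + amount

for service, amount in sorted(totals.items(), key=lambda item: item[1], reverse=True)[:3]:
    print(f"{service}: ${amount:,.2f}")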

Amazon Q extension for IDEs
As part of the update, we also released an Amazon Q integrated development environment (IDE) extension for Visual Studio Code and JetBrains IDEs. Now, you will see two extensions in the IDE marketplaces: (1) Amazon Q and (2) AWS Toolkit.

If you’re a new user, after installing the Amazon Q extension, you will see a sign-in page in the IDE with two options: using AWS Builder ID or single sign-on. You can continue to use Amazon Q normally.

For existing users, you will need to update the AWS Toolkit extension in your IDEs. Once you’ve finished the update, if you have existing Amazon Q and Amazon CodeWhisperer connections, even if they’re expired, the new Amazon Q extension will be automatically installed for you.

If you’re using Visual Studio 2022, you can use Amazon Q Developer as part of the AWS Toolkit for Visual Studio 2022 extension.

Free access for advanced capabilities in IDE
As you might know, you can use AWS Builder ID to start using Amazon Q Developer in your preferred IDEs. Now, with this announcement, you have free access to two existing advanced capabilities of Amazon Q Developer in the IDE: the Amazon Q Developer Agent for software development and the Amazon Q Developer Agent for code transformation. I’m really excited about this update!

With the Amazon Q Developer Agent for software development, Amazon Q Developer can help you develop code features for projects in your IDE. To get started, you enter /dev in the Amazon Q Developer chat panel. My colleague Séb shared with me the following screenshot when he was using this capability for his support case project. He used the following prompt to generate an implementation plan for creating a new API in AWS Lambda:

Prompt for you to try: Add an API to list all support cases. Expose this API as a new Lambda function

Amazon Q Developer then provides an initial plan, and you can keep iterating on the plan until you’re confident it covers everything. Then, you can accept the plan and select Insert code.

The other capability you can access using AWS Builder ID is the Amazon Q Developer Agent for code transformation. This capability helps you upgrade your Java applications in IntelliJ or Visual Studio Code. Danilo described this capability last year, and you can see his thorough journey in Upgrade your Java applications with Amazon Q Code Transformation (preview).

Improvements in Amazon Q Developer Agent for Code Transformation
The new transformation plan provides details specific to my applications to help me understand the overall upgrade process. To get started, I enter /transform in the Amazon Q Developer chat and provide the necessary details for Amazon Q to start upgrading my Java project.

In the first step, Amazon Q identifies and provides details on the Java Development Kit (JDK) version, dependencies, and related code that needs to be updated. The dependency upgrades now include upgrading popular frameworks to their latest major versions. For example, if you’re building with Spring Boot, it now gets upgraded to version 3 as part of the Java 17 upgrade.

In the second step, if Amazon Q identifies any deprecated code that Java language specifications recommend replacing, it will make those updates automatically during the upgrade. This is a new enhancement to Amazon Q capabilities and is available now.

In the third step, this capability will build and run unit tests on the upgraded code, including fixing any issues to ensure the code compilation process will run smoothly after the upgrade.

With this capability, you can upgrade Java 8 and 11 applications that are built using Apache Maven to Java version 17. To get started with the Amazon Q Developer Agent for code transformation capability, you can read and follow the steps at Upgrade language versions with Amazon Q Code Transformation. We also have sample code for you to try this capability.

Things to know

  • Availability — To learn more about the availability of Amazon Q Developer capabilities, please visit the Amazon Q Developer FAQs page.
  • Pricing — Amazon Q Developer now offers two pricing tiers: Free, and Pro at $19/month/user.
  • Free self-paced course on AWS Skill Builder — Amazon Q Introduction is a 15-minute course that provides a high-level overview of Amazon Q, a generative AI–powered assistant, and the use cases and benefits of using it. This course is part of Amazon’s AI Ready initiative to provide free AI skills training to 2 million people globally by 2025.

Visit our Amazon Q Developer Center to find deep-dive technical content and to discover how you can speed up your software development work.

Happy building,
Donnie

Run scalable, enterprise-grade generative AI workloads with Cohere Command R & R+, now available in Amazon Bedrock

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/run-scalable-enterprise-grade-generative-ai-workloads-with-cohere-r-r-now-available-in-amazon-bedrock/

In November 2023, we made two new Cohere models available in Amazon Bedrock (Cohere Command Light and Cohere Embed English). Today, we’re announcing the addition of two more Cohere models in Amazon Bedrock: Cohere Command R and Command R+.

Organizations need generative artificial intelligence (generative AI) models to securely interact with information stored in their enterprise data sources. Both Command R and Command R+ are powerful, scalable large language models (LLMs), purpose-built for real-world, enterprise-grade workloads. These models are multilingual and focus on balancing high efficiency with strong accuracy to excel at capabilities such as Retrieval-Augmented Generation (RAG) and tool use, enabling enterprises to move beyond proof of concept (POC) and into production using artificial intelligence (AI).

Command R is a scalable multilingual generative model targeting RAG and tool use to enable production-scale AI for enterprises. Command R+ is a state-of-the-art RAG-optimized model designed to tackle enterprise-grade workloads and optimize business AI applications. Command R+ is optimized for advanced RAG to provide enterprise-ready, highly reliable, and verifiable responses, thanks to in-line citations that come standard with this model. With these new Cohere models in Amazon Bedrock, you can scale with AI to quickly find the most relevant information to support tasks across business functions like finance, human resources (HR), sales, marketing, and customer support, among others, in a range of business sectors. Tool use is also available in Command R+. Command R+ is a strong multilingual model and, like Command R, features a tokenizer that compresses non-English text much better than the tokenizers used by other models in the market.

Getting started with Command R and Command R+
To get started with both models in Amazon Bedrock, you first need to get access to the models. In the Amazon Bedrock console, choose Model access, and then choose Manage model access. Next, choose your preferred model(s), and then choose Save changes. As you can see, you now have six Cohere models to choose from in Amazon Bedrock – including Command R and Command R+ – giving you greater choice and flexibility to use the optimal models for your specific business needs.

Once you have access to your preferred model, you can use the model in Amazon Bedrock. Refresh the base models table to view the updated status.

The models have been trained to respond in the language of the user such as English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Here’s an example:

Prompt

<s>"Écris une description de produit pour une voiture électrique en 50 à 75 mots"

Output

Découvrez la voiture électrique qui va révolutionner votre façon de conduire.
Avec son design élégant, cette voiture offre une expérience de conduite unique
avec une accélération puissante et une autonomie impressionnante. Sa
technologie avancée vous garantit une charge rapide et une fiabilité inégalée.
Avec sa conception innovante et durable, cette voiture est parfaite pour les 
trajets urbains et les longues distances. Profitez d'une conduite silencieuse
et vivez l'expérience de la voiture électrique!

Programmatically interact with Command R and Command R+
You can also use the AWS Command Line Interface (AWS CLI) and AWS SDKs to make various calls using the Amazon Bedrock APIs. The following is sample code in Python that interacts with the Amazon Bedrock Runtime API using the AWS SDK. Taking the same text generation prompt I used earlier, here is how it looks when used programmatically. In this example, I’m interacting with the Command R model. I first run the ListFoundationModels API call to discover the modelId for Command R.

import boto3
import json

bedrock = boto3.client(service_name='bedrock', region_name='us-east-1')

listModels = bedrock.list_foundation_models(byProvider='cohere')
print("\n".join(list(map(lambda x: f"{x['modelName']} : { x['modelId'] }", listModels['modelSummaries']))))

Running this code gives the list:

Command : cohere.command-text-v14
Command Light : cohere.command-light-text-v14
Embed English : cohere.embed-english-v3
Embed Multilingual : cohere.embed-multilingual-v3
Command R: cohere.command-r-v1:0
Command R+: cohere.command-r-plus-v1:0

From this list, I select the cohere.command-r-v1:0 model ID and write the code to generate the text shown earlier in this post.

import boto3
import json

bedrock = boto3.client(service_name="bedrock-runtime", region_name='us-east-1')

prompt = """
<s>Écris une description de produit pour une voiture électrique en 50 à 75 mots

body = json.dumps({
    "prompt": prompt,
    "max_tokens": 512,
    "top_p": 0.8,
    "temperature": 0.5,
})

modelId = "cohere.command-r-v1:0"

accept = "application/json"
contentType = "application/json"

response = bedrock.invoke_model(
    body=body,
    modelId=modelId,
    accept=accept,
    contentType=contentType
)

print(json.loads(response.get('body').read()))

You get JSON-formatted output like the following:

Découvrez la voiture électrique qui va révolutionner votre façon de conduire.
Avec son design élégant, cette voiture offre une expérience de conduite unique
avec une accélération puissante et une autonomie impressionnante. Sa
technologie avancée vous garantit une charge rapide et une fiabilité inégalée.
Avec sa conception innovante et durable, cette voiture est parfaite pour les 
trajets urbains et les longues distances. Profitez d'une conduite silencieuse
et vivez l'expérience de la voiture électrique!

Now Available

Command R and Command R+ models, along with other Cohere models, are available today in Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) Regions; check the full Region list for future updates.

Visit our community.aws site to find deep-dive technical content and to discover how our Builder communities are using Amazon Bedrock in their solutions. Give Command R and Command R+ a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

– Veliswa.