All posts by Irshad Buchh

New AWS Region in Mexico is in the works

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/new-aws-region-in-mexico-is-in-the-works/

Today, I am happy to announce that we are working on an AWS Region in Mexico. This AWS Mexico (Central) Region will be the second Region in Latin America, joining the AWS South America (São Paulo) Region, and will give AWS customers the ability to run workloads and store data that must remain in-country.

Mexico in the works

The Region will include three Availability Zones, each one physically independent of the others in the Region yet far enough apart to minimize the risk that an event in one Availability Zone will impact business continuity. The Availability Zones will be connected to each other by high-bandwidth, low-latency network connections over dedicated, fully redundant fiber.

With this announcement, AWS now has five new Regions in the works (Germany, Malaysia, Mexico, New Zealand, and Thailand) and 15 upcoming new Availability Zones.

AWS investment in Mexico

The upcoming AWS Mexico Region is the latest in ongoing investments by AWS in Mexico to provide customers with advanced and secure cloud technologies. Since 2020, AWS has launched seven Amazon CloudFront edge locations in Mexico. Amazon CloudFront is a highly secure and programmable content delivery network (CDN) that accelerates the delivery of data, videos, applications, and APIs to users worldwide with low latency and high transfer speeds.

In 2020, AWS launched AWS Outposts in Mexico. AWS Outposts is a family of fully managed solutions delivering AWS infrastructure and services to virtually any on-premises or edge location for a truly consistent hybrid experience. AWS expanded its infrastructure footprint in Mexico again in 2023 with the launch of AWS Local Zones in Queretaro. AWS Local Zones are a type of AWS infrastructure deployment that places compute, storage, database, and other select services closer to large population, industry, and IT centers, enabling customers to deliver applications that require single-digit millisecond latency to end users. In 2023, AWS established an AWS Direct Connect location in Queretaro, allowing customers to establish private connectivity between AWS and their data center, office, or colocation environment.

Here is a glimpse into our customers in Mexico and the exciting, innovative work they’re undertaking:

Banco Santander Mexico is one of the leading financial groups in the country, focused on commercial banking and securities financing, serving more than 20.5 million customers. “AWS has been a strategic partner for our digital transformation,” said Juan Pablo Chiappari, head of IT Infrastructure for North America. “Thanks to their wide range of services, we have been able to innovate faster, improve our customer experience and reduce our operating costs.”

SkyAlert is an innovative technology company that quickly alerts millions of people living in earthquake-prone areas, promoting a culture of prevention against natural disasters. In order to provide customers—both businesses and individuals—with the right tools to protect themselves during earthquakes, SkyAlert migrated its infrastructure to AWS. After implementing its Internet of Things (IoT) solution to run on AWS and its efficient alert service, SkyAlert scales quickly and can send millions of messages in a few seconds, helping to save lives in the event of earthquakes.

Kueski is an online lender for the middle class of Mexico and Latin America. The company uses big data and advanced analytics to approve and deliver loans in a matter of minutes. The company has become the fastest-growing platform of its kind in the region and has already granted thousands of loans. Kueski was born on AWS.

Bolsa Institucional de Valores (BIVA) is a stock exchange based in Mexico, backed by Nasdaq. BIVA provides local and global investors with cutting-edge technology for trading and market solutions and companies with listing and maintenance services. As part of its vision of innovation, BIVA started its journey to the cloud in 2023 by migrating its disaster recovery site, including its trading and market surveillance systems, to AWS, using the edge compute capabilities available in both AWS Local Zones in Queretaro, Mexico, to meet its low-latency needs.

Stay Tuned
The AWS Region in Mexico will open in early 2025. As usual, subscribe to this blog so that you will be among the first to know when the new Region is open!

To learn more about AWS Global Cloud Infrastructure, see the Global Infrastructure page.

— Irshad

AWS Weekly Roundup — AWS Control Tower new API, TLS 1.3 with API Gateway, Private Marketplace Catalogs, and more — February 19, 2024

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-control-tower-new-api-tls-1-3-with-api-gateway-private-marketplace-catalogs-and-more-february-19-2024/

Over the past week, our service teams have continued to innovate on your behalf, and a lot has happened in the Amazon Web Services (AWS) universe that I want to tell you about. I’ll also share news about the AWS Community events and initiatives that are happening around the world.

Let’s dive in!

Last week’s launches
Here are some launches that got my attention during the previous week.

AWS Control Tower introduces APIs to register organizational units – With these new APIs, you can extend governance to organizational units (OUs) using APIs and automate your OU provisioning workflow. The APIs can also be used for OUs that are already under AWS Control Tower governance to re-register OUs after landing zone updates. These APIs include AWS CloudFormation support, allowing customers to manage their OUs with infrastructure as code (IaC).
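
For teams that want to script this, here is a minimal sketch using boto3. It assumes the Baseline APIs from this launch, where registering an OU corresponds to enabling the AWSControlTowerBaseline on it; the ARNs and the baseline version are placeholders to verify with list_baselines and your AWS Organizations setup:

```python
import boto3

# Placeholder ARNs - look these up with ct.list_baselines() and in AWS Organizations.
BASELINE_ARN = "arn:aws:controltower:us-east-1::baseline/EXAMPLE"  # AWSControlTowerBaseline
OU_ARN = "arn:aws:organizations::111122223333:ou/o-example/ou-exampleid"

ct = boto3.client("controltower", region_name="us-east-1")

# Registering an OU is expressed as enabling the Control Tower baseline on that OU.
resp = ct.enable_baseline(
    baselineIdentifier=BASELINE_ARN,
    baselineVersion="4.0",          # assumed version; check your landing zone docs
    targetIdentifier=OU_ARN,
)
print(resp["operationIdentifier"])  # poll progress with ct.get_baseline_operation()
```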

API Gateway now supports TLS 1.3 – By using TLS 1.3 with API Gateway as the centralized point of control, developers can secure communication between the client and the gateway; uphold the confidentiality, integrity, and authenticity of their API traffic; and benefit from API Gateway’s integration with AWS Certificate Manager (ACM) for centralized deployment of SSL certificates using TLS.
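
One quick way to verify the negotiated protocol from a client, using only the Python standard library; the execute-api hostname below is a placeholder for your own API endpoint:

```python
import socket
import ssl

host = "abc123.execute-api.us-east-1.amazonaws.com"  # placeholder endpoint

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older than TLS 1.3

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # prints 'TLSv1.3' when the handshake succeeds
```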

Amazon OpenSearch Service now lets you update cluster volume without blue/green – Blue/green deployments are meant to avoid disruption to your clusters, but because a deployment uses additional resources on the domain, it is recommended that you perform one during low-traffic periods. Now, you can update volume-related cluster configuration without requiring a blue/green deployment, ensuring minimal performance impact on your online traffic and avoiding any potential disruption to your cluster operations.
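
Here is a hedged sketch of what that looks like with boto3; the domain name and volume settings are placeholders, and the Verbose dry run reports whether the change would trigger a blue/green deployment before you commit to it:

```python
import boto3

opensearch = boto3.client("opensearch")

change = {"EBSEnabled": True, "VolumeType": "gp3", "VolumeSize": 200}

# Dry run first: Verbose mode reports the deployment type the change would need.
dry = opensearch.update_domain_config(
    DomainName="my-domain",  # placeholder
    EBSOptions=change,
    DryRun=True,
    DryRunMode="Verbose",
)
print(dry["DryRunResults"])  # deployment type plus an explanatory message

# Apply the change for real once the dry run confirms no blue/green is needed.
opensearch.update_domain_config(DomainName="my-domain", EBSOptions=change)
```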

Amazon GuardDuty Runtime Monitoring protects clusters running in shared VPC – With this launch, customers who are already opted into automated agent management in GuardDuty will benefit from a renewed 30-day trial of GuardDuty Runtime Monitoring, where we will automatically start monitoring the resources (clusters) deployed in a shared VPC setup. Customers also have the option to manually manage the agent and provision the virtual private cloud (VPC) endpoint in their shared VPC environment.
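
If you manage detectors programmatically rather than through the console, a sketch along these lines enables Runtime Monitoring with automated agent management; the feature and configuration names are assumptions based on the GuardDuty API, so verify them against the current documentation:

```python
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

guardduty.update_detector(
    DetectorId=detector_id,
    Features=[
        {
            "Name": "RUNTIME_MONITORING",
            "Status": "ENABLED",
            "AdditionalConfiguration": [
                # Let GuardDuty manage the security agent for EKS clusters,
                # including those deployed in a shared VPC.
                {"Name": "EKS_ADDON_MANAGEMENT", "Status": "ENABLED"},
            ],
        }
    ],
)
```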

AWS Marketplace now supports managing Private Marketplace catalogs for OUs – This capability supports distinct product catalogs per business unit or development environment, empowering organizations to align software procurement with specific needs. Additionally, customers can designate a trusted member account as a delegated administrator for Private Marketplace administration, reducing the operational burden on management account administrators. With this launch, organizations can procure more quickly by providing administrators with the agile controls they need to scale their procurement governance across distinct business and user needs.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news

Join AWS Cloud Clubs Captains – The C3 cohort of AWS Cloud Club Captains is open for applications from February 5–23, 2024, at 5:00 PM EST.

AWS open source news and updates – Our colleague Ricardo writes this weekly open source newsletter highlighting new open source projects, tools, and demos from the AWS Community.

Upcoming AWS events

Check your calendars and sign up for upcoming AWS events:

Building with Generative AI on AWS using PartyRock, Amazon Bedrock and Amazon Q – You will gain skills in prompt engineering and using the Amazon Bedrock API. We will also explore how to “chat with your documents” through knowledge bases, Retrieval Augmented Generation (RAG), embeddings, and agents. We will also use next-generation developer tools Amazon Q and Amazon CodeWhisperer to assist in coding and debugging.

Location: AWS Skills Center, 1550-G Crystal Drive, Arlington, VA

AI/ML security – Artificial intelligence and machine learning (AI/ML), and especially generative AI, have become top of mind for many organizations, but even the companies that want to move forward with this new and transformative technology are hesitating. They don’t necessarily understand how to ensure that what they build will be secure. This webinar explains how they can do that.

AWS Jam Session – Canada Edition – AWS JAM is a gamified learning platform where you come to play, learn, and validate your AWS skills. The morning will include a mix of challenges across various technical domains – security, serverless, AI/ML, analytics, and more. The afternoon will be focused on a different specialty domain each month. You can form teams of up to four people to solve the challenges. There will be prizes for the top three winning teams.

Whether you’re in the Americas, Asia Pacific and Japan, or the EMEA region, there’s an upcoming AWS Innovate Online event that fits your time zone. Innovate Online events are free, online, and designed to inspire and educate you about AWS.

AWS Summits are a series of free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. These events are designed to educate you about AWS products and services and help you develop the skills needed to build, deploy, and operate your infrastructure and applications. Find an AWS Summit near you and register or set a notification to know when registration opens for a Summit that interests you.

AWS Community re:Invent re:Caps – Join a Community re:Cap event organized by volunteers from AWS User Groups and AWS Cloud Clubs around the world to learn about the latest announcements from AWS re:Invent.

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Irshad

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

New chat experience for AWS Glue using natural language – Amazon Q data integration in AWS Glue (Preview)

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/new-chat-experience-for-aws-glue-using-natural-language-amazon-q-data-integration-in-aws-glue-preview/

Today, we’re previewing a new chat experience for AWS Glue that will let you use natural language to author and troubleshoot data integration jobs.

Amazon Q data integration in AWS Glue will reduce the time and effort you need to learn, build, and run data integration jobs using AWS Glue data integration engines. You can author jobs, troubleshoot issues, and get instant answers to questions about AWS Glue and anything related to data integration. The chat experience is powered by Amazon Bedrock.

You can describe your data integration workload, and Amazon Q will generate a complete ETL script. You can troubleshoot your jobs by asking Amazon Q to explain errors and propose solutions. Amazon Q provides detailed guidance throughout the entire data integration workflow, helping you learn and build data integration jobs with AWS Glue, and it can help you connect to common AWS sources such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon DynamoDB.

Let me show you some capabilities of Amazon Q data integration in AWS Glue.

1. Conversational Q&A capability

To start using this feature, I can select the Amazon Q icon on the right-hand side of the AWS Management Console.

AmazonQ-1

For example, I can ask, “What is AWS Glue,” and Amazon Q provides concise explanations along with references I can use to follow up on my questions and validate the guidance.

AmazonQ-2

With Amazon Q, I can elaborate on my use cases in more detail to provide context. For example, I can ask Amazon Q, “How do I create an AWS Glue job?”

AmazonQ-2

Next let me ask Amazon Q, “How do I optimize memory management in my AWS Glue job?”

AmazonQ-41

2. AWS Glue job creation

To use this feature, I can tell Amazon Q, “Write a Glue ETL job that reads from Redshift, drops null fields, and writes to S3 as parquet files.”

AmazonQ-Parq
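
To give a concrete sense of the output, here is a representative sketch of the kind of PySpark script Amazon Q generates for that prompt; the connection name, table, and bucket paths are placeholders, not values Amazon Q would know:

```python
import sys
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the source table from Amazon Redshift through a Glue connection.
redshift_dyf = glueContext.create_dynamic_frame.from_options(
    connection_type="redshift",
    connection_options={
        "useConnectionProperties": "true",
        "connectionName": "my-redshift-connection",   # placeholder
        "dbtable": "public.sales",                    # placeholder
        "redshiftTmpDir": "s3://my-temp-bucket/redshift/",
    },
)

# Drop null fields, then write the result to Amazon S3 as Parquet.
cleaned_dyf = DropNullFields.apply(frame=redshift_dyf)
glueContext.write_dynamic_frame.from_options(
    frame=cleaned_dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-output-bucket/sales/"},
    format="parquet",
)
job.commit()
```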

I can copy code into the script editor or notebook with a simple click on the Copy button. I can also tell Amazon Q, “Help me with a Glue job that reads my DynamoDB table, maps the fields, and writes the results to Amazon S3 in Parquet format.”

AmazonQ-DynamoDB

Get started with Amazon Q today
With Amazon Q, you have an artificial intelligence (AI) expert by your side to answer questions, write code faster, troubleshoot issues, optimize workloads, and even help you code new features. These capabilities simplify every phase of building applications on AWS. Amazon Q data integration in AWS Glue is available in every Region where Amazon Q is supported. To learn more, see the Amazon Q pricing page.

Learn more

— Irshad

Use natural language to explore and prepare data with a new capability of Amazon SageMaker Canvas

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/use-natural-language-to-explore-and-prepare-data-with-a-new-capability-of-amazon-sagemaker-canvas/

Today, I’m happy to introduce the ability to use natural language instructions in Amazon SageMaker Canvas to explore, visualize, and transform data for machine learning (ML).

SageMaker Canvas now supports using foundation model (FM)-powered natural language instructions to complement its comprehensive data preparation capabilities for data exploration, analysis, visualization, and transformation. Using natural language instructions, you can now explore and transform your data to build highly accurate ML models. This new capability is powered by Amazon Bedrock.

Data is the foundation for effective machine learning, and transforming raw data to make it suitable for ML model building and generating predictions is key to better insights. Analyzing, transforming, and preparing data to build ML models is often the most time-consuming part of the ML workflow. With SageMaker Canvas, data preparation for ML is seamless and fast with 300+ built-in transforms, analyses, and an in-depth data quality insights report without writing any code. Starting today, the process of data exploration and preparation is faster and simpler in SageMaker Canvas using natural language instructions for exploring, visualizing, and transforming data.

Data preparation tasks are now accelerated through a natural language experience using queries and responses. You can quickly get started with contextual, guided prompts to understand and explore your data.

Say I want to build an ML model to predict house prices using SageMaker Canvas. First, I need to prepare my housing dataset to build an accurate model. To get started with the new natural language instructions, I open the SageMaker Canvas application, and in the left navigation pane, I choose Data Wrangler. Under the Data tab and from the list of available datasets, I select canvas-housing-sample.csv as the dataset, then select Create a data flow and choose Create. I see the tabular view of my dataset and an introduction to the new Chat for data prep capability.

data-flow

I select Chat for data prep, and it displays the chat interface with a set of guided prompts relevant to my dataset. I can use any of these prompts or query the data for something else.

chat-interface

First, I want to understand the quality of my dataset to identify any outliers or anomalies. I ask SageMaker Canvas to generate a data quality report to accomplish this task.

data-quality

I see there are no major issues with my data. I would now like to visualize the distribution of a couple of features in the data. I ask SageMaker Canvas to plot a chart.

query

I now want to filter certain rows to transform my data. I ask SageMaker Canvas to remove rows where the population is less than 1,000. Canvas removes those rows, shows me a preview of the transformed data, and also gives me the option to view and update the code that generated the transform.

code-view
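
The generated transform is typically plain pandas. Here is a minimal stand-in for that code, assuming the sample dataset exposes a population column:

```python
import pandas as pd

df = pd.read_csv("canvas-housing-sample.csv")  # the dataset selected earlier
df = df[df["population"] >= 1000]  # keep rows with population of at least 1,000
```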

I am happy with the preview and add the transformed data to my list of data transform steps on the right. SageMaker Canvas adds the step along with the code.

transform

Now that my data is transformed, I can go on to build my ML model to predict house prices and even deploy the model into production using the same visual interface of SageMaker Canvas, without writing a single line of code.

Data preparation has never been easier for ML!

Availability
The new capability in Amazon SageMaker Canvas to explore and transform data using natural language queries is available in all AWS Regions where Amazon SageMaker Canvas and Amazon Bedrock are supported.

Learn more
Amazon SageMaker Canvas product page

Go build!

— Irshad

Leverage foundation models for business analysis at scale with Amazon SageMaker Canvas

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/leverage-foundation-models-for-business-analysis-at-scale-with-amazon-sagemaker-canvas/

Today, I’m excited to introduce a new capability in Amazon SageMaker Canvas to use foundation models (FMs) from Amazon Bedrock and Amazon SageMaker Jumpstart through a no-code experience. This new capability makes it easier for you to evaluate and generate responses from FMs for your specific use case with high accuracy.

Every business has its own set of unique domain-specific vocabulary that generic models are not trained to understand or respond to. The new capability in Amazon SageMaker Canvas bridges this gap effectively. SageMaker Canvas trains the model using your company data, so you don’t need to write any code, and the model output reflects your business domain and use case, such as completing a marketing analysis. For the fine-tuning process, SageMaker Canvas creates a new custom model in your account, and the data used for fine-tuning is not used to train the original FM, ensuring the privacy of your data.

Earlier this year, we expanded support for ready-to-use models in Amazon SageMaker Canvas to include foundation models (FMs). This allows you to access, evaluate, and query FMs such as Claude 2, Amazon Titan, and Jurassic-2 (powered by Amazon Bedrock), as well as publicly available models such as Falcon and MPT (powered by Amazon SageMaker JumpStart) through a no-code interface. Extending this experience, we enabled the ability to query the FMs to generate insights from a set of documents in your own enterprise document index, such as Amazon Kendra. While it is valuable to query FMs, customers want FMs that generate responses and insights tailored to their use cases. Starting today, a new capability to fine-tune FMs addresses this need for custom responses.

To get started, I open the SageMaker Canvas application and in the left navigation pane, I choose My models. I select the New model button, select Fine-tune foundation model, and select Create.

CreateModel

I select the training dataset and can choose up to three models to tune. I choose the input column with the prompt text and the output column with the desired output text. Then, I initiate the fine-tuning process by selecting Fine-tune.

ModelBuild
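
The training dataset itself is just prompt/output pairs in tabular form. Here is a tiny, hypothetical example of how you might assemble one; the column names are illustrative, since you map whichever columns you have in the console:

```python
import pandas as pd

# Hypothetical fine-tuning dataset: one column of prompts, one of desired outputs.
df = pd.DataFrame(
    {
        "prompt": ["Summarize this product review: The battery lasts two days..."],
        "completion": ["Positive review praising battery life."],
    }
)
df.to_csv("fine-tuning-dataset.csv", index=False)
```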

Once the fine-tuning process is completed, SageMaker Canvas gives me an analysis of the fine-tuned model with different metrics such as perplexity and loss curves, training loss, validation loss, and more. Additionally, SageMaker Canvas provides a model leaderboard that gives me the ability to measure and compare metrics around model quality for the generated models.

Analyze

Now, I am ready to test the model and compare responses with the original base model. To test, I select Test in Ready-to-use models from the Analyze page. The fine-tuned model is automatically deployed and is now available for me to chat and compare responses.

Compare

Now, I am ready to generate and evaluate insights specific to my use case. The icing on the cake was to achieve this without writing a single line of code.

Learn more

Go build!

— Irshad

PS: Writing a blog post at AWS is always a team effort, even when you see only one name under the post title. In this case, I want to thank Shyam Srinivasan for his technical assistance.

Improve developer productivity with generative-AI powered Amazon Q in Amazon CodeCatalyst (preview)

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/improve-developer-productivity-with-generative-ai-powered-amazon-q-in-amazon-codecatalyst-preview/

Today, I’m excited to introduce the preview of new generative artificial intelligence (AI) capabilities within Amazon CodeCatalyst that accelerate software delivery using Amazon Q.

Accelerate feature development – The feature development capability in Amazon Q can help you accelerate the implementation of software development tasks such as adding comments and READMEs, refining issue descriptions, generating small classes and unit tests, and updating CodeCatalyst workflows — tedious and undifferentiated tasks that take up developers’ time. Developers can go from an idea in an issue to fully tested, merge-ready, running code with only natural language inputs, in just a few clicks. AI does the heavy lifting of converting the human prompt to an actionable plan, summarizing source code repositories, generating code, unit tests, and workflows, and summarizing any changes in a pull request, which is assigned back to the developer. You can also provide feedback to Amazon Q directly on the published pull request and ask it to generate a new revision. If the code change falls short of expectations, you can create a development environment directly from the pull request, make any necessary adjustments manually, publish a new revision, and proceed with the merge upon approval.

Example: make an API change in an existing application
In the navigation pane, I choose Issues and then I choose Create issue. I give the issue the title, Change the get_all_mysfits() API to return mysfits sorted by the Age attribute. I then assign this issue to Amazon Q and choose Create issue.

Create-issue

Amazon Q will automatically move the issue into the In progress state while it analyzes the issue title and description to formulate a potential solution approach. If there is already some discussion on the issue, it should be summarized in the description to help Q understand what needs to be done. As it works, Amazon Q will report on its progress by leaving comments on the issue at every stage. It will attempt to create a solution based on its understanding of the code already present in the repository and the approach it formulated. If Amazon Q is able to successfully generate a potential solution, it will create a branch and commit code to that branch. It will then create a pull request that will merge the changes into the default branch once approved. Once the pull request is published, Amazon Q will change the issue status to In Review so that you and your team know that the code is now ready for you to review.

pull-request

Summarize a change – Pull request authors can save time by asking Amazon Q to summarize the change they are publishing for review. Today pull request authors have to write the description manually or they may choose not to write it at all. If the author does not provide a description, it makes it harder for reviewers to understand what changes are being made and why, delaying the review process and slowing down software delivery.

Pull request authors and reviewers can also save time by asking Amazon Q to summarize the comments left on the pull request. The summary is useful for the author because they can easily see common feedback themes. For the reviewers it is useful because they can quickly catch up on the conversation and feedback from themselves and other team members. The overall benefits are streamlined collaboration, accelerated review process, and faster software delivery.

Join the preview
Amazon Q is available in Amazon CodeCatalyst today for spaces in AWS Region US West (Oregon).

Learn more

— Irshad

Amazon CodeCatalyst introduces custom blueprints and a new enterprise tier

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/amazon-codecatalyst-introduces-custom-blueprints-and-a-new-enterprise-tier/

Today, I’m excited to introduce the new Amazon CodeCatalyst enterprise tier and custom blueprints.

Amazon CodeCatalyst enterprise tier is a new pricing tier that offers features like custom blueprints and project lifecycle management. The enterprise tier is $20/user per month, and each enterprise tier space gets 1,500 compute minutes, 160 Dev Environment hours, and 64 GB of Dev Environment storage per paying user. You can use custom blueprints to define best practices for your application code, workflows, and infrastructure. You can publish these blueprints to your CodeCatalyst space, utilizing them for project creation or applying standards to existing projects.

Blueprints help you set up projects in minutes so you can get to work on code immediately. With just a few clicks, you can set up project files and configure built-in, fully integrated tools (for example, source repository, issue management, and continuous integration and delivery (CI/CD) pipeline) with best practices for your particular type of project. You can swap in popular tools like GitHub if needed, while maintaining the unified experience. You spend less time on building, integrating, or operating developer tools over the project’s lifetime.

With custom blueprints, you can define various elements of your CodeCatalyst project, like workflow definitions, infrastructure as code (IaC), and application code. When custom blueprints are updated, those changes are reflected in any project using the blueprint as a pull request update. This streamlined process reduces overhead in setting up your projects and ensures that best practices are consistently applied across your projects. As an admin, you can easily view details about which projects are using each blueprint in your CodeCatalyst space, giving you visibility into how standards are being applied across your projects.

Creating a custom blueprint in CodeCatalyst
I open the CodeCatalyst console and then I navigate to my space. On the Settings tab, I choose Blueprints on the left navigation pane, and then I select Create blueprint. At this point, I codify my best practices in this blueprint. When ready, I’ll publish it back to my space so my teams can use it to create projects.

create-blueprint

After I publish my blueprint, I can view and manage it in my CodeCatalyst space in the Settings tab. In the left panel, I choose Blueprints. Then I select Space blueprints.

manage-blueprint

I select Create project > Space blueprints to create a CodeCatalyst project from my custom blueprint.

create-project

Availability

The Amazon CodeCatalyst enterprise tier is available in US West (Oregon) and Europe (Ireland) Regions, but you can deploy to any commercial Region. To learn more, visit the Amazon CodeCatalyst webpage.

Go build!

— Irshad

Amazon CodeWhisperer offers new AI-powered code remediation, IaC support, and integration with Visual Studio

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/amazon-codewhisperer-offers-new-ai-powered-code-remediation-iac-support-and-integration-with-visual-studio/

Today, we’re announcing the general availability of artificial intelligence (AI)-powered code remediation and infrastructure as code (IaC) support for Amazon CodeWhisperer, an AI-powered productivity tool for the IDE and command line. Amazon CodeWhisperer is also now available in Visual Studio, in preview. These new enhancements to Amazon CodeWhisperer help to enable faster and more efficient software development by offloading undifferentiated work and delivering more automation, security, efficiency, and accelerated code delivery for customers, and they provide this support in more places where developers love to work.

AI-powered code remediation – Since its launch, Amazon CodeWhisperer has identified hard-to-find security vulnerabilities with built-in security scans. It now provides generative AI-powered code suggestions to help remediate identified security and code quality issues. Built-in security scanning is designed to detect issues such as exposed credentials and log injection. Generative AI-powered code suggestions are designed to remediate the identified vulnerabilities, and are tailored to your application code so that you can quickly accept fixes with confidence. When a security scan is completed in CodeWhisperer, you are presented with code suggestions that you can simply accept to close the identified vulnerabilities quickly. Generative AI-powered code suggestions speed up the process of addressing security issues, so you can focus on higher-value work instead of manually reviewing code line by line to find the correct solution. You do not need to perform any additional setup in Amazon CodeWhisperer to start using this capability.

Security scanning is available for Java, Python, JavaScript, and now available for TypeScript, C#, AWS CloudFormation (YAML, JSON), AWS CDK (TypeScript, Python), and HashiCorp Terraform (HCL). Code suggestions to remediate vulnerabilities are currently available for code written in Java, Python, and JavaScript.

ACR- image

Infrastructure as code (IaC) – Amazon CodeWhisperer announces support for IaC, now encompassing AWS CloudFormation (YAML, JSON), AWS CDK (TypeScript, Python), and HashiCorp Terraform (HCL). This update enhances the efficiency of IaC script development, allowing developers and DevOps teams to write infrastructure code seamlessly. With support for multiple IaC languages, CodeWhisperer promotes collaboration and consistency across diverse teams. This marks a significant advancement in cloud infrastructure development, offering a more streamlined and productive coding experience for users.

IaC
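
As a flavor of the experience, here is a short AWS CDK (Python) snippet of the kind CodeWhisperer can help complete from a natural-language comment; the stack and bucket names are placeholders:

```python
from aws_cdk import RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class StorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # "create an encrypted, versioned S3 bucket" - a comment like this
        # prompts CodeWhisperer to suggest a construct such as:
        s3.Bucket(
            self,
            "DataBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )
```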

Visual Studio – Amazon CodeWhisperer is now available in Visual Studio 2022 (preview). Developers can build applications faster with real-time code suggestions for C#. Get started with the Individual Tier for free by installing the AWS Toolkit extension and signing in with an AWS Builder ID.

reference-tracker-vs

CodeWhisperer also helps developers code responsibly by flagging code suggestions that may resemble publicly available code. CodeWhisperer provides the repository URL and license when it suggests code that is similar to public code.

code-suggestion-vs

Finally, Amazon CodeWhisperer recently previewed (on November 20) a new time-saving capability for the command line interface. Now, Amazon CodeWhisperer adds typeahead code completions and inline documentation for hundreds of popular CLIs like Git, npm, AWS CLI, and Docker. It also adds the ability for you to translate natural language to shell code. For more details, read Introducing Amazon CodeWhisperer for command line.

Learn more
Amazon CodeWhisperer

Go build!

— Irshad

New – Seventh Generation Memory-optimized Amazon EC2 Instances (R7i)

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/new-seventh-generation-memory-optimized-amazon-ec2-instances-r7i/

Earlier, we introduced a duo of Amazon Elastic Compute Cloud (Amazon EC2) instances to our lineup: the general-purpose Amazon EC2 M7i instances and the compute-optimized Amazon EC2 C7i instances.

Today, I’m happy to share that we’re expanding these seventh-generation x86-based offerings to include memory-optimized Amazon EC2 R7i instances. These instances are powered by custom 4th Generation Intel Xeon Scalable Processors (Sapphire Rapids) exclusive to AWS and will offer the highest compute performance among the comparable fourth-generation Intel processors in the cloud. The R7i instances are available in eleven sizes, including two bare metal sizes (coming soon), and offer a 15 percent improvement in price-performance compared to Amazon EC2 R6i instances.

Amazon EC2 R7i instances are SAP Certified and are an ideal fit for memory-intensive workloads such as high-performance databases (SQL and NoSQL databases), distributed web scale in-memory caches (Memcached and Redis), in-memory databases (SAP HANA), real-time big data analytics (Apache Hadoop and Spark clusters) and other enterprise applications. Amazon EC2 R7i offers larger instance sizes (48xlarge) with up to 192 vCPUs and 1,536 GiB of memory, including both virtual and bare metal instances, enabling you to consolidate your workloads and scale up applications.

You can attach up to 128 EBS volumes to each R7i instance; by way of comparison, the R6i instances allow you to attach up to 28 volumes.

Here are the specs for the R7i instances:

Instance Name    vCPUs    Memory (GiB)    Network Bandwidth    EBS Bandwidth
r7i.large        2        16              Up to 12.5 Gbps      Up to 10 Gbps
r7i.xlarge       4        32              Up to 12.5 Gbps      Up to 10 Gbps
r7i.2xlarge      8        64              Up to 12.5 Gbps      Up to 10 Gbps
r7i.4xlarge      16       128             Up to 12.5 Gbps      Up to 10 Gbps
r7i.8xlarge      32       256             12.5 Gbps            10 Gbps
r7i.12xlarge     48       384             18.75 Gbps           15 Gbps
r7i.16xlarge     64       512             25 Gbps              20 Gbps
r7i.24xlarge     96       768             37.5 Gbps            30 Gbps
r7i.48xlarge     192      1,536           50 Gbps              40 Gbps

We’re also getting ready to launch two sizes of bare metal R7i instances soon:

Instance Name     vCPUs    Memory (GiB)    Network Bandwidth    EBS Bandwidth
r7i.metal-24xl    96       768             Up to 37.5 Gbps      Up to 30 Gbps
r7i.metal-48xl    192      1,536           Up to 50.0 Gbps      Up to 40 Gbps
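
If you would rather pull these specs programmatically, here is a small boto3 sketch; the Region is arbitrary, and the instance-type filter accepts wildcards:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate(
    Filters=[{"Name": "instance-type", "Values": ["r7i.*"]}]
):
    for it in sorted(page["InstanceTypes"], key=lambda t: t["VCpuInfo"]["DefaultVCpus"]):
        print(
            it["InstanceType"],
            it["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
            it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB,",
            it["NetworkInfo"]["NetworkPerformance"],
        )
```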

Built-in Accelerators
The Sapphire Rapids processors include four built-in accelerators, each providing hardware acceleration for a specific workload:

  • Advanced Matrix Extensions (AMX) – The AMX extensions are designed to accelerate machine learning and other compute-intensive workloads that involve matrix operations. AMX improves the efficiency of these operations by providing specialized hardware instructions and registers tailored for matrix computations. Matrix operations, such as multiplication and convolution, are fundamental building blocks in various computational tasks, especially in machine learning algorithms.
  • Intel Data Streaming Accelerator (DSA) – DSA enhances data processing and analytics capabilities for a wide range of applications and enables developers to harness the full potential of their data-driven workloads. With DSA, you gain access to optimized hardware acceleration that delivers exceptional performance for data-intensive tasks.
  • Intel In-Memory Analytics Accelerator (IAA) – This accelerator runs database and analytic workloads faster, with the potential for greater power efficiency. In-memory compression, decompression, encryption at very high throughput, and a suite of analytics primitives support in-memory databases, open-source databases, and data stores like RocksDB and ClickHouse.
  • Intel QuickAssist Technology (QAT) – This accelerator offloads encryption, decryption, and compression, freeing up processor cores and reducing power consumption. It also supports merged compression and encryption in a single data flow. To learn more, start with the Intel QuickAssist Technology (Intel QAT) Overview.

Advanced Matrix Extensions are available on all sizes of R7i instances. The Intel QAT, Intel IAA, and Intel DSA accelerators will be available on the r7i.metal-24xl and r7i.metal-48xl instances.

Now Available
The new instances are available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Spain), Europe (Stockholm), and Europe (Ireland) AWS Regions.

Purchasing Options
R7i instances are available in On-Demand, Reserved, Savings Plans, and Spot Instance form. R7i instances are also available in Dedicated Host and Dedicated Instance form.

— Irshad

AWS ExecLeaders Data and Generative AI Day: Fueling Business Growth with Data and Generative AI

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/aws-execleaders-data-and-generative-ai-day-fueling-business-growth-with-data-and-generative-ai/

Join us on Thursday, October 5, 2023, for a free-to-attend online event, Data and Generative AI Day. AWS will stream the event simultaneously across multiple platforms, including LinkedIn Live and YouTube.

In the realm of generative AI, the power and potential hidden within your organization’s data are more expansive than ever before. Generative AI has the capability to reshape customer interactions, elevate employee productivity, stimulate creative ideation, and drive groundbreaking innovation. However, as a forward-thinking leader, what steps are required to fully harness this data-driven potential and translate it into tangible outcomes?

During this half-day event, AWS experts, partners, customers, and leading startups will provide you with insights into their efforts to propel innovation using data and generative AI in today’s ever-evolving landscape. You’ll find practical guidance from industry leaders on how to navigate the diverse spectrum of opportunities and challenges presented by this transformative technology, all while gaining a glimpse into what the future holds.

Here are some of the highlights you can expect from this event.

Swami Sivasubramanian, VP, Database, Analytics, and ML at AWS, will kick off the event with a keynote session where he will share the blueprint to democratize data and AI for business leaders. Swami will share how leaders can usher in the right mindset, strategy, and tools to translate the promise of generative AI into real business value.

Tom Godden, Director of Enterprise Strategy at AWS, will explore practical strategies for leveraging generative AI to drive business outcomes. He will share the frameworks for identifying opportunities to pilot generative AI across your organization and provide business leaders with a timely understanding of how to employ these powerful emerging capabilities.

Gopinath Sankaran, Vice President, Strategic Cloud Ecosystems at Informatica, will share insights on the impact of generative AI on data management and explore how Informatica’s AI-powered Intelligent Data Management Cloud and AWS AI and Analytics services can power a new wave of insights and experiences.

Diego Saenz, Managing Director of Data & AI at Deloitte, and Jojy Matthew, Principal, Global Financial Services Industry (GFSI) Data, Analytics, & AI at Deloitte, will share what a well-crafted data strategy means to generative AI success. Diego will share practical advice on assessing if your data estate is ready for leveraging generative AI and driving business outcomes.

You will hear from AWS leaders and AWS customers FOX, Salesforce, and Booking.com, as they share their data and generative AI journeys and explain how you can leverage this transformational technology to re-imagine your customer and employee experiences.

Data & Generative AI Day

You can add an event reminder to your calendar by registering on the event page.

See you there.

— Irshad

AWS End User Computing Innovation Day 2023: Architecting End User Computing for Change and Agility

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/aws-end-user-computing-innovation-day-2023-architecting-end-user-computing-for-change-and-agility/

Join us on Wednesday, September 13, for a free-to-attend online event, AWS End User Computing Innovation Day 2023. AWS will stream the event simultaneously across multiple platforms, including LinkedIn Live, Twitter, YouTube, and Twitch.

IT teams responsible for providing the tools employees need to do their jobs face a complex landscape: return-to-office mandates, pressure to migrate out of self-operated data centers, escalating security concerns, scarce in-house IT expertise, and a constant focus on controlling expenses.

AWS End User Computing Innovation Day 2023 is a one-day, free virtual event designed to dissect these very challenges. Join us as we delve into how AWS End User Computing (EUC) services can be harnessed to navigate this transformative era. Discover how to construct a remarkably agile and secure foundation poised to support the immediate and future requirements of remote and hybrid workforces.

During this event, you will have the opportunity to hear directly from senior leaders at AWS. Here are some of the highlights you can expect from this event.

Keynote – Muneer Mirza, General Manager of AWS End User Computing, will kick off with a keynote session. Muneer will explore an array of strategic approaches primed to maximize agility and foster seamless adaptation to change.

Browser-based workload security – Brett Taylor, General Manager of Amazon WorkSpaces Web, will discuss ways to secure web-based applications using Amazon WorkSpaces Web, so you can strengthen your security and compliance posture.

You can add an event reminder to your calendar by registering on the event page.

See you there.

— Irshad

AWS Application Migration Service Major Updates: Global View, Import and Export from Local Disk, and Additional Post-launch Actions

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/aws-application-migration-service-major-updates-global-view-import-and-export-from-local-disk-and-additional-post-launch-actions/

AWS Application Migration Service simplifies, expedites, and reduces the cost of migrating your applications to AWS. It allows you to lift and shift many physical, virtual, or cloud servers without compatibility issues, performance disruption, or long cutover windows. You can minimize time-intensive, error-prone manual processes by using Application Migration Service to automate the replication and conversion of your source servers from physical, virtual, or cloud infrastructure so they run natively on AWS. Earlier this year, we introduced major improvements, such as a server migration metrics dashboard, import and export, and additional post-launch modernization actions.

Today, I’m pleased to announce three major updates to Application Migration Service. Here’s the quick summary for each feature release:

  • Global View – You can manage large-scale migrations across multiple accounts. This feature provides you both visibility and the ability to perform specific actions on source servers, apps, and waves in different AWS accounts.
  • Import and Export from Local Disk – You can now use Application Migration Service to import your source environment inventory list to the service from a CSV file on your local disk. You can also export your source server inventory list from the service to a CSV file and download it to your local disk. You can continue leveraging the previously launched import and export functionality to and from an S3 bucket.
  • Additional Post-launch Actions – In this update, Application Migration Service added four additional predefined post-launch actions. These actions are applied to your migrated applications when you launch them on AWS.

Let me share how you can use these features for your migration.

Global View
Global View provides you the visibility and the ability to perform specific actions on source servers, applications, and waves in different AWS accounts. Global View uses AWS Organizations to structure a management account (which has access to source servers in multiple member accounts) and member accounts (which only have access to their own source servers).

To use this feature, you need to have an AWS account in which AWS Application Migration Service is initialized. This account must be an admin in AWS Organizations or a delegated admin for AWS Application Migration Service. You can view the Global View page on the Application Migration Service page in the AWS Management Console by selecting Global View in the left navigation menu.
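
If you script your setup, delegating administration is a standard AWS Organizations call made from the management account; the account ID below is a placeholder, and the MGN service principal shown is an assumption to confirm against the documentation:

```python
import boto3

orgs = boto3.client("organizations")

orgs.register_delegated_administrator(
    AccountId="111122223333",              # placeholder member account ID
    ServicePrincipal="mgn.amazonaws.com",  # assumed principal for Application Migration Service
)
```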

You can use the global view feature to see source servers, applications and waves across multiple managed accounts and perform various actions, including:

  • Launching test and cutover instances across accounts
  • Monitoring migration execution progress across accounts

The main Global View page provides an overview of your account and this information changes depending on whether you have a management account or a member account.

In a management account, you can see the AWS Organizations permissions, the count of linked accounts, and the total number of source servers, applications, and waves under Account information. The Linked accounts section displays the relevant information for your linked accounts. It shows all the linked accounts this account has access to, including the account you’re logged into (the management account) and the member accounts that are linked to it. If the management account has access to two additional member accounts, the Linked accounts section will show three accounts. It’s the total number of accounts that are visible through this management account (including itself). For member accounts, this page only displays the account information that includes the AWS Organizations permissions and the number of source servers, applications, and waves in the specific account.

Global view

In your management account, you can access and review source servers, applications, and waves within your account and across all member accounts. As a manager, you can choose between All accounts and My account from the drop-down menu, which allows you to change your view of the presented source servers, applications, or waves.

Waves

Import and Export from Local Disk
A comprehensive data center inventory forms the foundation of any successful migration endeavor. This inventory encompasses the complete list of servers and applications that customers manage on premises, and it is categorized into migration waves to facilitate efficient migration planning.

Typically, this inventory is compiled using discovery tools or created manually by IT administrators. Perhaps you maintain your data center inventory in Excel spreadsheets. With Application Migration Service, we offer seamless support for importing your inventory list from a CSV file, which follows a format similar to the one used by Cloud Migration Factory.

In the previous release, Application Migration Service supported the option to import a file from Amazon S3 and export a file to Amazon S3. In this latest release, Application Migration Service supports the option to import a file from local disk and export a file to local disk. This makes it easy for you to manage large-scale migrations and ingest your inventory of source servers, applications, and waves, including attributes such as EC2 instance type, subnet, and tags. These attributes are the parameters used to populate the EC2 launch template.

Import and Export

To start using the import feature, you need to identify your servers and application inventory. You can do this manually or using discovery tools. The next thing you need to do is download the import template, which you can access from the console.

Import Local

After you download the import template, you can start mapping your inventory list onto this template. While mapping your inventory, you can group related servers into applications and waves. You can also perform configurations, such as defining Amazon Elastic Compute Cloud (Amazon EC2) launch template settings and specifying tags for each wave.

The following screenshot is an example of the results of my import template.

Inventory
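
For illustration, a trimmed-down inventory file might look like the sketch below; the column headers here are hypothetical stand-ins, so always start from the template you downloaded from the console:

```csv
mgn:wave:name,mgn:app:name,mgn:server:user-provided-id,ec2:instance-type,ec2:subnet-id
Wave1,OrderService,srv-web-01,m5.large,subnet-0abc1234567890
Wave1,OrderService,srv-db-01,r5.xlarge,subnet-0abc1234567890
Wave2,Reporting,srv-rpt-01,c5.large,subnet-0def1234567890
```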

On the Application Migration Service page in the AWS Management Console, select Import on the left-side navigation menu (under Import and Export). Under the Import inventory tab, select Import from local disk. Select Choose file and choose the local file containing your inventory list. Select Import, and the inventory file is imported into Application Migration Service. When the import process is complete, the details of the import results appear.

Now, you can view all your inventory inside the Source servers, Applications, and Waves pages on the Application Migration Service console.

To export your inventory to a local file, select Export on the left-side navigation menu of the Application Migration Service page. Under the Export inventory tab, choose Export to local disk. Specify the name of the file to download under Destination filename. Choose Export, and the inventory file downloads to your local disk. Application Migration Service uses an S3 bucket within your account for the import and export operations, even when using local disk, so you must have the required permissions to perform this action. You can modify the exported inventory file and reimport it to perform bulk configuration updates across your inventory. When the Global View feature is activated, configuration changes made upon reimport are also applied across accounts.

Export Local

Additional Post-launch Actions
Post-launch actions allow you to control and automate actions performed after your servers have been launched in AWS. You can use predefined or custom post-launch actions.

Application Migration Service now has four additional predefined post-launch actions to run in your Amazon EC2 instances on top of the existing predefined post-launch actions. These additional post-launch actions provide you with flexibility to maximize your migration experience.

Post Launch template

The four new predefined post-launch actions are as follows:

  • Configure Time Sync – You can use the Time Sync feature to set the time for your Linux instance using the Amazon Time Sync Service (ATSS).
  • Validate disk space – You can use the disk space validation feature to obtain visibility into the disk space and to ensure that you have enough available disk space on your target server.
  • Verify HTTP(S) response – You can use the Verify HTTP(S) response feature to conduct HTTP(S) connectivity checks to a predefined list of URLs. The feature verifies the connectivity to the launched target instance.
  • Enable Amazon Inspector – The Enable Amazon Inspector feature allows you to run security scans on your Amazon EC2 resources, including the target instances launched by Application Migration Service. The Amazon Inspector service is enabled at the account level. This action uses the Enable, BatchGetAccountStatus, and CreateServiceLinkedRole APIs.

Now Available
Global View, import and export from local disk, and the additional post-launch actions are available now, and you can start using them today in all Regions where AWS Application Migration Service is supported. Visit the Application Migration Service User Guide to dive deeper into these features, and refer to Getting started with AWS Application Migration Service to kickstart your workload migration to AWS.

—Irshad

Discover How AWS Designed Silicon Fuels Customer Outcomes at AWS Silicon Innovation Day

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/discover-how-aws-designed-silicon-fuels-customer-outcomes-at-aws-silicon-innovation-day/

We hope you will join us on Wednesday, June 21, for a free-to-attend online event, AWS Silicon Innovation Day. AWS will stream the event simultaneously across multiple platforms, including LinkedIn Live, Twitter, YouTube, and Twitch.

AWS Silicon Innovation Day is a one-day virtual event on June 21, 2023, that will allow you to better understand AWS Silicon and how you can use AWS’s unique Amazon EC2 chip offerings to your benefit. AWS has designed and developed purpose-built silicon specifically for the cloud.

During this event, you will have the opportunity to hear directly from senior leaders at AWS. Our panel of lead architects, engineers, customers, and analysts will provide insights into our silicon journey. Through deep dives into our cutting-edge silicon design and customer success stories, the panel will provide insights on security enhancements and cost-saving opportunities. Here are some of the highlights you can expect from this event.

Leadership session – To kick off the day, we have a leadership session featuring Dave Brown, VP of Amazon EC2, and Dr. Ruba Borno, VP of WW Channels and Alliances, joining us on stage. Dave will engage in a discussion with Ruba about how you can benefit from the innovation AWS delivers with its silicon technology.

AI/ML session – Gary Szilagyi, VP of Annapurna Labs, will discuss with Nafea Bshara, co-founder of Annapurna Labs, how his team applies its chipset development approach to create specialized chips for generative AI, CPUs, and the AWS Nitro System. He will highlight how you can harness the Annapurna mindset to develop not only CPUs but also tailor-made chips with specific purposes in mind.

Customer session – Jeff Barr, VP of AWS Evangelism, and Tiffany Wissner, Director of Product Marketing, will delve into insights from our customers. They will share anecdotes and experiences gathered from various sources, such as re:Invent, summits, and developer events, where you have expressed how you harnessed AWS silicon to drive your own remarkable innovations.

Networking session – JR Rivers, Senior Principal Engineer, and Madhura Kale, Senior Product Manager will shed light on the impact of silicon innovation, not only on the benefits you experience using our CPUs, GPUs, or Nitro System, but also on the transformation of AWS’s network infrastructure. They will delve into the realm of networking advancements, showcasing some of the latest innovations and highlighting the instrumental role played by AWS silicon in powering these developments.

Arm and Nitro Innovation session – Anthony Liguori, VP and Fellow, Nitro System architecture, will be joined by Ali Saidi, Director of Annapurna Labs, to discuss harnessing the power of hardware and software in tandem to drive the development of cutting-edge silicon technologies.

Analyst and Executive session – Raj Pai, VP of Amazon EC2 Product Management, will engage in a conversation with an analyst, delving into the realm of silicon innovation in the cloud.

Join us for Silicon Innovation Day Wednesday June 21 9:00am - 4:00pm PDT

No advance registration is needed to participate in AWS Silicon Innovation Day, but you can add an event reminder to your calendar by registering on the event page. We sincerely hope that you will join us in embracing the excitement and seizing the valuable learning opportunities at this new event!

Meet you there.

— Irshad

New – Amazon S3 Dual-Layer Server-Side Encryption with Keys Stored in AWS Key Management Service (DSSE-KMS)

Post Syndicated from Irshad Buchh original https://aws.amazon.com/blogs/aws/new-amazon-s3-dual-layer-server-side-encryption-with-keys-stored-in-aws-key-management-service-dsse-kms/

Today, we are launching Amazon S3 dual-layer server-side encryption with keys stored in AWS Key Management Service (DSSE-KMS), a new encryption option in Amazon S3 that applies two layers of encryption to objects when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. DSSE-KMS is designed to meet National Security Agency CNSSP 15 for FIPS compliance and Data-at-Rest Capability Package (DAR CP) Version 5.0 guidance for two layers of CNSA encryption. Using DSSE-KMS, you can fulfill regulatory requirements to apply multiple layers of encryption to your data.

Amazon S3 is the only cloud object storage service where customers can apply two layers of encryption at the object level and control the data keys used for both layers. DSSE-KMS makes it easier for highly regulated customers, such as US Department of Defense (DoD) customers, to fulfill rigorous security standards.

With DSSE-KMS, you can specify dual-layer server-side encryption (DSSE) in the PUT or COPY request for an object or configure your S3 bucket to apply DSSE to all new objects by default. You can also enforce DSSE-KMS using IAM and bucket policies. Each layer of encryption uses a separate cryptographic implementation library with individual data encryption keys. DSSE-KMS helps protect sensitive data against the low probability of a vulnerability in a single layer of cryptographic implementation.

DSSE-KMS simplifies the process of applying two layers of encryption to your data, without having to invest in infrastructure required for client-side encryption. Each layer of encryption uses a different implementation of the 256-bit Advanced Encryption Standard with Galois Counter Mode (AES-GCM) algorithm. DSSE-KMS uses the AWS Key Management Service (AWS KMS) to generate data keys, allowing you to control your customer managed keys by setting permissions per key and specifying key rotation schedules. With DSSE-KMS, you can now query and analyze your dual-encrypted data with AWS services such as Amazon Athena, Amazon SageMaker, and more.

With this launch, Amazon S3 now offers four options for server-side encryption:

  1. Server-side encryption with Amazon S3 managed keys (SSE-S3)
  2. Server-side encryption with AWS KMS (SSE-KMS)
  3. Server-side encryption with customer-provided encryption keys (SSE-C)
  4. Dual-layer server-side encryption with keys stored in KMS (DSSE-KMS)

Let’s see how DSSE-KMS works in practice.

Create an S3 Bucket and Turn on DSSE-KMS
To create a new bucket in the Amazon S3 console, I choose Buckets in the navigation pane. I choose Create bucket, and I select a unique and meaningful name for the bucket. Under the Default encryption section, I choose DSSE-KMS as the encryption option. From the available AWS KMS keys, I select a key for my requirements. Finally, I choose Create bucket to complete the creation of the S3 bucket, encrypted with the DSSE-KMS encryption settings.

Encryption

Upload an Object to the DSSE-KMS Enabled S3 Bucket
In the Buckets list, I choose the name of the bucket that I want to upload an object to. On the Objects tab for the bucket, I choose Upload. Under Files and folders, I choose Add files. I then choose a file to upload, and then choose Open. Under Server-side encryption, I choose Do not specify an encryption key. I then choose Upload.

Server Side Encryption

Once the object is uploaded to the S3 bucket, I notice that the uploaded object inherits the Server-side encryption settings from the bucket.

Server Side Encryption Setting
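
The console flow above maps directly onto the S3 API. Here is a minimal boto3 sketch (bucket name and KMS key ID are placeholders) that uploads an object with DSSE-KMS explicitly and then makes it the bucket default:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-dsse-bucket"                        # placeholder
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder KMS key ID

# Request both encryption layers explicitly on upload.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/q2.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms:dsse",
    SSEKMSKeyId=KEY_ID,
)

# Or set DSSE-KMS as the bucket default so new objects need no per-request header.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms:dsse",
                    "KMSMasterKeyID": KEY_ID,
                }
            }
        ]
    },
)
```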

Download a DSSE-KMS Encrypted Object from an S3 Bucket
I select the object that I previously uploaded and choose Download or choose Download as from the Object actions menu. Once the object is downloaded, I open it locally, and the object is decrypted automatically, requiring no change to client applications.

Now Available
Amazon S3 dual-layer server-side encryption with keys stored in AWS KMS (DSSE-KMS) is available today in all AWS Regions. You can get started with DSSE-KMS via the AWS CLI or AWS Management Console. To learn more about all available encryption options on Amazon S3, visit the Amazon S3 User Guide. For pricing information on DSSE-KMS, visit the Amazon S3 pricing page (Storage tab) and the AWS KMS pricing page.

— Irshad