All posts by Channy Yun

Reserve quantum computers, get guidance and cutting-edge capabilities with Amazon Braket Direct

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/reserve-quantum-computers-get-expertise-and-cutting-edge-capabilities-with-amazon-braket-direct/

Today, we are announcing the availability of Braket Direct, a new Amazon Braket program that helps quantum researchers dive deeper into quantum computing. This program lets you get dedicated, private access to the full capacity of various quantum processing units (QPUs) without any queues or wait times, connect with quantum computing specialists to receive expert guidance for your workloads, and get early access to features and devices with limited availability to conduct cutting-edge research on today’s noisy quantum devices.

Since its launch in 2020, Amazon Braket has democratized access to quantum computing by offering on-demand access to various QPUs using shared, public availability windows, where you pay only for what you use.

You can now use Braket Direct to reserve dedicated access to an entire IonQ Aria, QuEra Aquila, or Rigetti Aspen-M-3 device for a period of time to run your most complex, long-running, or time-sensitive workloads, or to host live events such as training workshops and hackathons. You pay only for what you reserve.

To further your research, you can now engage directly with Braket’s experts through free office hours or one-on-one, hands-on reservation prep sessions. For deeper research collaborations, you can connect with specialists from quantum hardware providers such as IonQ, Oxford Quantum Circuits, QuEra, Rigetti, or Amazon Quantum Solutions Lab, our dedicated professional services team.

Finally, to truly push the boundaries, you can gain access to experimental capabilities that have limited availability, starting with IonQ’s highest-fidelity, 30-qubit Forte device.

Braket Direct expands on our commitment to accelerate research and innovation in quantum computing without requiring any upfront fees or long-term commitments.

Getting started with Braket Direct
To get started, go to the Amazon Braket console and choose Braket Direct in the left pane. From there, you can reserve quantum hardware, get expert advice, and request access to next-generation quantum hardware and features.

1. Request a quantum hardware reservation
To create a reservation, choose Reserve device and select the Device that you would like to reserve. Provide your contact information, including your name and email address, and any details about the workload you would like to run during your reservation, such as the desired reservation length, relevant constraints, and your preferred schedule.

Braket Direct ensures that you have the full capacity of the QPU during your reservation and gives you the predictability that your workloads will run as soon as your reservation begins.

If you are interested in connecting with a Braket expert for a one-on-one reservation prep session after your reservation is confirmed, you can select that option at no additional cost.

Choose Submit to complete your reservation request. A Braket team member will email you within 2–3 business days, pending request verification. To make the most of your reservation, you can pre-create your quantum tasks and hybrid jobs before it begins.

To learn more about running quantum tasks and hybrid jobs in a device reservation, see Get started with Braket Direct in the AWS documentation.
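
If you prefer to work from code, the following is a minimal sketch of running a task during a reservation window using the Amazon Braket Python SDK. The device ARN and reservation ARN are placeholders, and the reservation_arn argument is an assumption based on recent SDK versions, so check the Braket documentation for the exact mechanism your SDK version supports.

# A minimal sketch using the Amazon Braket Python SDK (pip install amazon-braket-sdk).
# The device ARN and reservation ARN are placeholders, and the reservation_arn
# argument is assumed to exist in recent SDK versions -- verify before use.
from braket.aws import AwsDevice
from braket.circuits import Circuit

device = AwsDevice("arn:aws:braket:us-east-1::device/qpu/ionq/Aria-1")

# A simple Bell-state circuit to verify everything works once the reservation starts
bell = Circuit().h(0).cnot(0, 1)

task = device.run(
    bell,
    shots=1000,
    reservation_arn="arn:aws:braket:us-east-1:123456789012:reservation/example",  # placeholder
)
print(task.result().measurement_counts)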

2. Get support from quantum computing experts
You can get in touch with quantum computing experts for advice about your workload. With Braket office hours, Braket experts can help you go from ideation to execution faster at no additional cost. You can explore which device best fits your use case, identify options to make the best use of Braket for your algorithm, and get recommendations on how to use Braket features such as Hybrid Jobs, Braket Pulse, or Analog Hamiltonian Simulation.

To book an upcoming Braket office hours slot, choose Sign up and fill out your contact information, workload details, and any desired discussion topics. You will receive a calendar invitation to the next available slot by email.

To take advantage of experts from quantum hardware providers, choose Connect and browse their professional services listings on AWS Marketplace.

The Amazon Quantum Solutions Lab is a collaborative research and professional services team staffed with quantum computing experts who can assist you in more effectively exploring quantum computing, engaging in quantum research, and assessing the current performance of this technology. To contact the Quantum Solutions Lab, select Connect and fill out contact information and use case details. The team will email you with next steps.

3. Access to cutting-edge capabilities
To move your research along faster, you can get early access to innovative new capabilities. With Braket Direct, you can request access to cutting-edge capabilities, such as new quantum devices with limited availability, directly in the Braket console. Today, you can get reservation-only access to IonQ’s highest-fidelity Forte QPU; due to its limited availability, this device is available exclusively through Braket Direct reservations.

Now available
Braket Direct is now generally available in all AWS Regions where Amazon Braket is available. To learn more, see the Braket Direct page and pricing page.

Give it a try and send feedback to AWS re:Post for Amazon Braket, Quantum Computing Stack Exchange, or through your usual AWS Support contacts.

Channy

Amazon ElastiCache Serverless for Redis and Memcached is now available

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-elasticache-serverless-for-redis-and-memcached-now-generally-available/

Today, we are announcing the availability of Amazon ElastiCache Serverless, a new serverless option that allows customers to create a cache in under a minute and instantly scale capacity based on application traffic patterns. ElastiCache Serverless is compatible with two popular open-source caching solutions, Redis and Memcached.

You can use ElastiCache Serverless to operate a cache for even the most demanding workloads without spending time on capacity planning or requiring caching expertise. ElastiCache Serverless constantly monitors your application’s memory, CPU, and network resource utilization and scales instantly to accommodate changes in the access patterns of the workloads it serves. You can create a highly available cache with data automatically replicated across multiple Availability Zones, backed by an availability Service Level Agreement (SLA) of up to 99.99 percent for all workloads, which saves you time and money.

Customers told us they want radical simplicity when deploying and operating a cache. ElastiCache Serverless offers a single-endpoint experience that abstracts the underlying cluster topology and cache infrastructure. You can reduce application complexity and improve operational posture because your application no longer needs to handle reconnects or rediscover nodes.

With ElastiCache Serverless, there are no upfront costs, and you pay for only the resources you use. You pay for the amount of cache data storage and the ElastiCache Processing Units (ECPUs) consumed by your applications.

Getting started with Amazon ElastiCache Serverless
To get started, go to the ElastiCache console and choose Redis caches or Memcached caches in the left navigation pane. ElastiCache Serverless supports engine versions of Redis 7.1 or higher and Memcached 1.6 or higher.

For example, in the case of Redis caches, choose Create Redis cache.

You will see two deployment options: Serverless or Design your own cache, which creates a node-based cache cluster. Choose the Serverless option and the New cache method, and provide a name.

Use the default settings to create a cache in your default VPC and Availability Zones, with a service-owned encryption key and default security groups. ElastiCache automatically applies recommended best practices, so you don’t have to enter any additional settings.

If you want to customize the default settings, you can choose your own security groups or enable automatic backups. You can also set maximum limits for your compute and memory usage to ensure your cache doesn’t grow beyond a certain size. When your cache reaches the memory limit, keys with a time to live (TTL) are evicted according to least recently used (LRU) logic. When your compute limit is reached, ElastiCache throttles requests, which leads to elevated request latencies.
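
If you provision resources from code, the following sketch shows what creating a serverless cache with usage limits might look like using the AWS SDK for Python (Boto3). The cache name and limit values are placeholders, and the CacheUsageLimits shape is an assumption based on the CreateServerlessCache API, so verify the parameter names against your Boto3 version.

# Sketch: create a serverless Redis cache with optional usage limits using Boto3.
# The cache name and limit values are placeholders; the CacheUsageLimits shape
# is assumed from the CreateServerlessCache API -- verify against your Boto3 version.
import boto3

elasticache = boto3.client("elasticache")

response = elasticache.create_serverless_cache(
    ServerlessCacheName="channy-redis-serverless",
    Engine="redis",
    CacheUsageLimits={
        "DataStorage": {"Maximum": 10, "Unit": "GB"},  # cap the cache at 10 GB of data
        "ECPUPerSecond": {"Maximum": 100000},          # cap compute at 100,000 ECPUs per second
    },
)
print(response["ServerlessCache"]["Status"])           # for example, "creating"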

When you create a new serverless cache, you can see the details of settings for connectivity and data protection, including an endpoint and network environment.

Now, you can configure the ElastiCache Serverless endpoint in your application and connect using any Redis client that supports Redis in cluster mode, such as redis-cli.

$ redis-cli -h channy-redis-serverless.elasticache.amazonaws.com --tls -c -p 6379
set x Hello
OK
get x
"Hello"

You can manage the cache using AWS Command Line Interface (AWS CLI) or AWS SDKs. For more information, see Getting started with Amazon ElastiCache for Redis in the AWS documentation.

If you have an existing Redis cluster, you can migrate your data to ElastiCache Serverless by specifying an existing ElastiCache backup, or the Amazon S3 location of a backup file in the standard Redis RDB format, when creating your ElastiCache Serverless cache.

For a Memcached cache, you can create and use a new serverless cache in the same way as Redis.

If you use ElastiCache Serverless for Memcached, you get significant benefits, such as high availability and instant scaling, that are not natively available in the Memcached engine. You no longer have to write custom business logic, manage multiple caches, or use a third-party proxy layer to replicate data to get high availability with Memcached. Now you can get up to a 99.99 percent availability SLA and data replication across multiple Availability Zones.

To connect to the Memcached endpoint, run the openssl client and Memcached commands as shown in the following example output:

$ /usr/bin/openssl s_client -connect channy-memcached-serverless.cache.amazonaws.com:11211 -crlf 
set a 0 0 5
hello
STORED
get a
VALUE a 0 5
hello
END
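
From application code, the following sketch shows one way to connect using the pymemcache client. It assumes a pymemcache version that accepts a tls_context argument, and the hostname is a placeholder for your cache endpoint.

# Sketch: connect to an ElastiCache Serverless Memcached cache over TLS with pymemcache.
# Assumes a pymemcache version that supports tls_context; the hostname is a placeholder.
import ssl
from pymemcache.client.base import Client

ctx = ssl.create_default_context()
client = Client(
    ("channy-memcached-serverless.cache.amazonaws.com", 11211),
    tls_context=ctx,
)

client.set("a", "hello")
print(client.get("a"))  # b'hello'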

For more information, see Getting started with Amazon ElastiCache Serverless for Memcached in the AWS documentation.

Scaling and performance
ElastiCache Serverless scales without downtime or performance degradation by allowing the cache to scale up while initiating a scale-out in parallel, meeting capacity needs just in time.

To show ElastiCache Serverless performance, we conducted a simple scaling test. We started with a typical Redis workload with an 80/20 ratio between reads and writes and a key size of 512 bytes. Our Redis client was configured to Read From Replica (RFR) using the READONLY Redis command for optimal read performance. Our goal was to show how fast workloads can scale on ElastiCache Serverless without any impact on latency.

During this test, we were able to double the requests per second (RPS) every 10 minutes up to the target request rate of 1 million RPS. We observed that p50 GET latency remained around 751 microseconds and stayed below 860 microseconds at all times. Similarly, p50 SET latency remained around 1,050 microseconds and did not cross 1,200 microseconds, even during the rapid increase in throughput.

Things to know

  • Upgrading engine version – ElastiCache Serverless transparently applies new features, bug fixes, and security updates, including new minor and patch engine versions on your cache. When a new major version is available, ElastiCache Serverless will send you a notification in the console and an event in Amazon EventBridge. ElastiCache Serverless major version upgrades are designed for no disruption to your application.
  • Performance and monitoring – ElastiCache Serverless publishes a suite of metrics to Amazon CloudWatch, including memory usage (BytesUsedForCache), CPU usage (ElastiCacheProcessingUnits), and cache metrics, including CacheMissRate, CacheHitRate, CacheHits, CacheMisses, and ThrottledRequests. ElastiCache Serverless also publishes Amazon EventBridge events for significant events, including cache creation, deletion, and limit updates. For a full list of available metrics and events, see the documentation.
  • Security and compliance – ElastiCache Serverless caches are accessible from within a VPC. You can access the data plane using AWS Identity and Access Management (IAM). By default, only the AWS account that creates the ElastiCache Serverless cache can access it. ElastiCache Serverless encrypts all data at rest and uses transport layer security (TLS) to encrypt every connection in transit. You can optionally limit access to the cache through your VPCs, subnets, and IAM policies, and you can use an AWS Key Management Service (AWS KMS) key of your choice for encryption. ElastiCache Serverless is compliant with PCI-DSS, SOC, and ISO and is HIPAA eligible.

Now available
Amazon ElastiCache Serverless is now available in all commercial AWS Regions, including China. With ElastiCache Serverless, there are no upfront costs, and you pay for only the resources you use. You pay for cached data in GB-hours, ECPUs consumed, and Snapshot storage in GB-months.

To learn more, see the ElastiCache Serverless page and the pricing page. Give it a try, and please send feedback to AWS re:Post for Amazon ElastiCache or through your usual AWS support contacts.

Channy

Join the preview of Amazon Aurora Limitless Database

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/join-the-preview-amazon-aurora-limitless-database/

Today, we are announcing the preview of Amazon Aurora Limitless Database, a new capability supporting automated horizontal scaling to process millions of write transactions per second and manage petabytes of data in a single Aurora database.

Amazon Aurora read replicas allow you to increase the read capacity of your Aurora cluster beyond the limits of what a single database instance can provide. Now, Aurora Limitless Database scales write throughput and storage capacity of your database beyond the limits of a single Aurora writer instance. The compute and storage capacity that is used for Limitless Database is in addition to and independent of the capacity of your writer and reader instances in the cluster.

With Limitless Database, you can focus on building high-scale applications without having to build and maintain complex solutions for scaling your data across multiple database instances to support your workloads. Aurora Limitless Database scales based on the workload to support write throughput and storage capacity that, until today, would require multiple Aurora writer instances.

The architecture of Amazon Aurora Limitless Database
Limitless Database has a two-layer architecture consisting of multiple database nodes, either transaction routers or shards.

Shards are Aurora PostgreSQL DB instances that each store a subset of the data for your database, allowing for parallel processing to achieve higher write throughput. Transaction routers manage the distributed nature of the database and present a single database image to database clients.

Transaction routers maintain metadata about where data is stored, parse incoming SQL commands and send them to shards, aggregate data from shards to return a single result to the client, and manage distributed transactions to maintain consistency across the entire distributed database. All the nodes that make up your Limitless Database architecture are contained in a DB shard group. The DB shard group has a separate endpoint where you access your Limitless Database resources.

Getting started with Aurora Limitless Database
To get started with the preview of Aurora Limitless Database, you can sign up today, and you will be invited soon. The preview runs in a new Aurora PostgreSQL cluster with version 15 in the AWS US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions.

As part of the creation workflow for an Aurora cluster, choose the Limitless Database compatible version in the Amazon RDS console or the Amazon RDS API. Then you can add a DB shard group and create new Limitless Database tables. You can choose the maximum Aurora capacity units (ACUs).

After the DB shard group is created, you can view its details on the Databases page, including its endpoint.

To use Aurora Limitless Database, you should connect to a DB shard group endpoint, also called the limitless endpoint, using psql or any other connection utility that works with PostgreSQL.

There will be two types of tables that contain your data in Aurora Limitless Database:

  • Sharded tables – These tables are distributed across multiple shards. Data is split among the shards based on the values of designated columns in the table, called shard keys.
  • Reference tables – These tables have all their data present on every shard so that join queries can work faster by eliminating unnecessary data movement. They are commonly used for infrequently modified reference data, such as product catalogs and zip codes.

Once you have created a sharded or reference table, you can load massive amounts of data into Aurora Limitless Database and manipulate the data in those tables using standard PostgreSQL queries.
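
Because the limitless endpoint speaks standard PostgreSQL, any PostgreSQL driver works. Here is a minimal sketch using psycopg2; the endpoint, database name, credentials, and table are placeholders, and the preview-specific DDL for declaring sharded or reference tables is described in the preview documentation.

# Sketch: connect to the DB shard group (limitless) endpoint and run standard SQL.
# The endpoint, database name, credentials, and table name are placeholders; see the
# preview documentation for the DDL used to declare sharded or reference tables.
import psycopg2

conn = psycopg2.connect(
    host="my-shard-group.limitless.us-east-2.rds.amazonaws.com",  # placeholder endpoint
    port=5432,
    dbname="postgres_limitless",  # placeholder database name
    user="postgres",
    password="my-password",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders;")  # query a sharded table like any other table
    print(cur.fetchone()[0])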

Join the preview
You can join the preview of Amazon Aurora Limitless Database to be among the first to experience all of this power.

Sign up now, give it a try, and please send feedback to AWS re:Post for Amazon Aurora or through your usual AWS support contacts.

Channy

Mutual authentication for Application Load Balancer reliably verifies certificate-based client identities

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/mutual-authentication-for-application-load-balancer-to-reliably-verify-certificate-based-client-identities/

Today, we are announcing support for mutually authenticating clients that present X509 certificates to Application Load Balancer. With this new feature, you can offload client authentication to the load balancer, ensuring that only trusted clients communicate with your backend applications. This capability is built on S2N, AWS’s open source Transport Layer Security (TLS) implementation that developers can trust, which provides strong encryption and protection against zero-day vulnerabilities.

Mutual authentication (mTLS) is commonly used in business-to-business (B2B) applications, such as online banking, automotive, or gaming devices, to authenticate devices using digital certificates. Companies typically use it with a private certificate authority (CA) to authenticate their clients before granting access to data and services.

Customers have implemented mutual authentication using self-created or third-party solutions that require additional time and management overhead. These customers spend their engineering resources to build the functionality into their backend, update their code to keep up with the latest security patches, and invest heavily in infrastructure to create and rotate certificates.

With mutual authentication on Application Load Balancer, you have a fully managed, scalable, and cost-effective solution that enables you to use your developer resources to focus on other critical projects. Your ALB will authenticate clients with revocation checks and pass client certificate information to the target, which can be used for authorization by applications.

Getting started with mutual authentication on ALB
To enable mutual authentication on ALB, choose Create Application Load Balancer in the ALB wizard on the Amazon EC2 console. When you select HTTPS in the Listeners and routing section, you can see more settings, such as the security policy, the default server certificate, and a new client certificate handling option to support mutual authentication.

With Mutual authentication (mTLS) enabled, you can configure how listeners handle requests that present client certificates. This includes how your Application Load Balancer authenticates certificates and the amount of certificate metadata that is sent to your backend targets.

Mutual authentication has two options. The Passthrough option sends all the client certificate chains received from the client to your backend application using HTTP headers. The mTLS-enabled Application Load Balancer gets the client certificate in the handshake, establishes a TLS connection, and then sends whatever it receives in HTTP headers to the target application. The application then needs to verify the client certificate chain to authenticate the client.

With the Verify with trust store option, Application Load Balancer and the client verify each other’s identity and establish a TLS connection to encrypt communication between them. We are also introducing a new trust store feature: you can upload a CA bundle with root and/or intermediate certificates generated by AWS Private Certificate Authority or any other third-party CA as the source of trust to validate your client certificates.

This option requires selecting an existing trust store or creating a new one. Trust stores contain your CAs, trusted certificates, and, optionally, certificate revocation lists (CRLs). The load balancer uses a trust store to perform mutual authentication with clients.

To use this option and create a new trust store, choose Trust Stores in the left menu of the Amazon EC2 console and choose Create trust store.

You can choose a CA certificate bundle in PEM format and, optionally, CRLs from your Amazon Simple Storage Service (Amazon S3) bucket. A CA certificate bundle is a group of CA certificates (root or intermediate) used by a trust store. CRLs can be used when a CA revokes client certificates that have been compromised, and you need to reject those revoked certificates. You can replace a CA bundle, and add or remove CRLs from the trust store after creation.

You can use the AWS Command Line Interface (AWS CLI) with new APIs such as create-trust-store to upload CA information, configure the mutual-authentication-mode on the Application Load Balancer listener, and send user certificate information to targets.

$ aws elbv2 create-trust-store --name my-tls-name \
    --ca-certificates-bundle-s3-bucket channy-certs \
    --ca-certificates-bundle-s3-key Certificates.pem \
    --ca-certificates-bundle-s3-object-version <version>
>> arn:aws:elasticloadbalancing:root:file1
$ aws elbv2 create-listener --load-balancer-arn <value> \
    --protocol HTTPS \
    --port 443 \
    --mutual-authentication Mode=verify,TrustStoreArn=<arn:aws:elasticloadbalancing:root:file1>

If you already have your own private CA, such as AWS Private CA, third-party CA, or self-signed CA, you can upload their CA bundle or CRLs to the Application Load Balancer trust store to enable mutual authentication.

To test the mutual authentication on Application Load Balancer, follow the step-by-step instructions to make a self-signed CA bundle and client certificate using OpenSSL, upload them to the Amazon S3 bucket, and use them with an ELB trust store.

You can use curl with the --key and --cert parameters to send the client certificate as part of the request:

$ curl --key my_client.key --cert my_client.pem https://api.yourdomain.com
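
The equivalent request from Python application code might look like the following sketch using the requests library; the URL and the certificate and key file names are placeholders.

# Sketch: call an mTLS-enabled Application Load Balancer with a client certificate.
# The URL and the certificate/key file names are placeholders.
import requests

response = requests.get(
    "https://api.yourdomain.com",
    cert=("my_client.pem", "my_client.key"),  # client certificate and private key
)
print(response.status_code, response.text)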

Mutual authentication can fail if a client presents an invalid or expired certificate, fails to present a certificate, if the trust chain cannot be found or contains expired links, or if the certificate is on the revocation list.

Application Load Balancer will close the connections whenever it fails to authenticate a client and will record new connection logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the client’s IP address, handshake latency, TLS cipher used, and client certificate details. You can use these connection logs to analyze request patterns and troubleshoot issues.

To learn more, see Mutual authentication on Application Load Balancer in the AWS documentation.

Now available
Mutual authentication on Application Load Balancer is now available in all commercial AWS Regions where Application Load Balancer is available, except China. With no upfront costs or commitments required, you only pay for what you use. To learn more, see the Elastic Load Balancing pricing page.

Give it a try now and send feedback to AWS re:Post for Amazon EC2 or through your usual AWS Support contacts.

Learn more:
Application Load Balancer product page

Channy

New Cost Optimization Hub centralizes recommended actions to save you money

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-cost-optimization-hub-to-find-all-recommended-actions-in-one-place-for-saving-you-money/

Today, we are announcing Cost Optimization Hub, a new AWS Billing and Cost Management feature that makes it easy for you to identify, filter, aggregate, and quantify savings for AWS cost optimization recommendations.

With the new Cost Optimization Hub, you can interactively query cost optimization recommendations, such as idle resource detection, resource rightsizing, and purchasing options, across multiple AWS Regions and AWS accounts in your organization without any data aggregation or processing on your side. You can find out how much you’ll save if you implement those recommendations and easily compare and prioritize recommendations by savings.

Andy Jassy, CEO of Amazon, told shareholders, “We’re trying to build customer relationships (and a business) that outlast all of us, and as a result, our AWS sales and support teams are spending much of their time helping customers optimize their AWS spend so they can better weather this uncertain economy” in his 2022 Letter to Shareholders.

Cost Optimization Hub gathers all cost-optimizing recommended actions across AWS Cloud Financial Management (CFM) services, including AWS Cost Explorer and AWS Compute Optimizer, in one place. It incorporates customer-specific pricing and discounts into these recommendations, and it deduplicates findings and savings to give a consolidated view of your cost optimization opportunities.

If you are a FinOps team or infrastructure management team member who wants to understand your cost optimization opportunities in aggregate, such as which AWS accounts or AWS Regions have the most cost optimization opportunities, you should start with Cost Optimization Hub.

You can easily analyze cost optimization opportunities with built-in filters and grouping options. For example, after finding which AWS account has the most cost optimization opportunities, you can identify the top cost optimization strategies, such as stopping idle resources, rightsizing, and Graviton migration. If you identify which AWS Region has the highest number of rightsizing opportunities, you can get a list of rightsizing recommendations for that Region, and deep links take you to the Compute Optimizer console to validate details such as the projected CPU utilization if you implement the change.

Getting started with Cost Optimization Hub
To get started, choose Cost Optimization Hub in the left navigation menu of the AWS Billing and Cost Management console. You can opt in by selecting Enable. There is a 24-hour wait for Cost Optimization Hub to populate data initially, and data is refreshed daily afterward.

After opt-in, you can see the dashboard of cost optimization recommendations by AWS account, AWS Region, and tag key. If you want to see the list of resources available for optimization, choose View opportunities.

Cost Optimization Hub supports six types of cost-optimizing recommended actions, including:

  • Stop – Stop idle or unused resources to save up to 100 percent of the resources’ cost.
  • Rightsize – Move to a smaller Amazon EC2 instance type, Amazon EBS volume, AWS Lambda memory size, or AWS Fargate task size.
  • Upgrade – Move to a later-generation product, such as moving from the EBS io1 volume type to io2.
  • Graviton migration – Move from EC2 instance types with x86-based processors to EC2 instance types with AWS Graviton-based processors to save costs.
  • Purchase Savings Plans – Purchase Compute Savings Plans, EC2 Instance Savings Plans, and Amazon SageMaker Savings Plans.
  • Purchase Reserved Instances – Purchase Amazon EC2, Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, and Amazon Redshift Reserved Instances.

You can see the resource type, top recommended action, and estimated monthly savings. You can also filter the list by AWS account, AWS Region, implementation effort, and tag key as the group-by dimension.

You can also classify each recommendation by “Is resources restart needed” or “Is rollback possible.” If you specify Is resources restart needed=No as the filter, you see only recommendations that don’t require you to restart your resources, such as EBS volume recommendations. Similarly, if you specify Is rollback possible=Yes as the filter, you see only recommendations that can be rolled back.

If you select a specific recommendation, for example, rightsizing an EC2 instance, you can view its details and connect to the Amazon EC2 and AWS Compute Optimizer consoles. Note that the estimated monthly savings is a quick approximation of future savings; the actual savings you realize depend on your future AWS usage patterns.

You can also query interactively through the AWS Command Line Interface (AWS CLI) and AWS SDKs. Here’s a sample query to list your recommendations:

$ aws cost-optimization-hub list-recommendations

The preceding query gives you the following results:

{
   "items":[
      {
         "recommendationId":"MDA2MDI1ODQ1MTA1XzQ5MzNhYzZlLWZmYTUtNGI2ZC04YzBkLTAxYWE3Y2JlNjNlYg==",
         "accountId":"006025845105",
         "region":"Global",
         "resourceId":"006025845105_ComputeSavingsPlans",
         "currentResourceType":"ComputeSavingsPlans",
         "recommendedResourceType":"ComputeSavingsPlans",
         "estimatedMonthlySavings":1506.591472696,
         "estimatedSavingsPercentage":55.46400024,
         "estimatedMonthlyCost":2716.341169146,
         "currencyCode":"USD",
         "implementationEffort":"VeryLow",
         "restartNeeded":false,
         "actionType":"PurchaseSavingsPlans",
         "rollbackPossible":false,
         "recommendedResourceSummary":"$1.628/hour with three years term",
         "lastRefreshTimestamp":"2023-10-23T16:54:13-07:00",
         "recommendationLookbackPeriodInDays":30,
         "source":"CostExplorer"
      },
      {
         "recommendationId":"MDA2MDI1ODQ1MTA1XzhiZTRlNTczLTE0MDctNGIzOS05MmY3LTdmN2EzOTU2Y2ZkYw==",
         "accountId":"006025845105",
         "region":"us-east-1",
         "resourceId":"arn:aws:lambda:us-east-1:006025845105:function:Lambda-recommendation-testing:$LATEST",
         "resourceArn":"arn:aws:lambda:us-east-1:006025845105:function:Lambda-recommendation-testing:$LATEST",
         "currentResourceType":"LambdaFunction",
         "recommendedResourceType":"LambdaFunction",
         "estimatedMonthlySavings":3.1682091425308054e-06,
         "estimatedSavingsPercentage":1.936368871741565,
         "estimatedMonthlyCost":0.00016044778307703665,
         "currencyCode":"USD",
         "implementationEffort":"Low",
         "restartNeeded":false,
         "actionType":"Rightsize",
         "rollbackPossible":true,
         "currentResourceSummary":"128 MB memory",
         "recommendedResourceSummary":"160 MB memory",
         "lastRefreshTimestamp":"2023-10-24T04:07:35.364000-07:00",
         "recommendationLookbackPeriodInDays":14,
         "source":"ComputeOptimizer"
      }
   ]
}

For more information about new Cost Optimization Hub APIs, see the Cost Optimization Hub API documentation.
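
If you prefer the AWS SDK for Python (Boto3), a roughly equivalent call is sketched below. The cost-optimization-hub client name mirrors the CLI command above, and the response fields match the sample output, but verify both against your installed Boto3 version.

# Sketch: list Cost Optimization Hub recommendations with Boto3.
# The client name mirrors the CLI command, and the field names follow the sample
# output above -- verify both against your installed Boto3 version.
import boto3

coh = boto3.client("cost-optimization-hub", region_name="us-east-1")

response = coh.list_recommendations()
for item in response["items"]:
    print(
        item["accountId"],
        item["actionType"],
        round(item["estimatedMonthlySavings"], 2),
    )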

Now available
Cost Optimization Hub is now generally available for all customers. There is no additional charge for this new capability. You can now get started and view cost optimization recommendations across all AWS Regions.

To learn more, see the Cost Optimization Hub page and send feedback to AWS re:Post for Cost Optimization or through your usual AWS Support contacts.

Channy

AWS Weekly Roundup – EC2 DL2q instances, PartyRock, Amplify’s 6th birthday, and more – November 20, 2023

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-ec2-dl2q-instances-partyrock-amplifys-6th-birthday-and-more-november-20-2023/

Last week I saw an astonishing 160+ new service launches. There were so many updates that we decided to publish a weekly roundup again. This continues the same innovative pace of the previous week as we are getting closer to AWS re:Invent 2023.

Our News Blog team is also finalizing new blog posts for re:Invent to introduce awesome launches with service teams for your reading pleasure. Jeff Barr shared The Road to AWS re:Invent 2023 to explain our blogging journey and process. Please stay tuned next week!

Last week’s launches
Here are some of the launches that caught my attention last week:

Amazon EC2 DL2q instances – New DL2q instances are powered by Qualcomm AI 100 Standard accelerators and are the first to feature Qualcomm’s AI technology in the public cloud. With eight Qualcomm AI 100 Standard accelerators and 128 GiB of total accelerator memory, you can run popular generative artificial intelligence (AI) applications and extend to edge devices across smartphones, autonomous driving, personal compute, and extended reality headsets to develop and validate these AI workloads before deploying.

PartyRock for Amazon Bedrock – We introduced PartyRock, a fun and intuitive hands-on, generative AI app-building playground powered by Amazon Bedrock. You can experiment, learn all about prompt engineering, build mini-apps, and share them with your friends—all without writing any code or creating an AWS account.

You also can now access the Meta Llama 2 Chat 13B foundation model and Cohere Command Light, Embed English, and multilingual models for Amazon Bedrock.

AWS Amplify celebrates its sixth birthday – We announced six new launches: a new documentation site, support for Next.js 14 in our hosting and JavaScript library, custom token providers and an automatic React Native social sign-in update for Amplify Auth, new ChangePassword and DeleteUser account settings components, and updates to all Amplify UI packages to use the new Amplify JavaScript v6. You can also use wildcard subdomains when using a custom domain with your Amplify application deployed to AWS Amplify Hosting.


Also check out the other News Blog posts about major launches published in the past week.

Other AWS service launches
Here are some other bundled feature launches per AWS service:

Amazon Athena – You can use a new cost-based optimizer (CBO) to enhance query performance based on table and column statistics collected by the AWS Glue Data Catalog, and the new Athena JDBC 3.x driver, an alternative driver that supports almost all authentication plugins. You can also use Amazon EMR Studio to develop and run interactive queries on Amazon Athena.

Amazon CloudWatch – You can use a new CloudWatch metric called EBS Stalled I/O Check to monitor the health of your Amazon EBS volumes, regular expressions in the Amazon CloudWatch Logs Live Tail filter pattern syntax to search and match relevant log events, observability of SAP Sybase ASE databases in CloudWatch Application Insights, and up to two stats commands in a Logs Insights query to perform aggregations on the results.

Amazon CodeCatalyst – You can connect to an Amazon Virtual Private Cloud (Amazon VPC) from CodeCatalyst Workflows, provision infrastructure using Terraform within CodeCatalyst Workflows, access CodeCatalyst with your workforce identities configured in IAM Identity Center, and create teams made up of members of the CodeCatalyst space.

Amazon Connect – You can use a pre-built queue performance dashboard and the Contact Lens conversational analytics dashboard to view and compare real-time and historical aggregated queue performance. You can use quick responses for chats, which are previously written formats, such as typing ‘/#greet’ to insert a personalized response, and you can scan attachments to detect malware or other unwanted content.

AWS Glue – AWS Glue for Apache Spark added six new database connectors: Teradata, SAP HANA, Azure SQL, Azure Cosmos DB, Vertica, and MongoDB, as well as native connectivity to Amazon OpenSearch Service.

AWS Lambda – You can see a single-pane view of metrics, logs, and traces in the AWS Lambda console and use advanced logging controls to natively capture logs in JSON structured format. You can view the SAM template in the Lambda console and export the function’s configuration to AWS Application Composer. AWS Lambda also supports the Java 21 and Node.js 20 runtimes built on the new Amazon Linux 2023 runtime.

AWS Local Zones in Dallas – You can enable the new Local Zone in Dallas, Texas, us-east-1-dfw-2a, with Amazon EC2 C6i, M6i, R6i, C6gn, and M6g instances and Amazon EBS volume types gp2, gp3, io1, sc1, and st1. You can also access Amazon ECS, Amazon EKS, Application Load Balancer, and AWS Direct Connect in this new Local Zone to support a broad set of workloads at the edge.

Amazon Managed Streaming for Apache Kafka (Amazon MSK) – You can standardize access control to Kafka resources using AWS Identity and Access Management (IAM) and build Kafka clients for Amazon MSK Serverless written in all programming languages. These are open source client helper libraries and code samples for popular languages, including Java, Python, Go, and JavaScript. Also, Amazon MSK now supports an enhanced version of Apache Kafka 3.6.0 that offers generally available Tiered Storage and automatically sends you storage capacity alerts when you are at risk of exhausting your storage.

Amazon OpenSearch Service Ingestion – You can migrate your data from Elasticsearch version 7.x clusters to the latest versions of Amazon OpenSearch Service and use persistent buffering to protect the durability of incoming data.

Amazon RDS – Amazon RDS for MySQL now supports creating active-active clusters using the Group Replication plugin, upgrading MySQL 5.7 snapshots to MySQL 8.0, and the Innovation Release version of MySQL 8.1.

Amazon RDS Custom for SQL Server extends point-in-time recovery support to up to 1,000 databases; supports Service Master Key retention to use transparent data encryption (TDE), table- and column-level encryption, DBMail, and linked servers; and supports the SQL Server Developer edition with bring your own media (BYOM).

Additionally, Amazon RDS Multi-AZ deployments with two readable standbys now support minor version upgrades and system maintenance updates with typically less than one second of downtime when using Amazon RDS Proxy.

AWS Partner Central – You can use an improved user experience in AWS Partner Central to build and promote your offerings and the new Investments tab in the Partner Analytics Dashboard to gain actionable insights. You can now link accounts and associated users between Partner Central and AWS Marketplace and use an enhanced co-sell experience with APN Customer Engagements (ACE) manager.

Amazon QuickSight – You can programmatically manage user access and custom permissions for roles to restrict QuickSight functionality, using APIs, for QuickSight accounts integrated with IAM Identity Center or Active Directory. You can also use shared restricted folders, a Contributor role, and support for data source asset types in folders, as well as the Custom Week Start feature, an addition designed to enhance the data analysis experience for customers across diverse industries and social contexts.

AWS Trusted Advisor – You can use new APIs to programmatically access Trusted Advisor best practices checks, recommendations, and prioritized recommendations and 37 new Amazon RDS checks that provide best practices guidance by analyzing DB instance configuration, usage, and performance data.

There’s a lot more launch news that I haven’t covered. See AWS What’s New for more details.

See you virtually in AWS re:Invent
Next week at AWS re:Invent 2023 we’ll hear the latest from AWS, learn from experts, and connect with the global cloud community in Las Vegas. If you’re coming, check out the agenda, session catalog, and attendee guides before your departure.

If you’re not able to attend re:Invent in person this year, we’re offering the option to livestream our Keynotes and Innovation Talks. With registration for an online pass, you will have access to on-demand keynotes, Innovation Talks, and selected breakout sessions after the event.

Channy

Announcing Amazon EC2 Capacity Blocks for ML to reserve GPU capacity for your machine learning workloads

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/announcing-amazon-ec2-capacity-blocks-for-ml-to-reserve-gpu-capacity-for-your-machine-learning-workloads/

Recent advancements in machine learning (ML) have unlocked opportunities for customers across organizations of all sizes and industries to reinvent new products and transform their businesses. However, the growth in demand for GPU capacity to train, fine-tune, experiment with, and run inference on these ML models has outpaced industry-wide supply, making GPUs a scarce resource. Access to GPU capacity is an obstacle for customers whose capacity needs fluctuate depending on the research and development phase they’re in.

Today, we are announcing Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML, a new Amazon EC2 usage model that further democratizes ML by making it easy to access GPU instances to train and deploy ML and generative AI models. With EC2 Capacity Blocks, you can reserve hundreds of GPUs colocated in EC2 UltraClusters designed for high-performance ML workloads, using Elastic Fabric Adapter (EFA) networking in a petabit-scale, nonblocking network, to deliver the best network performance available in Amazon EC2.

This is an innovative new way to schedule GPU instances where you can reserve the number of instances you need for a future date for just the amount of time you require. EC2 Capacity Blocks are currently available for Amazon EC2 P5 instances powered by NVIDIA H100 Tensor Core GPUs in the AWS US East (Ohio) Region. With EC2 Capacity Blocks, you can reserve GPU instances in just a few clicks and plan your ML development with confidence. EC2 Capacity Blocks make it easy for anyone to predictably access EC2 P5 instances that offer the highest performance in EC2 for ML training.

EC2 Capacity Block reservations work similarly to hotel room reservations. With a hotel reservation, you specify the date and duration you want your room for and the size of bed you’d like (a queen or king bed, for example). Likewise, with EC2 Capacity Block reservations, you select the date and duration you require GPU instances and the size of the reservation (the number of instances). On your reservation start date, you’ll be able to access your reserved EC2 Capacity Block and launch your P5 instances. At the end of the EC2 Capacity Block duration, any instances still running will be terminated.

You can use EC2 Capacity Blocks when you need capacity assurance to train or fine-tune ML models, run experiments, or plan for future surges in demand for ML applications. Alternatively, you can continue using On-Demand Capacity Reservations for all other workload types that require compute capacity assurance, such as business-critical applications, regulatory requirements, or disaster recovery.

Getting started with Amazon EC2 Capacity Blocks for ML
To reserve your Capacity Blocks, choose Capacity Reservations on the Amazon EC2 console in the US East (Ohio) Region. You can see two capacity reservation options. Select Purchase Capacity Blocks for ML and then Get started to start looking for an EC2 Capacity Block.

Choose your total capacity and specify how long you need the EC2 Capacity Block. You can reserve an EC2 Capacity Block in the following sizes: 1, 2, 4, 8, 16, 32, or 64 p5.48xlarge instances. You can reserve EC2 Capacity Blocks for 1–14 days in 1-day increments, and they can be purchased up to 8 weeks in advance.

EC2 Capacity Block prices are dynamic and depend on total available supply and demand at the time you purchase the EC2 Capacity Block. You can adjust the size, duration, or date range in your specifications to search for other EC2 Capacity Block options. When you select Find Capacity Blocks, AWS returns the lowest-priced offering available that meets your specifications in the date range you have specified. At this point, you will be shown the price for the EC2 Capacity Block.

After reviewing EC2 Capacity Blocks details, tags, and total price information, choose Purchase. The total price of an EC2 Capacity Block is charged up front, and the price does not change after purchase. The payment will be billed to your account within 12 hours after you purchase the EC2 Capacity Blocks.

All EC2 Capacity Blocks reservations start at 11:30 AM Coordinated Universal Time (UTC). EC2 Capacity Blocks can’t be modified or canceled after purchase.

You can also use AWS Command Line Interface (AWS CLI) and AWS SDKs to purchase EC2 Capacity Blocks. Use the describe-capacity-block-offerings API to provide your cluster requirements and discover an available EC2 Capacity Block for purchase.

$ aws ec2 describe-capacity-block-offerings \
          --instance-type p5.48xlarge \
          --instance-count 4 \
          --start-date-range 2023-10-30T00:00:00Z \
          --end-date-range 2023-11-01T00:00:00Z \
          --capacity-duration 48

After you find an available EC2 Capacity Block with the CapacityBlockOfferingId and capacity information from the preceding command, you can use the purchase-capacity-block-reservation API to purchase it.

$ aws ec2 purchase-capacity-block-reservation \
          --capacity-block-offering-id cbr-0123456789abcdefg \
          --instance-platform Linux/UNIX

For more information about new EC2 Capacity Blocks APIs, see the Amazon EC2 API documentation.

Your EC2 Capacity Block has now been scheduled successfully. On the scheduled start date, your EC2 Capacity Block will become active. To use an active EC2 Capacity Block on your starting date, choose the capacity reservation ID for your EC2 Capacity Block. You can see a breakdown of the reserved instance capacity, which shows how the capacity is currently being utilized in the Capacity details section.

To launch instances into your active EC2 Capacity Block, choose Launch instances and follow the normal process of launching EC2 instances and running your ML workloads.

In the Advanced details section, choose Capacity Blocks as the purchase option and select the capacity reservation ID of the EC2 Capacity Block you’re trying to target.
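
In code, targeting an active Capacity Block might look like the following Boto3 sketch. The AMI, subnet, and capacity reservation IDs are placeholders, and the capacity-block market type is an assumption based on the EC2 API for this feature, so confirm it against the current documentation.

# Sketch: launch a P5 instance into an active EC2 Capacity Block with Boto3.
# The AMI, subnet, and capacity reservation IDs are placeholders; the
# "capacity-block" market type is assumed from the EC2 API for this feature.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder (for example, a Deep Learning AMI)
    InstanceType="p5.48xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder
    InstanceMarketOptions={"MarketType": "capacity-block"},
    CapacityReservationSpecification={
        "CapacityReservationTarget": {
            "CapacityReservationId": "cr-0123456789abcdef0"  # your Capacity Block reservation ID
        }
    },
)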

As your EC2 Capacity Block end time approaches, Amazon EC2 will emit an event through Amazon EventBridge, letting you know your reservation is ending soon so you can checkpoint your workload. Any instances running in the EC2 Capacity Block go into a shutting-down state 30 minutes before your reservation ends. The amount you were charged for your EC2 Capacity Block does not include this time period. When your EC2 Capacity Block expires, any instances still running will be terminated.

Now available
Amazon EC2 Capacity Blocks are now available for p5.48xlarge instances in the AWS US East (Ohio) Region. You can view the price of an EC2 Capacity Block before you reserve it, and the total price of an EC2 Capacity Block is charged up-front at the time of purchase. For more information, see the EC2 Capacity Blocks pricing page.

To learn more, see the EC2 Capacity Blocks documentation and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy

Rotate Your SSL/TLS Certificates Now – Amazon RDS and Amazon Aurora Expire in 2024

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/rotate-your-ssl-tls-certificates-now-amazon-rds-and-amazon-aurora-expire-in-2024/

Don’t be surprised if you have seen the Certificate Update in the Amazon Relational Database Service (Amazon RDS) console.

If you use or plan to use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) with certificate verification to connect to your database instances of Amazon RDS for MySQL, MariaDB, SQL Server, Oracle, PostgreSQL, and Amazon Aurora, you should rotate to the new certificate authority (CA) certificates in both your DB instances and your applications before the current root certificate expires.

Most SSL/TLS certificates (rds-ca-2019) for your DB instances will expire in 2024, following the certificate update in 2020. In December 2022, we released new CA certificates that are valid for 40 years (rds-ca-rsa2048-g1) and 100 years (rds-ca-rsa4096-g1 and rds-ca-ecc384-g1). So, if you rotate your CA certificates now, you won’t need to do it again for a long time.

Here is a list of affected Regions and their expiration dates of rds-ca-2019:

Expiration Date Regions
May 8, 2024 Middle East (Bahrain)
August 22, 2024 US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), and South America (São Paulo)
September 9, 2024 China (Beijing), China (Ningxia)
October 26, 2024 Africa (Cape Town)
October 28, 2024 Europe (Milan)
Not affected until 2061 Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), Middle East (UAE), AWS GovCloud (US-East), and AWS GovCloud (US-West)

The following steps demonstrate how to rotate your certificates to maintain connectivity from your application to your database instances.

Step 1 – Identify your impacted Amazon RDS resources
As I said, you can identify the total number of affected DB instances in the Certificate update page of the Amazon RDS console and see all of your affected DB instances. Note: This page only shows the DB instances for the current Region. If you have DB instances in more than one Region, check the certificate update page in each Region to see all DB instances with old SSL/TLS certificates.

You can also use the AWS Command Line Interface (AWS CLI) to call describe-db-instances and find instances that use the expiring CA. The following query shows a list of RDS instances in your account in the us-east-1 Region that are not yet using one of the new CAs.

$ aws rds describe-db-instances --region us-east-1 |
      jq -r '.DBInstances[] |
      select ((.CACertificateIdentifier != "rds-ca-rsa2048-g1") and
              (.CACertificateIdentifier != "rds-ca-rsa4096-g1") and
              (.CACertificateIdentifier != "rds-ca-ecc384-g1")) |
      "DBInstanceIdentifier: \(.DBInstanceIdentifier), CACertificateIdentifier: \(.CACertificateIdentifier)"'

Step 2 – Updating your database clients and applications
Before applying the new certificate on your DB instances, you should update the trust store of any clients and applications that use SSL/TLS with the server certificate to connect. There’s currently no easy way to determine from your DB instances themselves whether your applications require certificate verification as a prerequisite to connect. The only option is to inspect your applications’ source code or configuration files.

Although the DB engine-specific documentation outlines what to look for in most common database connectivity interfaces, we strongly recommend you work with your application developers to determine whether certificate verification is used and the correct way to update the client applications’ SSL/TLS certificates for your specific applications.

To update certificates for your application, you can use the new certificate bundle that contains certificates for both the old and new CA so you can upgrade your application safely and maintain connectivity during the transition period.
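
As a concrete example, a PostgreSQL client using psycopg2 can point at the combined bundle and require full verification. The host, credentials, and bundle path below are placeholders, and other engines have equivalent options (for example, an ssl_ca setting in MySQL connectors).

# Sketch: connect to an RDS for PostgreSQL instance with certificate verification
# against the combined CA bundle (old and new CAs). The host, credentials, and
# bundle path are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="mydbinstance.abc123.us-east-1.rds.amazonaws.com",  # placeholder
    port=5432,
    dbname="postgres",
    user="admin",
    password="my-password",
    sslmode="verify-full",                       # verify the server certificate and hostname
    sslrootcert="/opt/certs/global-bundle.pem",  # combined old + new CA bundle
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])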

For information about checking for SSL/TLS connections and updating applications for each DB engine, see the engine-specific topics in the Amazon RDS and Amazon Aurora user guides.

Step 3 – Test CA rotation on a non-production RDS instance
If you have updated the new certificates in all your trust stores, you should test with an RDS instance in a non-production environment. Set up this test in a development environment with the same database engine and version as your production environment. The test environment should also be deployed with the same code and configurations as production.

To rotate a new certificate in your test database instance, choose Modify for the DB instance that you want to modify in the Amazon RDS console.

In the Connectivity section, choose rds-ca-rsa2048-g1.

Choose Continue to check the summary of modifications. If you want to apply the changes immediately, choose Apply immediately.

To use the AWS CLI to change the CA from rds-ca-2019 to rds-ca-rsa2048-g1 for a DB instance, call the modify-db-instance command and specify the DB instance identifier with the --ca-certificate-identifier option.

$ aws rds modify-db-instance \
          --db-instance-identifier <mydbinstance> \
          --ca-certificate-identifier rds-ca-rsa2048-g1 \
          --apply-immediately

You rotate new certificates manually in your production database instances in the same way. Make sure your application reconnects over SSL/TLS without any issues after the rotation, using the trust store or CA certificate bundle you referenced.

When you create a new DB instance, the default CA is still rds-ca-2019 until January 25, 2024, when it will change to rds-ca-rsa2048-g1. To have new DB instances use the new CA, you can set up a CA override to ensure that all new instance launches use the CA of your choice.

$ aws rds modify-certificates \
          --certificate-identifier rds-ca-rsa2048-g1 \
          --region <region name>

You should do this in all the Regions where you have RDS DB instances.

Step 4 – Safely update your production RDS instances
After you’ve completed testing in a non-production environment, you can start rotating the CA certificates of your RDS databases in your production environment. You can rotate your DB instance manually as shown in Step 3. It’s worth noting that many modern engines do not require a restart, but it’s still a good idea to schedule the rotation in your maintenance window.

In the Certificate update page of Step 1, choose the DB instance you want to rotate. By choosing Schedule, you can schedule the certificate rotation for your next maintenance window. By choosing Apply now, you can apply the rotation immediately.

If you choose Schedule, you’re prompted to confirm the certificate rotation. This prompt also states the scheduled window for your update.

After your certificate is updated (either immediately or during the maintenance window), you should ensure that the database and the application continue to work as expected.

Most modern DB engines do not require restarting your database to update the certificate. If you don’t want to restart the database just for the CA update, you can use the --no-certificate-rotation-restart flag in the modify-db-instance command.

$ aws rds modify-db-instance \
          --db-instance-identifier <mydbinstance> \
          --ca-certificate-identifier rds-ca-rsa2048-g1 \
          --no-certificate-rotation-restart

To check whether your engine requires a restart, you can check the SupportsCertificateRotationWithoutRestart field in the output of the describe-db-engine-versions command. You can use the following command to see which engines support rotation without a restart:

$ aws rds describe-db-engine-versions \
          --engine <engine> --include-all --region <region> |
          jq -r '.DBEngineVersions[] |
          "EngineName: \(.Engine),
           EngineVersion: \(.EngineVersion),
           SupportsCertificateRotationWithoutRestart: \(.SupportsCertificateRotationWithoutRestart),
           SupportedCAs: \(.SupportedCACertificateIdentifiers | join(", "))"'

Even if you don’t use SSL/TLS for your database instances, I recommend rotating your CA. You may need to use SSL/TLS in the future, and some database connectors, such as the JDBC and ODBC connectors, check for a valid certificate before connecting; using an expired CA can prevent you from doing that.

To learn about updating your certificate by modifying your DB instance manually, automatic server certificate rotation, and finding a sample script for importing certificates into your trust store, see the Amazon RDS User Guide or the Amazon Aurora User Guide.

Things to Know
Here are a couple of important things to know:

  • Amazon RDS Proxy and Amazon Aurora Serverless use certificates from the AWS Certificate Manager (ACM). If you’re using Amazon RDS Proxy when you rotate your SSL/TLS certificate, you don’t need to update applications that use Amazon RDS Proxy connections. If you’re using Aurora Serverless, rotating your SSL/TLS certificate isn’t required.
  • Now through January 25, 2024 – new RDS DB instances will have the rds-ca-2019 certificate by default, unless you specify a different CA via the ca-certificate-identifier option on the create-db-instance API, or you specify a default CA override for your account as mentioned in the section above (see the sample command after this list). Starting January 26, 2024 – any new database instances will default to using the rds-ca-rsa2048-g1 certificate. If you wish for new instances to use a different certificate, you can specify which certificate to use with the AWS console or the AWS CLI. For more information, see the create-db-instance API documentation.
  • Except for Amazon RDS for SQL Server, most modern RDS and Aurora engines support certificate rotation without a database restart in the latest versions. Call describe-db-engine-versions and check for the response field SupportsCertificateRotationWithoutRestart. If this field is set to true, then your instance will not require a database restart for CA update. If set to false, a restart will be required. For more information, see Setting the CA for your database in the AWS documentation.
  • Your rotated CA signs the DB server certificate, which is installed on each DB instance. The DB server certificate identifies the DB instance as a trusted server. Its validity is either 1 year or 3 years, depending on the DB engine and version. If your CA supports automatic server certificate rotation, RDS automatically handles the rotation of the DB server certificate too. For more information about DB server certificate rotation, see Automatic server certificate rotation in the AWS documentation.
  • You can choose to use the 40-year validity certificate (rds-ca-rsa2048-g1) or the 100-year certificates. The expiring CA used by your RDS instance uses the RSA 2048 key algorithm and the SHA256 signing algorithm. The rds-ca-rsa2048-g1 certificate uses the same configuration and is therefore best suited for compatibility. The 100-year certificates (rds-ca-rsa4096-g1 and rds-ca-ecc384-g1) use more secure encryption schemes than rds-ca-rsa2048-g1. If you want to use them, test well in pre-production environments to double-check that your database client and server support the necessary encryption schemes in your Region.
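
As a concrete example of the second item above, the following sketch creates a new DB instance that is pinned to the new CA; the identifier, engine, and credential values are placeholders, and you would add whatever other options your instance needs:

$ aws rds create-db-instance \
          --db-instance-identifier <mynewdbinstance> \
          --db-instance-class db.t3.micro \
          --engine postgres \
          --master-username <admin user name> \
          --master-user-password <admin user password> \
          --allocated-storage 20 \
          --ca-certificate-identifier rds-ca-rsa2048-g1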

Just Do It Now!
Even if you have one year left until your certificate expires, you should start planning with your team now. Updating the SSL/TLS certificate may require restarting your DB instance before the expiration date. We strongly recommend that you schedule your application updates before the expiry date and run tests on a staging or pre-production database environment before completing these steps in a production environment. To learn more about updating SSL/TLS certificates, see the Amazon RDS User Guide and the Amazon Aurora User Guide.

If you don’t use SSL/TLS connections, note that database security best practice is to use SSL/TLS connectivity and to request certificate verification as part of the connection authentication process. To learn more about using SSL/TLS to encrypt a connection to your DB instance, see the Amazon RDS User Guide and the Amazon Aurora User Guide.

If you have questions or issues, contact AWS Support according to your Support plan.

Channy

New – Amazon EC2 C7a Instances Powered By 4th Gen AMD EPYC Processors for Compute Optimized Workloads

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-c7a-instances-powered-by-4th-gen-amd-epyc-processors-for-compute-optimized-workloads/

We launched the compute optimized Amazon EC2 C6a instances in February 2022 powered by 3rd Gen AMD EPYC (Milan) processors, running at frequencies up to 3.6 GHz.

Today, we’re announcing the general availability of new, compute optimized Amazon EC2 C7a instances, powered by the 4th Gen AMD EPYC (Genoa) processors with a maximum frequency of 3.7 GHz, which offer up to 50 percent higher performance compared to C6a instances. You can use this increased performance to process data faster, consolidate workloads, and lower the cost of ownership.

These instances are ideal for running compute-intensive workloads such as high-performance web servers, batch processing, ad serving, machine learning, multiplayer gaming, video encoding, and high performance computing (HPC) workloads such as scientific modeling.

C7a instances support AVX-512, Vector Neural Network Instructions (VNNI), and brain floating point (bfloat16). These instances feature Double Data Rate 5 (DDR5) memory, which enables high-speed access to data in-memory, and deliver 2.25 times more memory bandwidth compared to the previous generation instances for lower latency.

C7a instances feature sizes of up to 192 vCPUs with 384 GiB RAM. There is also a new medium instance size with 1 vCPU and 2 GiB of memory, which enables you to right-size your workloads more accurately. Here are the detailed specs:

Name vCPUs Memory (GiB) Network Bandwidth (Gbps) EBS Bandwidth (Gbps)
c7a.medium 1 2 Up to 12.5 Up to 10
c7a.large 2 4 Up to 12.5 Up to 10
c7a.xlarge 4 8 Up to 12.5 Up to 10
c7a.2xlarge 8 16 Up to 12.5 Up to 10
c7a.4xlarge 16 32 Up to 12.5 Up to 10
c7a.8xlarge 32 64 12.5 10
c7a.12xlarge 48 96 18.75 15
c7a.16xlarge 64 128 25 20
c7a.24xlarge 96 192 37.5 30
c7a.32xlarge 128 256 50 40
c7a.48xlarge 192 384 50 40
c7a.metal-48xl 192 384 50 40
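
If you want to double-check these sizes in your own account, one option is the describe-instance-types command, as in the following sketch; the query expression selects only a few of the available fields:

$ aws ec2 describe-instance-types \
          --filters "Name=instance-type,Values=c7a.*" \
          --query "InstanceTypes[].[InstanceType, VCpuInfo.DefaultVCpus, MemoryInfo.SizeInMiB]" \
          --output table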

C7a instances have up to 50 Gbps enhanced networking and 40 Gbps EBS bandwidth, and you can attach up to 128 EBS volumes to an instance, compared to up to 28 EBS volume attachments with the previous generation instances.

C7a instances support always-on memory encryption with AMD secure memory encryption (SME) and new AVX-512 instructions for accelerating encryption and decryption algorithms, convolutional neural network (CNN) based algorithms, financial analytics, and video encoding workloads. C7a instances also support AES-256 compared to AES-128 in C6a instances for enhanced security.

These instances are built on the AWS Nitro System and support Elastic Fabric Adapter (EFA) for workloads that benefit from lower network latency and highly scalable inter-node communication, such as high-performance computing and video processing.

Now Available
Amazon EC2 C7a instances are now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland). As usual with Amazon EC2, you only pay for what you use. For more information, see the Amazon EC2 pricing page.

To learn more, visit the EC2 C7a instances page and AWS/AMD partner page. You can send feedback to [email protected], AWS re:Post for EC2, or through your usual AWS Support contacts.

Channy

Amazon DataZone Now Generally Available – Collaborate on Data Projects across Organizational Boundaries

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-datazone-now-generally-available-collaborate-on-data-projects-across-organizational-boundaries/

Today, we’re announcing the general availability of Amazon DataZone, a new data management service to catalog, discover, analyze, share, and govern data between data producers and consumers in your organization.

At AWS re:Invent 2022, we preannounced Amazon DataZone, and in March 2023, we previewed it publicly.

During the keynote of the last re:Invent, Swami Sivasubramanian, vice president of Databases, Analytics, and Machine Learning at AWS, said, “I have had the benefit of being an early customer of DataZone to run the AWS weekly business review meeting where we assemble data from our sales pipeline and revenue projections to inform our business strategy.”

During the keynote, a demo led by Shikha Verma, head of product for Amazon DataZone, demonstrated how organizations can use the product to create more effective advertising campaigns and get the most out of their data.

“Every enterprise is made up of multiple teams that own and use data across a variety of data stores. Data people have to pull this data together but do not have an easy way to access or even have visibility to this data. DataZone provides a unified environment where everyone in an organization—from data producers to consumers, can go to access and share data in a governed manner.”

With Amazon DataZone, data producers populate the business data catalog with structured data assets from AWS Glue Data Catalog and Amazon Redshift tables. Data consumers search and subscribe to data assets in the data catalog and share with other business use case collaborators. Consumers can analyze their subscribed data assets with tools—such as Amazon Redshift or Amazon Athena query editors—that are directly accessed from the Amazon DataZone portal. The integrated publishing-and-subscription workflow provides access-auditing capabilities across projects.

Introducing Amazon DataZone
For those of you who aren’t yet familiar with Amazon DataZone, let me introduce you to its key concept and capabilities.

Amazon DataZone Domain represents the distinct boundary of a line of business (LOB) or a business area within an organization that can manage its own data, including its own data assets and its own definition of data or business terminology, and may have its own governing standards. The domain includes all core components such as the data portal, business data catalog, projects and environments, and built-in workflows.

  1. Data portal (outside the AWS Management Console) – This is a web application where different users can go to catalog, discover, govern, share, and analyze data in a self-service fashion. The data portal authenticates users with AWS Identity and Access Management (IAM) credentials or existing credentials from your identity provider through AWS IAM Identity Center.
  2. Business data catalog – In your catalog, you can define the taxonomy or the business glossary. You can use this component to catalog data across your organization with business context and thus enable everyone in your organization to find and understand data quickly.
  3. Data projects & environments – You can use projects to simplify access to AWS analytics tools by creating business use case–based groupings of people, data assets, and analytics tools. Amazon DataZone projects provide a space where project members can collaborate, exchange data, and share data assets. Within projects, you can create environments that provide the necessary infrastructure to project members, such as analytics tools and storage, so that project members can easily produce new data or consume data they have access to.
  4. Governance and access control – You can use built-in workflows that allow users across the organization to request access to data in the catalog and owners of the data to review and approve those subscription requests. Once a subscription request is approved, Amazon DataZone can automatically grant access by managing permissions at underlying data stores such as AWS Lake Formation and Amazon Redshift.

To learn more, see Amazon DataZone Terminology and Concepts.

Getting Started with Amazon DataZone
To get started, consider a scenario where a product marketing team wants to run campaigns to drive product adoption. To do this, they need to analyze product sales data owned by a sales team. In this walkthrough, the sales team, which acts as the data producer, publishes sales data in Amazon DataZone. Then the marketing team, which acts as the data consumer, subscribes to sales data and analyzes it in order to build a campaign strategy.

To understand how DataZone works, let’s walk through a condensed version of the Getting started guide for Amazon DataZone.

1. Create a Domain
When you first start using DataZone, you begin by creating a domain; all core components, such as the business data catalog, projects, and environments in the data portal, then exist within that domain. Go to the Amazon DataZone console and choose Create domain.

Enter a Domain name and a description, and leave all other values as default.

For example, in the Service access section, if you choose Create and use a new role by default, Amazon DataZone will automatically create a new role with necessary permissions that authorize DataZone to make API calls on behalf of users within the domain. Check the Quick setup option where DataZone can take care of all the setup steps.

Finally, choose Create domain. Amazon DataZone creates the necessary IAM roles and enables this domain to use resources within your account such as AWS Glue Data Catalog, Amazon Redshift, and Amazon Athena. Domain creation can take several minutes to complete. Wait for the domain to have a status of Available.
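
If you prefer to script domain creation, the AWS CLI also exposes it. The following is a rough sketch with placeholder values; I’m assuming the parameter names shown here, in particular --domain-execution-role, so check aws datazone create-domain help in your environment before using it:

$ aws datazone create-domain \
          --name "my-sales-domain" \
          --description "Domain for sales and marketing data" \
          --domain-execution-role arn:aws:iam::<account-id>:role/<DataZoneDomainExecutionRole>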

2. Create a Project and Environment in the Data Portal
After the domain is successfully created, select it, and on the domain’s summary page, note the data portal URL for the root domain. You can use this URL to access your Amazon DataZone data portal. Choose Open data portal.

To create a new data project as the sales team to publish sales data, choose Create Project.

In the dialog box, enter “Sales producer project” as the Name, then enter a Description for this project and choose Create.

Once you have the project, you need to create an environment to work with data and analytics tools such as Amazon Athena or Amazon Redshift in this project. Choose Create environment on the overview page or after choosing the Environments tab.

Enter “publish-environment” as the Name, then enter a Description for this environment and choose Environment profile. An environment profile is a pre-defined template that includes technical details required to create an environment such as which AWS account, Region, VPC details, and resources and tools are added to the project.

You can select a couple of default environment profiles. Choosing DataLakeProfile enables you to publish data from your Amazon S3 and AWS Glue based data lakes. It also simplifies querying the AWS Glue tables that you have access to using Amazon Athena.

Next, ignore all the optional parameters and choose Create environment. It takes about a minute for the environment to create certain resources in your AWS account such as IAM roles, an Amazon S3 suffix, AWS Glue databases, and an Athena workgroup, which makes it easier for members of a project to produce and consume data in the data lake.

3. Publish Data in the Data Portal
You now have an environment in which to publish the data in your AWS Glue table. To create this table in Amazon Athena, use the Query data link on the right-hand side of the Environments page.

This opens the Athena query editor in a new tab. Select publishenvironment_pub_db from the database dropdown and then paste the following query into the query editor. This will create a table called catalog_sales in the environment’s AWS Glue database.

CREATE TABLE catalog_sales AS 
SELECT 146776932 AS order_number, 23 AS quantity, 23.4 AS wholesale_cost, 45.0 as list_price, 43.0 as sales_price, 2.0 as discount, 12 as ship_mode_sk,13 as warehouse_sk, 23 as item_sk, 34 as catalog_page_sk, 232 as ship_customer_sk, 4556 as bill_customer_sk
UNION ALL SELECT 46776931, 24, 24.4, 46, 44, 1, 14, 15, 24, 35, 222, 4551
UNION ALL SELECT 46777394, 42, 43.4, 60, 50, 10, 30, 20, 27, 43, 241, 4565
UNION ALL SELECT 46777831, 33, 40.4, 51, 46, 15, 16, 26, 33, 40, 234, 4563
UNION ALL SELECT 46779160, 29, 26.4, 50, 61, 8, 31, 15, 36, 40, 242, 4562
UNION ALL SELECT 46778595, 43, 28.4, 49, 47, 7, 28, 22, 27, 43, 224, 4555
UNION ALL SELECT 46779482, 34, 33.4, 64, 44, 10, 17, 27, 43, 52, 222, 4556
UNION ALL SELECT 46779650, 39, 37.4, 51, 62, 13, 31, 25, 31, 52, 224, 4551
UNION ALL SELECT 46780524, 33, 40.4, 60, 53, 18, 32, 31, 31, 39, 232, 4563
UNION ALL SELECT 46780634, 39, 35.4, 46, 44, 16, 33, 19, 31, 52, 242, 4557
UNION ALL SELECT 46781887, 24, 30.4, 54, 62, 13, 18, 29, 24, 52, 223, 4561

You can see the two databases in the dropdown menu. The publishenvironment_pub_db database provides you with a space to produce new data and choose to publish it to the DataZone catalog. The other one, publishenvironment_sub_db, is for project members when they subscribe to or access data in the catalog within that project.

Make sure that the catalog_sales table is successfully created. Now you have a data asset that can be published into the Amazon DataZone catalog.

As the data producer, you can now go back to the data portal and publish this table into the DataZone catalog. Choose the Data tab in the top menu and Data sources in the left navigation pane.

You can see a default data source automatically created in your environment. When you open this data source, you will see your environment’s publishing database, where you just created the catalog_sales table.

This data source will bring all the tables it finds in the publishing database into DataZone. By default, automated metadata generation is enabled, which means that any asset the data source brings into DataZone will automatically get generated business names for the table and its columns. Choose Run in this data source.

Once the data source has finished running, you can see the catalog sales table in the Data Source Runs.

You can open this asset and see that the publishing job could automatically extract the technical metadata including the schema of the table and several other technical details such as AWS account, Region, and physical location of the data.

If they look correct, you can simply accept these recommendations either by clicking the brain icon in each recommended item or the Accept all button for all recommendations. When you are ready to publish, choose Publish asset and reconfirm in the dialog box.

4. Subscribe Data as a Data Consumer
Now let’s switch roles to the marketing team and see how you can subscribe to, or request access to, this table. Repeat the previous steps to create a new project called “Marketing consumer project” and a new environment called “subscriber-environment” as the data consumer.

In the newly created project, when you type “catalog sales” in the search bar, you can see the published table in the search results. Choose the Catalog Sales Data.

In the catalog, choose Subscribe.

In the Subscribe to Catalog Sales Data window, select your marketing consumer project, provide a reason for the subscription request, and then choose Subscribe.

When you get a subscription request as a data producer, it will notify you through a task in the sales producer project. Since you are acting as both subscriber and publisher here, you will see a notification.

When you click on this notification, it will open the subscription request including which project has requested access, who the requestor is, and why they need access. Choose Approve and provide a reason for approval.

Now that the subscription has been approved, you can see the catalog sales data in your marketing consumer project. To confirm this, choose the Data tab in the top menu and Data sources in the left navigation pane.


To analyze your subscribed data, choose the Environments tab in the top menu and then the subscriber-environment you created in the marketing consumer project. It shows a new Query data link in the right pane.

You can see that the catalog sales table shows up under the subscription database. To make sure that you have access to this table, you can preview it and confirm that the query executes successfully.

Choosing the Query data link opens the Athena query editor in a new tab. Select subscribeenvironment_sub_db from the database dropdown, and then enter your query into the query editor.

You can now run any queries on the sales data table that you have subscribed to as a consumer (marketing team) and that was published into the business data catalog by a producer (sales team).
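
If you would rather run a query from the command line than from the Athena console, the following sketch starts the same kind of query through the Athena API; the workgroup name is a placeholder for the workgroup that your environment created, and the database and column names come from the example above:

$ aws athena start-query-execution \
          --work-group <environment workgroup name> \
          --query-execution-context Database=subscribeenvironment_sub_db \
          --query-string "SELECT ship_mode_sk, SUM(quantity) AS total_quantity FROM catalog_sales GROUP BY ship_mode_sk"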

For more detailed demos, such as publishing AWS Glue tables and Amazon Redshift tables and views, see the YouTube playlist.

What’s New at GA?
During the preview, we had lots of interest and great feedback from customers. I want to quickly review the features and introduce some improvements:

Enterprise-Ready Business Catalog – To add business context and make data discoverable by everyone in the organization, you can customize the catalog with automated metadata generation which uses machine learning to automatically generate business names of data assets and columns within those assets. We also improved metadata curation functionality. At GA, you can attach multiple business glossary terms to assets and glossary terms to individual columns in the asset.

Self-Service for Data Users – To provide data autonomy for users to publish and consume data, you can customize and bring any type of asset to the catalog using APIs. Data publishers can automate metadata discovery through ingestion jobs or manually publish files from Amazon Simple Storage Service (Amazon S3). Data consumers can use faceted search to quickly find and understand the data. Users can be notified of updates in the system or actions to be taken. These events are emitted to the customer’s event bus using Amazon EventBridge to customize actions.

Simplified Access to Analysis – At GA, projects serve as business use case-based logical containers. You can create a project and collaborate on specific business use case-based groupings of people, data, and analytics tools. Within the project, you can create an environment that provides the necessary infrastructure to project members, such as analytics tools and storage, so that project members can easily produce new data or consume data they have access to. This allows users to add multiple capabilities and analytics tools to the same project depending on their needs.

Governed Data Sharing – Data producers own and manage access to data with a subscription approval workflow that allows consumers to request access and data owners to approve. You can now set up subscription terms to be attached to assets when published and automate subscription grant fulfillment for AWS managed data lakes and Amazon Redshift with customizations using EventBridge events for other sources.

Now Available
Amazon DataZone is now generally available in eleven AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), and South America (São Paulo).

You can use the free trial of Amazon DataZone, which includes 50 users at no additional cost for the first 3 calendar months of usage. The free trial starts when you first create an Amazon DataZone domain in an AWS account. If you exceed the number of monthly users during your trial, you will be charged at the standard pricing.

To learn more, visit the product page and user guide. You can send feedback to AWS re:Post for Amazon DataZone or through your usual AWS Support contacts.

Channy

New – Amazon EC2 M2 Pro Mac Instances Built on Apple Silicon M2 Pro Mac Mini Computers

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-m2-pro-mac-instances-built-on-apple-silicon-m2-pro-mac-mini-computers/

Today, we are announcing the general availability of Amazon EC2 M2 Pro Mac instances. These instances deliver up to 35 percent faster performance over the existing M1 Mac instances when building and testing applications for Apple platforms.

New EC2 M2 Pro Mac instances are powered by Apple M2 Pro Mac mini computers featuring a 12-core CPU, 19-core GPU, 32 GiB of memory, and a 16-core Apple Neural Engine. They are uniquely enabled by the AWS Nitro System through high-speed Thunderbolt connections, offering these Mac mini computers as fully integrated and managed compute instances with up to 10 Gbps of Amazon VPC network bandwidth and up to 8 Gbps of Amazon EBS storage bandwidth. EC2 M2 Pro Mac instances support macOS Ventura (version 13.2 or later) as AMIs.

A Story of EC2 Mac Instances
When Jeff Barr first introduced Amazon EC2 Mac Instances in 2020, customers were surprised to be able to run macOS on Amazon EC2 to build, test, package, and sign applications developed with Xcode for Apple platforms, including macOS, iOS, iPadOS, tvOS, and watchOS.

In his keynote at AWS re:Invent 2020, Peter DeSantis revealed how EC2 Mac instances are built on the AWS Nitro System, which makes it possible to offer Apple Mac mini computers as fully integrated and managed compute instances with Amazon VPC networking and Amazon EBS storage, just like any other EC2 instance.

“We did not need to make any changes to the Mac hardware. We simply connected a Nitro controller via the Mac’s Thunderbolt connection. When you launch a Mac instance, your Mac-compatible Amazon Machine Image (AMI) runs directly on the Mac Mini, with no hypervisor. The Nitro controller sets up the instance and provides secure access to the network and any storage attached. And that Mac Mini can now natively use any AWS service.”

In July 2022, we introduced Amazon EC2 M1 Mac Instances built around the Apple-designed M1 System on Chip (SoC). Developers building applications for iPhone, iPad, Apple Watch, and Apple TV can choose either x86-based EC2 Mac instances or Arm-based EC2 M1 Mac instances. If you re-architect your apps to natively support Macs with Apple Silicon using EC2 M1 Mac instances, you can deliver up to 60 percent better price performance over the x86-based EC2 Mac instances for iPhone and Mac app build workloads, with all the benefits of AWS.

Many customers take advantage of EC2 Mac instances to deliver a complete end-to-end build pipeline on macOS on AWS. With EC2 Mac instances, they can scale their iOS build fleet; easily use custom macOS environments with AMIs; and debug any build or test failures with fully reproducible macOS environments.

Customers have reported up to 4x reduction in build times, up to 3x increase in parallel builds, up to 80 percent reduction in machine-related build failures, and up to 50 percent reduction in fleet size. They can continue to prioritize their time on innovating products and features while reducing the tedious effort required to manage on-premises macOS infrastructure.

To accelerate this innovation, EC2 Mac instances recently began to support replacing root volumes on a running EC2 Mac instance, enabling you to restore the root volume of an EC2 Mac instance to its initial launch state or to a specific snapshot, without requiring you to stop or terminate the instance.
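
For example, the following sketch replaces the root volume of a running Mac instance from a specific snapshot; the instance and snapshot IDs are placeholders, and omitting --snapshot-id restores the volume to its initial launch state:

$ aws ec2 create-replace-root-volume-task \
          --instance-id <mac instance id> \
          --snapshot-id <snapshot id>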

You can also use in-place operating system updates from within the guest environment on EC2 M1 Mac instances to a specific or latest macOS version, including the beta version, by registering your instances with the Apple Developer Program. Developers can now integrate the latest macOS features into their applications and test existing applications for compatibility before public macOS releases.

Getting Started with EC2 M2 Pro Instances
As with other EC2 Mac instances, EC2 M2 Pro Mac instances also support Dedicated Host tenancy with a minimum host allocation duration of 24 hours to align with macOS licensing.

To get started, you allocate a Mac dedicated host, a physical server fully dedicated for your own use in your AWS account. After the host is allocated, you can launch, stop, and start your own macOS environment as one instance on that dedicated host.

You can then start an EC2 Mac instance on the host. The procedure is no different from starting any other EC2 instance type. Choose your macOS AMI version and select the mac2-m2pro.metal instance type in the Application and OS Images section.

In the Advanced details section, select Dedicated host for Tenancy and the dedicated host you just allocated for Tenancy host ID.
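
If you script the same flow instead of using the console, it looks roughly like the following sketch; the Availability Zone, AMI ID, dedicated host ID, and key pair name are placeholders:

$ aws ec2 allocate-hosts \
          --instance-type mac2-m2pro.metal \
          --availability-zone <availability zone> \
          --quantity 1

$ aws ec2 run-instances \
          --instance-type mac2-m2pro.metal \
          --image-id <macOS AMI id> \
          --placement "Tenancy=host,HostId=<dedicated host id>" \
          --key-name <key pair name>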

When you use EC2 Mac instances for the first time, you can use SSH to connect to the newly launched instance as usual or enable Apple Remote Desktop and start a VNC session to the EC2 instance. To learn more, see Sebastien’s series of articles to launch and connect your Mac instance.

When you no longer need the Mac dedicated host, you can terminate your running Mac instance and release the underlying host. Note again that after being allocated, a Mac dedicated host can only be released after 24 hours to align with Apple’s macOS licensing.

Now Available
Amazon EC2 M2 Pro Mac instances are available in the US West (Oregon) and US East (Ohio) AWS Regions, with additional regions coming soon.

To learn more or get started, see Amazon EC2 Mac Instances or visit the EC2 Mac documentation.  You can send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy

New – Amazon EC2 R7a Instances Powered By 4th Gen AMD EPYC Processors for Memory Optimized Workloads

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r7a-instances-powered-by-4th-gen-amd-epyc-processors-for-memory-optimized-workloads/

We launched the memory optimized Amazon EC2 R6a instances in July 2022 powered by 3rd Gen AMD EPYC (Milan) processors, running at frequencies up to 3.6 GHz. Many customers who run workloads that are dependent on x86 instructions, such as SAP, are looking for ways to optimize their cloud utilization. They’re taking advantage of the compute choice that EC2 offers.

Today, we’re announcing the general availability of new memory optimized Amazon EC2 R7a instances powered by 4th Gen AMD EPYC (Genoa) processors with a maximum frequency of 3.7 GHz, which offer up to 50 percent higher performance compared to the previous generation instances. You can use this increased performance to process data faster, consolidate workloads, and lower the cost of ownership.

R7a instances also support AVX-512, Vector Neural Network Instructions (VNNI), and brain floating point (bfloat16). These instances feature Double Data Rate 5 (DDR5) memory, which enables high-speed access to data in-memory, and deliver 2.25 times more memory bandwidth compared to R6a instances for lower latency. Moreover, these instances support always-on memory encryption using AMD secure memory encryption (SME).

These instances are SAP-certified and ideal for high performance, memory-intensive workloads, such as SQL and NoSQL databases, distributed web scale in-memory caches, in-memory databases, real-time big data analytics, and Electronic Design Automation (EDA) applications.

R7a instances feature sizes of up to 192 vCPUs with 1536 GiB RAM. Here are the detailed specs:

Name vCPUs Memory (GiB) Network Bandwidth (Gbps) EBS Bandwidth (Gbps)
r7a.medium 1 8 Up to 12.5 Up to 10
r7a.large 2 16 Up to 12.5 Up to 10
r7a.xlarge 4 32 Up to 12.5 Up to 10
r7a.2xlarge 8 64 Up to 12.5 Up to 10
r7a.4xlarge 16 128 Up to 12.5 Up to 10
r7a.8xlarge 32 256 12.5 10
r7a.12xlarge 48 384 18.75 15
r7a.16xlarge 64 512 25 20
r7a.24xlarge 96 768 37.5 30
r7a.32xlarge 128 1024 50 40
r7a.48xlarge 192 1536 50 40

R7a instances have up to 50 Gbps enhanced networking and 40 Gbps EBS bandwidth, which is similar to R6a instances. You have a new medium instance size, which you can use to right-size your workloads more accurately, offering 1 vCPU and 8 GiB of memory. Additionally, with R7a instances you can attach up to 128 EBS volumes to an instance compared to up to 28 EBS volume attachments with R6a instances. R7a instances support AES-256 compared to AES-128 in R6a instances for enhanced security.
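
For example, taking advantage of the higher volume attachment limit works the same way as on any other instance; here is a minimal sketch with placeholder IDs:

$ aws ec2 attach-volume \
          --volume-id <volume id> \
          --instance-id <r7a instance id> \
          --device /dev/sdf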

R7a instances are built on the AWS Nitro System and support Elastic Fabric Adapter (EFA) for workloads that benefit from lower network latency and highly scalable inter-node communication, such as high-performance computing and video processing.

Now Available
Amazon EC2 R7a instances are now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland). As usual with Amazon EC2, you only pay for what you use. For more information, see the Amazon EC2 pricing page.

To learn more, visit the EC2 R7a instances page, and AWS/AMD partner page. You can send feedback to [email protected], AWS re:Post for EC2, or through your usual AWS Support contacts.

Channy

AWS Weekly Roundup: Farewell EC2-Classic, EBS at 15 Years, and More (Sept. 4, 2023)

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-farewell-ec2-classic-ebs-at-15-years-and-more-sept-4-2023/

Last week, there was some great reading about Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS) written by AWS tech leaders.

Dr. Werner Vogels wrote Farewell EC2-Classic, it’s been swell, celebrating the 17 years of loyal duty of the original version that started what we now know as cloud computing. You can read how it made the process of acquiring compute resources simple, even though the stack running behind the scenes was incredibly complex.

We have come a long way since 2006, and we’re not done innovating for our customers. As celebrated in this year’s AWS Storage Day, Amazon EBS was launched 15 years ago this month. James Hamilton, SVP and distinguished engineer at Amazon, wrote Amazon EBS at 15 Years, about how the service has evolved to handle over 100 trillion I/O operations a day, and transfers over 13 exabytes of data daily.

As Dr. Werner said in his piece, “it’s a reminder that building evolvable systems is a strategy, and revisiting your architectures with an open mind is a must.” Our innovation efforts driven by customer feedback continue today, and this week is no different.

Last Week’s Launches
Here are some launches that got my attention:

Renaming Amazon Kinesis Data Analytics to Amazon Managed Service for Apache Flink – You can now use Amazon Managed Service for Apache Flink, a fully managed and serverless service for you to build and run real-time streaming applications using Apache Flink. All your existing running applications in Kinesis Data Analytics will work as-is, without any changes. To learn more, see my blog post.

Extended Support for Amazon Aurora and Amazon RDS – You can now get more time for support, up to three years, for Amazon Aurora and Amazon RDS database instances running MySQL 5.7, PostgreSQL 11, and higher major versions. This will allow you time to upgrade to a new major version to help you meet your business requirements even after the community ends support for these versions.

Enhanced Starter Template for AWS Step Functions Workflow Studio – You can now use starter templates to streamline the process of creating and prototyping workflows swiftly, plus a new code mode, which enables builders to move easily between design and code authoring views. With the improved authoring experience in Workflow Studio, you can seamlessly alternate between a drag-and-drop visual builder experience or the new code editor so that you can pick your preferred tool to accelerate development.

To learn more, see Enhancing Workflow Studio with new features for streamlined authoring in the AWS Compute Blog.

Email Delivery History for Every Email in Amazon SES – You can now troubleshoot individual email delivery problems, confirm delivery of critical messages, and identify engaged recipients on a granular, single email basis. Email senders can investigate trends in delivery performance and see delivery and engagement status for each email sent using Amazon SES Virtual Deliverability Manager.

Response Streaming through Amazon SageMaker Real-time Inference – You can now continuously stream inference responses back to the client to help you build interactive experiences for various generative AI applications such as chatbots, virtual assistants, and music generators.

For more details on how to use response streaming along with examples, see Invoke to Stream an Inference Response and How containers should respond in the AWS documentation, and Elevating the generative AI experience: Introducing streaming support in Amazon SageMaker hosting in the AWS Machine Learning Blog.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you might have missed:

AI & Sports: How AWS & the NFL are Changing the Game – Over the last 5 years, AWS has partnered with the National Football League (NFL), helping fans better understand the game, helping broadcasters tell better stories, and helping teams use data to improve operations and player safety. Watch AWS CEO, Adam Selipsky, former NFL All-Pro Larry Fitzgerald, and the NFL Network’s Cynthia Frelund during their earlier livestream discussing the intersection of artificial intelligence and machine learning in sports.

Amazon Bedrock Story from Amazon Science – This is a good article explaining the benefits of using Amazon Bedrock to build and scale generative AI applications with leading foundation models, including Amazon’s Titan FMs, which focus on responsible AI to avoid toxic content.

Amazon EC2 Flexibility Score – This is an open source tool developed by AWS to assess any configuration used to launch instances through an Auto Scaling Group (ASG) against the recommended EC2 best practices. It converts the best practice adoption into a “flexibility score” that can be used to identify, improve, and monitor the configurations.

To learn more open-source news and updates, see this newsletter curated by my colleague Ricardo to bring you the latest open source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS re:Invent 2023 – Ready to start planning your re:Invent? Browse the session catalog now. Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community.

AWS Summits – The last in-person AWS Summit will be held in Johannesburg on Sept. 26.

AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Aotearoa (Sept. 6), Lebanon (Sept. 9), Munich (Sept. 14), Argentina (Sept. 16), Spain (Sept. 23), and Chile (Sept. 30). Visit the landing page to check out all the upcoming AWS Community Days.

CDK Day – A community-led fully virtual event on Sept. 29 with tracks in English and Spanish about CDK and related projects. Learn more at the website.

You can browse all upcoming AWS-led in-person and virtual events, and developer-focused events such as AWS DevDay.

Channy

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Announcing Amazon Managed Service for Apache Flink Renamed from Amazon Kinesis Data Analytics

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/announcing-amazon-managed-service-for-apache-flink-renamed-from-amazon-kinesis-data-analytics/

Today we are announcing the rename of Amazon Kinesis Data Analytics to Amazon Managed Service for Apache Flink, a fully managed and serverless service for you to build and run real-time streaming applications using Apache Flink.

We continue to deliver the same experience in your Flink applications without any impact on ongoing operations, developments, or business use cases. All your existing running applications in Kinesis Data Analytics will work as is without any changes.
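
Because the service API namespace did not change with the rename, existing automation keeps working as-is. For example, at the time of writing, the AWS CLI still uses the kinesisanalyticsv2 commands, as in this sketch; the Region and application name are placeholders:

$ aws kinesisanalyticsv2 list-applications --region <region name>

$ aws kinesisanalyticsv2 describe-application \
          --application-name <my flink application>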

Many customers use Apache Flink for data processing, including support for diverse use cases with a vibrant open-source community. While Apache Flink applications are robust and popular, they can be difficult to manage because they require scaling and coordination of parallel compute or container resources. With the explosion of data volumes, data types, and data sources, customers need an easier way to access, process, secure, and analyze their data to gain faster and deeper insights without compromising on performance and costs.

Using Amazon Managed Service for Apache Flink, you can set up and integrate data sources or destinations with minimal code, process data continuously with sub-second latencies from hundreds of data sources like Amazon Kinesis Data Streams and Amazon Managed Streaming for Apache Kafka (Amazon MSK), and respond to events in real-time. You can also analyze streaming data interactively with notebooks in just a few clicks with Amazon Managed Service for Apache Flink Studio with built-in visualizations powered by Apache Zeppelin.

With Amazon Managed Service for Apache Flink, you can deploy secure, compliant, and highly available applications. There are no servers and clusters to manage, no compute and storage infrastructure to set up, and you only pay for the resources your applications consume.

A History to Support Apache Flink
Since we launched Amazon Kinesis Data Analytics based on a proprietary SQL engine in 2016, we learned that SQL alone was not sufficient to provide the capabilities that customers needed for efficient stateful stream processing. So, we started investing in Apache Flink, a popular open-source framework and engine for processing real-time data streams.

In 2018, we provided support for Amazon Kinesis Data Analytics for Java as a programmable option for customers to build streaming applications using Apache Flink libraries and choose their own integrated development environment (IDE) to build their applications. In 2020, we repositioned Amazon Kinesis Data Analytics for Java to Amazon Kinesis Data Analytics for Apache Flink to emphasize our continued support for Apache Flink. In 2021, we launched Kinesis Data Analytics Studio (now, Amazon Managed Service for Apache Flink Studio) with a simple, familiar notebook interface for rapid development powered by Apache Zeppelin and using Apache Flink as the processing engine.

Since 2019, we have worked more closely with the Apache Flink community, increasing code contributions in the area of AWS connectors for Apache Flink such as those for Kinesis Data Streams and Kinesis Data Firehose, as well as sponsoring annual Flink Forward events. Recently, we contributed Async Sink to the Flink 1.15 release, which improved cloud interoperability and added more sink connectors and formats, among other updates.

Beyond connectors, we continue to work with the Flink community to contribute availability improvements and deployment options. To learn more, see Making it Easier to Build Connectors with Apache Flink: Introducing the Async Sink in the AWS Open Source Blog.

New Features in Amazon Managed Service for Apache Flink
As I mentioned, you can continue to run your existing Flink applications in Kinesis Data Analytics (now Amazon Managed Service for Apache Flink) without making any changes. Along with the console change, I want to let you know about a new feature: blueprints, which let you create an end-to-end data pipeline with just one click.

First, you can use the new console of Amazon Managed Service for Apache Flink directly under the Analytics section in AWS. To get started, you can easily create Streaming applications or Studio notebooks in the new console, with the same experience as before.

To create a streaming application in the new console, choose Create from scratch or Use a blueprint. With a new blueprint option, you can create and set up all the resources that you need to get started in a single step using AWS CloudFormation.

The blueprint is a curated collection of Apache Flink applications. The first of these has demo data being read from a Kinesis Data Stream and written to an Amazon Simple Storage Service (Amazon S3) bucket.

After creating the demo application, you can configure, run, and open the Apache Flink dashboard to monitor your Flink application’s health with the same experiences as before. You can change a code sample in the GitHub repository to perform different operations using the Flink libraries in your own local development environment.

Blueprints are designed to be extensible, and you can leverage them to create more complex applications to solve your business challenges based on Amazon Managed Service for Apache Flink. Learn more about how to use Apache Flink libraries in the AWS documentation.

You can also use a blueprint to create your Studio notebook using Apache Zeppelin as a new setup option. With this new blueprint option, you can also create and set up all the resources that you need to get started in a single step using AWS CloudFormation.

This blueprint includes Apache Flink applications with demo data being sent to an Amazon MSK topic and read in Managed Service for Apache Flink. With an Apache Zeppelin notebook, you can view, query, and analyze your streaming data. Deploying the blueprint and setting up the Studio notebook takes about ten minutes. Go get a cup of coffee while we set it up!

After creating the new Studio notebook, you can open an Apache Zeppelin notebook to run SQL queries in your note with the same experiences as before. You can view a code sample in the GitHub repository to learn more about how to use Apache Flink libraries.

You can run more SQL queries on this demo data such as user-defined functions, tumbling and hopping windows, Top-N queries, and delivering data to an S3 bucket for streaming.

You can also use Java, Python, or Scala to power up your SQL queries and deploy your note as a continuously running application, as shown in the blog posts about how to use the Studio notebook and query your Amazon MSK topics.

To explore more blueprint samples, see GitHub repositories such as reading from MSK Serverless and writing to Amazon S3, reading from MSK Serverless and writing to MSK Serverless, and reading from MSK Serverless and writing to Amazon S3.

Now Available
You can now use Amazon Managed Service for Apache Flink, renamed from Amazon Kinesis Data Analytics. All your existing running applications in Kinesis Data Analytics will work as is without any changes.

To learn more, visit the new product page and developer guide. You can send feedback to AWS re:Post for Amazon Managed Service for Apache Flink, or through your usual AWS Support contacts.

Channy

New – Amazon EC2 Hpc7a Instances Powered by 4th Gen AMD EPYC Processors Optimized for High Performance Computing

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-hpc7a-instances-powered-by-4th-gen-amd-epyc-processors-optimized-for-high-performance-computing/

In January 2022, we launched Amazon EC2 Hpc6a instances for customers to efficiently run their compute-bound high performance computing (HPC) workloads on AWS with up to 65 percent better price performance over comparable x86-based compute-optimized instances.

As their jobs grow more complex, customers have asked for more cores with more compute performance and more memory and network performance to reduce the time to complete jobs. Additionally, as customers look to bring more of their HPC workloads to EC2, they have asked how we can make it easier to distribute processes to make the best use of memory and network bandwidth, to align with their workload requirements.

Today, we are announcing the general availability of Amazon EC2 Hpc7a instances, the next generation of instance types that are purpose-built for tightly coupled HPC workloads. Hpc7a instances powered by the 4th Gen AMD EPYC processors (Genoa) deliver up to 2.5 times better performance compared to Hpc6a instances. These instances offer 300 Gbps Elastic Fabric Adapter (EFA) bandwidth powered by the AWS Nitro System, for fast and low-latency internode communications.

Hpc7a instances feature Double Data Rate 5 (DDR5) memory, which provides 50 percent higher memory bandwidth compared to DDR4 memory to enable high-speed access to data in memory. These instances are ideal for compute-intensive, latency-sensitive workloads such as computational fluid dynamics (CFD) and numerical weather prediction (NWP).

If you are running on Hpc6a, you can use Hpc7a instances and take advantage of the 2 times higher core density, 2.1 times higher effective memory bandwidth, and 3 times higher network bandwidth to lower the time needed to complete jobs compared to Hpc6a instances.

Here’s a quick infographic that shows you how the Hpc7a instances and the 4th Gen AMD EPYC processor (Genoa) compare to the previous instances and processor:

Hpc7a instances feature sizes of up to 192 cores of 4th Gen AMD EPYC processors with 768 GiB of RAM. Here are the detailed specs:

Instance Name CPUs RAM (GiB) EFA Network Bandwidth (Gbps) Attached Storage
Hpc7a.12xlarge 24 768 Up to 300 EBS Only
Hpc7a.24xlarge 48 768 Up to 300 EBS Only
Hpc7a.48xlarge 96 768 Up to 300 EBS Only
Hpc7a.96xlarge 192 768 Up to 300 EBS Only

These instances provide higher compute, memory, and network performance to run the most compute-intensive workloads, such as CFD, weather forecasting, molecular dynamics, and computational chemistry on AWS.

Similar to the EC2 Hpc7g instances released a month earlier, we are offering smaller instance sizes that make it easier for customers to pick a smaller number of CPU cores to activate, while keeping all other resources constant, based on their workload requirements. For HPC workloads, common scenarios include providing more memory bandwidth per core for CFD workloads, allocating fewer cores in license-bound scenarios, and supporting more memory per core. To learn more, see Instance sizes in the Amazon EC2 Hpc7 family – a different experience in the AWS HPC Blog.

As with Hpc6a instances, you can use the Hpc7a instance to run your largest and most complex HPC simulations on EC2 and optimize for cost and performance. You can also use the new Hpc7a instances with AWS Batch and AWS ParallelCluster to simplify workload submission and cluster creation. You can also use Amazon FSx for Lustre for submillisecond latencies and up to hundreds of gigabytes per second of throughput for storage.

To achieve the best performance for HPC workloads, these instances have Simultaneous Multithreading (SMT) disabled, they’re available in a single Availability Zone, and they have limited external network and EBS bandwidth.
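
To give a sense of what a tightly coupled launch looks like, the following sketch requests two Hpc7a instances with an EFA network interface in a cluster placement group; the AMI, subnet, security group, and placement group values are placeholders, and in practice you would typically let AWS ParallelCluster or AWS Batch handle this for you:

$ aws ec2 run-instances \
          --instance-type hpc7a.96xlarge \
          --count 2 \
          --image-id <ami id> \
          --placement "GroupName=<cluster placement group>" \
          --network-interfaces "DeviceIndex=0,InterfaceType=efa,SubnetId=<subnet id>,Groups=<security group id>"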

Now Available
Amazon EC2 Hpc7a instances are available today in three AWS Regions: US East (Ohio), EU (Ireland), and AWS GovCloud (US), for purchase as On-Demand Instances, Reserved Instances, and Savings Plans. For more information, see the Amazon EC2 pricing page.

To learn more, visit our Hpc7a instances page and get in touch with our HPC team, AWS re:Post for EC2, or through your usual AWS Support contacts.

Channy

Join AWS Hybrid Cloud & Edge Day to Learn How to Deploy Your Applications in the Everywhere Cloud

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/join-aws-hybrid-cloud-edge-day-to-learn-how-to-deploy-your-applications-in-the-everywhere-cloud/

In his keynote at AWS re:Invent 2021, Dr. Werner Vogels shared how “the everywhere cloud” is bringing AWS to new locales through AWS hardware and services, and he spotlighted it as one of his tech predictions for 2022 and beyond in his blog post.

“What we will see in 2022, and even more so in the years to come, is the cloud accelerating beyond the traditional centralized infrastructure model and into unexpected environments where specialized technology is needed. The cloud will be in your car, your tea kettle, and your TV. The cloud will be in everything from trucks driving down the road, to the ships and planes that transport goods. The cloud will be globally distributed, and connected to almost any digital device or system on Earth, and even in space.”

AWS provides a truly consistent and secure experience to build and run applications across the continuum of environments where customers operate—from the cloud to large metro areas, 5G networks, on-premises locations, and to mobile and Internet of Things (IoT) devices.

To learn more, join us for AWS Hybrid Cloud & Edge Day, a free-to-attend one-day virtual event on August 30, 2023, starting at 10:00 AM PDT (1:00 PM ET). We will stream the event simultaneously across multiple platforms, including LinkedIn Live, Twitter, YouTube, and Twitch.

You can hear from AWS leaders and industry analysts on the latest hybrid cloud and edge computing trends and emerging technologies and learn best practices for using AWS hybrid cloud and edge services across the cloud continuum. Also, learn from our customers on data strategies and key use cases and gain a deeper understanding of AWS hybrid cloud and edge services and new features and benefits.

Here are some of the highlights you can expect from this event:

Leadership session – To kick off the day, we have a leadership session featuring Jan Hofmeyr, vice president of EC2 Edge, sharing insights into how customers are building high-performance, intelligent applications with recently announced AWS hybrid cloud, edge, and IoT capabilities. Elias Khnaser, chief of research at EK Media Group, will join Jan to discuss the global, business, and economic trends impacting hybrid cloud and edge computing and discuss the customer requirements and use cases.

Cloud-closer sessions – We’ll discuss how AWS is bringing the cloud closer to metro areas and telco networks. Services such as AWS Local Zones, AWS Outposts family, and AWS Wavelength bring the power of cloud compute and storage to the edge of 5G networks, unlocking more performant mobile experiences. We’ll highlight new and innovative use cases, including Norton LifeLock, Electronic Arts, and Epic Games, who have taken advantage of the operational consistency between AWS Regions and the edge. You can also learn how to deploy hybrid cloud scenarios in on-premises locations, with examples from MindBody and ElToro through Onica, and more customer cases.

On-premises sessions – Learn about our options to bring AWS Cloud to your data centers and on-premises locations for a truly consistent experience across your environments. We will review real-world examples of how AWS hybrid and edge services enable local processing of data for faster response time and faster decision-making. Also, we will share how Toyota takes advantage of hybrid options from Amazon ECS and Amazon EKS, using familiar management tools across environments to successfully modernize applications. You can also learn how to meet on-premises regulatory requirements effectively, with real-world scenarios covering critical aspects of digital sovereignty and data residency.

Rugged edge sessions – You will learn about AWS services that support the rugged, mobile, and disconnected edge, such as the AWS Snow Family, which enables organizations to deploy compute workloads in locations with denied, disrupted, intermittent, and limited (DDIL) connectivity. Learn how DDR.Live deployed their own 4G/LTE or 5G private network using AWS Private 5G for live events in places with limited wireless connectivity. We will discuss the top use cases, such as deploying a pre-trained object detection model and architecting applications at the edge. Finally, we will discuss the benefits and requirements of operating at the edge with Holger Mueller, vice president and principal analyst, Constellation Research, Inc.

IoT panel discussion – We will hear from a panel of AWS IoT customers and industry experts about their innovation journeys. Join us to see how EuroTech brought to market a set of devices and services that improve operational efficiencies with connectivity at the edge. You’ll also hear how Wallbox, an electric vehicle charging company, reduced their operational costs and scaled efficiently with AWS IoT services.

Multicloud sessions – AWS has the tools to help you run and support your multicloud operations in the areas of governance, ops management, observability, and more. We will discuss common challenges in hybrid and multicloud environments and how AWS helps you manage, operate, and automate your processes. We’ll also talk about how Rackspace used AWS Systems Manager for instance patching across hybrid and multicloud environments, automating their infrastructure management across cloud providers.

This event is for any customer and builder who is eager to learn more about hybrid cloud, edge computing, IoT, networking, content delivery, and 5G. We’ll cover how you can support applications that need to remain on premises or at the edge due to low latency, local data processing, or data residency requirements.

For more details, to see the event schedule, and to register for AWS Hybrid Cloud & Edge Day, go to the event page.

Channy

New – Amazon EC2 M7a General Purpose Instances Powered by 4th Gen AMD EPYC Processors

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-m7a-general-purpose-instances-powered-by-4th-gen-amd-epyc-processors/

In November 2021, we launched Amazon EC2 M6a instances, powered by 3rd Gen AMD EPYC (Milan) processors, running at frequencies up to 3.6 GHz, which offer you up to 35 percent improvement in price performance compared to M5a instances. Many customers who run workloads that are dependent on x86 instructions, such as SAP, are looking for ways to optimize their cloud utilization. They’re taking advantage of the compute choice that EC2 offers.

Today, we’re announcing the general availability of new, general purpose Amazon EC2 M7a instances, powered by the 4th Gen AMD EPYC (Genoa) processors with a maximum frequency of 3.7 GHz, which offer up to 50 percent higher performance compared to M6a instances. This increased performance gives you the ability to process data faster, consolidate workloads, and lower the cost of ownership.

M7a instances support AVX-512, Vector Neural Network Instructions (VNNI), and brain floating point (bfloat16). These instances feature Double Data Rate 5 (DDR5) memory, which enables high-speed access to in-memory data and delivers 2.25 times more memory bandwidth compared to M6a instances for lower latency.
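If you want to confirm that these instruction-set extensions are visible inside your guest operating system, one quick, purely illustrative check on a running M7a instance is to inspect the CPU flags reported by the Linux kernel:

$ grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u
$ grep -c avx512_bf16 /proc/cpuinfo

The first command lists the AVX-512 feature flags (including avx512_vnni), and a non-zero count from the second indicates that bfloat16 support is exposed to your applications.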

M7a instances are SAP-certified and ideal for applications that benefit from high performance and high throughput, such as financial applications, application servers, simulation modeling, gaming, mid-size data stores, application development environments, and caching fleets.

M7a instances feature sizes of up to 192 vCPUs with 768 GiB RAM. Here are the detailed specs:

Name | vCPUs | Memory (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
m7a.medium | 1 | 4 | Up to 12.5 | Up to 10
m7a.large | 2 | 8 | Up to 12.5 | Up to 10
m7a.xlarge | 4 | 16 | Up to 12.5 | Up to 10
m7a.2xlarge | 8 | 32 | Up to 12.5 | Up to 10
m7a.4xlarge | 16 | 64 | Up to 12.5 | Up to 10
m7a.8xlarge | 32 | 128 | 12.5 | 10
m7a.12xlarge | 48 | 192 | 18.75 | 15
m7a.16xlarge | 64 | 256 | 25 | 20
m7a.24xlarge | 96 | 384 | 37.5 | 30
m7a.32xlarge | 128 | 512 | 50 | 40
m7a.48xlarge | 192 | 768 | 50 | 40
m7a.metal-48xl | 192 | 768 | 50 | 40

M7a instances offer up to 50 Gbps of enhanced networking and 40 Gbps of EBS bandwidth, similar to M6a instances. You also get a new medium instance size with 1 vCPU and 4 GiB of memory, which helps you right-size your workloads more precisely, while the largest size offers 192 vCPUs and 768 GiB of memory.
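If you prefer to compare sizes programmatically before choosing one, you can pull the published specifications with the AWS CLI; the query expression below is just one way to slice the output:

$ aws ec2 describe-instance-types \
     --filters "Name=instance-type,Values=m7a.*" \
     --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]" \
     --output table

The command returns the vCPU count and memory (in MiB) for every M7a size offered in your current Region.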

The new instances are built on the AWS Nitro System, a collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware for high performance, high availability, and highly secure cloud instances.

Now Available
Amazon EC2 M7a instances are available today in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland). As usual with Amazon EC2, you only pay for what you use. For more information, see the Amazon EC2 pricing page.

To learn more, visit the EC2 M7a instance and AWS/AMD partner page. You can send feedback to [email protected], AWS re:Post for EC2, or through your usual AWS Support contacts.

Channy

New – Improve Amazon S3 Glacier Flexible Restore Time By Up To 85% Using Standard Retrieval Tier and S3 Batch Operations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-improve-amazon-s3-glacier-flexible-restore-time-by-up-to-85-using-standard-retrieval-tier-and-s3-batch-operations/

Last year, Amazon S3 Glacier celebrated its tenth anniversary. Amazon S3 Glacier is the leader in cloud cold storage, and I wrote about its innovations over the last decade.

The Amazon S3 Glacier storage classes provide you with long-term, secure, and durable storage options to optimally archive your data at the lowest cost. The Amazon S3 Glacier storage classes (Amazon S3 Glacier Instant Retrieval, Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive) are purpose-built for colder data, providing you with retrieval flexibility from milliseconds to days, in addition to the ability to store archive data for as low as $1 per terabyte per month.

Many customers tell us that they are keeping their data for longer periods of time because they recognize its future value potential, and that they are already monetizing subsets of their archival data, or plan to use large sets of their archive data in the future. Modern data archiving is not only about optimizing storage costs for cold data; it’s also about setting up mechanisms so that when you need to put that data to work for your business, you can access it as quickly as your business requirements demand.

In 2022, AWS customers restored over 32 billion objects from Amazon S3 Glacier. Customers need to retrieve archived objects quickly when transcoding media, restoring operational backups, training machine learning (ML) models, or analyzing historical data. While customers using S3 Glacier Instant Retrieval can access their data in just milliseconds, S3 Glacier Flexible Retrieval is lower cost and provides three retrieval options: expedited retrievals in 1–5 minutes, standard retrievals in 3–5 hours, and free bulk retrievals in 5–12 hours. S3 Glacier Deep Archive is our lowest cost storage class and provides data retrieval within 12 hours using the standard retrieval option or 48 hours using the bulk retrieval option.

In November 2022, Amazon S3 Glacier improved restore throughput by up to 10 times at no additional cost when retrieving large volumes of archived data in S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive. With Amazon S3 Batch Operations, you can automatically initiate requests at a faster rate, allowing you to restore billions of objects containing petabytes of data.

To continue the decade-long trend of cold storage innovation, today we are announcing the general availability of Standard retrievals from S3 Glacier Flexible Retrieval that are up to 85 percent faster, at no additional cost. Faster data restores automatically apply to the Standard retrieval tier when using S3 Batch Operations.

Using S3 Batch Operations, you can restore archived data at scale by providing a manifest of objects to be retrieved and specifying a retrieval tier. With S3 Batch Operations, restores in the Standard retrieval tier now typically begin to return objects to you within minutes, down from 3–5 hours, so you can easily speed up your data restores from archive.

Additionally, S3 Batch Operations improves overall restore throughput by applying new performance optimizations to your jobs. As a result, you can restore your data faster and process restored objects sooner. Processing restored data in parallel with ongoing restores helps you accelerate data workflows and quickly respond to business needs.

Getting Started with Faster Standard Retrievals from S3 Glacier Flexible Retrieval
To restore archived data with this performance improvement, you can use S3 Batch Operations to perform both large- and small-scale batch operations on S3 objects. S3 Batch Operations can perform a single operation on lists of S3 objects that you specify. You can use S3 Batch Operations through the AWS Management Console, AWS Command Line Interface (AWS CLI), SDKs, or REST API.

To create a batch job, choose Batch Operations on the left navigation pane of the Amazon S3 console and choose Create job. You can select one of the manifest formats; the manifest is a list of S3 objects that contains the object keys that you want to retrieve. If your manifest format is a CSV file, each row in the file must include the bucket name, object key, and, optionally, the object version.
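For example, a minimal CSV manifest without version IDs could look like the following; the bucket and key names here are placeholders, not values from this post:

amzn-s3-demo-bucket,archive/2021/device-logs-0001.gz
amzn-s3-demo-bucket,archive/2021/device-logs-0002.gz
amzn-s3-demo-bucket,archive/2021/device-logs-0003.gz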

In the next step, choose the operation that you want to perform on all objects listed in the manifest. The Restore operation initiates restore requests for archived objects on a list of S3 objects that you specify. Using a restore operation results in a restore request for every object that is specified in the manifest.

When you restore with the Standard retrieval tier from the S3 Glacier Flexible Retrieval storage class, you automatically get faster retrievals.

You can also create a restore job with S3InitiateRestoreObject job using the AWS CLI:

$ aws s3control create-job \
     --region us-east-1 \
     --account-id 123456789012 \
     --operation '{"S3InitiateRestoreObject": {"ExpirationInDays": 1, "GlacierJobTier": "STANDARD"}}' \
     --report '{"Bucket":"arn:aws:s3:::reports-bucket","Prefix":"batch-op-restore-job","Format":"S3BatchOperations_CSV_20180820","Enabled":true,"ReportScope":"FailedTasksOnly"}' \
     --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::inventory-bucket/inventory_for_restore.csv","ETag":"<ETag>"}}' \
     --role-arn arn:aws:iam::123456789012:role/s3batch-role

You can then check the status of the job by running the following CLI command:

$ aws s3control describe-job \
     --region us-east-1 \
     --account-id 123456789012 \
     --job-id <JobID> \
     --query 'Job.ProgressSummary'

You can view and update the job status, add notifications and logging, track job failures, and generate completion reports. S3 Batch Operations job activity is recorded as events in AWS CloudTrail. For tracking job events, you can create a custom rule in Amazon EventBridge and send these events to the target notification resource of your choice, such as Amazon Simple Notification Service (Amazon SNS).
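As a sketch of that EventBridge setup, the following commands create a rule and point it at an SNS topic. The event pattern shown is an assumption about how these CloudTrail-sourced events are shaped, and the topic ARN is a placeholder, so confirm the exact event names in the Amazon S3 User Guide before relying on it:

$ aws events put-rule \
     --name s3-batch-restore-job-events \
     --event-pattern '{"source":["aws.s3"],"detail-type":["AWS Service Event via CloudTrail"],"detail":{"eventSource":["s3.amazonaws.com"],"eventName":["JobCreated","JobStatusChanged"]}}'

$ aws events put-targets \
     --rule s3-batch-restore-job-events \
     --targets 'Id=notify-sns,Arn=arn:aws:sns:us-east-1:123456789012:batch-restore-notifications'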

When you create an S3 Batch Operations job, you can also request a completion report for all tasks or just for failed tasks. The completion report contains additional information for each task, including the object key name and version, status, error codes, and descriptions of any errors.

For more information, see Tracking job status and completion reports in the Amazon S3 User Guide.

Here is the result of a sample retrieval job with 250 objects, each sized 100 MB. As you can see from the Previous restore performance line (blue line at the right), these restores would typically finish in 3–5 hours using Standard retrievals. Now, when you use Standard retrievals with S3 Batch Operations, your job typically starts within minutes, as shown in the Improved restore performance line (orange line at the left), improving data restore time by up to 85 percent.
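To spot-check an individual object while the job runs, a HeadObject call returns its restore status; the bucket and key below are placeholders:

$ aws s3api head-object \
     --bucket amzn-s3-demo-bucket \
     --key archive/2021/device-logs-0001.gz \
     --query 'Restore'

While the restore is in progress, the Restore field shows ongoing-request="true"; once the temporary copy is available, it switches to ongoing-request="false" with an expiry date.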

To learn more, see Restoring archived objects at scale from the Amazon S3 Glacier storage classes on the AWS Storage Blog and Restoring an archived object in the Amazon S3 User Guide.

Now Available
Faster standard retrievals for Amazon S3 Glacier Flexible Retrieval are now available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions. This performance improvement is available to you at no additional cost. You are charged for S3 Batch Operations and data retrievals. For more information, see the S3 pricing page.

Lastly, we published a new ebook titled “Maximize the value of cold storage with Amazon S3 Glacier”. Read this ebook to learn how Amazon S3 Glacier is helping organizations of all sizes and from all industries transform their data archiving to unlock business value, increase agility, and save on storage costs.

To learn more, visit the S3 Glacier storage classes page and getting started guide, and send feedback to AWS re:Post for S3 Glacier or through your usual AWS Support contacts.

I’m really excited for you to start using this new feature, and I look forward to hearing about even more ways you are reinventing your business with archive data.

Channy

Now Open – AWS Israel (Tel Aviv) Region

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/now-open-aws-israel-tel-aviv-region/

In June 2021, Jeff Barr announced the upcoming AWS Israel (Tel Aviv) Region. Today we’re announcing the general availability of the AWS Israel (Tel Aviv) Region, with three Availability Zones and the il-central-1 API name.

The new Tel Aviv Region gives customers an additional option for running their applications and serving users from data centers located in Israel. Customers can securely store data in Israel while serving users in the vicinity with even lower latency.
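If the Region is enabled for your account (newer Regions are typically opt-in), you can confirm the three Availability Zones and start targeting il-central-1 from the AWS CLI:

$ aws ec2 describe-availability-zones \
     --region il-central-1 \
     --query 'AvailabilityZones[].[ZoneName,State]' \
     --output table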

AWS Services in the AWS Israel (Tel Aviv) Region
In the new Tel Aviv Region, you can use C5, C5d, C6g, C6gn, C6i, C6id, D3, G5, I3, I3en, I4i, M5, M5d, M6g, M6gd, M6i, M6id, P4de (public preview only), R5, R5d, R6g, R6i, R6id, T3, T3a, and T4g instances, and a long list of AWS services including: Amazon API Gateway, AWS AppConfig, AWS Application Auto Scaling, Amazon Aurora, Aurora PostgreSQL, AWS Budgets, AWS Certificate Manager, AWS CloudFormation, Amazon CloudFront, AWS Cloud Map, AWS CloudTrail, Amazon CloudWatch, Amazon CloudWatch Events, Amazon CloudWatch Logs, AWS CodeBuild, AWS CodeDeploy, AWS Config, AWS Cost Explorer, AWS Database Migration Service, AWS Direct Connect, AWS Directory Service, Amazon DynamoDB, Amazon Elastic Block Store (Amazon EBS), Amazon Elastic Compute Cloud (Amazon EC2), Amazon EC2 Auto Scaling, EC2 Image Builder, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon ElastiCache, AWS Elastic Beanstalk, Elastic Load Balancing, Elastic Load Balancing – Network (NLB), Amazon EMR, Amazon EventBridge, AWS Fargate, Amazon S3 Glacier, AWS Health Dashboard, AWS Identity and Access Management (IAM), Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, AWS Key Management Service (AWS KMS), AWS Lambda, AWS Marketplace, AWS Mobile SDK for iOS and Android, Amazon OpenSearch Service, AWS Organizations, Amazon Redshift, AWS Resource Access Manager, Amazon Relational Database Service (Amazon RDS), Resource Groups, Amazon Route 53, Amazon Virtual Private Cloud (Amazon VPC), AWS Secrets Manager, AWS Shield Standard, AWS Shield Advanced, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), Amazon Simple Storage Service (Amazon S3), Amazon Simple Workflow Service (Amazon SWF), AWS Step Functions, AWS Support API, AWS Systems Manager, AWS Trusted Advisor, VM Import/Export, AWS VPN, AWS WAF, and AWS X-Ray.

AWS in Israel
According to the Israel Ministry of Economy and Industry, Israel is at the forefront of the cloud computing era and "is known to be the 'start-up nation' of the number of global start-ups being produced. Over the past decade, Israel has produced over 2,000 start-ups, the majority of these start-ups are driven by software as a service (SaaS). Israeli cloud technology remains a strong promise in the market as new start-ups are continuously penetrating the market."

AWS began supporting startups in Israel in 2013 through its AWS Activate program. In Israel, AWS works with accelerator organizations such as 8200 EISP, F2 Venture Capital, The Junction, and Techstars, as well as venture capital firms like Entrée Capital, Bessemer Venture Partners, Pitango, Vertex Ventures Israel, and Viola Group to support the rapid growth of their portfolio companies.

Back in 2014, we opened an AWS office and a research and development (R&D) center in Israel. Since then, Amazon has expanded its R&D presence in the country, which now includes Prime Air and Alexa Shopping.

In 2015, AWS acquired Annapurna Labs, an Israeli microelectronics company, which has developed advanced compute, networking, security, and storage technologies for AWS—such as AWS-designed Graviton processors, AWS Inferentia, AWS Trainium chips, and the AWS Nitro System.

In 2018, we expanded to new offices in Tel Aviv, including AWS Experience Tel Aviv on Floor28 to support the growth of Israeli startups, enterprises, and government customers through technology-focused events and educational activities. Now, AWS Experience Tel Aviv on Floor28 is an education hub where anyone interested in AWS can attend industry events, workshops, and meetups, and receive free, in-person technical and business guidance from AWS experts.

In 2019, we launched the first AWS infrastructure in Israel, opening an Amazon CloudFront edge location. In 2020, we brought AWS Outposts and AWS Direct Connect to Israel, providing Israeli organizations with the ability to run AWS technology in their own data centers and establish dedicated connections back to the AWS Cloud.

In April 2021, the government of Israel announced that it had selected AWS as its primary cloud provider as part of the Nimbus contract. The Nimbus framework will enable government departments—including the ministries, education, healthcare, and municipalities—to accelerate their digital transformation by using AWS technologies.

AWS continues to invest in upskilling local developers, students, and the next generation of IT leaders in Israel through programs such as AWS Educate, AWS Academy, AWS re/Start, and other Training and Certification programs.

The AWS Educate and AWS Academy programs provide free resources to accelerate cloud-related learning and prepare today’s students in Israel for the jobs of the future. Israeli colleges and universities already participating in the AWS Academy program include Bar-Ilan University, Ben-Gurion University of the Negev, Holon Institute of Technology, Jerusalem College of Technology, and the University of Haifa. We also launched AWS re/Start to help unemployed or underemployed individuals launch new cloud careers. You can now apply to AWS re/Start programs through Appleseeds, Sigma Labs Jerusalem, and Analiza Cyber Intelligence in Israel.

AWS Customers in Israel
We have many amazing customers in Israel who are doing incredible things with AWS, for example:

AI21 Labs – AI21 Labs offers access to its state-of-the-art proprietary language models through AI21 Studio for businesses to build their own generative artificial intelligence applications, as well as its consumer product, Wordtune, the first AI-based writing assistant to understand context and meaning. AI21 Labs scaled to hundreds of GPUs efficiently and cost effectively to build the Jurassic-2 family of language models. These models were trained on distributed and parallelized infrastructure based on Amazon EC2 P4d instances with 400 Gbps of high-performance networking supported by Elastic Fabric Adapter (EFA).

Bank Leumi – Leumi is one of the leading banks in Israel, with over 200 branches across the country and dedicated teams using AWS to build an advanced banking services marketplace. In just 5 months, Leumi migrated 16 on-premises applications from its former Kubernetes solution to Amazon EKS Anywhere with no service interruptions. The bank’s new environment facilitates a consistent, scalable approach to deployments, saving time and money and increasing innovation velocity.

CyberArk – CyberArk is an AWS partner in the identity security industry. Centered on privileged access management, CyberArk provides the most comprehensive security SaaS offering on AWS for any identity—human or machine—across business applications, distributed workforces, hybrid cloud workloads, and throughout the DevOps lifecycle. CyberArk Identity Security Intelligence has integrated with AWS CloudTrail Lake to increase visibility and responsiveness associated with targeted threats. CyberArk Audit also delivers security event information to Amazon Security Lake.

Ichilov Hospital – The I-Medata Innovation Center of Ichilov Hospital uses AWS Control Tower to facilitate the fast, consistent, and secure creation of AWS accounts while protecting sensitive medical data. The center also relies on Amazon SageMaker to enable its scientists to build, train, and deploy advanced machine learning models for early detection of deterioration in COVID-19 patients. The center keeps sensitive medical data fully protected on AWS while maintaining the productivity of its researchers.

You can find more customer stories from Israel.

Available Now
The new Tel Aviv Region is ready to support your business. You can find a detailed list of the services available in this Region on the AWS Regional Services List.

With this launch, AWS now spans 102 Availability Zones in 32 geographic Regions around the world. We have also announced plans for 12 more Availability Zones and four more Regions in Canada, Malaysia, New Zealand, and Thailand.

To learn more, see the Global Infrastructure page, give it a try, and send feedback through your usual AWS support contacts in Israel.

— Channy

P.S. We’re focused on improving our content to provide a better customer experience, and we need your feedback to do so. Please take this quick survey to share insights on your experience with the AWS Blog. Note that this survey is hosted by an external company, so the link does not lead to our website. AWS handles your information as described in the AWS Privacy Notice.

New Amazon EC2 Instances (C7gd, M7gd, and R7gd) Powered by AWS Graviton3 Processor with Local NVMe-based SSD Storage

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-instances-c7gd-m7gd-and-r7gd-powered-by-aws-graviton3-processor-with-local-nvme-based-ssd-storage/

We launched Amazon EC2 C7g instances in May 2022 and M7g and R7g instances in February 2023. Powered by the latest AWS Graviton3 processors, the new instances deliver up to 25 percent higher performance, up to 2 times higher floating-point performance, and up to 2 times faster cryptographic workload performance compared to AWS Graviton2 processors.

Graviton3 processors deliver up to 3 times better performance compared to AWS Graviton2 processors for machine learning (ML) workloads, including support for bfloat16. They also support DDR5 memory that provides 50 percent more memory bandwidth compared to DDR4. Graviton3 also uses up to 60 percent less energy for the same performance as comparable EC2 instances, which helps you reduce your carbon footprint.

The C7g instances are well suited for compute-intensive workloads, such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modeling, distributed analytics, and CPU-based machine learning inference. The M7g instances are for general purpose workloads such as application servers, microservices, gaming servers, mid-sized data stores, and caching fleets. The R7g instances are a great fit for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.

Today, we’re adding a d variant to all three instance families. The new Amazon EC2 C7gd, M7gd, and R7gd instance types include up to 2 x 1.9 TB of locally attached NVM Express (NVMe) SSD storage that is physically connected to the host server and provides block-level storage coupled to the lifetime of the instance. These instances have up to 45 percent better real-time NVMe storage performance than comparable Graviton2-based instances.

These are a great fit for applications that need access to high-speed, low-latency local storage, including those that need temporary storage of data for scratch space, temporary files, and caches. The data on an instance store volume persists only during the life of the associated EC2 instance.
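Because instance store volumes are delivered unformatted, a typical first step after launch is to create a file system and mount the local NVMe device. This is a generic Linux sketch; the device name varies by instance, so check lsblk before formatting anything:

$ lsblk                            # identify the local NVMe instance store device
$ sudo mkfs -t xfs /dev/nvme1n1    # assumes the instance store volume is nvme1n1
$ sudo mkdir -p /scratch
$ sudo mount /dev/nvme1n1 /scratch # data here lasts only for the life of the instance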

Here are the specs for these instances:

Instance Size | vCPU | Memory (GiB) C7gd / M7gd / R7gd | Local NVMe Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
medium | 1 | 2 / 4 / 8 | 1 x 59 | Up to 12.5 | Up to 10
large | 2 | 4 / 8 / 16 | 1 x 118 | Up to 12.5 | Up to 10
xlarge | 4 | 8 / 16 / 32 | 1 x 237 | Up to 12.5 | Up to 10
2xlarge | 8 | 16 / 32 / 64 | 1 x 474 | Up to 15 | Up to 10
4xlarge | 16 | 32 / 64 / 128 | 1 x 950 | Up to 15 | Up to 10
8xlarge | 32 | 64 / 128 / 256 | 1 x 1900 | 15 | 10
12xlarge | 48 | 96 / 192 / 384 | 2 x 1425 | 22.5 | 15
16xlarge | 64 | 128 / 256 / 512 | 2 x 1900 | 30 | 20

These instances are built on the AWS Nitro System, a combination of AWS-designed dedicated hardware and a lightweight hypervisor that allows the delivery of isolated multitenancy, private networking, and fast local storage. They provide up to 20 Gbps Amazon Elastic Block Store (Amazon EBS) bandwidth and up to 30 Gbps network bandwidth. The 16xlarge instances also support Elastic Fabric Adapter (EFA) for applications that need a high level of inter-node communication.

Now Available
Amazon EC2 C7gd, M7gd, and R7gd instances are now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland). As usual with Amazon EC2, you only pay for what you use. For more information, see the Amazon EC2 pricing page.

If you’re optimizing applications for Arm architecture, be sure to have a look at our Getting Started collection of resources or learn more about AWS Graviton3-based EC2 instances.

To learn more, visit our Amazon EC2 C7g, M7g, or R7g instance pages, and please send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy