Tag Archives: announcements

C5 Type 2 attestation report now available with one new Region and 123 services in scope

Post Syndicated from Mercy Kanengoni original https://aws.amazon.com/blogs/security/c5-type-2-attestation-report-available-one-new-region-123-services-in-scope/

Amazon Web Services (AWS) is pleased to announce the issuance of the 2020 Cloud Computing Compliance Controls Catalogue (C5) Type 2 attestation report. We added one new AWS Region, Europe (Milan), and 21 additional services and service features to the scope of the 2020 report.

Germany’s national cybersecurity authority, Bundesamt für Sicherheit in der Informationstechnik (BSI), established C5 to define a reference standard for German cloud security requirements. Customers in Germany and other European countries can use AWS’s attestation report to help them meet local security requirements of the C5 framework.

The C5 Type 2 report covers the time period October 1, 2019, through September 30, 2020. It was issued by an independent third-party attestation organization and assesses the design and the operational effectiveness of AWS’s controls against C5’s basic and additional criteria. This attestation demonstrates our commitment to meet the security expectations for cloud service providers set by the BSI in Germany.

We continue to add new Regions and services to the C5 compliance scope so that you have more services to choose from that meet regulatory and compliance requirements. AWS has added the Europe (Milan) Region and the following 21 services and service features to this year’s C5 scope:

You can see a current list of the services in scope for C5 on the AWS Services in Scope by Compliance Program page. The C5 report and Continuing Operations Letter are available to AWS customers through AWS Artifact. For more information, see Cloud Computing Compliance Controls Catalogue (C5).

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mercy Kanengoni

Mercy is a Security Audit Program Manager at AWS. She leads security audits across Europe, and she has previously worked in security assurance and technology risk management.

AWS Asia Pacific (Osaka) Region Now Open to All, with Three AZs and More Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-asia-pacific-osaka-region-now-open-to-all-with-three-azs-more-services/

AWS has had a presence in Japan for a long time! We opened the Asia Pacific (Tokyo) Region in March 2011, added a third Availability Zone (AZ) in 2012, and a fourth in 2018. Since that launch, customers in Japan and around the world have used the region to host an incredibly wide variety of applications!

We opened the Osaka Local Region in 2018 to give our customers in Japan a disaster recovery option for their workloads. Located 400 km from Tokyo, the Osaka Local Region used an isolated, fault-tolerant design contained within a single data center.

From Local to Standard
I am happy to announce that the Osaka Local Region has been expanded and is now a standard AWS Region, complete with three Availability Zones. As is always the case with AWS, the AZs are designed to provide physical redundancy, and are able to withstand power outages, internet downtime, floods, and other natural disasters.

The following services are available, with more in the works: Amazon Elastic Kubernetes Service (EKS), Amazon API Gateway, Auto Scaling, Application Auto Scaling, Amazon Aurora, AWS Config, AWS Personal Health Dashboard, AWS IQ, AWS Organizations, AWS Secrets Manager, AWS Shield Standard (regional), AWS Snowball Edge, AWS Step Functions, AWS Systems Manager, AWS Trusted Advisor, AWS Certificate Manager, CloudEndure Migration, CloudEndure Disaster Recovery, AWS CloudFormation, Amazon CloudFront, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Elastic Container Registry, Amazon Elastic Container Service (ECS), AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Image Builder, Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, Amazon EventBridge, AWS Fargate, Amazon Glacier, AWS Glue, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, AWS Lambda, AWS Marketplace, AWS Mobile SDK, Network Load Balancer, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS VPN, VM Import/Export, AWS X-Ray, AWS Artifact, AWS PrivateLink, and Amazon Virtual Private Cloud (VPC).

The Asia Pacific (Osaka) Region supports the C5, C5d, D2, I3, I3en, M5, M5d, R5d, and T3 instance types, in On-Demand, Spot, and Reserved Instance form. X1 and X1e instances are available in a single AZ.

In addition to the AWS regions in Tokyo and Osaka, customers in Japan also benefit from:

  • 16 CloudFront edge locations in Tokyo.
  • One CloudFront edge location in Osaka.
  • One CloudFront Regional Edge Cache in Tokyo.
  • Two AWS Direct Connect locations in Tokyo.
  • One Direct Connect location in Osaka.

Here are typical latency values from the Asia Pacific (Osaka) Region to other cities in the area:

  • Nagoya: 2-5 ms
  • Hiroshima: 2-5 ms
  • Tokyo: 5-8 ms
  • Fukuoka: 11-13 ms
  • Sendai: 12-15 ms
  • Sapporo: 14-17 ms
  • Seoul: 27 ms
  • Taipei: 29 ms
  • Hong Kong: 38 ms
  • Manila: 49 ms

AWS Customers in Japan
As I mentioned earlier, our customers are using the AWS regions in Tokyo and Osaka to host an incredibly wide variety of applications. Here’s a sampling:

Mitsubishi UFJ Financial Group (MUFG) – This financial services company adopted a cloud-first strategy and did their first AWS deployment in 2017. They have built a data platform for their banking and group data that helps them to streamline administrative processes, and also migrated a market risk management system. MUFG has been using the Osaka Local Region and is planning to use the Asia Pacific (Osaka) Region to run more workloads and to support their ongoing digital transformation.

KDDI Corporation (KDDI) – This diversified (telecommunication, financial services, Internet, electricity distribution, consumer appliance, and more) company started using AWS in 2016 after AWS met KDDI’s stringent internal security standards. They currently build and run more than 60 services on AWS, including the backend of the au Denki application, used by consumers to monitor electricity usage and rates. They plan to use the Asia Pacific (Osaka) Region to initiate multi-region service to their customers in Japan.

OGIS-RI – Founded in 1983, this global IT consulting firm is a part of the Daigas Group of companies. OGIS-RI provides information strategy, systems integration, systems development, network construction, support, and security. They use AWS to provide their enterprise customers with ekul, a data measurement service that measures and visualizes gas and electricity usage in real time and sends it to corporate customers across Japan.

Sony Bank – Founded in 2001 as an asset management bank for individuals, Sony Bank provides services that include foreign currency deposits, home loans, investment trusts, and debit cards. Their gradual migration of internal banking systems to AWS began in 2013 and was 80% complete at the end of 2019. This migration reduced their infrastructure costs by 60% and more than halved the time it once took to procure and build out new infrastructure.

AWS Resources in Japan
As a quick reminder, enterprises, government and research organizations, small and medium businesses, educators, and startups in Japan have access to a wide variety of AWS and community resources. Here’s a sampling:

Available Now
The new region is open to all AWS customers and you can start to use it today!

Jeff;

 

AWS DeepRacer League’s 2021 Season Launches With New Open and Pro Divisions

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-deepracer-leagues-2021-season-launches-with-new-open-and-pro-divisions/

As a developer, I have been hearing a lot of stories lately about how companies have solved their business problems using machine learning (ML), so one of my goals for 2021 is to learn more about it.

For the last few years I have been using artificial intelligence (AI) services such as Amazon Rekognition, Amazon Comprehend, and others extensively. AI services provide a simple API to solve common ML problems such as image recognition, text to speech, and analysis of sentiment in a text. When using these high-level APIs, you don’t need to understand how the underlying ML model works, nor do you have to train or maintain it in any way.

Even though those services are great and I can solve most of my business cases with them, I want to understand how ML algorithms work, and that is how I started tinkering with AWS DeepRacer.

AWS DeepRacer, a service that helps you learn reinforcement learning (RL), has been around since 2018. RL is an advanced ML technique that takes a very different approach to training models than other ML methods. Basically, it can learn very complex behavior without requiring any labeled training data, and it can make short-term decisions while optimizing for a long-term goal.

AWS DeepRacer is an autonomous 1/18th scale race car designed to test RL models by racing virtually in the AWS DeepRacer console or physically on a track at AWS and customer events. AWS DeepRacer is for developers of all skill levels, even if you don’t have any ML experience. When learning RL using AWS DeepRacer, you can take part in the AWS DeepRacer League where you get experience with machine learning in a fun and competitive environment.
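
To make that concrete: when you train a DeepRacer model, the behavior you want to encourage is expressed as a reward function written in Python. The sketch below is a minimal, hypothetical example; the parameter names (track_width, distance_from_center, all_wheels_on_track) are assumptions based on the inputs the DeepRacer service passes to reward functions, so check the console’s sample functions before relying on them.

def reward_function(params):
    """Reward staying near the center line while keeping all wheels on the track."""
    # Parameter names below are assumptions; confirm them against the DeepRacer docs.
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]
    all_wheels_on_track = params["all_wheels_on_track"]

    if not all_wheels_on_track:
        return 1e-3  # near-zero reward for leaving the track

    # Reward decays linearly as the car drifts away from the center line.
    reward = 1.0 - (distance_from_center / (track_width / 2.0))
    return float(max(reward, 1e-3))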

Over the past year, the AWS DeepRacer League’s races have gone completely virtual and participants have competed for different kinds of prizes. However, the competition has become dominated by experts and newcomers haven’t had much of a chance to win.

The 2021 season introduces new skill-based Open and Pro racing divisions, where racers of all skill levels have five times more opportunities to win rewards than in previous seasons.

Image of the leagues in the console

How the New AWS DeepRacer Racing Divisions Work

The 2021 AWS DeepRacer league runs from March 1 through the end of October. When it kicks off, all participants will enter the Open division, a place to have fun and develop your RL knowledge with other community members.

At the end of every month, the top 10% of the Open division leaderboard will advance to the Pro division for the remainder of the season; they’ll also receive a Pro Welcome kit full of AWS DeepRacer swag. Pro division racers can win DeepRacer Evo cars and AWS DeepRacer merchandise such as hats and T-shirts.

At the end of every month, the top 16 racers in the Pro division will compete against each other in a live race in the console. That race will determine who will advance that month to the 2021 Championship Cup at re:Invent 2021.

The monthly Pro division winner gets an expenses-paid trip to re:Invent 2021 and participates in the Championship Cup for a chance to win a machine learning education sponsorship worth $20,000.

In both divisions, you can collect digital rewards, including vehicle customizations and accessories, which will be released to participants once the winners are announced each month.

You can start racing in the Open division any time during the 2021 season. Get started here!

New Racer Profiles Increase the Fun

At the end of March, you will be able to create a new racer profile with an avatar and show the world which country you are representing.

I hope to see you in the new AWS DeepRacer season, where I’ll start in the Open division as MaVi.

Start racing today and train your first model for free! 

Marcia

TLS 1.2 will be required for all AWS FIPS endpoints beginning March 31, 2021

Post Syndicated from Janelle Hopper original https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-fips-endpoints/

To help you meet your compliance needs, we’re updating all AWS Federal Information Processing Standard (FIPS) endpoints to a minimum of Transport Layer Security (TLS) 1.2. We have already updated over 40 services to require TLS 1.2, removing support for TLS 1.0 and TLS 1.1. Beginning March 31, 2021, client applications that cannot support TLS 1.2 will fail to connect to AWS FIPS endpoints. To avoid an interruption in service, we encourage you to act now and ensure that your clients connect to AWS FIPS endpoints using TLS 1.2. This change does not affect non-FIPS AWS endpoints.

Amazon Web Services (AWS) continues to notify impacted customers directly via their Personal Health Dashboard and email. However, if you’re connecting anonymously to AWS shared resources, such as through a public Amazon Simple Storage Service (Amazon S3) bucket, then you would not have received a notification, as we cannot identify anonymous connections.

Why are you removing TLS 1.0 and TLS 1.1 support from FIPS endpoints?

At AWS, we’re continually expanding the scope of our compliance programs to meet the needs of customers who want to use our services for sensitive and regulated workloads. Compliance programs, including FedRAMP, require a minimum level of TLS 1.2. To help you meet compliance requirements, we’re updating all AWS FIPS endpoints to a minimum of TLS version 1.2 across all AWS Regions. Following this update, you will not be able to use TLS 1.0 and TLS 1.1 for connections to FIPS endpoints.

How can I detect if I am using TLS 1.0 or TLS 1.1?

To detect the use of TLS 1.0 or 1.1, we recommend that you perform code, network, or log analysis. If you are using an AWS Software Development Kit (AWS SDK) or the AWS Command Line Interface (CLI), our previous TLS blog post links to detailed guidance about how to examine your client application code and properly configure the TLS version used.
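
If you want a quick client-side check to complement code, network, or log analysis, a minimal Python sketch like the following (standard library only) opens a TLS connection and prints the protocol version that gets negotiated. The endpoint name is the CloudWatch FIPS endpoint used in the packet capture example later in this post; substitute the FIPS endpoint your application actually calls, and keep in mind that this reflects the TLS stack of the environment where you run it, not necessarily your application’s own client library.

import socket
import ssl

# Substitute the AWS FIPS endpoint that your application connects to.
host = "monitoring.us-gov-west-1.amazonaws.com"

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # version() returns the negotiated protocol, for example "TLSv1.2".
        print(f"{host} negotiated {tls.version()}")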

When the application source code is unavailable, you can use a network tool, such as TCPDump (Linux) or Wireshark (Linux or Windows), to analyze your network traffic and find the TLS versions you’re using when connecting to AWS endpoints. For a detailed example of using these tools, see the example below.

If you’re using Amazon S3, you can also use your S3 access logs to view TLS connection information and identify client connections that are not using TLS 1.2.

What is the most common use of TLS 1.0 or TLS 1.1?

The most common client applications that use TLS 1.0 or 1.1 are Microsoft .NET Framework versions earlier than 4.6.2. If you use the .NET Framework, please confirm you are using version 4.6.2 or later. For information on how to update and configure .NET Framework to support TLS 1.2, see How to enable TLS 1.2 on clients.

How do I know if I am using an AWS FIPS endpoint?

All AWS services offer TLS 1.2 encrypted endpoints that you can use for all API calls. Some AWS services also offer FIPS 140-2 endpoints for customers who need to use FIPS-validated cryptographic libraries to connect to AWS services. You can check our list of all AWS FIPS endpoints and compare the list to your application code, configuration repositories, DNS logs, or other network logs.

EXAMPLE: TLS version detection using a packet capture

To capture the packets, multiple online sources, such as this article, provide guidance for setting up TCPDump on a Linux operating system. On a Windows operating system, the Wireshark tool provides packet analysis capabilities and can be used to analyze packets captured with TCPDump or to capture packets directly.

In this example, we assume there is a client application with the local IP address 10.25.35.243 that is making API calls to the CloudWatch FIPS API endpoint in the AWS GovCloud (US-West) Region. To analyze the traffic, first we look up the endpoint URL in the AWS FIPS endpoint list. In our example, the endpoint URL is monitoring.us-gov-west-1.amazonaws.com. Then we use NSLookup to find the IP addresses used by this FIPS endpoint.

Figure 1: Use NSLookup to find the IP addresses used by this FIPS endpoint
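
If you prefer to script the lookup rather than run NSLookup interactively, a minimal Python sketch like the following resolves the same endpoint; the endpoint name matches the example above.

import socket

# Resolve the FIPS endpoint's current IP addresses (equivalent to NSLookup).
host = "monitoring.us-gov-west-1.amazonaws.com"
addresses = {info[4][0] for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)}
for address in sorted(addresses):
    print(address)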

Wireshark is then used to open the captured packets, and filter to just the packets with the relevant IP address. This can be done automatically by selecting one of the packets in the upper section, and then right-clicking to use the Conversation filter/IPv4 option.

After the results are filtered to only the relevant IP addresses, the next step is to find the packet whose description in the Info column is Client Hello. In the lower packet details area, expand the Transport Layer Security section to find the version, which in this example is set to TLS 1.0 (0x0301). This indicates that the client only supports TLS 1.0 and must be modified to support a TLS 1.2 connection.

Figure 2: After the conversation filter has been applied, select the Client Hello packet in the top pane. Expand the Transport Layer Security section in the lower pane to view the packet details and the TLS version.

Figure 3 shows what it looks like after the client has been updated to support TLS 1.2. This second packet capture confirms we are sending TLS 1.2 (0x0303) in the Client Hello packet.

Figure 3: The client TLS has been updated to support TLS 1.2

Is there more assistance available?

If you have any questions or issues, you can start a new thread on one of the AWS forums, or contact AWS Support or your technical account manager (TAM). The AWS support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support doesn’t include code development for client applications.

Additionally, you can use AWS IQ to find, securely collaborate with, and pay AWS-certified third-party experts for on-demand assistance to update your TLS client components. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log in to your console and select Get Started with AWS IQ to start a request.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janelle Hopper

Janelle is a Senior Technical Program Manager in AWS Security with over 15 years of experience in the IT security field. She works with AWS services, infrastructure, and administrative teams to identify and drive innovative solutions that improve AWS’ security posture.

Author

Daniel Salzedo

Daniel is a Senior Specialist Technical Account Manager – Security. He has over 25 years of professional experience in IT in industries as diverse as video game development, manufacturing, banking and used car sales. He loves working with our wonderful AWS customers to help them solve their complex security challenges at scale.

Fall 2020 PCI DSS report now available with eight additional services in scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/fall-2020-pci-dss-report-now-available-with-eight-additional-services-in-scope/

We continue to expand the scope of our assurance programs and are pleased to announce that eight additional services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. This gives our customers more options to process and store their payment card data and architect their cardholder data environment (CDE) securely in Amazon Web Services (AWS).

You can see the full list on Services in Scope by Compliance Program. The eight additional services are:

  1. Amazon Augmented AI (Amazon A2I) (excluding public workforce and vendor workforce)
  2. Amazon Kendra
  3. Amazon Keyspaces (for Apache Cassandra)
  4. Amazon Timestream
  5. AWS App Mesh
  6. AWS Cloud Map
  7. AWS Glue DataBrew
  8. AWS Ground Station

Private AWS Local Zones and AWS Wavelength sites were newly assessed as additional infrastructure deployments as part of the fall 2020 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) evidencing AWS PCI compliance status is available through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions. You can contact the compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS. He has over 15 years of experience managing information technology risk and control for Fortune 500 companies covering security compliance, auditing, and control framework implementation. He has a bachelor’s degree in Finance, master’s degree in Business Administration, and industry certifications including CISA and ISSPCS. Outside of work, he loves singing and reading.

Updated whitepaper available: Encrypting File Data with Amazon Elastic File System

Post Syndicated from Joe Travaglini original https://aws.amazon.com/blogs/security/updated-whitepaper-available-encrypting-file-data-with-amazon-elastic-file-system/

We’re sharing an update to the Encrypting File Data with Amazon Elastic File System whitepaper to provide customers with guidance on enforcing encryption of data at rest and in transit in Amazon Elastic File System (Amazon EFS). Amazon EFS provides simple, scalable, highly available, and highly durable shared file systems in the cloud. The file systems you create by using Amazon EFS are elastic, which allows them to grow and shrink automatically as you add and remove data. They can grow to petabytes in size, distributing data across an unconstrained number of storage servers in multiple Availability Zones.

Read the updated whitepaper to learn about best practices for encrypting Amazon EFS. Learn how to enforce encryption at rest while you create an Amazon EFS file system in the AWS Management Console and in the AWS Command Line Interface (AWS CLI), and how to enforce encryption of data in transit at the client connection layer by using AWS Identity and Access Management (IAM).
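
As a rough illustration of the CLI/SDK path the whitepaper walks through, here is a minimal Python (boto3) sketch that creates a file system with encryption at rest enabled. The creation token and KMS key alias are placeholders, and encryption at rest can only be enabled when the file system is created.

import boto3

efs = boto3.client("efs")

# Create an encrypted file system. Omitting KmsKeyId uses the AWS managed
# key for Amazon EFS; the alias below is a placeholder for illustration.
response = efs.create_file_system(
    CreationToken="example-encrypted-file-system",
    Encrypted=True,
    KmsKeyId="alias/aws/elasticfilesystem",
    PerformanceMode="generalPurpose",
)
print("Created file system:", response["FileSystemId"])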

Download and read the updated whitepaper.

If you have questions or want to learn more, contact your account executive or contact AWS Support. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Joseph Travaglini

For over four years, Joe has been a product manager on the Amazon Elastic File System team, responsible for the Amazon EFS security and compliance roadmap, and a product lead for the launch of EFS Infrequent Access. Prior to joining the Amazon EFS team, Joe was Director of Products at Sqrrl, a cybersecurity analytics startup acquired by AWS in 2018.

Author

Peter Buonora

Pete is a Principal Solutions Architect for AWS, with a focus on enterprise cloud strategy and information security. Pete has worked with the largest customers of AWS to accelerate their cloud adoption and improve their overall security posture.

Author

Siva Rajamani

Siva is a Boston-based Enterprise Solutions Architect for AWS. He enjoys working closely with customers and supporting their digital transformation and AWS adoption journey. His core areas of focus are security, serverless computing, and application integration.

AWS and EU data transfers: strengthened commitments to protect customer data

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-and-eu-data-transfers-strengthened-commitments-to-protect-customer-data/

Last year we published a blog post describing how our customers can transfer personal data in compliance with both GDPR and the new “Schrems II” ruling. In that post, we set out some of the robust and comprehensive measures that AWS takes to protect customers’ personal data.

Today, we are announcing strengthened contractual commitments that go beyond what’s required by the Schrems II ruling and currently provided by other cloud providers to protect the personal data that customers entrust AWS to process (customer data). Significantly, these new commitments apply to all customer data subject to GDPR processed by AWS, whether it is transferred outside the European Economic Area (EEA) or not. These commitments are automatically available to all customers using AWS to process their customer data, with no additional action required, through a new supplementary addendum to the AWS GDPR Data Processing Addendum.

Our strengthened contractual commitments include:

  • Challenging law enforcement requests: We will challenge law enforcement requests for customer data from governmental bodies, whether inside or outside the EEA, where the request conflicts with EU law, is overbroad, or where we otherwise have any appropriate grounds to do so.
  • Disclosing the minimum amount necessary: We also commit that if, despite our challenges, we are ever compelled by a valid and binding legal request to disclose customer data, we will disclose only the minimum amount of customer data necessary to satisfy the request.

These strengthened commitments to our customers build on our long track record of challenging law enforcement requests. AWS rigorously limits – or rejects outright – law enforcement requests for data coming from any country, including the United States, where they are overly broad or we have any appropriate grounds to do so.

These commitments further demonstrate AWS’s dedication to securing our customers’ data: it is AWS’s highest priority. We implement rigorous contractual, technical, and organizational measures to protect the confidentiality, integrity, and availability of customer data regardless of which AWS Region a customer selects. Customers have complete control over their data through powerful AWS services and tools that allow them to determine where data will be stored, how it is secured, and who has access.

For example, customers using our latest generation of EC2 instances automatically gain the protection of the AWS Nitro System. Using purpose-built hardware, firmware, and software, AWS Nitro provides unique and industry-leading security and isolation by offloading the virtualization of storage, security, and networking resources to dedicated hardware and software. This enhances security by minimizing the attack surface and prohibiting administrative access while improving performance. Nitro was designed to operate in the most hostile network we could imagine, building in encryption, secure boot, a hardware-based root of trust, a decreased Trusted Computing Base (TCB) and restrictions on operator access. The newly announced AWS Nitro Enclaves feature enables customers to create isolated compute environments with cryptographic controls to assure the integrity of code that is processing highly sensitive data.

AWS encrypts all data in transit, including secure and private connectivity between EC2 instances of all types. Customers can rely on our industry-leading encryption features and take advantage of AWS Key Management Service (AWS KMS) to control and manage their own keys within FIPS 140-2 certified hardware security modules. Regardless of whether data is encrypted or unencrypted, we will always work vigilantly to protect data from any unauthorized access. Find out more about our approach to data privacy.

AWS is constantly working to ensure that our customers can enjoy the benefits of AWS everywhere they operate. We will continue to update our practices to meet the evolving needs and expectations of customers and regulators, and fully comply with all applicable laws in every country in which we operate. With these changes, AWS continues our customer obsession by offering tooling, capabilities, and contractual rights that nobody else does.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Amplify Flutter is Now Generally Available: Build Beautiful Cross-Platform Apps

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/amplify-flutter-is-now-generally-available-build-beautiful-cross-platform-apps/

AWS Amplify is a set of tools and services for building secure, scalable mobile and web applications. Currently, Amplify supports iOS, Android, and JavaScript (web and React Native) and is the quickest and easiest way to build applications powered by Amazon Web Services (AWS).

Flutter is Google’s UI toolkit for building natively compiled mobile, web, and desktop applications from a single code base and is one of the fastest-growing mobile frameworks.

Amplify Flutter brings together AWS Amplify and Flutter, and we designed it for customers who have invested in the Flutter ecosystem and now want to take advantage of the power of AWS.

In August 2020, we launched the developer preview of Amplify Flutter and asked for feedback. We were delighted with the response. After months of refining the service, today we are happy to announce the general availability of Amplify Flutter.

New Amplify Flutter Features in GA
The GA release makes it easier to build powerful Flutter apps with the addition of three new capabilities:

First, we recently added a GraphQL API backed by AWS AppSync as well as REST APIs and handlers using Amazon API Gateway and AWS Lambda.

Second, Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data.

Finally, we have Hosted UI, which is a great way to implement authentication and works with Amazon Cognito and social identity providers such as Facebook, Google, and Amazon. Hosted UI is a customizable OAuth 2.0 flow that allows you to launch a login screen without embedding the SDK for Cognito or a social provider in your application.

Digging Deeper Into Amplify DataStore
I have been building an app over the past two weeks using Amplify Flutter, and my favorite feature is Amplify DataStore, primarily because it has saved me so much time.

Working with the REST and GraphQL APIs is great in Amplify. However, when I create a mobile app, I’m often thinking about what happens when the mobile device has intermittent connectivity and can’t connect to the API endpoints. Storing data locally and syncing back to the cloud can become quite complicated. Amplify DataStore solves that problem by providing a persistent on-device data store that handles the offline or online scenario.

When I started developing my app, I used DataStore as a stand-alone local database. However, its power became apparent to me when I connected it to a cloud backend. DataStore uses my AWS AppSync API to sync data when network connectivity is available. If the app is offline, DataStore stores the data locally, ready to sync when a connection becomes available.

Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for Dart based on the GraphQL schema that I provide.

Writing to Amplify DataStore
Writing to the DataStore is straightforward. The documentation site shows an example that you can try yourself that uses a schema from a blog site.

Post newPost = Post(
    title: 'New Post being saved', rating: 15, status: PostStatus.DRAFT);
await Amplify.DataStore.save(newPost);

Reading from Amplify DataStore
To read from the DataStore, you can query for all records of a given model type.

try {
  // Query the local DataStore for all records of the Post model.
  List<Post> posts = await Amplify.DataStore.query(Post.classType);
} catch (e) {
  print('Query failed: $e');
}

Synchronization with Amplify DataStore
If you enable data synchronization, there can be different versions of an object across clients, and multiple clients may have updated their copies of an object. DataStore will converge different object versions by applying conflict detection and resolution strategies. The default resolution is called Auto Merge, but other strategies include optimistic concurrency control and custom Lambda functions.

Additional Amplify Flutter Features
Amplify Flutter allows you to work with AWS in three additional ways:

  • Authentication. Amplify Flutter provides an interface for authenticating a user and enables use cases like Sign-Up, Sign-In, and Multi-Factor Authentication. Behind the scenes, it provides the necessary authorization to the other Amplify categories. It comes with built-in support for Cognito user pools and identity pools.
  • Storage. Amplify Flutter provides an interface for managing user content for your app in public, protected, or private storage buckets. It enables use cases like upload, download, and deleting objects and provides built-in support for Amazon Simple Storage Service (S3) by default.
  • Analytics. Amplify Flutter enables you to collect tracking data for authenticated or unauthenticated users in Amazon Pinpoint. You can easily record events and extend the default functionality for custom metrics or attributes as needed.

Available Now
Amplify Flutter is now generally available in all regions that support AWS Amplify. There is no additional cost for using Amplify Flutter; you only pay for the backend services your applications use above the free tier. Check out the pricing page for more details.

Visit the Amplify Flutter documentation to get started and learn more. Happy coding.

— Martin

Opt-in to the new Amazon SES console experience

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-console-opt-in/

Amazon Web Services (AWS) is pleased to announce the launch of the newly redesigned Amazon Simple Email Service (SES) console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer. Customers can access the new console experience via an opt-in link on the classic console.

Amazon SES now offers a new, optimized console to provide customers with a simpler, more intuitive way to create and manage their resources, collect sending activity data, and monitor reputation health. It also has a more robust set of configuration options and new features and functionality not previously available in the classic console.

Here are a few of the improvements customers can find in the new Amazon SES console:

Verified identities

The new console streamlines how customers manage their sender identities in Amazon SES by replacing the classic console’s identity management section with verified identities. Verified identities are a centralized place in which customers can view, create, and configure both domain and email address identities on one page. Other notable improvements include:

  • DKIM-based verification
    DKIM-based domain verification replaces the previous verification method which was based on TXT records. DomainKeys Identified Mail (DKIM) is an email authentication mechanism that receiving mail servers use to validate email. This new verification method offers customers the added benefit of enhancing their deliverability with DKIM-compliant email providers, and helping them achieve compliance with DMARC (Domain-based Message Authentication, Reporting and Conformance).
  • Amazon SES mailbox simulator
    The new mailbox simulator makes it significantly easier for customers to test how their applications handle different email sending scenarios. From a dropdown, customers select which scenario they’d like to simulate. Scenario options include bounces, complaints, and automatic out-of-office responses. The mailbox simulator provides customers with a safe environment in which to test their email sending capabilities.

Configuration sets

The new console makes it easier for customers to experience the benefits of using configuration sets. Configuration sets enable customers to capture and publish event data for specific segments of their email sending program. They also isolate IP reputation by segment through dedicated IP pools. With a wider range of configuration options, such as reputation tracking and custom suppression options, customers get even more out of this powerful feature.

  • Default configuration set
    One important feature to highlight is the introduction of the default configuration set. By assigning a default configuration set to an identity, customers ensure that it is always applied to messages sent from that identity at the time of sending. This enables customers to associate a dedicated IP pool or set up event publishing for an identity without having to modify their email headers, as shown in the sketch after this list.
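
As a sketch of what that assignment looks like outside the console, the following Python (boto3) example assumes the SES v2 API’s PutEmailIdentityConfigurationSetAttributes operation; the identity and configuration set names are placeholders.

import boto3

ses = boto3.client("sesv2")

# Assign a default configuration set to a verified identity so it is applied
# to every message sent from that identity. Names below are placeholders.
ses.put_email_identity_configuration_set_attributes(
    EmailIdentity="example.com",
    ConfigurationSetName="my-default-configuration-set",
)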

Account dashboard

There is also an account dashboard for the new SES console. This feature provides customers with fast access to key information about their account, including sending limits and restrictions, and overall account health. A visual representation of the customer’s daily email usage helps them ensure that they aren’t approaching their sending limits. Additionally, customers who use the Amazon SES SMTP interface to send emails can visit the account dashboard to obtain or update their SMTP credentials.

Reputation metrics

The new reputation metrics page provides customers with high-level insight into historic bounce and complaint rates. This is viewed at both the account level and the configuration set level. Bounce and complaint rates are two important metrics that Amazon SES considers when assessing a customer’s sender reputation, as well as the overall health of their account.

The redesigned Amazon SES console, with its easy-to-use workflows, will not only enhance customers’ onboarding experience, it will also improve their ongoing, day-to-day usage. The Amazon SES team remains committed to investing on behalf of our customers and empowering them to be productive anywhere, anytime. We invite you to opt in to the new Amazon SES console experience and let us know what you think.

Top 10 blog posts of 2020

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/top-10-posts-of-2020/

The AWS Security Blog endeavors to provide our readers with a reliable place to find the most up-to-date information on using AWS services to secure systems and tools, as well as thought leadership and effective ways to solve security issues. In turn, our readers have shown us what’s most important for securing their businesses. To that end, we’re happy to showcase the top 10 most popular posts of 2020:

The top 10 posts of 2020

  1. Use AWS Lambda authorizers with a third-party identity provider to secure Amazon API Gateway REST APIs
  2. How to use trust policies with IAM roles
  3. How to use G Suite as an external identity provider for AWS SSO
  4. Top 10 security items to improve in your AWS account
  5. Automated response and remediation with AWS Security Hub
  6. How to add authentication to a single-page web application with Amazon Cognito OAuth2 implementation
  7. Get ready for upcoming changes in the AWS Single Sign-On user sign-in process
  8. TLS 1.2 to become the minimum for all AWS FIPS endpoints
  9. How to use KMS and IAM to enable independent security controls for encrypted data in S3
  10. Use AWS Firewall Manager VPC security groups to protect your applications hosted on EC2 instances

If you’re new to AWS, or just discovering the Security Blog, we’ve also compiled a list of older posts that customers continue to find useful:

The top five posts of all time

  1. Where’s My Secret Access Key?
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  5. Securely Connect to Linux Instances Running in a Private Amazon VPC

Though these posts were well received, we’re always looking to improve. Let us know what you’d like to read about in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Author

Anna Brinkmann

Anna manages the Security Blog and enjoys poking her nose into all the details involved in the blog. If you have feedback about the blog, she’s always available on Slack to hear about it. Anna spends her days drinking lots of black tea, cutting extraneous words, and working to streamline processes.

New IRAP report is now available on AWS Artifact for Australian customers

Post Syndicated from Henry Xu original https://aws.amazon.com/blogs/security/new-irap-report-is-now-available-on-aws-artifact-for-australian-customers/

We are excited to announce that a new Information Security Registered Assessors Program (IRAP) report is now available on AWS Artifact. The new IRAP documentation pack brings new services into scope and includes a Cloud Security Control Matrix (CSCM) with specific information to help customers assess each applicable control required by the Australian Government Information Security Manual (ISM).

The scope of the new IRAP report includes a reassessment of 92 services, and adds 5 additional services: Amazon Macie, AWS Backup, AWS CodePipeline, AWS Control Tower, and AWS X-Ray. With the additional 5 services in scope of this cycle, we now have a total of 97 services assessed at the PROTECTED level. This provides more capabilities for our Australian government customers to deploy workloads at the PROTECTED level across security, storage, developer tools, and governance. For the full list of services, see the AWS Services in Scope page and select the IRAP tab. All services in scope for IRAP are available in the Asia Pacific (Sydney) Region.

We developed the IRAP documentation pack in accordance with the Australian Cyber Security Centre (ACSC)’s cloud security guidance and their Anatomy of a Cloud Assessment and Authorisation framework, which addresses guidance within the Attorney-General’s Department’s Protective Security Policy Framework (PSPF) and the Digital Transformation Agency (DTA)’s Secure Cloud Strategy.

We created the IRAP documentation pack to help Australian government agencies and their partners plan, architect, and risk-assess workloads built on AWS Cloud services. Please reach out to your AWS representatives to let us know what additional services you would like to see in scope for coming IRAP assessments. We strive to bring more services into the scope of the IRAP PROTECTED level, based on your requirements.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Artifact forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Henry Xu

Henry is an APAC Audit Program Manager in AWS Security Assurance, currently based in Canberra, Australia. He manages our regional compliance programs, including IRAP assessments. With experiences across leadership and technical roles in both public and private sectors, he is passionate about secure cloud adoption. Outside of AWS, Henry enjoys time with his family, and he loves dancing.

AWS PrivateLink for Amazon S3 is Now Generally Available

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/aws-privatelink-for-amazon-s3-now-available/

At AWS re:Invent, we pre-announced that AWS PrivateLink for Amazon S3 was coming soon, and soon has arrived — this new feature is now generally available. AWS PrivateLink provides private connectivity between Amazon Simple Storage Service (S3) and on-premises resources using private IPs from your virtual network.

Way back in 2015, S3 was the first service to add a VPC endpoint; these endpoints provide a secure connection to S3 that does not require an internet gateway or NAT instances. Our customers welcomed this new flexibility but also told us they needed to access S3 from on-premises applications privately over secure connections provided by AWS Direct Connect or AWS VPN.

Our customers are very resourceful and by setting up proxy servers with private IP addresses in their Amazon Virtual Private Clouds and using gateway endpoints for S3, they found a way to solve this problem. While this solution works, proxy servers typically constrain performance, add additional points of failure, and increase operational complexity.

We looked at how we could solve this problem for our customers without these drawbacks and PrivateLink for S3 is the result.

With this feature you can now access S3 directly as a private endpoint within your secure, virtual network using a new interface VPC endpoint in your Virtual Private Cloud. This extends the functionality of existing gateway endpoints by enabling you to access S3 using private IP addresses. API requests and HTTPS requests to S3 from your on-premises applications are automatically directed through interface endpoints, which connect to S3 securely and privately through PrivateLink.

Interface endpoints simplify your network architecture when connecting to S3 from on-premises applications by eliminating the need to configure firewall rules or an internet gateway. You can also gain additional visibility into network traffic with the ability to capture and monitor flow logs in your VPC. Additionally, you can set security groups and access control policies on your interface endpoints.
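
To give a feel for the setup, here is a minimal Python (boto3) sketch that creates an S3 interface endpoint in a VPC. The VPC, subnet, and security group IDs are placeholders, and you should confirm details such as DNS behavior against the PrivateLink for S3 documentation.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All resource IDs below are placeholders; substitute your own VPC, subnets,
# and security groups. The service name follows the com.amazonaws.<region>.s3 form.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print("Endpoint ID:", response["VpcEndpoint"]["VpcEndpointId"])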

Available Now
PrivateLink for S3 is available in all AWS Regions. AWS PrivateLink is available at a low per-GB charge for data processed and a low hourly charge for interface VPC endpoints. We hope you enjoy using this new feature and look forward to receiving your feedback. To learn more, check out the PrivateLink for S3 documentation.

Try out AWS PrivateLink for Amazon S3 today, and happy storing.

— Martin

Over 40 services require TLS 1.2 minimum for AWS FIPS endpoints

Post Syndicated from Janelle Hopper original https://aws.amazon.com/blogs/security/over-40-services-require-tls-1-2-minimum-for-aws-fips-endpoints/

In a March 2020 blog post, we told you about work Amazon Web Services (AWS) was undertaking to update all of our AWS Federal Information Processing Standard (FIPS) endpoints to a minimum of Transport Layer Security (TLS) 1.2 across all AWS Regions. Today, we’re happy to announce that over 40 services have been updated and now require TLS 1.2:

These services no longer support using TLS 1.0 or TLS 1.1 on their FIPS endpoints. To help you meet your compliance needs, we are updating all AWS FIPS endpoints to a minimum of TLS 1.2 across all Regions. We will continue to update our services to support only TLS 1.2 or later on AWS FIPS endpoints, which you can check on the AWS FIPS webpage. This change doesn’t affect non-FIPS AWS endpoints.

When you make a connection from your client application to an AWS service endpoint, the client provides its TLS minimum and TLS maximum versions. The AWS service endpoint will always select the maximum version offered.
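
You can observe this negotiation from a client with a short Python sketch (Python 3.7 or later): cap the versions the client offers and see what the endpoint selects. The endpoint name below is a placeholder used for illustration; substitute the FIPS endpoint relevant to your workload.

import socket
import ssl

host = "monitoring-fips.us-east-1.amazonaws.com"  # placeholder FIPS endpoint

context = ssl.create_default_context()
# Offer only TLS 1.2 through TLS 1.3; the endpoint selects the highest it supports.
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Endpoint selected:", tls.version())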

What is TLS?

TLS is a cryptographic protocol designed to provide secure communication across a computer network. API calls to AWS services are secured using TLS.

What is FIPS 140-2?

FIPS 140-2 is a US and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information.

What are AWS FIPS endpoints?

All AWS services offer TLS 1.2 encrypted endpoints that can be used for all API calls. Some AWS services also offer FIPS 140-2 endpoints for customers who need to use FIPS validated cryptographic libraries to connect to AWS services.

Why are we upgrading to TLS 1.2?

Our upgrade to TLS 1.2 across all Regions reflects our ongoing commitment to help customers meet their compliance needs.

Is there more assistance available to help verify or update client applications?

If you’re using an AWS software development kit (AWS SDK), you can find information about how to properly configure the minimum and maximum TLS versions for your clients in the following AWS SDK topics:

You can also visit Tools to Build on AWS and browse by programming language to find the relevant SDK. AWS Support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support doesn’t include code development for client applications.

If you have any questions or issues, you can start a new thread on one of the AWS forums, or contact AWS Support or your technical account manager (TAM).

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janelle Hopper

Janelle Hopper is a Senior Technical Program Manager in AWS Security with over 15 years of experience in the IT security field. She works with AWS services, infrastructure, and administrative teams to identify and drive innovative solutions that improve AWS’ security posture.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Verified episode 3: In conversation with Noopur Davis from Comcast

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/verified-episode-3-in-conversation-with-noopur-davis-from-comcast/

2020 emphasized the value of staying connected with our customers. On that front, I’m proud to bring you the third episode of our new video series, Verified. The series showcases conversations with security leaders discussing trends and lessons learned in cybersecurity, privacy, and the cloud. In episode three, I’m talking to Noopur Davis, Executive Vice President and Chief Product and Information Security Officer at Comcast. As you can imagine, she had a busy 2020, as Comcast owns and operates Comcast Business and Xfinity, among others. During our conversation, we spoke about Comcast’s commitment to proactive security, with leaders setting a high bar for technology and decision-making.

Additionally, Noopur shared her journey in becoming a security leader at Comcast, talking about career growth, creating a security culture, diversity and inclusion, and measuring impact. During our conversation, she also detailed the importance of embedding security into the development lifecycle: “At Comcast, we stood up a Cloud Center of Excellence that included network engineering, security engineering and cloud engineering as equal partners. We came together to ensure we had the governance, technology, implementation, and rollout set up. Through this collaboration, everything came together. Collaboration is how this happens—the security team has to be embedded with other key technology teams.”

Noopur also recognized the heroic efforts of Comcast’s team responsible for security and increasing network bandwidth to meet the new work-from-home realities introduced by the pandemic. These efforts included dramatically accelerating timelines to meet pace of demand. “The network has never been more important. People are now doing everything over the network. I’m so proud to say that all the investment over the years that Comcast made in the network has stood up to this new reality. We added 35 terabits per second of capacity to get ready for increased demand. Our frontline people that did this work during the pandemic are really the heroes of Comcast. We also had programs underway that were accelerated by months. We did things in weeks that weren’t planned to be done for months.”

Stay tuned for future episodes of Verified here on the AWS Security Blog. You can watch episode one, an interview with Jason Chan, Vice President of Information Security at Netflix, and episode two, an interview with Emma Smith, Vodafone’s Global Cybersecurity Director, on YouTube. If you have an idea or a topic you’d like covered in this series, please drop us a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

New – Multiple Private Marketplace Catalogs

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-multiple-private-marketplace-catalogs/

We launched AWS Marketplace in 2014, and it allows customers to find, buy, and immediately start using cloud-based applications developed by independent software vendors (ISVs). In 2018, we added the ability to create a Private Marketplace where you can curate a list of approved products your users can purchase from AWS Marketplace. Today we are adding a new feature so that you can create multiple Private Marketplace catalogs within your organization in AWS Organizations. Each Private Marketplace can contain a different set of products to provide a tailored experience for particular groups of accounts.

Customers with diverse sets of users need to scale their governance to meet business needs. For example, a large enterprise with subsidiaries in different industries has varying software needs and policies for each subsidiary. IT administrators struggle to scale their procurement processes to address these differing needs across their organization and often revert to a single procurement policy to govern the entire organization. While there are exceptions, companies depend on time-consuming, manual processes to ensure the correct products are approved and procured. As a result, customers struggle to scale their procurement and governance processes to meet their organization’s speed and agility demands.

After listening to customers explain these sorts of issues, we set about developing a solution to help, and the concept of multiple Private Marketplace catalogs was born. We call these Private Marketplace experiences. They represent the experience that your users will see when they browse their AWS Marketplace. Each experience is a collection of approved AWS Marketplace products that you, as an administrator, select and curate. You can add any of the 8000-plus products from AWS Marketplace to an experience and customize the look and feel by changing the Private Marketplace logo, wording, and color.

In the Private Marketplace admin portal, within the AWS Marketplace interface, you can create an Account group, which is essentially a way of grouping together a number of AWS accounts. You can then associate an experience with an Account group, enabling you to give different users distinct experiences. This association allows you to govern the use of third-party software subscriptions and ensure they adhere to your internal procurement policies.

To give you a flavor of this feature, I will create a new Private Marketplace experience.

How to Create a Private Marketplace Experience
I head over to the Private Marketplace interface and select the Experiences link; this lists the default experience called Private Marketplace Experience. I then press the Create experience button.

I give the experience a name and a description and click the Create experience button.

If I drill into the details for my newly created experience, I can then start to add products to my catalog. I use the search in the All AWS Marketplace products section and look for Tableau. I find the Tableau Server product and select it. I then click the Add button, which adds the product to my catalog.

Screenshot, adding a product

To associate this experience with a set of AWS accounts, I click on the Create association button; this allows me to create a new Account group.

I give my account group the title Business Intelligence Group and add a description. I then associate the AWS accounts that I want to be part of this group. Finally, I associate this group with the Business Intelligence Team experience I created earlier and click Create account group.

I now go to the settings for the experience and change the switch so that the experience is now live. On this screen I can also customize the look and feel of the Private Marketplace by uploading a logo, changing the color, and adding custom wording.

Now, if I log in and browse the AWS Marketplace using an account that is in the Account group I just created, I will see the Private Marketplace experience rather than the regular AWS Marketplace.

Available now
The ability to create multiple Private Marketplace catalogs is available now in all Regions that support AWS Marketplace, and you can start using the feature directly from the Private Marketplace interface. Additionally, you can use the accompanying APIs to integrate with existing approval or ticketing systems and simplify management across catalogs.
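To give a flavor of what such an integration could look like, here is a minimal boto3 sketch that submits a change set through the AWS Marketplace Catalog API to approve a product for a Private Marketplace experience. The change type name, the entity type string, and both identifiers below are illustrative assumptions rather than values taken from this post; check the Private Marketplace and Catalog API documentation for the exact values your catalog expects.

import json
import boto3

# The AWS Marketplace Catalog API (service name 'marketplace-catalog') is the
# programmatic entry point used for Private Marketplace management.
catalog = boto3.client('marketplace-catalog', region_name='us-east-1')

EXPERIENCE_ID = 'exp-1234567890abc'  # placeholder experience entity ID
PRODUCT_ID = 'prod-abcdefgh1234'     # placeholder AWS Marketplace product ID

# Submit a change set that approves the product for the experience.
# 'AllowProductProcurement' and 'Experience@1.0' are illustrative strings;
# confirm the exact change type and entity type in the documentation.
response = catalog.start_change_set(
    Catalog='AWSMarketplace',
    ChangeSet=[
        {
            'ChangeType': 'AllowProductProcurement',
            'Entity': {'Type': 'Experience@1.0', 'Identifier': EXPERIENCE_ID},
            'Details': json.dumps({'Products': [{'Ids': [PRODUCT_ID]}]}),
        }
    ],
)

# Change sets are asynchronous, so poll for completion before relying on them.
status = catalog.describe_change_set(
    Catalog='AWSMarketplace', ChangeSetId=response['ChangeSetId']
)
print(status['Status'])

An approval or ticketing workflow could call a sketch like this after a request is approved, keeping the curated catalogs in sync with your procurement process.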

Visit the Private Marketplace page to get started and learn more. Happy curating.

— Martin

Understanding memory usage in your Java application with Amazon CodeGuru Profiler

Post Syndicated from Fernando Ciciliati original https://aws.amazon.com/blogs/devops/understanding-memory-usage-in-your-java-application-with-amazon-codeguru-profiler/

“Where has all that free memory gone?” This is the question we ask ourselves every time our application emits that dreaded OutOfMemoryError just before it crashes. Amazon CodeGuru Profiler can help you find the answer.

Thanks to its brand-new memory profiling capabilities, troubleshooting and resolving memory issues in Java applications (or almost anything that runs on the JVM) is much easier. AWS launched the CodeGuru Profiler Heap Summary feature at re:Invent 2020. This is the first step in helping us, as developers, understand what our software is doing with all the memory it uses.

The Heap Summary view shows a list of Java classes and data types present in the Java Virtual Machine heap, alongside the amount of memory they’re retaining and the number of instances they represent. The following screenshot shows an example of this view.

Amazon CodeGuru Profiler heap summary view example

Figure: Amazon CodeGuru Profiler Heap Summary feature

Because CodeGuru Profiler is a low-overhead, production profiling service designed to be always on, it can capture and represent how memory utilization varies over time, providing helpful visual hints about the object types and the data types that exhibit a growing trend in memory consumption.

In the preceding screenshot, we can see several lines on the graph:

  • The red top line, horizontal and flat, shows how much memory has been reserved as heap space in the JVM. In this case, we see a heap size of 512 MB, which can usually be configured in the JVM with command line parameters like -Xmx.
  • The second line from the top, blue, represents the total memory in use in the heap, regardless of object type.
  • The third, fourth, and fifth lines show how much memory space each specific type has been using historically in the heap. We can easily spot that java.util.LinkedHashMap$Entry and java.lang.UUID display growing trends, whereas byte[] has a flat line and seems stable in memory usage.

Types that exhibit a constantly growing trend of memory utilization over time deserve a closer look. Profiler helps you focus your attention on these cases. By associating the information presented by the Profiler with your own knowledge of your application and code base, you can evaluate whether the amount of memory being used for a specific data type can be considered normal, or whether it might be a memory leak – the unintentional holding of memory by an application due to a failure to free unused objects. In our example above, java.util.LinkedHashMap$Entry and java.lang.UUID are good candidates for investigation.

To make this functionality available to customers, CodeGuru Profiler uses the power of Java Flight Recorder (JFR), which is now openly available with Java 8 (since OpenJDK release 262) and above. The Amazon CodeGuru Profiler agent for Java, which already does an awesome job capturing data about CPU utilization, has been extended to periodically collect memory retention metrics from JFR and submit them for processing and visualization via Amazon CodeGuru Profiler. Thanks to its high stability and low overhead, the Profiler agent can be safely deployed to services in production, because it is exactly there, under real workloads, that really interesting memory issues are most likely to show up.

Summary

For more information about CodeGuru Profiler and other AI-powered services in the Amazon CodeGuru family, see Amazon CodeGuru. If you haven’t tried the CodeGuru Profiler yet, start your 90-day free trial right now and understand why continuous profiling is becoming a must-have in every production environment. For Amazon CodeGuru customers who are already enjoying the benefits of always-on profiling, this new feature is available at no extra cost. Just update your Profiler agent to version 1.1.0 or newer, and enable Heap Summary in your agent configuration.

 

Happy profiling!

AWS is the first global cloud service provider to comply with the new K-ISMS-P standard

Post Syndicated from Seulun Sung original https://aws.amazon.com/blogs/security/aws-is-the-first-global-cloud-service-provider-to-comply-with-the-new-k-isms-p-standard/

We’re excited to announce that Amazon Web Services (AWS) has achieved certification under the Korea-Personal Information & Information Security Management System (K-ISMS-P) standard (effective from December 16, 2020 to December 15, 2023). The assessment by the Korea Internet & Security Agency (KISA) covered the operation of infrastructure (including compute, storage, networking, databases, and security) in the AWS Asia Pacific (Seoul) Region. AWS was the first global cloud service provider (CSP) to obtain K-ISMS certification (the previous version of K-ISMS-P) back in 2017. Now AWS is the first global CSP to achieve compliance with the K-ISMS portion of the new K-ISMS-P standard.

Sponsored by KISA and affiliated with the Korean Ministry of Science and ICT (MSIT), K-ISMS-P serves as a standard for evaluating whether enterprises and organizations operate and manage their information security management systems consistently and securely, such that they thoroughly protect their information assets. The new K-ISMS-P standard combined the K-ISMS and K-PIMS (Personal Information Management System) standards with updated control items. Accordingly, the new K-ISMS certification and K-ISMS-P certification (personal information–focused) are introduced under the updated standard.

In this year’s audit, 110 services running in the Asia Pacific (Seoul) Region were included. The Availability Zone that launched in 2020 was also added to the certification scope.

This certification helps enterprises and organizations across South Korea, regardless of industry, meet KISA compliance requirements more efficiently. Achieving this certification demonstrates the proactive approach AWS has taken to meet compliance requirements set by the South Korean government and to deliver secure AWS services to customers. In addition, we’ve launched Quick Start and Operational Best Practices (conformance pack) pages to provide customers with a compliance framework that they can utilize for their K-ISMS-P compliance needs. Enterprises and organizations can use these toolkits and the AWS certification to reduce the effort and cost of getting their own K-ISMS-P certification. You can download the AWS K-ISMS certification under the K-ISMS-P standard from AWS Artifact. To learn more about the AWS K-ISMS certification, see the AWS K-ISMS page. If you have any questions, don’t hesitate to contact your AWS Account Manager.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Seulun Sung

Seulun is a Security Audit Program Manager at AWS, leading security certification programs, with a focus on the K-ISMS-P program in South Korea. She has a decade of experience in deploying global policies and processes to local Regions and helping customers adopt regulations. She is passionate about helping to build customers’ trust and provide them assurance on cloud security.

Amazon Lex Introduces an Enhanced Console Experience and New V2 APIs

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/amazon-lex-enhanced-console-experience/

Today, the Amazon Lex team has released a new console experience that makes it easier to build, deploy, and manage conversational experiences. Along with the new console, we have also introduced new V2 APIs, including continuous streaming capability. These improvements allow you to reach new audiences, have more natural conversations, and develop and iterate faster.

The new Lex console and V2 APIs make it easier to build and manage bots, with three main benefits. First, you can add a new language to a bot at any time and manage all the languages through the lifecycle of design, test, and deployment as a single resource. The new console experience allows you to quickly move between different languages to compare and refine your conversations. I’ll demonstrate later how easy it was to add French to my English bot.

Second, V2 APIs simplify versioning. The new Lex console and V2 APIs provide a simple information architecture where the bot intents and slot types are scoped to a specific language. Versioning is performed at the bot level so that resources such as intents and slot types do not have to be versioned individually. All resources within the bot (language, intents, and slot types) are archived as part of the bot version creation. This new way of working makes it easier to manage bots.

Lastly, you have additional builder productivity tools and capabilities that give you more flexibility and control over your bot design process. You can now save partially completed work as you develop different bot elements, scripting, testing, and tuning your configuration as you go. This provides you with more flexibility as you iterate through bot development. For example, you can save a slot that refers to a deleted slot type. In addition to saving partially completed work, you can quickly navigate across the configuration without getting lost. The new Conversation flow capability allows you to maintain your orientation as you move across the different intents and slot types.

In addition to the enhanced console and APIs, we are providing a new streaming conversation API. Natural conversations are punctuated with pauses and interruptions. For example, a customer may ask to pause the conversation or hold the line while looking up the necessary information before answering a question, such as retrieving credit card details to make a bill payment. With the streaming conversation APIs, you can pause a conversation and handle interruptions directly as you configure the bot. Overall, the design and implementation of the conversation is simplified and easier to manage. Bot builders can quickly enhance the conversational capability of virtual contact center agents or smart assistants.
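As a quick sketch of how an application might talk to a V2 bot through the new runtime API, the boto3 snippet below sends a line of text and prints the reply. The bot ID, alias ID, and session ID are placeholders rather than values from this walkthrough.

import boto3

# The Lex V2 runtime client (service name 'lexv2-runtime') exposes the
# RecognizeText and RecognizeUtterance operations mentioned later in this post.
lex_runtime = boto3.client('lexv2-runtime')

response = lex_runtime.recognize_text(
    botId='ABCDEFGHIJ',          # placeholder: your V2 bot ID
    botAliasId='TSTALIASID',     # placeholder: the alias you want to test
    localeId='en_GB',            # the English (GB) language used in this post
    sessionId='demo-session-1',  # any session identifier you choose
    text='I would like to schedule a talk review',
)

# Print the bot's reply and the intent that Lex matched.
for message in response.get('messages', []):
    print(message['content'])
print(response['sessionState']['intent']['name'])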

Let’s create a new bot and explore how some of Lex’s new console and streaming API features provide an improved bot building experience.

Building a bot
I head over to the new V2 Lex console and click on Create bot to start things off.

I select that I want to Start with an example and select the MakeAppointment example.

Over the years, I have spoken at many conferences, so I now offer to review talks that other community members are producing. Since these speakers are often in different time zones, it can be complicated to organize the various appointments for the different types of reviews that I offer. So I have decided to build a bot to streamline the process. I give my bot the name TalkReview and provide a description. I also select Create a role with basic Amazon Lex permissions and use this as my runtime role.

I must add at least one language to my bot, so I start with English (GB). I also select the text-to-speech voice that I want to use should my bot require voice interaction rather than just text.

During the creation, there is a new button that allows me to Add another language. I click on this to add French (FR) to my bot. You can add languages during creation as I am doing here, or you can add additional languages later on as your bot becomes more popular and needs to work with new audiences.

I can now start defining intents for my bot, and I can begin the iterative process of building and testing my bot. I won’t go into all of the details of how to create a bot or show you all of the intents I added, as we have better tutorials that can show you that step-by-step, but I will point out a few new features that make this new enhanced console really compelling.

The new Conversation flow provides you with a visual flow of the conversation, showing the sample utterances you provide and how your conversation might work in the real world. I love this feature because you can click on the various elements, and it will take you to where you can make changes. For example, I can click on the prompt What type of review would you like to schedule, and I am taken to the place where I can edit this prompt.

The new console has a very well-thought-out approach to versioning a bot. At any time, on the Bot versions screen, I can click Create version, and it will take a snapshot of the bot’s current configuration. I can then associate that version with an alias. For example, in my application, I have an alias called Production. This Production alias is associated with Version 1, but at any time I could switch it to use a different version or even roll back to a previous version if I discover problems.
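The same versioning workflow is available programmatically through the V2 model-building API. Here is a rough boto3 sketch, assuming a bot with English (GB) and French (FR) locales; the bot ID is a placeholder, and in practice you would wait for the new version to finish creating before pointing an alias at it.

import boto3

# The Lex V2 model-building client (service name 'lexv2-models') manages
# bots, languages, versions, and aliases.
lex_models = boto3.client('lexv2-models')

BOT_ID = 'ABCDEFGHIJ'  # placeholder: the ID of the TalkReview bot

# Snapshot the current DRAFT configuration of each locale as a new version.
version = lex_models.create_bot_version(
    botId=BOT_ID,
    description='Snapshot before promoting to Production',
    botVersionLocaleSpecification={
        'en_GB': {'sourceBotVersion': 'DRAFT'},
        'fr_FR': {'sourceBotVersion': 'DRAFT'},
    },
)

# Point a Production alias at that version. Rolling back later is a matter of
# calling update_bot_alias with a different botVersion.
alias = lex_models.create_bot_alias(
    botAliasName='Production',
    botVersion=version['botVersion'],
    botId=BOT_ID,
)
print(alias['botAliasId'])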

The testing experience is now very streamlined. Once I have built the bot, I can click the test button at the bottom right-hand corner of the screen and start speaking to the bot to test the experience. You can also expand the Inspect window, which gives you details about the conversation’s state, and explore the raw JSON inputs and outputs.

Things to know
Here are a couple of important things to keep in mind when you use the enhanced console:

  • Integration with Amazon Connect – Currently, bots built in the new console cannot be integrated with Amazon Connect contact flows. We plan to provide this integration as part of the near-term roadmap. You can use the current console and existing APIs to create and integrate bots with Amazon Connect.
  • Pricing – You only pay for what you use. The charges remain the same for existing audio and text APIs, renamed as RecognizeUtterance and RecognizeText. For the new Streaming capabilities, please refer to the pricing detail here.
  • All existing APIs and bots will continue to be supported. The newly announced features are only available in the new console and V2 APIs.

Go Build
The enhanced Lex console is available now, and you can start using it today. The enhanced experience and V2 APIs are available in all existing Regions and support all current languages. So, please give this console a try and let us know what you think. To learn more, check out the documentation for the console and the streaming API.

Happy Building!
— Martin

re:Invent – New security sessions launching soon

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/reinvent-new-security-sessions-launching-soon/

Where did the last month go? Were you able to catch all of the sessions in the Security, Identity, and Compliance track you hoped to see at AWS re:Invent? If you missed any, don’t worry—you can stream all the sessions released in 2020 via the AWS re:Invent website. Additionally, we’re starting 2021 with all new sessions that you can stream live January 12–15. Here are the new Security, Identity, and Compliance sessions—each session is offered at multiple times, so you can find the time that works best for your location and schedule.

Protecting sensitive data with Amazon Macie and Amazon GuardDuty – SEC210
Himanshu Verma, AWS Speaker

Tuesday, January 12 – 11:00 AM to 11:30 AM PST
Tuesday, January 12 – 7:00 PM to 7:30 PM PST
Wednesday, January 13 – 3:00 AM to 3:30 AM PST

As organizations manage growing volumes of data, identifying and protecting your sensitive data can become increasingly complex, expensive, and time-consuming. In this session, learn how Amazon Macie and Amazon GuardDuty together provide protection for your data stored in Amazon S3. Amazon Macie automates the discovery of sensitive data at scale and lowers the cost of protecting your data. Amazon GuardDuty continuously monitors and profiles S3 data access events and configurations to detect suspicious activities. Come learn about these security services and how to best use them for protecting data in your environment.

BBC: Driving security best practices in a decentralized organization – SEC211
Apurv Awasthi, AWS Speaker
Andrew Carlson, Sr. Software Engineer – BBC

Tuesday, January 12 – 1:15 PM to 1:45 PM PST
Tuesday, January 12 – 9:15 PM to 9:45 PM PST
Wednesday, January 13 – 5:15 AM to 5:45 AM PST

In this session, Andrew Carlson, engineer at BBC, talks about BBC’s journey while adopting AWS Secrets Manager for lifecycle management of its arbitrary credentials such as database passwords, API keys, and third-party keys. He provides insight on BBC’s secrets management best practices and how the company drives these at enterprise scale in a decentralized environment that has a highly visible scope of impact.

Get ahead of the curve with DDoS Response Team escalations – SEC321
Fola Bolodeoku, AWS Speaker

Tuesday, January 12 – 3:30 PM to 4:00 PM PST
Tuesday, January 12 – 11:30 PM to 12:00 AM PST
Wednesday, January 13 – 7:30 AM to 8:00 AM PST

This session identifies tools and tricks that you can use to prepare for application security escalations, with lessons learned provided by the AWS DDoS Response Team. You learn how AWS customers have used different AWS offerings to protect their applications, including network access control lists, security groups, and AWS WAF. You also learn how to avoid common misconfigurations and mishaps observed by the DDoS Response Team, and you discover simple yet effective actions that you can take to better protect your applications’ availability and security controls.

Network security for serverless workloads – SEC322
Alex Tomic, AWS Speaker

Thursday, January 14 – 1:30 PM to 2:00 PM PST
Thursday, January 14 – 9:30 PM to 10:00 PM PST
Friday, January 15 – 5:30 AM to 6:00 AM PST

Are you building a serverless application using services like Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon Aurora, and Amazon SQS? Would you like to apply enterprise network security to these AWS services? This session covers how network security concepts like encryption, firewalls, and traffic monitoring can be applied to a well-architected AWS serverless architecture.

Building your cloud incident response program – SEC323
Freddy Kasprzykowski, AWS Speaker

Wednesday, January 13 – 9:00 AM to 9:30 AM PST
Wednesday, January 13 – 5:00 PM to 5:30 PM PST
Thursday, January 14 – 1:00 AM to 1:30 AM PST

You’ve configured your detection services and now you’ve received your first alert. This session provides patterns that help you understand what capabilities you need to build and run an effective incident response program in the cloud. It includes a review of some logs to see what they tell you and a discussion of tools to analyze those logs. You learn how to make sure that your team has the right access, how automation can help, and which incident response frameworks can guide you.

Beyond authentication: Guide to secure Amazon Cognito applications – SEC324
Mahmoud Matouk, AWS Speaker

Wednesday, January 13 – 2:15 PM to 2:45 PM PST
Wednesday, January 13 – 10:15 PM to 10:45 PM PST
Thursday, January 14 – 6:15 AM to 6:45 AM PST

Amazon Cognito is a flexible user directory that can meet the needs of a number of customer identity management use cases. Web and mobile applications can integrate with Amazon Cognito in minutes to offer user authentication and get standard tokens to be used in token-based authorization scenarios. This session covers best practices that you can implement in your application to secure and protect tokens. You also learn about new Amazon Cognito features that give you more options to improve the security and availability of your application.

Event-driven data security using Amazon Macie – SEC325
Neha Joshi, AWS Speaker

Thursday, January 14 – 8:00 AM to 8:30 AM PST
Thursday, January 14 – 4:00 PM to 4:30 PM PST
Friday, January 15 – 12:00 AM to 12:30 AM PST

Amazon Macie sensitive data discovery jobs for Amazon S3 buckets help you discover sensitive data such as personally identifiable information (PII), financial information, account credentials, and workload-specific sensitive information. In this session, you learn about an automated approach to discover sensitive information whenever changes are made to the objects in your S3 buckets.

Instance containment techniques for effective incident response – SEC327
Jonathon Poling, AWS Speaker

Thursday, January 14 – 10:15 AM to 10:45 AM PST
Thursday, January 14 – 6:15 PM to 6:45 PM PST
Friday, January 15 – 2:15 AM to 2:45 AM PST

In this session, learn about several instance containment and isolation techniques, ranging from simple and effective to more complex and powerful, that leverage native AWS networking services and account configuration techniques. If an incident happens, you may have questions like “How do we isolate the system while preserving all the valuable artifacts?” and “What options do we even have?”. These are valid questions, but there are more important ones to discuss amidst a (possible) incident. Join this session to learn highly effective instance containment techniques in a crawl-walk-run approach that also facilitates preservation and collection of valuable artifacts and intelligence.

Trusted connects for government workloads – SEC402
Brad Dispensa, AWS Speaker

Wednesday, January 13 – 11:15 AM to 11:45 AM PST
Wednesday, January 13 – 7:15 PM to 7:45 PM PST
Thursday, January 14 – 3:15 AM to 3:45 AM PST

Cloud adoption across the public sector is making it easier to provide government workforces with seamless access to applications and data. With this move to the cloud, we also need updated security guidance to ensure public-sector data remains secure. For example, the TIC (Trusted Internet Connections) initiative has been a requirement for US federal agencies for some time. The recent TIC-3 moves from prescriptive guidance to an outcomes-based model. This session walks you through how to leverage AWS features to better protect public-sector data using TIC-3 and the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF). Also, learn how this might map to other geographies.

I look forward to seeing you in these sessions. Please see the re:Invent agenda for more details and to build your schedule.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Optimizing AWS Lambda cost and performance using AWS Compute Optimizer

Post Syndicated from Chad Schmutzer original https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-cost-and-performance-using-aws-compute-optimizer/

This post is authored by Brooke Chen, Senior Product Manager for AWS Compute Optimizer, Letian Feng, Principal Product Manager for AWS Compute Optimizer, and Chad Schmutzer, Principal Developer Advocate for Amazon EC2

Optimizing compute resources is a critical component of any application architecture. Over-provisioning compute can lead to unnecessary infrastructure costs, while under-provisioning compute can lead to poor application performance.

Launched in December 2019, AWS Compute Optimizer is a recommendation service for optimizing the cost and performance of AWS compute resources. It generates actionable optimization recommendations tailored to your specific workloads. Over the last year, thousands of AWS customers reduced their compute costs by up to 25% by using Compute Optimizer to help choose the optimal Amazon EC2 instance types for their workloads.

One of the most frequent requests from customers is for AWS Lambda recommendations in Compute Optimizer. Today, we announce that Compute Optimizer now supports memory size recommendations for Lambda functions. This allows you to reduce costs and increase performance for your Lambda-based serverless workloads. To get started, opt in to Compute Optimizer so that it can start generating recommendations.

Overview

With Lambda, there are no servers to manage, it scales automatically, and you only pay for what you use. However, choosing the right memory size setting for a Lambda function is still an important task. Compute Optimizer uses machine learning-based memory recommendations to help with this task.

These recommendations are available through the Compute Optimizer console, AWS CLI, AWS SDK, and the Lambda console. Compute Optimizer continuously monitors Lambda functions, using historical performance metrics to improve recommendations over time. In this blog post, we walk through an example to show how to use this feature.
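As a sketch of the SDK route, the boto3 snippet below lists Lambda functions that Compute Optimizer currently flags as not optimized. The response field names mirror the CLI output shown later in this post; the filter name and values are assumptions to verify against the Compute Optimizer API reference.

import boto3

# Compute Optimizer exposes the same recommendations through the SDK
# (service name 'compute-optimizer') that you see in the console and CLI.
co = boto3.client('compute-optimizer', region_name='us-east-1')

# Ask only for functions with a NotOptimized finding. The filter name
# 'Finding' mirrors the 'finding' field in the CLI output; confirm the exact
# filter names and values in the API reference.
response = co.get_lambda_function_recommendations(
    filters=[{'name': 'Finding', 'values': ['NotOptimized']}]
)

for rec in response['lambdaFunctionRecommendations']:
    top_option = rec['memorySizeRecommendationOptions'][0]
    print(
        rec['functionArn'],
        rec['currentMemorySize'], 'MB ->',
        top_option['memorySize'], 'MB',
        rec['findingReasonCodes'],
    )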

Using Compute Optimizer for Lambda

This tutorial uses the AWS CLI v2 and the AWS Management Console.

In this tutorial, we set up two compute jobs that run every minute in the US East (N. Virginia) Region. One job is more CPU intensive than the other. Initial tests show that invocations of both jobs typically complete in less than 60 seconds. The goal is either to reduce cost without much increase in duration, or to reduce duration in a cost-efficient manner.

Based on these requirements, a serverless solution can help with this task. Amazon EventBridge can schedule the Lambda functions using rules. To ensure that the functions are optimized for cost and performance, you can use the memory recommendation support in Compute Optimizer.

In your AWS account, opt in to Compute Optimizer to start analyzing AWS resources. Ensure you have the appropriate IAM permissions configured – follow these steps for guidance. If you prefer to use the console to opt in, follow these steps. To opt in, enter the following command in a terminal window:

$ aws compute-optimizer update-enrollment-status --status Active

Once you enable Compute Optimizer, it starts to scan for functions that have been invoked at least 50 times over the trailing 14 days. The next section shows two example scheduled Lambda functions for analysis.

Example Lambda functions

The code for the non-CPU intensive job is below. A Lambda function named lambda-recommendation-test-sleep is created with memory size configured as 1024 MB. An EventBridge rule is created to trigger the function on a recurring 1-minute schedule:

import json
import time

def lambda_handler(event, context):
  # Simulate a non-CPU-intensive job: spend most of the invocation idle
  time.sleep(30)
  # Allocate a large list (roughly 800 MB on a 64-bit runtime) to use memory
  x=[0]*100000000
  return {
    'statusCode': 200,
    'body': json.dumps('Hello World!')
  }

The code for the CPU intensive job is below. A Lambda function named lambda-recommendation-test-busy is created with memory size configured as 128 MB. An EventBridge rule is created to trigger the function on a recurring 1-minute schedule:

import json
import random

def lambda_handler(event, context):
  # Simulate a CPU-intensive job: sum 20 million pseudo-random numbers
  random.seed(1)
  x=0
  for i in range(0, 20000000):
    x+=random.random()

  return {
    'statusCode': 200,
    'body': json.dumps('Sum:' + str(x))
  }
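The post doesn’t show how the one-minute schedule is created, so here is a rough boto3 sketch of one way to wire it up for the first function, assuming the function already exists; the rule name and statement ID are placeholders.

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

FUNCTION_NAME = 'lambda-recommendation-test-sleep'
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:' + FUNCTION_NAME
RULE_NAME = 'invoke-' + FUNCTION_NAME + '-every-minute'  # placeholder name

# Create (or update) an EventBridge rule that fires once a minute.
rule = events.put_rule(Name=RULE_NAME, ScheduleExpression='rate(1 minute)')

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId='allow-eventbridge-schedule',  # placeholder statement ID
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)

# Point the rule at the function.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{'Id': 'target-1', 'Arn': FUNCTION_ARN}],
)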

Understanding the Compute Optimizer recommendations

Compute Optimizer needs a history of at least 50 invocations of a Lambda function over the trailing 14 days to deliver recommendations. Recommendations are created by analyzing function metadata such as memory size, timeout, and runtime, in addition to CloudWatch metrics such as number of invocations, duration, error count, and success rate.

Compute Optimizer will gather the necessary information to provide memory recommendations for Lambda functions, and make them available within 48 hours. Afterwards, these recommendations will be refreshed daily.

These are recent invocations for the non-CPU intensive function:

Recent invocations for the non-CPU intensive function

Function duration is approximately 31.3 seconds with a memory setting of 1024 MB, resulting in a duration cost of about $0.00052 per invocation. Here are the recommendations for this function in the Compute Optimizer console:

Recommendations for this function in the Compute Optimizer console

The function is Not optimized with a reason of Memory over-provisioned. You can also fetch the same recommendation information via the CLI:

$ aws compute-optimizer \
  get-lambda-function-recommendations \
  --function-arns arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-sleep
{
    "lambdaFunctionRecommendations": [
        {
            "utilizationMetrics": [
                {
                    "name": "Duration",
                    "value": 31333.63587049883,
                    "statistic": "Average"
                },
                {
                    "name": "Duration",
                    "value": 32522.04,
                    "statistic": "Maximum"
                },
                {
                    "name": "Memory",
                    "value": 817.67049838188,
                    "statistic": "Average"
                },
                {
                    "name": "Memory",
                    "value": 819.0,
                    "statistic": "Maximum"
                }
            ],
            "currentMemorySize": 1024,
            "lastRefreshTimestamp": 1608735952.385,
            "numberOfInvocations": 3090,
            "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-sleep:$LATEST",
            "memorySizeRecommendationOptions": [
                {
                    "projectedUtilizationMetrics": [
                        {
                            "name": "Duration",
                            "value": 30015.113193697029,
                            "statistic": "LowerBound"
                        },
                        {
                            "name": "Duration",
                            "value": 31515.86878891883,
                            "statistic": "Expected"
                        },
                        {
                            "name": "Duration",
                            "value": 33091.662123300975,
                            "statistic": "UpperBound"
                        }
                    ],
                    "memorySize": 900,
                    "rank": 1
                }
            ],
            "functionVersion": "$LATEST",
            "finding": "NotOptimized",
            "findingReasonCodes": [
                "MemoryOverprovisioned"
            ],
            "lookbackPeriodInDays": 14.0,
            "accountId": "123456789012"
        }
    ]
}

The Compute Optimizer recommendation contains useful information about the function. Most importantly, it has determined that the function is over-provisioned for memory. The attribute findingReasonCodes shows the value MemoryOverprovisioned. In memorySizeRecommendationOptions, Compute Optimizer has found that using a memory size of 900 MB results in an expected invocation duration of approximately 31.5 seconds.

For non-CPU-intensive jobs, reducing the memory setting of the function often doesn’t have a negative impact on function duration. The recommendation confirms that you can reduce the memory size from 1024 MB to 900 MB, saving cost without significantly impacting duration. The new memory setting reduces the duration cost per invocation by approximately 12%.
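To see where that figure comes from, here is a quick back-of-the-envelope check. It assumes the standard Lambda duration price of $0.0000166667 per GB-second in US East (N. Virginia), which the post doesn’t state, so treat the exact price as an assumption and ignore the separate per-request charge.

# Rough duration-cost check for the non-CPU-intensive function.
PRICE_PER_GB_SECOND = 0.0000166667  # assumed us-east-1 Lambda duration price

def duration_cost(memory_mb, seconds):
    # Lambda bills duration in GB-seconds, proportional to configured memory.
    return (memory_mb / 1024) * seconds * PRICE_PER_GB_SECOND

current = duration_cost(1024, 31.3)      # about $0.00052 per invocation
recommended = duration_cost(900, 31.5)   # about $0.00046 per invocation

print(f'current:     ${current:.6f}')
print(f'recommended: ${recommended:.6f}')
print(f'saving:      {1 - recommended / current:.0%}')  # roughly 12%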

The Compute Optimizer console validates these calculations:

Compute Optimizer console validates these calculations

These are recent invocations for the second function which is CPU-intensive:

Recent invocations for the second function which is CPU-intensive

The function duration is about 37.5 seconds with a memory setting of 128 MB, resulting in a duration cost of about $0.000078 per invocation. The recommendations for this function appear in the Compute Optimizer console:

recommendations for this function appear in the Compute Optimizer console

The function is also Not optimized with a reason of Memory under-provisioned. The same recommendation information is available via the CLI:

$ aws compute-optimizer \
  get-lambda-function-recommendations \
  --function-arns arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-busy
{
    "lambdaFunctionRecommendations": [
        {
            "utilizationMetrics": [
                {
                    "name": "Duration",
                    "value": 36006.85851551957,
                    "statistic": "Average"
                },
                {
                    "name": "Duration",
                    "value": 38540.43,
                    "statistic": "Maximum"
                },
                {
                    "name": "Memory",
                    "value": 53.75978407557355,
                    "statistic": "Average"
                },
                {
                    "name": "Memory",
                    "value": 55.0,
                    "statistic": "Maximum"
                }
            ],
            "currentMemorySize": 128,
            "lastRefreshTimestamp": 1608725151.752,
            "numberOfInvocations": 741,
            "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-busy:$LATEST",
            "memorySizeRecommendationOptions": [
                {
                    "projectedUtilizationMetrics": [
                        {
                            "name": "Duration",
                            "value": 27340.37604781184,
                            "statistic": "LowerBound"
                        },
                        {
                            "name": "Duration",
                            "value": 28707.394850202432,
                            "statistic": "Expected"
                        },
                        {
                            "name": "Duration",
                            "value": 30142.764592712556,
                            "statistic": "UpperBound"
                        }
                    ],
                    "memorySize": 160,
                    "rank": 1
                }
            ],
            "functionVersion": "$LATEST",
            "finding": "NotOptimized",
            "findingReasonCodes": [
                "MemoryUnderprovisioned"
            ],
            "lookbackPeriodInDays": 14.0,
            "accountId": "123456789012"
        }
    ]
}

For this function, Compute Optimizer has determined that the function’s memory is under-provisioned. The value of findingReasonCodes is MemoryUnderprovisioned. The recommendation is to increase the memory from 128 MB to 160 MB.

This recommendation may seem counter-intuitive, since the function only uses 55 MB of memory per invocation. However, Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. This means that increasing the memory allocation to 160 MB also reduces the expected duration to around 28.7 seconds. This is because a CPU-intensive task also benefits from the increased CPU performance that comes with the additional memory.

After applying this recommendation, the new expected duration cost per invocation is approximately $0.000075. This means that for almost no change in duration cost, the job latency is reduced from 37.5 seconds to 28.7 seconds.

The Compute Optimizer console validates these calculations:

Compute Optimizer console validates these calculations

Applying the Compute Optimizer recommendations

To optimize the Lambda functions using Compute Optimizer recommendations, use the following CLI command:

$ aws lambda update-function-configuration \
  --function-name lambda-recommendation-test-sleep \
  --memory-size 900

After invoking the function multiple times, we can see metrics of these invocations in the console. This shows that the function duration has not changed significantly after reducing the memory size from 1024 MB to 900 MB. The Lambda function has been successfully cost-optimized without increasing job duration:

Console shows the metrics from recent invocations

To apply the recommendation to the CPU-intensive function, use the following CLI command:

$ aws lambda update-function-configuration \
  --function-name lambda-recommendation-test-busy \
  --memory-size 160

After invoking the function multiple times, the console shows that the invocation duration is reduced to about 28 seconds. This matches the recommendation’s expected duration. This shows that the function is now performance-optimized without a significant cost increase:

Console shows that the invocation duration is reduced to about 28 seconds

Final notes

A couple of final notes:

  • Not every function will receive a recommendation. Compute Optimizer only delivers recommendations when it has high confidence that they will help reduce cost or execution duration.
  • As with any changes you make to an environment, we strongly advise that you test recommended memory size configurations before applying them into production.

Conclusion

You can now use Compute Optimizer for serverless workloads using Lambda functions. This can help identify the optimal Lambda function configuration options for your workloads. Compute Optimizer supports memory size recommendations for Lambda functions in all AWS Regions where Compute Optimizer is available. These recommendations are available to you at no additional cost. You can get started with Compute Optimizer from the console.

To learn more, visit Getting started with AWS Compute Optimizer.