Tag Archives: announcements

New – Amazon EC2 R6a Instances Powered by 3rd Gen AMD EPYC Processors for Memory-Intensive Workloads

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r6a-instances-powered-by-3rd-gen-amd-epyc-processors-for-memory-intensive-workloads/

We launched the general-purpose Amazon EC2 M6a instances at AWS re:Invent 2021 and compute-intensive C6a instances in February of this year. These instances are powered by the 3rd Gen AMD EPYC processors running at frequencies up to 3.6 GHz to offer up to 35 percent better price performance versus the previous generation instances.

Today, we are expanding our portfolio to include memory-optimized Amazon EC2 R6a instances featuring 3rd Gen AMD EPYC (Milan) processors. These instances are up to 10 percent less expensive than comparable x86-based instances.

R6a instances, powered by 3rd Gen AMD EPYC processors, are well suited for memory-intensive applications such as high-performance databases (relational and NoSQL databases), distributed web-scale in-memory caches (such as Memcached and Redis), in-memory databases, real-time big data analytics (such as Hadoop and Spark clusters), and other enterprise applications.

R6a instances are built on the AWS Nitro System and support Elastic Fabric Adapter (EFA) for workloads that benefit from lower network latency and highly scalable inter-node communication, such as high-performance computing and video processing.

Here’s a quick recap of the advantages of the new R6a instances compared to R5a instances:

  • Up to 35 percent higher price performance per vCPU versus comparable R5a instances
  • Up to 10 percent less expensive than comparable x86 instances
  • Up to 1536 GiB of memory, 2 times more than the previous generation, giving you the benefit of scaling up databases and running larger in-memory workloads.
  • Up to 192 vCPUs, 50 Gbps enhanced networking, and 40 Gbps EBS bandwidth, enabling you to process data faster, consolidate workloads, and lower the cost of ownership.
  • SAP-certified instances for memory-intensive applications such as high-performance enterprise databases, including SAP Business Suite.
  • Support for always-on memory encryption with AMD Transparent Single Key Memory Encryption (TSME), and support for new AVX2 instructions that accelerate encryption and decryption algorithms.

Here are the specs of R6a instances in detail:

Name | vCPUs | Memory (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
r6a.large | 2 | 16 | Up to 12.5 | Up to 6.6
r6a.xlarge | 4 | 32 | Up to 12.5 | Up to 6.6
r6a.2xlarge | 8 | 64 | Up to 12.5 | Up to 6.6
r6a.4xlarge | 16 | 128 | Up to 12.5 | Up to 6.6
r6a.8xlarge | 32 | 256 | 12.5 | 6.6
r6a.12xlarge | 48 | 384 | 18.75 | 10
r6a.16xlarge | 64 | 512 | 25 | 13.3
r6a.24xlarge | 96 | 768 | 37.5 | 20
r6a.32xlarge | 128 | 1024 | 50 | 26.6
r6a.48xlarge | 192 | 1536 | 50 | 40
r6a.metal | 192 | 1536 | 50 | 40

Now Available
You can launch R6a instances today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai), and Europe (Frankfurt, Ireland) Regions as On-Demand, Spot, and Reserved Instances, or as part of a Savings Plan.
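
If you want to try one from code, here is a minimal sketch using the AWS SDK for Python (boto3); the AMI ID, key pair, and subnet ID are placeholders to replace with your own values.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single On-Demand r6a.xlarge instance.
# The AMI ID, key pair name, and subnet ID below are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="r6a.xlarge",
    KeyName="my-key-pair",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])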

To learn more, visit the R6a instances page. Please send feedback to AWS re:Post for EC2, or through your usual AWS Support contacts.

— Channy

Introducing Amazon CodeWhisperer in the AWS Lambda console (In preview)

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-amazon-codewhisperer-in-the-aws-lambda-console-in-preview/

This blog post is written by Mark Richman, Senior Solutions Architect.

Today, AWS is launching a new capability to integrate the Amazon CodeWhisperer experience with the AWS Lambda console code editor.

Amazon CodeWhisperer is a machine learning (ML)–powered service that helps improve developer productivity. It generates code recommendations based on developers' existing code and comments written in natural language.

CodeWhisperer is available as part of the AWS toolkit extensions for major IDEs, including JetBrains, Visual Studio Code, and AWS Cloud9, currently supporting Python, Java, and JavaScript. In the Lambda console, CodeWhisperer is available as a native code suggestion feature, which is the focus of this blog post.

CodeWhisperer is currently available in preview with a waitlist. This blog post explains how to request access to and activate CodeWhisperer for the Lambda console. Once activated, CodeWhisperer can make code recommendations on-demand in the Lambda code editor as you develop your function. During the preview period, developers can use CodeWhisperer at no cost.

Amazon CodeWhisperer

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications and only pay for what you use.

With Lambda, you can build your functions directly in the AWS Management Console and take advantage of CodeWhisperer integration. CodeWhisperer in the Lambda console currently supports functions using the Python and Node.js runtimes.

When writing AWS Lambda functions in the console, CodeWhisperer analyzes the code and comments, determines which cloud services and public libraries are best suited for the specified task, and recommends a code snippet directly in the source code editor. The code recommendations provided by CodeWhisperer are based on ML models trained on a variety of data sources, including Amazon and open source code. Developers can accept the recommendation or simply continue to write their own code.

Requesting CodeWhisperer access

CodeWhisperer integration with Lambda is currently available as a preview only in the N. Virginia (us-east-1) Region. To use CodeWhisperer in the Lambda console, you must first sign up to access the service in preview here or request access directly from within the Lambda console.

In the AWS Lambda console, under the Code tab, in the Code source editor, select the Tools menu, and Request Amazon CodeWhisperer Access.

Request CodeWhisperer access in Lambda console

You may also request access from the Preferences pane.

Request CodeWhisperer access in Lambda console preference pane

Selecting either of these options opens the sign-up form.

CodeWhisperer sign up form

Enter your contact information, including your AWS account ID, which is required to enable the AWS Lambda console integration. You will receive a welcome email from the CodeWhisperer team once they approve your request.

Activating Amazon CodeWhisperer in the Lambda console

Once AWS enables your preview access, you must turn on the CodeWhisperer integration in the Lambda console, and configure the required permissions.

From the Tools menu, enable Amazon CodeWhisperer Code Suggestions.

Enable CodeWhisperer code suggestions

You can also enable code suggestions from the Preferences pane:

Enable CodeWhisperer code suggestions from Preferences pane

The first time you activate CodeWhisperer, you see a pop-up containing terms and conditions for using the service.

CodeWhisperer Preview Terms

Read the terms and conditions and choose Accept to continue.

AWS Identity and Access Management (IAM) permissions

For CodeWhisperer to provide recommendations in the Lambda console, you must enable the proper AWS Identity and Access Management (IAM) permissions for either your IAM user or role. In addition to Lambda console editor permissions, you must add the codewhisperer:GenerateRecommendations permission.

Here is a sample IAM policy that grants a user permission to the Lambda console as well as CodeWhisperer:

{
  "Version": "2012-10-17",
  "Statement": [{
      "Sid": "LambdaConsolePermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:CreateEventSourceMapping",
        "lambda:CreateFunction",
        "lambda:DeleteEventSourceMapping",
        "lambda:GetAccountSettings",
        "lambda:GetEventSourceMapping",
        "lambda:GetFunction",
        "lambda:GetFunctionCodeSigningConfig",
        "lambda:GetFunctionConcurrency",
        "lambda:GetFunctionConfiguration",
        "lambda:InvokeFunction",
        "lambda:ListEventSourceMappings",
        "lambda:ListFunctions",
        "lambda:ListTags",
        "lambda:PutFunctionConcurrency",
        "lambda:UpdateEventSourceMapping",
        "iam:AttachRolePolicy",
        "iam:CreatePolicy",
        "iam:CreateRole",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:SimulatePrincipalPolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CodeWhispererPermissions",
      "Effect": "Allow",
      "Action": ["codewhisperer:GenerateRecommendations"],
      "Resource": "*"
    }
  ]
}

This example is for illustration only. It is best practice to use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

Demo

To activate and work with code suggestions, use the following keyboard shortcuts:

  • Manually fetch a code suggestion: Option+C (macOS), Alt+C (Windows)
  • Accept a suggestion: Tab
  • Reject a suggestion: ESC, Backspace, scroll in any direction, or keep typing and the recommendation automatically disappears.

Currently, the IDE extensions provide automatic suggestions and can show multiple suggestions. The Lambda console integration requires a manual fetch and shows a single suggestion.

Here are some common ways to use CodeWhisperer while authoring Lambda functions.

Single-line code completion

When typing single lines of code, CodeWhisperer suggests how to complete the line.

CodeWhisperer single-line completion

Full function generation

CodeWhisperer can generate an entire function based on your function signature or code comments. In the following example, a developer has written a function signature for reading a file from Amazon S3. CodeWhisperer then suggests a full implementation of the read_from_s3 method.

CodeWhisperer full function generation

CodeWhisperer may include import statements as part of its suggestions, as in the previous example. As a best practice to improve performance, manually move these import statements outside the function handler.
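
As an illustration, a sketch of what an accepted read_from_s3 suggestion might look like after that refactoring is shown below; the bucket and key come from the invocation event here, and the exact code CodeWhisperer generates may differ.

import boto3

# Create the client outside the handler so it is reused across
# invocations (a Lambda performance best practice).
s3 = boto3.client("s3")

def read_from_s3(bucket, key):
    # Read an object from S3 and return its contents as a string.
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode("utf-8")

def lambda_handler(event, context):
    body = read_from_s3(event["bucket"], event["key"])
    return {"statusCode": 200, "body": body}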

Generate code from comments

CodeWhisperer can also generate code from comments. The following example shows how CodeWhisperer generates code to use AWS APIs to upload files to Amazon S3. Write a comment describing the intended functionality and, on the following line, activate the CodeWhisperer suggestions. Given the context from the comment, CodeWhisperer first suggests the function signature code in its recommendation.

CodeWhisperer generate function signature code from comments

After you accept the function signature, CodeWhisperer suggests the rest of the function code.

CodeWhisperer generate function code from comments

When you accept the suggestion, CodeWhisperer completes the entire code block.

CodeWhisperer generates code to write to S3.

CodeWhisperer can help write code that accesses many other AWS services. In the following example, a code comment indicates that a function is sending a notification using Amazon Simple Notification Service (SNS). Based on this comment, CodeWhisperer suggests a function signature.

CodeWhisperer function signature for SNS

If you accept the suggested function signature, CodeWhisperer suggests a complete implementation of the send_notification function.

CodeWhisperer function send notification for SNS
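
For reference, a completed send_notification implementation might look like the following minimal sketch; the topic ARN is supplied by the caller, and the actual suggestion you receive can vary.

import boto3

sns = boto3.client("sns")

def send_notification(topic_arn, message):
    # Publish a message to the given SNS topic and return the message ID.
    response = sns.publish(TopicArn=topic_arn, Message=message)
    return response["MessageId"]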

The same procedure works with Amazon DynamoDB. When writing a code comment indicating that the function is to get an item from a DynamoDB table, CodeWhisperer suggests a function signature.

CodeWhisperer DynamoDB function signature

When accepting the suggestion, CodeWhisperer then suggests a full code snippet to complete the implementation.

CodeWhisperer DynamoDB code snippet

After reviewing the suggestion, a common refactoring step in this example would be to manually move the references to the DynamoDB resource and table outside the get_item function.
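
A sketch of that refactored result might look like the following; the table name is a placeholder and the code CodeWhisperer suggests may differ.

import boto3

# Create the DynamoDB resource and table reference at module level so
# they are initialized once per Lambda execution environment.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-table")  # placeholder table name

def get_item(item_id):
    # Fetch a single item by its primary key and return it (or None).
    response = table.get_item(Key={"id": item_id})
    return response.get("Item")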

CodeWhisperer can also recommend complex algorithm implementations, such as Insertion sort.

CodeWhisperer insertion sort.
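
For example, an insertion sort suggestion in Python might look like this minimal sketch:

def insertion_sort(items):
    # Sort the list in place by inserting each element into the
    # already-sorted prefix to its left.
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items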

As a best practice, always test the code recommendation for completeness and correctness.

CodeWhisperer not only provides suggested code snippets when integrating with AWS APIs but also helps you implement common programming idioms, including proper error handling.

Conclusion

CodeWhisperer is a general purpose, machine learning-powered code generator that provides you with code recommendations in real time. When activated in the Lambda console, CodeWhisperer generates suggestions based on your existing code and comments, helping to accelerate your application development on AWS.

To get started, visit https://aws.amazon.com/codewhisperer/. Share your feedback with us at [email protected].

For more serverless learning resources, visit Serverless Land.

AWS achieves TISAX certification (Information with Very High Protection Needs (AL3))

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-tisax-certification-information-with-very-high-protection-needs-al3/

We’re excited to announce the completion of the Trusted Information Security Assessment Exchange (TISAX) certification on June 30, 2022 for 19 AWS Regions. These Regions achieved the Information with Very High Protection Needs (AL3) label for the control domains Information Handling and Data Protection. This alignment with TISAX requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS automotive customers can run their applications in these certified AWS Regions with confidence.

The following 19 Regions are currently TISAX certified:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Oregon)
  • Africa (Cape Town)
  • Asia Pacific (Hong Kong)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Osaka)
  • Asia Pacific (Korea)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Milan)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

TISAX is a European automotive industry-standard information security assessment (ISA) catalog based on key aspects of information security, such as data protection and connection to third parties.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the European Network Exchange (ENX) Portal (the scope ID and assessment ID are SM22TH and AYA2D4-1, respectively) and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose TISAX.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about TISAX compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

AWS achieves HDS certification to three additional Regions

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-hds-certification-to-three-additional-regions/

We’re excited to announce that three additional AWS Regions—Asia Pacific (Korea), Europe (London), and Europe (Stockholm)—have been granted the Health Data Hosting (Hébergeur de Données de Santé, HDS) certification. This alignment with the HDS requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS customers who handle personal health data can host it in these certified AWS Regions with confidence.

The following 16 Regions are now in scope of this certification:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Korea)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

Introduced by the French governmental agency for health, Agence Française de la Santé Numérique (ASIP Santé), HDS certification aims to strengthen the security and protection of personal health data. Achieving this certification demonstrates that AWS provides a framework for technical and governance measures to secure and protect personal health data, governed by French law.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the Agence du Numérique en Santé (ANS) website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose HDS.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about HDS compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

New — Detect and Resolve Issues Quickly with Log Anomaly Detection and Recommendations from Amazon DevOps Guru

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-detect-and-resolve-issues-quickly-with-log-anomaly-detection-and-recommendations-from-amazon-devops-guru/

Today, we are announcing a new feature, Log Anomaly Detection and Recommendations for Amazon DevOps Guru. With this feature, you can find anomalies throughout relevant logs within your app, and get targeted recommendations to resolve issues. Here’s a quick look at this feature:

AWS launched DevOps Guru, a fully managed AIOps platform service, in December 2020 to make it easier for developers and operators to improve applications’ reliability and availability. DevOps Guru minimizes the time needed for issue remediation by using machine learning models based on more than 20 years of operational expertise in building, scaling, and maintaining applications for Amazon.com.

You can use DevOps Guru to identify anomalies such as increased latency, error rates, and resource constraints and then send alerts with a description and actionable recommendations for remediation. You don’t need any prior knowledge in machine learning to use DevOps Guru, and only need to activate it in the DevOps Guru dashboard.

New Feature – Log Anomaly Detection and Recommendations

Observability and monitoring are integral parts of DevOps and modern applications. Applications can generate several types of telemetry, one of which is metrics, to reveal the performance of applications and to help identify issues.

While the metrics analyzed by DevOps Guru today are critical to surfacing issues occurring in applications, it is still challenging to find the root cause of these issues. As applications become more distributed and complex, developers and IT operators need more automation to reduce the time and effort spent detecting, debugging, and resolving operational issues. By sourcing relevant logs in conjunction with metrics, developers can now more effectively monitor and troubleshoot their applications.

With this new Log Anomaly Detection and Recommendations feature, you can get insights along with precise recommendations from application logs without manual effort. This feature delivers contextualized log data of anomaly occurrences and provides actionable insights from recommendations integrated inside the DevOps Guru dashboard.

The Log Anomaly Detection and Recommendations feature is able to detect exception keywords, numerical anomalies, HTTP status codes, data format anomalies, and more. When DevOps Guru identifies anomalies from logs, you will find relevant log samples and deep links to CloudWatch Logs on the DevOps Guru dashboard. These contextualized logs are an important building block that enables DevOps Guru to provide targeted recommendations for faster troubleshooting and issue remediation.

Let’s Get Started!

This new feature consists of two things, “Log Anomaly Detection” and “Recommendations.” Let’s explore further into how we can use this feature to find the root cause of an issue and get recommendations. As an example, we’ll look at my serverless API built using Amazon API Gateway, with AWS Lambda integrated with Amazon DynamoDB. The architecture is shown in the following image:

If it’s your first time using DevOps Guru, you’ll need to enable it by visiting the DevOps Guru dashboard. You can learn more by visiting the Getting Started page.

Since I’ve already enabled DevOps Guru, I can go to the Insights page, navigate to the Log groups section, and select Enable log anomaly detection.

Log Anomaly Detection

After a few hours, I can visit the DevOps Guru dashboard to check for insights. Here, I get some findings from DevOps Guru, as seen in the following screenshots:

With Log Anomaly Detection, DevOps Guru will show the findings of my serverless API in the Log groups section, as seen in the following screenshot:

I can hover over the anomaly and get a high-level summary of the contextualized enrichment data found in this log group. It also provides me with additional information, including the number of log records analyzed and the log scan time range. From this information, I know these anomalies are new event types that have not been detected in the past with the keyword ERROR.

To investigate further, I can select the log group link and go to the Detail page. The graph shows relevant events that might have occurred around these log showcases, which provides helpful context for troubleshooting the root cause. The Detail page includes different showcases, each representing a cluster of similar log events, such as exception keywords and numerical anomalies, found in the logs at the time of the anomaly.

Looking at the first log showcase, I noticed a ConditionalCheckFailedException error within the AWS Lambda function. This occurs when a conditional write from the Lambda function to DynamoDB fails its condition expression. From here, I learned that there was an error in the conditional check, and I reviewed the logic in the AWS Lambda function. I can also investigate the related CloudWatch Logs groups by selecting the View details in CloudWatch links.
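
For context, here is a minimal, hypothetical sketch (not the actual code behind this example) of the kind of conditional write that raises this exception; the table name, key, and condition expression are illustrative placeholders.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # placeholder table name

def create_order(order_id):
    try:
        # The condition fails, raising ConditionalCheckFailedException,
        # if an item with this order_id already exists.
        table.put_item(
            Item={"order_id": order_id, "status": "NEW"},
            ConditionExpression="attribute_not_exists(order_id)",
        )
    except ClientError as error:
        if error.response["Error"]["Code"] == "ConditionalCheckFailedException":
            # Log the failure so it appears in CloudWatch Logs, where
            # DevOps Guru can surface it as an anomaly.
            print(f"ERROR ConditionalCheckFailedException for order {order_id}")
        raise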

One thing I want to emphasize here is that DevOps Guru identifies significant events related to application performance and helps me to see the important things I need to focus on by separating the signal from the noise.

Targeted Recommendations

In addition to anomaly detection of logs, this new feature also provides precise recommendations based on the findings in the logs. You can find these recommendations on the Insights page, by scrolling down to find the Recommendations section.

Here, I get some recommendations from DevOps Guru, which make it easier for me to take immediate steps to remediate the issue. One recommendation shown in the following image is Check DynamoDB ConditionalExpression, which relates to an anomaly found in the logs derived from AWS Lambda.

Availability

You can use DevOps Guru Log Anomaly Detection and Recommendations today at no additional charge in all Regions where DevOps Guru is available: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

To learn more, please visit the Amazon DevOps Guru website and technical documentation, and get started today.

Happy building
— Donnie

Amazon Redshift Serverless – Now Generally Available with New Capabilities

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amazon-redshift-serverless-now-generally-available-with-new-capabilities/

Last year at re:Invent, we introduced the preview of Amazon Redshift Serverless, a serverless option of Amazon Redshift that lets you analyze data at any scale without having to manage data warehouse infrastructure. You just need to load and query your data, and you pay only for what you use. This allows more companies to build a modern data strategy, especially for use cases where analytics workloads are not running 24-7 and the data warehouse is not active all the time. It is also applicable to companies where the use of data expands within the organization and users in new departments want to run analytics without having to take ownership of data warehouse infrastructure.

Today, I am happy to share that Amazon Redshift Serverless is generally available and that we added many new capabilities. We are also reducing Amazon Redshift Serverless compute costs compared to the preview.

You can now create multiple serverless endpoints per AWS account and Region using namespaces and workgroups:

  • A namespace is a collection of database objects and users, such as database name and password, permissions, and encryption configuration. This is where your data is managed and where you can see how much storage is used.
  • A workgroup is a collection of compute resources, including network and security settings. Each workgroup has a serverless endpoint to which you can connect your applications. When configuring a workgroup, you can set up private or publicly accessible endpoints.

Each namespace can have only one workgroup associated with it. Conversely, each workgroup can be associated with only one namespace. You can have a namespace without any workgroup associated with it, for example, to use it only for sharing data with other namespaces in the same or another AWS account or Region.

In your workgroup configuration, you can now use query monitoring rules to help keep your costs under control. Also, the way Amazon Redshift Serverless automatically scales data warehouse capacity is more intelligent to deliver fast performance for demanding and unpredictable workloads.

Let’s see how this works with a quick demo. Then, I’ll show you what you can do with namespaces and workgroups.

Using Amazon Redshift Serverless
In the Amazon Redshift console, I select Redshift serverless in the navigation pane. To get started, I choose Use default settings to configure a namespace and a workgroup with the most common options. For example, I’ll be able to connect using my default VPC and default security group.

Console screenshot.

With the default settings, the only option left to configure is Permissions. Here, I can specify how Amazon Redshift can interact with other services such as S3, Amazon CloudWatch Logs, Amazon SageMaker, and AWS Glue. To load data later, I give Amazon Redshift access to an S3 bucket. I choose Manage IAM roles and then Create IAM role.

Console screenshot.

When creating the IAM role, I select the option to give access to specific S3 buckets and pick an S3 bucket in the same AWS Region. Then, I choose Create IAM role as default to complete the creation of the role and to automatically use it as the default role for the namespace.

Console screenshot.

I choose Save configuration and after a few minutes the database is ready for use. In the Serverless dashboard, I choose Query data to open the Redshift query editor v2. There, I follow the instructions in the Amazon Redshift Database Developer guide to load a sample database. If you want to do a quick test, a few sample databases (including the one I am using here) are already available in the sample_data_dev database. Note also that loading data into Amazon Redshift is not required for running queries. I can use data from an S3 data lake in my queries by creating an external schema and an external table.

The sample database consists of seven tables and tracks sales activity for a fictional “TICKIT” website, where users buy and sell tickets for sporting events, shows, and concerts.

Sample database tables relations

To configure the database schema, I run a few SQL commands to create the users, venue, category, date, event, listing, and sales tables.

Console screenshot.

Then, I download the tickitdb.zip file that contains the sample data for the database tables. I unzip and load the files to a tickit folder in the same S3 bucket I used when configuring the IAM role.

Now, I can use the COPY command to load the data from the S3 bucket into my database. For example, to load data into the users table:

copy users from 's3://MYBUCKET/tickit/allusers_pipe.txt' iam_role default;

The file containing the data for the sales table uses tab-separated values:

copy sales from 's3://MYBUCKET/tickit/sales_tab.txt' iam_role default delimiter '\t' timeformat 'MM/DD/YYYY HH:MI:SS';

After I load data in all tables, I start running some queries. For example, the following query joins five tables to find the top five sellers for events based in California (note that the sample data is for the year 2008):

select sellerid, username, (firstname ||' '|| lastname) as sellername, venuestate, sum(qtysold)
from sales, date, users, event, venue
where sales.sellerid = users.userid
and sales.dateid = date.dateid
and sales.eventid = event.eventid
and event.venueid = venue.venueid
and year = 2008
and venuestate = 'CA'
group by sellerid, username, sellername, venuestate
order by 5 desc
limit 5;

Console screenshot.

Now that my database is ready, let’s see what I can do by configuring Amazon Redshift Serverless namespaces and workgroups.

Using and Configuring Namespaces
Namespaces are collections of database data and their security configurations. In the navigation pane of the Amazon Redshift console, I choose Namespace configuration. In the list, I choose the default namespace that I just created.

In the Data backup tab, I can create or restore a snapshot or restore data from one of the recovery points that are automatically created every 30 minutes and kept for 24 hours. That can be useful to recover data in case of accidental writes or deletes.

Console screenshot.

In the Security and encryption tab, I can update permissions and encryption settings, including the AWS Key Management Service (AWS KMS) key used to encrypt and decrypt my resources. In this tab, I can also enable audit logging and export the user, connection, and user activity logs.

Console screenshot.

In the Datashares tab, I can create a datashare to share data with other namespaces and AWS accounts in the same or different Regions. In this tab, I can also create a database from a share I receive from other namespaces or AWS accounts, and I can see the subscriptions for datashares managed by AWS Data Exchange.

Console screenshot.

When I create a datashare, I can select which objects to include. For example, here I want to share only the date and event tables because they don’t contain sensitive data.

Console screenshot.

Using and Configuring Workgroups
Workgroups are collections of compute resources and their network and security settings. They provide the serverless endpoint for the namespace they are configured for. In the navigation pane of the Amazon Redshift console, I choose Workgroup configuration. In the list, I choose the default workgroup that I just created.

In the Data access tab, I can update the network and security settings (for example, change the VPC, the subnets, or the security group) or make the endpoint publicly accessible. In this tab, I can also enable Enhanced VPC routing to route network traffic between my serverless database and the data repositories I use (for example, the S3 buckets used to load or unload data) through a VPC instead of the internet. To access serverless endpoints that are in another VPC or subnet, I can create a VPC endpoint managed by Amazon Redshift.

Console screenshot.

In the Limits tab, I can configure the base capacity (expressed in Redshift processing units, or RPUs) used to process my queries. Amazon Redshift Serverless scales the capacity to deal with a higher number of users. Here I also have the option to increase the base capacity to speed up my queries or decrease it to reduce costs.

In this tab, I can also set Usage limits to configure daily, weekly, and monthly thresholds to keep my costs predictable. For example, I configured a daily limit of 200 RPU-hours, and a monthly limit of 2,000 RPU-hours for my compute resources. To control the data-transfer costs for cross-Region datashares, I configured a daily limit of 3 TB and a weekly limit of 10 TB. Finally, to limit the resources used by each query, I use Query limits to time out queries running for more than 60 seconds.

Console screenshot.

Availability and Pricing
Amazon Redshift Serverless is generally available today in the US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS Regions.

You can connect to a workgroup endpoint using your favorite client tools via JDBC/ODBC or with the Amazon Redshift query editor v2, a web-based SQL client application available on the Amazon Redshift console. When using web services-based applications (such as AWS Lambda functions or Amazon SageMaker notebooks), you can access your database and perform queries using the built-in Amazon Redshift Data API.
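
For example, a minimal sketch of running a query against a serverless workgroup with the Data API from Python might look like the following; the workgroup and database names are placeholders.

import time
import boto3

client = boto3.client("redshift-data")

# Run a query against a Redshift Serverless workgroup (placeholder names).
statement = client.execute_statement(
    WorkgroupName="default",
    Database="dev",
    Sql="select eventname, count(*) from event group by eventname limit 10;",
)

# Poll until the statement finishes, then fetch the result set.
while True:
    status = client.describe_statement(Id=statement["Id"])
    if status["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status["Status"] == "FINISHED":
    result = client.get_statement_result(Id=statement["Id"])
    for record in result["Records"]:
        print(record)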

With Amazon Redshift Serverless, you pay only for the compute capacity your database consumes when active. The compute capacity scales up or down automatically based on your workload and shuts down during periods of inactivity to save time and costs. Your data is stored in managed storage, and you pay a GB-month rate.

To give you improved price performance and the flexibility to use Amazon Redshift Serverless for an even broader set of use cases, we are lowering the price from $0.5 to $0.375 per RPU-hour for the US East (N. Virginia) Region. Similarly, we are lowering the price in other Regions by an average of 25 percent from the preview price. For more information, see the Amazon Redshift pricing page.

To help you get practice with your own use cases, we are also providing $300 in AWS credits for 90 days to try Amazon Redshift Serverless. These credits are used to cover your costs for compute, storage, and snapshot usage of Amazon Redshift Serverless only.

Get insights from your data in seconds with Amazon Redshift Serverless.

Danilo

New – AWS Cloud WAN: A Managed WAN Service

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-cloud-wan-a-managed-wan-service/

I am pleased to announce the availability of AWS Cloud WAN, a new network service that makes it easy to build and operate wide area networks (WAN) that connect your data centers and branch offices, as well as multiple VPCs in multiple AWS Regions.

Typically, large enterprises have resources running in different on-premises data centers, branch offices, and in the cloud. To connect these resources, network teams build and manage their own global networks using multiple networking, security, and internet services from multiple providers. They most probably use several technologies and providers to manage cloud-based networks, to connect their data centers to the AWS cloud, and for the connectivity between on-premises data centers and branch offices. All of these networks take different approaches to connectivity, security, and monitoring, resulting in an intricate patchwork of individual networks that are complicated to configure, secure, and manage.

For example, to prevent unauthorized access to resources running across locations that are connected with different network technologies, network operation teams must piece together different firewall solutions from different vendors and then manually configure and manage the policies between them. Every new location, network appliance, and security requirement exponentially increases complexity.

With Cloud WAN, networking teams connect to AWS through their choice of local network providers, then use a central dashboard and network policies to create a unified network that connects their locations and network types. This eliminates the need to configure and manage different networks individually, even when they are based on different technologies. Cloud WAN generates a complete view of your on-premises and AWS networks to help you visualize the health, security, and performance of your entire network.

Cloud WAN provides advanced security and network isolation, and I am excited by the possibilities offered by this network segmentation. You can use policies in Cloud WAN to easily segment your network traffic regardless of how many AWS Regions or on-premises locations you add to your network. For example, you can easily isolate network traffic from retail payment processing from other traffic on your corporate network while still giving both segments access to shared corporate resources. Another example would be the isolation of your development and production environment by creating logical network segments for each environment. This makes it easier to ensure consistent security policies when connecting large numbers of locations with your VPCs especially when your policies need to apply to large groups with unique security and routing requirements. Cloud WAN maintains a consistent configuration across Regions on your behalf. In a traditional network, a segment is like a globally consistent virtual routing and forwarding (VRF) table or a layer 3 IP VPN over an MPLS network. Segments are optional; smaller organizations may use Cloud WAN with one single network segment, encompassing all your traffic.

In addition to network segmentation and the simplicity it brings to your network management tasks, I see four principal benefits of using Cloud WAN:

Centralized management and network monitoring dashboard – Network Manager provides a central dashboard for connecting and managing your branch offices, data centers, VPN connections, and Software-Defined WAN (SD-WAN), as well as your Amazon VPC and AWS Transit Gateway. This dashboard helps you monitor and view the health of your network in one place, simplifying day-to-day operations.

Centralized policy management – You define access controls and traffic routing rules in a central network policy document, expressed in JSON. When you update a policy, Cloud WAN uses a two-step process to ensure accidental errors do not affect your global network. First, you review and validate that your changes will work as expected in production. Once you approve the changes, Cloud WAN handles the configuration details for the entire network. You can change your policy document using the AWS Management Console or Cloud WAN APIs.

Multi-Region VPC connectivity – Cloud WAN connects your VPCs across AWS Regions. Using a simple network policy document, you can create global networks that connect all of your EC2 resources, or you can choose to segment them across Regions.

Built-in automation – Cloud WAN can automatically attach new VPCs and network connections to your network, so you do not need to approve each change manually. This reduces the operational overhead involved in managing a growing network. You do this by tagging attachments and defining network policies that automatically map attachments with a certain tag to a specific network segment. With this tagging structure in place, you can choose which attachments can join a segment automatically, which segments require manual approval, and whether attachments on the same segment can talk to each other, all based on the tags you choose.
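
To make the policy-driven model more concrete, here is a heavily simplified sketch, using the AWS SDK for Python (boto3), of applying a core network policy document. The policy keys shown are an abbreviated assumption for illustration only; refer to the Cloud WAN documentation for the authoritative policy schema.

import json
import boto3

nm = boto3.client("networkmanager")

# Illustrative policy document only: the keys below are an assumption
# sketched from the behavior described above; check the Cloud WAN
# documentation for the exact policy format.
policy = {
    "version": "2021.12",
    "core-network-configuration": {
        "asn-ranges": ["64512-64555"],
        "edge-locations": [{"location": "us-east-1"}, {"location": "eu-west-1"}],
    },
    "segments": [
        {"name": "production"},
        {"name": "development"},
    ],
    "attachment-policies": [
        {
            # Attachments tagged segment=production join that segment automatically.
            "rule-number": 100,
            "conditions": [
                {"type": "tag-value", "key": "segment",
                 "operator": "equals", "value": "production"}
            ],
            "action": {"association-method": "constant", "segment": "production"},
        }
    ],
}

response = nm.put_core_network_policy(
    CoreNetworkId="core-network-0123456789abcdef0",  # placeholder ID
    PolicyDocument=json.dumps(policy),
)
print(response)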

Let’s get started
To get started with Cloud WAN, I open the AWS Management Console. In the VPC section, there is a new entry for AWS Cloud WAN on the menu on the left. Creating and configuring a global network is a four-step process.

First, I start by creating a global network and a core network.

Cloud WAN create global network

After entering the Name and an optional Description, I select Next.

Cloud WAN create core network

After giving the core network a Name and a Description, I enter my ASN range and the list of Edge locations, and I enter a Segment name and Segment description for my default segment. The default segment is automatically enabled in all selected edge locations.

Second, I define and attach my core networking policy. The core policy defines the rule to control network access across segments and AWS Regions. Third, I configure segments and segment actions. I can see all routes and filter by network Segment and Edge location.

Cloud WAN - Routes

And finally, I register the existing Transit Gateway to the new global network.

Cloud WAN - register transit gateways

Once configured, you have a single monitoring dashboard for your global network. You have access to the network inventory.

Cloud WAN - Monitoring inventory

Or you can have more granular and detailed views with Topology graph and Topology tree.

Cloud WAN - Monitoring topology

Other considerations
During the preview period we ran for Cloud WAN, we often received the question: “When should I build networks with Cloud WAN versus Transit Gateway?” This is a valid question because both Transit Gateway and Cloud WAN allow centralized connectivity between Amazon VPC and on-premises locations. Transit Gateway is a Regional network connectivity hub and is optimal when you operate in a few AWS Regions or when you want to manage your own peering and routing configuration or prefer to use your own automation.

On the other side, Cloud WAN is a managed wide area network (WAN) that unifies your data center, branches, and AWS networks. While you can create your own global network by interconnecting multiple Transit Gateways across Regions, Cloud WAN provides built-in automation, segmentation, and configuration management features designed specifically for building and operating global networks. Cloud WAN has added features such as automated VPC attachments, integrated performance monitoring, and centralized configuration.

But the world is better together: you can peer your Transit Gateways with Cloud WAN’s Core Network Edges (CNEs) and benefit from the central management and monitoring capabilities I described earlier. The peering between Cloud WAN and Transit Gateway keeps your options open – you can migrate from one to another, or use Cloud WAN to centrally connect all your existing Transit Gateways.

But then, AWS released SiteLink in December last year. When should you use SiteLink, and when should you use AWS Cloud WAN? Depending on your use case, you might choose one, the other, or both. Cloud WAN can create and manage networks of VPCs across multiple Regions. SiteLink, on the other hand, connects Direct Connect locations together, bypassing AWS Regions to improve performance. Direct Connect is one of the several connectivity options that you will be able to natively use with Cloud WAN in the future. As of today, you interconnect Direct Connect with Cloud WAN via Transit Gateway peering connections.

Availability and Pricing
Cloud WAN is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), and Middle East (Bahrain) AWS Regions.

As usual, there are no setup or upfront fees, and billing is on-demand based on your actual usage. There are four factors that determine what you pay for using AWS Cloud WAN. First, the number of Core Network Edges (CNEs) deployed. Second, the number of attachments to each CNE. An attachment might be an Amazon VPC, a VPN, or an SD-WAN. Third, the number of Transit Gateways peered with your CNEs. And fourth, there is a data processing charge for traffic sent through each CNE.

On top of these factors that are specific to Cloud WAN, sending data between Regions triggers an EC2 inter-Region data transfer out charge. While EC2 inter-Region data transfer out is billed separately from Cloud WAN, it’s a factor in the total cost of the Cloud WAN service. The pricing page has the details.

Go build your global network!

— seb

A sneak peek at the governance, risk, and compliance sessions for AWS re:Inforce 2022

Post Syndicated from Greg Eppel original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-governance-risk-and-compliance-sessions-for-aws-reinforce-2022/

Register now with discount code SALUZwmdkJJ to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we want to tell you about some of the exciting governance, risk, and compliance sessions planned for AWS re:Inforce 2022. AWS re:Inforce is a conference where you can learn more about security, compliance, identity, and privacy. When you attend the event, you have access to hundreds of technical and business sessions, an AWS Partner expo hall, a keynote speech from AWS Security leaders, and more. AWS re:Inforce 2022 will take place in person in Boston, MA on July 26 and 27. AWS re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

This post highlights some of the governance, risk, and compliance offerings that you can sign up for, including breakout sessions, chalk talks, builders’ sessions, and workshops. For the full catalog of all tracks, see the AWS re:Inforce session preview.

Breakout sessions

These are lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

GRC201: Learn best practices for auditing AWS with Cloud Audit Academy
Do you want to know how to audit in the cloud? Today, control framework language is catered toward on-premises environments, and security IT auditing techniques have not been reshaped for the cloud. The AWS Cloud–specific Cloud Audit Academy provides auditors with the education and tools to audit for security on AWS using a risk-based approach. In this session, experience a condensed sample domain from a four-day Cloud Audit Academy workshop.

GRC203: Panel discussion: Continuous compliance and auditing on AWS
In this session, an AWS leader speaks with senior executives from enterprise customer and AWS Partner organizations as they share their paths to success with compliance and auditing on AWS. Join this session to hear how they have used AWS Cloud Operations to help make compliance and auditing more efficient and improve business outcomes. Also hear how AWS Partners are supporting customer organizations as they automate compliance and move to the cloud.

GRC205: Crawl, walk, run: Accelerating security maturity
Where are you on your cloud security journey? Where do you want to end up? What are your next steps? In this step-by-step roadmap, we provide a comprehensive overview of the AWS security journey based on lessons learned with other organizations. Learn where you are, how to take the next step, and how to improve your cloud security program. In this session, we leverage cloud-native tools like AWS Control Tower, AWS Config, and AWS Security Hub to demonstrate how knowing your current state of security can drive more effective and efficient storytelling of your posture.

GRC302: Using AWS security services to build our cloud operations foundation
Organizations new to the cloud need to quickly understand what foundational security capabilities should be considered as a baseline. In this session, learn how AWS security services can help you improve your cloud security posture. Learn how to incorporate security into your AWS architecture based on the AWS Cloud Operations model, which will help you implement governance, manage risk, and achieve compliance while proactively discovering opportunities for improvement.

GRC331: Automating security and compliance with OSCAL
Documentation exports can be very time consuming. In this session, learn how the National Institute of Standards and Technology (NIST) is developing the Open Security Controls Assessment Language (OSCAL) to provide common translation between XML, JSON, and YAML formats. OSCAL also provides a common means to identify and version shared resources, and standardize the expression of assessment artifacts. Learn how AWS is working to implement OSCAL for our security documentation exports so that you can save time when creating and maintaining ATO packages.

Builders’ sessions

These are small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

GRC351: Implementing compliance as code on AWS
To manage compliance at the speed and scale the cloud requires, organizations need to implement automation and have an effective mechanism to manage it. In this builders’ session, learn how to implement compliance as code (CaC). CaC shares many of the same benefits as infrastructure as code: speed, automation, peer review, and auditability. Learn about defining controls with AWS Config rules, customizing those controls, using remediation actions, packaging and deploying with AWS Config conformance packs, and validating using a CI/CD pipeline.

GRC352: Deploying repeatable, secure, and compliant Amazon EKS clusters
In this builders’ session, learn how to deploy, manage, and scale containerized applications that run Kubernetes on AWS with AWS Service Catalog. Walk through how to deploy the Kubernetes control plane into a virtual private cloud (VPC), connect worker nodes to the cluster, and configure a bastion host for cluster administrative operations. Using AWS CloudFormation registry resource types, learn how to declare Kubernetes manifests or Helm charts to deploy and manage your Kubernetes applications. With AWS Service Catalog, you can empower your teams to deploy securely configured Amazon Elastic Kubernetes Service (Amazon EKS) clusters in multiple accounts and Regions.

GRC354: Building remediation workflows to simplify compliance
Automation and simplification are key to managing compliance at scale. Remediation is one of the key elements of simplifying and managing risk. In this builders’ session, walk through how to build a remediation workflow using AWS Config and AWS Systems Manager Automation. Then, explore how the workflow can be deployed at scale and monitored with AWS Security Hub to oversee your entire organization.

GRC355: Build a Security Posture Leaderboard using AWS Security Hub
This builders’ session introduces you to the possibilities of creating a robust and comprehensive leaderboard using AWS Security Hub findings to improve security and compliance visibility in your organization. Learn how to design and support various use cases, such as combining security and compliance data into a single, centralized dashboard that allows you to make more informed decisions; correlating Security Hub findings with operational data for deeper insights; building a security and compliance scorecard across various dimensions to share across different stakeholders; and supporting a decentralized organization structure with centralized or shared security function.

Chalk talks

These are highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

GRC233: Critical infrastructure: Supply chain and compliance impacts
In this chalk talk, learn how you can benefit from cloud-based solutions that build in security from the beginning. Review technical details around cybersecurity best practices for OT systems in adherence with government partnership with public and private industries. Dive deep into use cases and best practices for using AWS security services to help improve cybersecurity specifically for water utilities. Hear about opportunities to receive AWS cybersecurity training designed to teach you the skills necessary to support cloud adoption.

GRC304: Scaling the possible: Digitizing the audit experience
Do you want to increase the speed and scale of your audits? As companies expand to new industries and environments, so too does the scale of regulatory compliance. AWS undergoes over 500 audits in a year. In this session, hear from AWS experts as they digitize and automate the regulator/auditor experience. Walk through pre-audit educational training, self-service of control evidence and walkthrough information, live chatting with an audit control owner, and virtual data center tours. This session discusses how innovation and digitization allow companies to build trust with regulators and auditors while reducing the level of effort for internal audit teams and compliance executives.

GRC334: Shared responsibility deep dive at the service level
Auditors and regulators often need assistance understanding which configuration settings and security responsibilities are in the company’s control. Depending on the service, the AWS shared responsibility model can vary, which can affect the process for meeting compliance goals. Join AWS subject-matter experts in this chalk talk for an in-depth discussion on the next wave of compliance activation for AWS customers. Explore the configurable security decisions that users have for each service and how you can map to AWS best practices and security controls.

GRC431: Building purpose-driven and data-rich GRC solutions
Are you getting everything you need out of your data? Or do you not have enough information to make data-driven security decisions? Many organizations trying to modernize and innovate using data often struggle with finding the right data security solutions to build data-driven applications. In this chalk talk, explore how you can use Amazon Virtual Private Cloud (Amazon VPC), AWS Identity and Access Management (IAM), AWS Key Management Service (AWS KMS), AWS Systems Manager, AWS Single Sign-On (AWS SSO), and AWS Config to drive valuable insights to make more informed decisions. Hear about best practices and lessons learned to help you on your journey to garner purpose-filled information.

Workshops

These are interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

GRC272: Executive Security Simulation
The Executive Security Simulation takes senior security management and IT/business executive teams through an experiential exercise that illuminates key decision points for a successful and secure cloud journey. During this team-based, game-like competitive simulation, participants leverage an industry case study to make strategic security, risk, and compliance time-based decisions and investments. Participants experience the impact of these investments and decisions on the critical aspects of their secure cloud adoption. Join this workshop to understand the major success factors to lead security, risk, and compliance in the cloud, and learn applicable decision and investment approaches to specific secure cloud adoption journeys.

GRC371: Automate your compliance and evidence collection with AWS
Automation and simplification are key to managing compliance at scale, and remediation is one of the key elements of simplifying and managing risk. In this workshop, walk through building a remediation workflow with AWS Config and AWS Systems Manager, and see how it can be deployed at scale and then monitored with Security Hub across the entire organization. You will also learn how to set up a continuous collection process that not only establishes controls to help meet compliance requirements but also automates evidence collection, avoiding the time-consuming manual effort of preparing for audits.

GRC372: How to implement governance on AWS with ServiceNow
Many AWS customers use IT service management (ITSM) solutions such as ServiceNow to implement governance and compliance and manage security incidents. In this workshop, learn how to use AWS services such as AWS Service Catalog, AWS Config, AWS Systems Manager, and AWS Security Hub on the ServiceNow service portal. Learn how AWS services align to service management standards by integrating AWS capabilities through ITSM process integration with ServiceNow. Design and implement a curated provisioning strategy, along with incident management and resource transparency/compliance, by using the AWS Service Management Connector for ServiceNow.

GRC471: Building guardrails to meet your custom control requirements
In this session, you will experience the process of identifying, designing, and implementing security configurations, as well as detective and preventive guardrails, to meet custom control requirements. You will use a pre-built environment, read a customer scenario to identify specific control needs, and then learn how to design and implement the custom controls.

If any of these sessions look interesting to you, consider joining us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Greg Eppel

Greg is the Tech Leader for Cloud Operations and is responsible for the global direction of an internal community of hundreds of AWS experts who are focused on the operational capabilities of AWS. Prior to joining AWS in 2016, he was the CTO of a company that provided SaaS solutions to the sports, media, and entertainment industry. He is a Canadian originally from Vancouver, and he currently resides in the DC metro area with his family.

Author

Alexis Robinson

Alexis is the Head of the US Government Security and Compliance Program for AWS. For over 10 years, she has served federal government clients by advising them on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment, including cloud services applicable to AWS US East/West and AWS GovCloud (US) Regions.

Eligible customers can now order a free MFA security key

Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/eligible-customers-can-now-order-a-free-mfa-security-key/

One of the best ways for individuals and businesses to protect themselves online is through multi-factor authentication (MFA). MFA offers an additional layer of protection to help prevent unauthorized individuals from gaining access to systems or data.

In fall 2021, Amazon Web Services (AWS) Security began offering a free MFA security key to AWS account owners in the United States. I’m happy to announce that eligible customers can now order the free security key through the ordering portal in the AWS Management Console. In response to customer demand, we’ve streamlined the ordering process, especially for linked accounts. At this time, only U.S.-based AWS account root users who have spent more than $100 each month over the past 3 months are eligible to place an order.

To order your free security key

  1. Confirm your eligibility at the ordering portal. You will be prompted to sign in if you haven’t already.
  2. Choose your free security key from the available options.
  3. Provide your email address for order confirmation and your shipping address.
  4. Place your order.

You can connect the security key to AWS, as well as other security key–enabled applications, such as Dropbox, GitHub, and Gmail. If your organization is still early in adopting MFA, the free security key is another way to help protect your AWS account credentials, as well as to jump start your MFA journey by showing how convenient modern security keys are to use. As you expand your AWS usage, all your users should obtain and enable MFA. This can be done at the AWS Identity and Access Management (IAM) user level in the AWS identity system or upstream in your federated identity provider, since using federated identities is a best practice.

We encourage everyone to use MFA to help protect themselves online. Although some applications do not yet support security keys, nearly all provide an MFA option, such as time-based password codes or mobile push notifications. So, whether you’re signing in to your AWS account, your favorite social networks, or your bank account, MFA can help level-up your security posture.

If you’re not eligible for a free security key but would still like a security key, check out our MFA recommendations, which are available for purchase from many sellers, including Amazon. For more information about the MFA program, see our Free MFA Security Key page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

CJ Moses

CJ is the Chief Information Security Officer (CISO) at AWS, where he leads product design and security engineering. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Previously, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. He also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

New – Amazon EC2 M1 Mac Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ec2-m1-mac-instances/

Last year, during the re:Invent 2021 conference, I wrote a blog post to announce the preview of EC2 M1 Mac instances. I know many of you requested access to the preview, and we did our best but could not satisfy everybody. However, the wait is over. I have the pleasure of announcing the general availability of EC2 M1 Mac instances.

EC2 Mac instances are dedicated Mac mini computers attached through Thunderbolt to the AWS Nitro System, which lets the Mac mini appear and behave like another EC2 instance. It connects to your Amazon Virtual Private Cloud (Amazon VPC), boots from Amazon Elastic Block Store (EBS) volumes, and uses EBS snapshots, Amazon Machine Images (AMIs), security groups and other AWS services such as Amazon CloudWatch and AWS Systems Manager.

The availability of EC2 M1 Mac instances lets you access machines built around the Apple-designed M1 System on Chip (SoC). If you are a Mac developer and re-architecting your apps to natively support Macs with Apple silicon, you may now build and test your apps and take advantage of all the benefits of AWS. Developers building for iPhone, iPad, Apple Watch, and Apple TV will also benefit from faster builds. EC2 M1 Mac instances deliver up to 60 percent better price performance over the x86-based EC2 Mac instances for iPhone and Mac app build workloads.

For example, I tested the time it takes to clean, build, archive, and run the unit tests on a sample project I wrote. The new EC2 M1 Mac instances complete this set of tasks in 49 seconds on average. This is 47.8 percent faster than the same set of tasks running on the previous generation of EC2 Mac instances.

To see how to launch an EC2 M1 Mac instance from the AWS Management Console or the AWS Command Line Interface (CLI), I invite you to read my last blog post on the subject.

EC2 Mac M1 Instance

During the six months of the preview, we collected your feedback and fine-tuned the service to your needs.

We’ve added a new FAQ section to our documentation to get started with EC2 M1 Mac instances. Agents for management and observability, such as Systems Manager and CloudWatch, are pre-installed on all our macOS AMIs, along with tools such as the AWS Command Line Interface (CLI) and our AWS SDKs. EC2 M1 Mac instances integrate with other AWS services, such as Amazon Elastic File System (Amazon EFS) for file storage, AWS Auto Scaling, or AWS Secrets Manager.

For example, I am using Secrets Manager to securely store my build secrets, such as the signing keys and certificates used to sign my binaries before distributing them on the App Store. From my laptop, I first make sure to export the certificate from the macOS keychain. I then upload my certificate to Secrets Manager with this command:

aws secretsmanager create-secret            \
       --name apple-signing-dev-certificate \
       --secret-binary fileb://./secrets/apple_dev_seb.p12 

On the EC2 M1 Mac instance, to prepare my instance before the build phase, I download the certificate, decode it (it is base64-encoded), and store it in the EC2 M1 Mac instance keychain, where the codesign tool will find it during the build.

# download the certificate from Secrets Manager
SIGNING_DEV_KEY=$(aws secretsmanager get-secret-value   \
      --secret-id apple-signing-dev-certificate         \
      --query SecretBinary --output text)

# save the certificate as a file
echo $SIGNING_DEV_KEY | base64 -d > seb_dev_certificate.p12

# import the certificate in the keychain 
security import seb_dev_certificate.p12 \
                -P "my_cert_password"   \
                -k my.dev.keychain      \
                -T /usr/bin/security -T /usr/bin/codesign -T /usr/bin/xcodebuild

# delete the certificate from disk
rm seb_dev_certificate.p12

There are a few more configuration steps to get code signing to work from the macOS command line. You can check out this presentation I made or my code repository for the details.

We are preparing a couple of events to help you learn more about EC2 M1 Mac instance use cases and configuration. First, we recently held an online webinar on how to take advantage of EC2 Mac instances for iOS development; the content is available for you to consume on demand after a free registration step. Second, we are preparing a one-day, in-person developer conference for later this year. The conference agenda will be packed with technical content and workshops. Stay tuned on social media to learn more about it.

Last but not least, and not related to EC2 Mac instances, the Apple WWDC 2022 conference took place last month, from June 6–8, 2022, and the content is available online. This is a great occasion to learn more about development for Apple systems in general.

And now, go build 😉

— seb

OSPAR 2022 report now available with 142 services in scope

Post Syndicated from Joseph Goh original https://aws.amazon.com/blogs/security/ospar-2022-report-now-available-with-142-services-in-scope/

We’re excited to announce the completion of our annual Outsourced Service Provider’s Audit Report (OSPAR) audit cycle on July 1, 2022. The 2022 OSPAR certification cycle includes the addition of 15 new services in scope, bringing the total number of services in scope to 142 in the AWS Asia Pacific (Singapore) Region.

Newly added services in scope include the following:

Successful completion of the OSPAR assessment demonstrates that AWS has a system of controls in place that meet the Association of Banks in Singapore (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers. Our alignment with the ABS guidelines demonstrates our commitment to meet the security expectations for cloud service providers set by the financial services industry in Singapore. Customers can use the OSPAR assessment to conduct due diligence, and to help reduce the effort and costs required for compliance. An independent third-party auditor, selected from the ABS list of approved auditors, performs the OSPAR assessment.

You can download the latest OSPAR report from AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. The list of services in scope for OSPAR is available in the report, and is also available on the AWS Services in Scope by Compliance Program webpage.

As always, we’re committed to bringing new services into the scope of our OSPAR program based on your architectural and regulatory needs. If you have questions about the OSPAR report, contact your AWS account team.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Joseph Goh

Joseph is the APJ ASEAN Lead at AWS based in Singapore. He leads security audits, certifications and compliance programs across the Asia Pacific region. Joseph is passionate about delivering programs that build trust with customers and provide them assurance on cloud security.

2022 H1 IRAP report is now available on AWS Artifact

Post Syndicated from Matt Brunker original https://aws.amazon.com/blogs/security/2022-h1-irap-report-is-now-available-on-aws-artifact/

We’re excited to announce that a new Information Security Registered Assessors Program (IRAP) report is now available on AWS Artifact. Amazon Web Services (AWS) successfully completed an IRAP assessment in May 2022, performed by an independent Australian Signals Directorate (ASD) certified IRAP assessor. The new IRAP report includes an additional nine AWS services that are now assessed at the PROTECTED classification under IRAP. This brings the total number of services assessed at PROTECTED to 132.

For a full list of these services, see the IRAP tab on the AWS Services in Scope page. The following services are the nine newly assessed services:

The IRAP documentation pack is developed in accordance with the Australian Cyber Security Centre (ACSC) Cloud Security Guidance and their Anatomy of a Cloud Assessment and Authorisation framework, which addresses guidance within the Australian Government Information Security Manual (ISM), the Attorney-General’s Protective Security Policy Framework (PSPF), and the Digital Transformation Agency (DTA) Secure Cloud Strategy.

The IRAP package on AWS Artifact also includes the AWS Consumer Guide and the whitepaper Reference Architectures for ISM PROTECTED Workloads in the AWS Cloud.

The IRAP documentation pack is developed to assist Australian government agencies and their partners to plan, architect, and assess risk for their workloads when they use AWS Cloud services. Reach out to your AWS representatives to let us know which additional services you would like to see in scope for upcoming IRAP assessments. We strive to bring more services into scope at the PROTECTED level to support your requirements.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Matt Brunker

Matt is the security program manager for the Australia and New Zealand region, leading multiple security certification programs. Matt is a passionate cybersecurity professional with a strong background in assisting organisations in the design, implementation, and monitoring of security controls.

How to tune TLS for hybrid post-quantum cryptography with Kyber

Post Syndicated from Brian Jarvis original https://aws.amazon.com/blogs/security/how-to-tune-tls-for-hybrid-post-quantum-cryptography-with-kyber/

We are excited to offer hybrid post-quantum TLS with Kyber for AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In this blog post, we share the performance characteristics of our hybrid post-quantum Kyber implementation, show you how to configure a Maven project to use it, and discuss how to prepare your connection settings for Kyber post-quantum cryptography (PQC).

After five years of intensive research and cryptanalysis among partners from academia, the cryptographic community, and the National Institute of Standards and Technology (NIST), NIST has selected Kyber for post-quantum key encapsulation mechanism (KEM) standardization. This marks the beginning of the next generation of public key encryption. In time, the classical key establishment algorithms we use today, like RSA and elliptic curve cryptography (ECC), will be replaced by quantum-secure alternatives. At AWS Cryptography, we’ve been researching and analyzing the candidate KEMs through each round of the NIST selection process. We began supporting Kyber in round 2 and continue that support today.

A cryptographically relevant quantum computer that is capable of breaking RSA and ECC does not yet exist. However, we are offering hybrid post-quantum TLS with Kyber today so that customers can see how the performance differences of PQC affect their workloads. We also believe that the use of PQC raises the already-high security bar for connecting to AWS KMS and ACM, making this feature attractive for customers with long-term confidentiality needs.

Performance of hybrid post-quantum TLS with Kyber

Hybrid post-quantum TLS incurs a latency and bandwidth overhead compared to classical crypto alone. To quantify this overhead, we measured how long S2N-TLS takes to negotiate hybrid post-quantum (ECDHE + Kyber) key establishment compared to ECDHE alone. We performed the tests with the Linux perf subsystem on an Amazon Elastic Compute Cloud (Amazon EC2) c6i.4xlarge instance in the US East (Northern Virginia) AWS Region, and we initiated 2,000 TLS connections to a test server running in the US West (Oregon) Region, to include typical internet latencies.

Figure 1 shows the latencies of a TLS handshake that uses classical ECDHE and hybrid post-quantum (ECDHE + Kyber) key establishment. The columns are separated to illustrate the CPU time spent by the client and server compared to the time spent sending data over the network.

Figure 1: Latency of classical compared to hybrid post-quantum TLS handshake

Figure 2 shows the bytes sent and received during the TLS handshake, as measured by the client, for both classical ECDHE and hybrid post-quantum (ECDHE + Kyber) key establishment.

Figure 2: Bandwidth of classical compared to hybrid post-quantum TLS handshake

This data shows that the overhead for using hybrid post-quantum key establishment is 0.25 ms on the client, 0.23 ms on the server, and an additional 2,356 bytes on the wire. Intra-Region tests would result in lower network latency. Your latencies also might vary depending on network conditions, CPU performance, server load, and other variables.

The results show that the performance of Kyber is strong; its additional latency was among the lowest of the NIST PQC candidates that we analyzed in a previous blog post. In fact, the performance of these ciphers has improved since that analysis, because x86-64 assembly-optimized versions of these ciphers are now available for use.

Configure a Maven project for hybrid post-quantum TLS

In this section, we provide a Maven configuration and code example that will show you how to get started using our assembly-optimized, hybrid post-quantum TLS configuration with Kyber.

To configure a Maven project for hybrid post-quantum TLS

  1. Get the preview release of the AWS Common Runtime HTTP client for the AWS SDK for Java 2.x. Your Maven dependency configuration should specify version 2.17.69-PREVIEW or newer, as shown in the following code sample.
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-crt-client</artifactId>
        <version>[2.17.69-PREVIEW,]</version>
    </dependency>

  2. Configure the desired cipher suite in your code’s initialization. The following code sample configures an AWS KMS client to use the latest hybrid post-quantum cipher suite.
    // Check platform support
    if(!TLS_CIPHER_PREF_PQ_TLSv1_0_2021_05.isSupported()){
        throw new RuntimeException("Hybrid post-quantum cipher suites are not supported.");
    }
    
    // Configure HTTP client   
    SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
              .tlsCipherPreference(TLS_CIPHER_PREF_PQ_TLSv1_0_2021_05)
              .build();
    
    // Create the AWS KMS async client
    KmsAsyncClient kmsAsync = KmsAsyncClient.builder()
             .httpClient(awsCrtHttpClient)
             .build();

With that, all calls made with your AWS KMS client will use hybrid post-quantum TLS. You can use the latest hybrid post-quantum cipher suite with ACM by following the preceding example but using an AcmAsyncClient instead.
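For reference, here is a minimal sketch of that ACM variant. It assumes the awsCrtHttpClient object built in the preceding sample and the AcmAsyncClient builder from the software.amazon.awssdk.services.acm package of the AWS SDK for Java 2.x; treat it as an illustration rather than a complete application.

// Reuse the hybrid post-quantum CRT HTTP client from the preceding sample
AcmAsyncClient acmAsync = AcmAsyncClient.builder()
         .httpClient(awsCrtHttpClient)
         .build();

// API calls made with acmAsync now negotiate hybrid post-quantum TLS
// when connecting to ACM, just as the KMS client does above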

Tune connection settings for hybrid post-quantum TLS

Although hybrid post-quantum TLS has some latency and bandwidth overhead on the initial handshake, that cost is amortized over the duration of the TLS session, and you can fine-tune your connection settings to help further reduce the cost. In this section, you learn three ways to reduce the impact of hybrid PQC on your TLS connections: connection pooling, connection timeouts, and TLS session resumption.

Connection pooling

Connection pools manage the number of active connections to a server. They allow a connection to be reused without closing and reopening it, which amortizes the cost of connection establishment over time. Part of a connection’s setup time is the TLS handshake, so you can use connection pools to help reduce the impact of an increase in handshake latency.

To illustrate this, we wrote a test application that generates approximately 200 transactions per second to a test server. We varied the maximum concurrency setting of the HTTP client and measured the latency of the test request. In the AWS CRT HTTP client, this is the maxConcurrency setting. If the connection pool doesn’t have an idle connection available, the request latency includes establishing a new connection. Using Wireshark, we captured the network traffic to observe the number of TLS handshakes that took place over the duration of the application. Figure 3 shows the request latency and number of TLS handshakes as the maxConcurrency setting is increased.

Figure 3: Median request latency and number of TLS handshakes as concurrency pool size increases

The biggest latency benefit occurred once the maxConcurrency value was greater than 1; beyond that, further increases showed diminishing returns. For all maxConcurrency values of 10 and below, additional TLS handshakes took place within the connections, but they didn’t have much impact on median latency. These inflection points will depend on your application’s request volume. The takeaway is that connection pooling allows connections to be reused, thereby spreading the cost of any increased TLS negotiation time over many requests.

More detail about using the maxConcurrency option can be found in the AWS SDK for Java API Reference.
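To make this concrete, here is a minimal sketch of how the pool size could be set on the AWS CRT HTTP client from the earlier example. It assumes the client builder exposes the maxConcurrency setting discussed above; the value of 10 is only a placeholder, and the right number depends on your application’s request volume.

// Configure the connection pool size on the AWS CRT HTTP client
SdkAsyncHttpClient pooledCrtClient = AwsCrtAsyncHttpClient.builder()
         .tlsCipherPreference(TLS_CIPHER_PREF_PQ_TLSv1_0_2021_05)
         .maxConcurrency(10)   // placeholder pool size; tune to your request volume
         .build();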

Connection timeouts

Connection timeouts work in conjunction with connection pooling. Even if you use a connection pool, there is a limit to how long idle connections stay open before the pool closes them. You can adjust this time limit to save on connection establishment overhead.

A nice way to visualize this setting is to imagine bursty traffic patterns. Despite tuning the connection pool concurrency, your connections keep closing because the time between bursts is longer than the idle time limit. By increasing the maximum idle time, you can reuse these connections despite the bursty behavior.

To simulate the impact of connection timeouts, we wrote a test application that starts 10 threads, each of which activates at the same time on a periodic schedule, every 5 seconds for a minute. We set maxConcurrency to 10 to allow each thread to have its own connection. We set the connectionMaxIdleTime of the AWS CRT HTTP client to 1 second for the first test and to 10 seconds for the second test.

When the maximum idle time was 1 second, the connections for all 10 threads closed during the time between each burst. As a result, 100 total connections were formed over the life of the test, causing a median request latency of 20.3 ms. When we changed the maximum idle time to 10 seconds, the 10 initial connections were reused by each subsequent burst, reducing the median request latency to 5.9 ms.

By setting the connectionMaxIdleTime appropriately for your application, you can reduce connection establishment overhead, including TLS negotiation time, to help achieve time savings throughout the life of your application.

More detail about using the connectionMaxIdleTime option can be found in the AWS SDK for Java API Reference.
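Here is a minimal sketch of how that idle time could be configured alongside the pool size, assuming the client builder exposes the connectionMaxIdleTime setting discussed above and using java.time.Duration. The 10-second value mirrors the second test and should be tuned to your own traffic pattern.

// Keep idle connections open for 10 seconds so bursty traffic can reuse them
SdkAsyncHttpClient crtClientWithIdleTime = AwsCrtAsyncHttpClient.builder()
         .tlsCipherPreference(TLS_CIPHER_PREF_PQ_TLSv1_0_2021_05)
         .maxConcurrency(10)                            // placeholder pool size
         .connectionMaxIdleTime(Duration.ofSeconds(10)) // placeholder idle timeout
         .build();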

TLS session resumption

TLS session resumption allows a client and server to bypass the key agreement that is normally performed to arrive at a new shared secret. Instead, communication quickly resumes by using a shared secret that was previously negotiated, or one that was derived from a previous secret (the implementation details depend on the version of TLS in use). This feature requires that both the client and server support it, but if available, TLS session resumption allows the TLS handshake time and bandwidth increases associated with hybrid PQ to be amortized over the life of multiple connections.

Conclusion

As you learned in this post, hybrid post-quantum TLS with Kyber is available for AWS KMS and ACM. This new cipher suite raises the security bar and allows you to prepare your workloads for post-quantum cryptography. Hybrid key agreement has some additional overhead compared to classical ECDHE, but you can mitigate these increases by tuning your connection settings, including connection pooling, connection timeouts, and TLS session resumption. Begin using hybrid key agreement today with AWS KMS and ACM.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Brian Jarvis

Brian is a Senior Software Engineer at AWS Cryptography. His interests are in post-quantum cryptography and cryptographic hardware. Previously, Brian worked in AWS Security, developing internal services used throughout the company. Brian holds a Bachelor’s degree from Vanderbilt University and a Master’s degree from George Mason University in Computer Engineering. He plans to finish his PhD “some day”.

AWS Week in Review – July 4, 2022

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-week-in-review-july-04-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Summer has arrived in Finland, and these last few days have been hotter than in the Canary Islands! Today in the US it is Independence Day. I hope that if you are celebrating, you’re having a great time. This week I’m very excited about some developer experience and artificial intelligence launches.

Last Week’s Launches
Here are some launches that got my attention during the previous week:

AWS SAM Accelerate is now generally available – SAM Accelerate is a new capability of the AWS Serverless Application Model CLI, which makes it easier for serverless developers to test code changes against the cloud. You can do a hot swap of code directly in the cloud when making a change in your local development environment. This allows you to develop applications faster. Learn more about this launch in the What’s New post.

Amplify UI for React is generally available – Amplify UI is an open-source UI library that helps developers build cloud-native applications. Amplify UI for React comes with over 35 components that you can use, an authentication component that allows you to connect to your backend with no extra configuration, and theming for your components. You can also build your UI using Figma. Check the Amplify UI for React site to learn more about all the capabilities offered.

Amazon Connect has new announcements – First, Amazon Connect added support to personalize the flows of the customer experience using Amazon Lex sentiment analysis. It also added support to branch out the flows depending on Amazon Lex confidence scores. Lastly, it added confidence scores to Amazon Connect Customer Profiles to help companies merge duplicate customer records.

Amazon QuickSight – QuickSight authors can now learn and experience Q before signing up. Authors can choose from six different sample topics and explore different visualizations. In addition, QuickSight now supports Level Aware Calculations (LAC) and rolling date functionality. These two new features give customers more flexibility and simplicity when building advanced calculations and dashboards.

Amazon SageMaker – RStudio on SageMaker now allows you to bring your own development environment in a custom image. RStudio on SageMaker is a fully managed RStudio Workbench in the cloud. In addition, SageMaker added four new tabular data modeling algorithms: LightGBM, CatBoost, AutoGluon-Tabular, and TabTransformer to the existing set of built-in algorithms, pre-trained models and pre-built solution templates it provides.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

AWS Support announced an improved experience when creating a case – There is a new interface for creating support cases in the AWS Support Center console. Now you can create a case with a simplified three-step process that guides you through the flow. Learn more about this new process in the What’s new post.

New AWS Step Functions workflows collection on Serverless Land – The Step Functions workflows collection is a new experience that makes it easier to discover, deploy, and share AWS Step Functions workflows. In this collection, you can find opinionated templates that implement the best practices to build using Step Functions. Learn more about this new collection in Ben’s blog post.

Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS Podcasts in Spanish, which shares a new episode every other week. The podcast is meant for builders, and it shares stories about how customers implement and learn AWS, how to architect applications, and how to use new services. You can listen to all the episodes directly from your favorite podcast app or from the AWS Podcasts en español website.

AWS open-source news and updates – A newsletter curated by my colleague Ricardo brings you the latest open-source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Summit New York – Join us on July 12 for the in-person AWS Summit. You can register on the AWS Summit page for free.

AWS re:Inforce – This is an in-person learning conference with a focus on security, compliance, identity, and privacy. You can register now to access hundreds of technical sessions, and other content. It will take place July 26 and 27 in Boston, MA.

That’s all for this week. Check back next Monday for another Week in Review!

— Marcia

AWS achieves the first OSCAL format system security plan submission to FedRAMP

Post Syndicated from Matthew Donkin original https://aws.amazon.com/blogs/security/aws-achieves-the-first-oscal-format-system-security-plan-submission-to-fedramp/

Amazon Web Services (AWS) is the first cloud service provider to produce an Open Security Controls Assessment Language (OSCAL)–formatted system security plan (SSP) for the FedRAMP Project Management Office (PMO). Using OSCAL is the first step in the AWS effort to automate security documentation to simplify our customers’ journey through cloud adoption and accelerate the authorization to operate (ATO) process.

AWS continues its commitment to innovation and customer obsession. Our incorporation of the OSCAL format will improve the customer experience of reviewing and assessing security documentation. It can take an estimated 4,200 workforce hours for companies to receive an ATO, with much of the effort due to manual review and transcription of documentation. Automating this process through a machine-readable language gives our customers the ability to ingest security documentation into a governance, risk management, and compliance (GRC) tool to automate much of this time-consuming task. AWS worked with an AWS Partner to ingest the AWS SSP through their tool, Xacta.

This is a first step in several initiatives AWS has planned to automate the security assurance process across multiple compliance frameworks. We continue to look for ways to earn trust with our customers, and over the next year we will continue to release new solutions that customers can use to rapidly deploy secure and innovative services.

“Providing the SSP packages in OSCAL is a great milestone in security automation marking the beginning of a new era in cybersecurity. We appreciate the leadership in this area and look forward to working with all cyber professionals, in particular with the visionary cloud service providers, to help deliver secure innovation faster to the people they serve.”

– Dr. Michaela Iorga, OSCAL Strategic Outreach Director, NIST

To learn more about OSCAL, visit the NIST OSCAL website. To learn more about FedRAMP’s plans for OSCAL, visit the FedRAMP Blog.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits case studies and customer success stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. Let us know how this post will help your mission by reaching out to your AWS account team. Lastly, if you have feedback about this blog post, let us know in the Comments section.

Want more AWS Security news? Follow us on Twitter.

Matthew Donkin

Matthew Donkin, AWS Security Compliance Lead, provides direction and guidance for security documentation automation and physical security compliance, and assists customers in navigating compliance in the cloud. He is leading the development of the industry’s first Open Security Controls Assessment Language (OSCAL) artifacts for adoption of a faster and more reliable way to process resource-intensive documentation within the authorization process.

TLS 1.2 to become the minimum TLS protocol level for all AWS API endpoints

Post Syndicated from Janelle Hopper original https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/

At Amazon Web Services (AWS), we continuously innovate to deliver you a cloud computing environment that works to help meet the requirements of the most security-sensitive organizations. To respond to evolving technology and regulatory standards for Transport Layer Security (TLS), we will be updating the TLS configuration for all AWS service API endpoints to a minimum of TLS version 1.2. This update means that, by June 28, 2023, you will no longer be able to use TLS versions 1.0 and 1.1 with any AWS API in any AWS Region. In this post, we will tell you how to check your TLS version, and what to do to prepare.

We have continued AWS support for TLS versions 1.0 and 1.1 to maintain backward compatibility for customers that have older or difficult-to-update clients, such as embedded devices. Furthermore, we have active mitigations in place that help protect your data for the issues identified in these older versions. Now is the right time to retire TLS 1.0 and 1.1, because increasing numbers of customers have requested this change to help simplify part of their regulatory compliance, and there are fewer and fewer customers using these older versions.

If you are one of the more than 95% of AWS customers who are already using TLS 1.2 or later, you will not be impacted by this change. You are almost certainly already using TLS 1.2 or later if your client software application was built after 2014 using an AWS Software Development Kit (AWS SDK), AWS Command Line Interface (AWS CLI), Java Development Kit (JDK) 8 or later, or another modern development environment. If you are using earlier application versions, or have not updated your development environment since before 2014, you will likely need to update.

If you are one of the customers still using TLS 1.0 or 1.1, then you must update your client software to use TLS 1.2 or later to maintain your ability to connect. It is important to understand that you already have control over the TLS version used when connecting. When connecting to AWS API endpoints, your client software negotiates its preferred TLS version, and AWS uses the highest mutually agreed upon version.

To minimize the availability impact of requiring TLS 1.2, AWS is rolling out the changes on an endpoint-by-endpoint basis over the next year, starting now and ending in June 2023. Before making these potentially breaking changes, we monitor for connections that are still using TLS 1.0 or TLS 1.1. If you are one of the AWS customers who may be impacted, we will notify you on your AWS Health Dashboard, and by email. After June 28, 2023, AWS will update our API endpoint configuration to remove TLS 1.0 and TLS 1.1, even if you still have connections using these versions.

What should you do to prepare for this update?

To minimize your risk, you can self-identify if you have any connections using TLS 1.0 or 1.1. If you find any connections using TLS 1.0 or 1.1, you should update your client software to use TLS 1.2 or later.

AWS CloudTrail records are especially useful to identify if you are using the outdated TLS versions. You can now search for the TLS version used for your connections by using the recently added tlsDetails field. The tlsDetails structure in each CloudTrail record contains the TLS version, cipher suite, and the fully qualified domain name (FQDN, also known as the URL) field used for the API call. You can then use the data in the records to help you pinpoint your client software that is responsible for the TLS 1.0 or 1.1 call, and update it accordingly. Nearly half of AWS services currently provide the TLS information in the CloudTrail tlsDetails field, and we are continuing to roll this out for the remaining services in the coming months.

We recommend you use one of the following options for running your CloudTrail TLS queries:

  1. AWS CloudTrail Lake: You can follow the steps, and use the sample TLS query, in the blog post Using AWS CloudTrail Lake to identify older TLS connections. There is also a built-in sample CloudTrail TLS query available in the AWS CloudTrail Lake console.
  2. Amazon CloudWatch Log Insights: There are two built-in CloudWatch Log Insights sample CloudTrail TLS queries that you can use, as shown in Figure 1.
     
    Figure 1: Available sample TLS queries for CloudWatch Log Insights

  3. Amazon Athena: You can query AWS CloudTrail logs in Amazon Athena, and we will be adding support for querying the TLS values in your CloudTrail logs in the coming months. Look for updates and announcements about this in future AWS Security Blog posts.

In addition to using CloudTrail data, you can also identify the TLS version used by your connections by performing code, network, or log analysis as described in the blog post TLS 1.2 will be required for all AWS FIPS endpoints. Note that while this post refers to the FIPS API endpoints, the information about querying for TLS versions is applicable to all API endpoints.

Will I be notified if I am using TLS 1.0 or TLS 1.1?

If we detect that you are using TLS 1.0 or 1.1, you will be notified on your AWS Health Dashboard, and you will receive email notifications. However, you will not receive a notification for connections you make anonymously to AWS shared resources, such as a public Amazon Simple Storage Service (Amazon S3) bucket, because we cannot identify anonymous connections. Furthermore, while we will make every effort to identify and notify every customer, there is a possibility that we may not detect infrequent connections, such as those that occur less than monthly.

How do I update my client to use TLS 1.2 or TLS 1.3?

If you are using an AWS Software Developer Kit (AWS SDK) or the AWS Command Line Interface (AWS CLI), follow the detailed guidance about how to examine your client software code and properly configure the TLS version used in the blog post TLS 1.2 to become the minimum for FIPS endpoints.

We encourage you to be proactive in order to avoid an impact to availability. Also, we recommend that you test configuration changes in a staging environment before you introduce them into production workloads.

What is the most common use of TLS 1.0 or TLS 1.1?

The most common use of TLS 1.0 or 1.1 are .NET Framework versions earlier than 4.6.2. If you use the .NET Framework, please confirm you are using version 4.6.2 or later. For information about how to update and configure the .NET Framework to support TLS 1.2, see How to enable TLS 1.2 on clients in the .NET Configuration Manager documentation.

What is Transport Layer Security (TLS)?

Transport Layer Security (TLS) is a cryptographic protocol that secures internet communications. Your client software can be set to use TLS version 1.0, 1.1, 1.2, or 1.3, or a subset of these, when connecting to service endpoints. You should ensure that your client software supports TLS 1.2 or later.

Is there more assistance available to help verify or update my client software?

If you have any questions or issues, you can start a new thread on the AWS re:Post community, or you can contact AWS Support or your Technical Account Manager (TAM).

Additionally, you can use AWS IQ to find, securely collaborate with, and pay AWS certified third-party experts for on-demand assistance to update your TLS client components. To find out how to submit a request, get responses from experts, and choose the expert with the right skills and experience, see the AWS IQ page. Sign in to the AWS Management Console and select Get Started with AWS IQ to start a request.

What if I can’t update my client software?

If you are unable to update to use TLS 1.2 or TLS 1.3, contact AWS Support or your Technical Account Manager (TAM) so that we can work with you to identify the best solution.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janelle Hopper

Janelle is a Senior Technical Program Manager in AWS Security with over 25 years of experience in the IT security field. She works with AWS services, infrastructure, and administrative teams to identify and drive innovative solutions that improve the AWS security posture.

Author

Daniel Salzedo

Daniel is a Senior Specialist Technical Account Manager – Security. He has over 25 years of professional experience in IT in industries as diverse as video game development, manufacturing, banking, and used car sales. He loves working with our wonderful AWS customers to help them solve their complex security challenges at scale.

Author

Ben Sherman

Ben is a Software Development Engineer in AWS Security, where he focuses on automation to support AWS compliance obligations. He enjoys experimenting with computing and web services both at work and in his free time.

New – Amazon SageMaker Ground Truth Now Supports Synthetic Data Generation

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/new-amazon-sagemaker-ground-truth-now-supports-synthetic-data-generation/

Today, I am happy to announce that you can now use Amazon SageMaker Ground Truth to generate labeled synthetic image data.

Building machine learning (ML) models is an iterative process that, at a high level, starts with data collection and preparation, followed by model training and model deployment. The first step in particular, collecting large, diverse, and accurately labeled datasets for your model training, is often challenging and time-consuming.

Let’s take computer vision (CV) applications as an example. CV applications have come to play a key role in the industrial landscape. They help improve manufacturing quality or automate warehouses. Yet, collecting the data to train these CV models often takes a long time or can be impossible.

As a data scientist, you might spend months collecting hundreds of thousands of images from the production environments to make sure you capture all variations in data the model will come across. In some cases, finding all data variations might even be impossible (for example, sourcing images of rare product defects) or expensive (if you have to intentionally damage your products to get those images).

And once all data is collected, you need to accurately label the images, which is often a struggle in itself. Manually labeling images is slow and open to human error, and building custom labeling tools and setting up scaled labeling operations can be time-consuming and expensive. One way to mitigate this data challenge is by adding synthetic data to the mix.

Advantages of Combining Real-World Data with Synthetic Data
Combining your real-world data with synthetic data helps to create more complete training datasets for training your ML models.

Synthetic data itself is created by simple rules, statistical models, computer simulations, or other techniques. This allows synthetic data to be created in enormous quantities and with highly accurate labels for annotations across thousands of images. Labels can be applied at a very fine granularity, such as on a sub-object or pixel level, and across modalities. Modalities include bounding boxes, polygons, depth, and segments. Synthetic data can also be generated for a fraction of the cost, especially when compared to remote sensing imagery that otherwise relies on satellite, aerial, or drone image collection.

If you combine your real-world data with synthetic data, you can create more complete and balanced data sets, adding data variety that real-world data might lack. With synthetic data, you have the freedom to create any imagery environment, including edge cases that might be difficult to find and replicate in real-world data. You can customize objects and environments with variations, for example, to reflect different lighting, colors, texture, pose, or background. In other words, you can “order” the exact use case you are training your ML model for.

Now, let me show you how you can start sourcing labeled synthetic images using SageMaker Ground Truth.

Get Started on Your Synthetic Data Project with Amazon SageMaker Ground Truth
To request a new synthetic data project, navigate to the Amazon SageMaker Ground Truth console and select Synthetic data.

Amazon SageMaker Ground Truth Synthetic Data

Then, select Open project portal. In the project portal, you can request new projects, monitor projects that are in progress, and view batches of generated images once they become available for review. To initiate a new project, select Request project.

Amazon SageMaker Ground Truth Synthetic Data Project Portal

Describe your synthetic data needs and provide contact information.

Request a synthetic data project

After you submit the request form, you can check your project status in the project dashboard.

Amazon SageMaker Ground Truth Synthetic Data Project Created

In the next step, an AWS expert will reach out to discuss your project requirements in more detail. Upon review, the team will share a custom quote and project timeline.

If you want to continue, AWS digital artists will start by creating a small test batch of labeled synthetic images as a pilot production for you to review.

They collect your project inputs, such as reference photos and available 2D and 3D assets. The team then customizes those assets, adds the specified inclusions, such as scratches, dents, and textures, and creates the configuration that describes all the variations that need to be generated.

They can also create and add new objects based on your requirements, configure distributions and locations of objects in a scene, as well as modify object size, shape, color, and surface texture.

Once the objects are prepared, they are rendered using a photorealistic physics engine, capturing an image of the scene from a sensor that is placed in the virtual world. Images are also automatically labeled. Labels include 2D bounding boxes, instance segmentation, and contours.

You can monitor the progress of the data generation jobs on the project detail page. Once the pilot production test batch becomes available for review, you can spot-check the images and provide feedback for any rework that might be required.

Review available batches of synthetic data

Select the batch you want to review, and then choose View details.
Sample batch of synthetic data in Amazon SageMaker Ground Truth

In addition to the images, you will also receive output image labels, metadata such as object positions, and image quality metrics as Amazon SageMaker compatible JSON files.

Synthetic Image Fidelity and Diversity Report
With each available batch of images, you also receive a synthetic image fidelity and diversity report. This report provides image and object level statistics and plots that help you make sense of the generated synthetic images.

The statistics are used to describe the diversity and the fidelity of the synthetic images and compare them with real images. Examples of the statistics and plots provided are the distributions of object classes, object sizes, image brightness, and image contrast, as well as the plots evaluating the indistinguishability between synthetic and real images.

Once you approve the pilot production test batch, the team will move to the production phase and start generating larger batches of labeled synthetic images with your desired label types, such as 2D bounding boxes, instance segmentation, and contours. Similar to the test batch, each production batch of images will be made available for you together with the image fidelity and diversity report to spot-check, accept, or reject.

All images and artifacts will be available for you to download from your S3 bucket once final production is complete.

Availability
Amazon SageMaker Ground Truth synthetic data is available in US East (N. Virginia). Synthetic data is priced on a per-label basis. You can request a custom quote that is tailored to your specific use case and requirements by filling out the project requirement form.

Learn more about SageMaker Ground Truth synthetic data on our Amazon SageMaker Data Labeling page.

Request your synthetic data project through the Amazon SageMaker Ground Truth console today!

— Antje

Now in Preview – Amazon CodeWhisperer – ML-Powered Coding Companion

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-in-preview-amazon-codewhisperer-ml-powered-coding-companion/

As I was getting ready to write this post I spent some time thinking about some of the coding tools that I have used over the course of my career. This includes the line-oriented editor that was an intrinsic part of the BASIC interpreter that I used in junior high school, the IBM keypunch that I used when I started college, various flavors of Emacs, and Visual Studio. The earliest editors were quite utilitarian, and grew in sophistication as CPU power became more plentiful. At first this increasing sophistication took the form of lexical assistance, such as dynamic completion of partially entered variable and function names. Later editors were able to parse source code and offer assistance based on syntax and data types (Visual Studio‘s IntelliSense, for example). Each of these features broke new ground at the time, and each one had the same basic goal: to help developers write better code while reducing routine and repetitive work.

Announcing CodeWhisperer
Today I would like to tell you about Amazon CodeWhisperer. Trained on billions of lines of code and powered by machine learning, CodeWhisperer has the same goal. Whether you are a student, a new developer, or an experienced professional, CodeWhisperer will help you to be more productive.

We are launching in preview form with support for multiple IDEs and languages. To get started, you simply install the proper AWS IDE Toolkit, enable the CodeWhisperer feature, enter your preview access code, and start typing:

CodeWhisperer will continually examine your code and your comments, and present you with syntactically correct recommendations. The recommendations are synthesized based on your coding style and variable names, and are not simply snippets.

CodeWhisperer uses multiple contextual clues to drive recommendations including the cursor location in the source code, code that precedes the cursor, comments, and code in other files in the same projects. You can use the recommendations as-is, or you can enhance and customize them as needed. As I mentioned earlier, we trained (and continue to train) CodeWhisperer on billions of lines of code drawn from open source repositories, internal Amazon repositories, API documentation, and forums.

CodeWhisperer in Action
I installed the CodeWhisperer preview in PyCharm and put it through its paces. Here are a few examples to show you what it can do. I want to build a list of prime numbers. I type # See if a number is pr. CodeWhisperer offers to complete this, and I press TAB (the actual key is specific to each IDE) to accept the recommendation:

On the next line, I press Alt-C (again, IDE-specific), and I can choose between a pair of function definitions. I accept the first one, and CodeWhisperer recommends the function body, and here’s what I have:

I write a for statement, and CodeWhisperer recommends the entire body of the loop:

CodeWhisperer can also help me to write code that accesses various AWS services. I start with # create S3 bucket and TAB-complete the rest:

I could show you many more cool examples, but you will learn more by simply joining the preview and taking CodeWhisperer for a spin.

Join the Preview
The preview supports code written in Python, Java, and JavaScript, using VS Code, IntelliJ IDEA, PyCharm, WebStorm, and AWS Cloud9. Support for the AWS Lambda Console is in the works and should be ready very soon.

Join the CodeWhisperer preview and let me know what you think!

Jeff;

AWS re:Inforce 2022: Threat detection and incident response track preview

Post Syndicated from Celeste Bishop original https://aws.amazon.com/blogs/security/aws-reinforce-2022-threat-detection-and-incident-response-track-preview/

Register now with discount code SALXTDVaB7y to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we’re going to highlight just some of the sessions focused on threat detection and incident response that are planned for AWS re:Inforce 2022. AWS re:Inforce is a learning conference focused on security, compliance, identity, and privacy. The event features access to hundreds of technical and business sessions, an AWS Partner expo hall, a keynote featuring AWS Security leadership, and more. AWS re:Inforce 2022 will take place in-person in Boston, MA on July 26-27.

AWS re:Inforce organizes content across multiple themed tracks: identity and access management; threat detection and incident response; governance, risk, and compliance; networking and infrastructure security; and data protection and privacy. This post highlights some of the breakout sessions, chalk talks, builders’ sessions, and workshops planned for the threat detection and incident response track. For additional sessions and descriptions, see the re:Inforce 2022 catalog preview. For other highlights, see our sneak peek at the identity and access management sessions and sneak peek at the data protection and privacy sessions.

Breakout sessions

These are lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

TDR201: Running effective security incident response simulations
Security incidents provide learning opportunities for improving your security posture and incident response processes. Ideally you want to learn these lessons before having a security incident. In this session, walk through the process of running and moderating effective incident response simulations with your organization’s playbooks. Learn how to create realistic real-world scenarios, methods for collecting valuable learnings and feeding them back into implementation, and documenting correction-of-error proceedings to improve processes. This session provides knowledge that can help you begin checking your organization’s incident response process, procedures, communication paths, and documentation.

TDR202: What’s new with AWS threat detection services
AWS threat detection teams continue to innovate and improve the foundational security services for proactive and early detection of security events and posture management. Keeping up with the latest capabilities can improve your security posture, raise your security operations efficiency, and reduce your mean time to remediation (MTTR). In this session, learn about recent launches that can be used independently or integrated together for different use cases. Services covered in this session include Amazon GuardDuty, Amazon Detective, Amazon Inspector, Amazon Macie, and centralized cloud security posture assessment with AWS Security Hub.

TDR301: A proactive approach to zero-days: Lessons learned from Log4j
In the run-up to the 2021 holiday season, many companies were hit by security vulnerabilities in the widely used Java logging framework, Apache Log4j. Organizations were put in a reactive position, trying to answer questions like: How do we figure out if this is in our environment? How do we remediate across our environment? How do we protect our environment? In this session, learn about proactive measures that you should implement now to better prepare for future zero-day vulnerabilities.

TDR303: Zoom’s journey to hyperscale threat detection and incident response
Zoom, a leader in modern enterprise video communications, experienced hyperscale growth during the pandemic. Their customer base expanded by 30x and their daily security logs went from being measured in gigabytes to terabytes. In this session, Zoom shares how their security team supported this breakneck growth by evolving to a centralized infrastructure, updating their governance process, and consolidating to a single pane of glass for a more rapid response to security concerns. Solutions used to accomplish their goals include Splunk, AWS Security Hub, Amazon GuardDuty, Amazon CloudWatch, Amazon S3, and others.

Builders’ sessions

These are small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop.

TDR351: Using Kubernetes audit logs for incident response automation
In this hands-on builders’ session, learn how to use Amazon CloudWatch and Amazon GuardDuty to effectively monitor Kubernetes audit logs—part of the Amazon EKS control plane logs—to alert on suspicious events, such as an increase in 403 Forbidden or 401 Unauthorized Error logs. Also learn how to automate example incident responses for streamlining workflow and remediation.
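The session abstract doesn't include code, but as a rough sketch of the pattern it describes, you could put a CloudWatch Logs metric filter on the EKS cluster's audit log group and alarm when 403 responses spike. The log group name, metric namespace, and threshold below are illustrative assumptions, not workshop material:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Illustrative log group name for an EKS cluster with audit logging enabled
log_group = "/aws/eks/my-cluster/cluster"

# Count audit events whose responseStatus.code is 403 (Forbidden)
logs.put_metric_filter(
    logGroupName=log_group,
    filterName="eks-audit-403",
    filterPattern="{ $.responseStatus.code = 403 }",
    metricTransformations=[{
        "metricName": "EksAudit403Count",
        "metricNamespace": "Custom/EKSAudit",
        "metricValue": "1",
    }],
)

# Alarm when more than 50 such events occur in five minutes (threshold is arbitrary)
cloudwatch.put_metric_alarm(
    AlarmName="eks-audit-403-spike",
    Namespace="Custom/EKSAudit",
    MetricName="EksAudit403Count",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)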

TDR352: How to mitigate the risk of ransomware in your AWS environment
Join this hands-on builders’ session to learn how to mitigate the risk from ransomware in your AWS environment using the NIST Cybersecurity Framework (CSF). Choose your own path to learn how to protect, detect, respond, and recover from a ransomware event using key AWS security and management services. Use Amazon Inspector to detect vulnerabilities, Amazon GuardDuty to detect anomalous activity, and AWS Backup to automate recovery. This session is beneficial for security engineers, security architects, and anyone responsible for implementing security controls in their AWS environment.

Chalk talks

Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

TDR231: Automated vulnerability management and remediation for Amazon EC2
In this chalk talk, learn about vulnerability management strategies for Amazon EC2 instances on AWS at scale. Discover the role of services like Amazon Inspector, AWS Systems Manager, and AWS Security Hub in vulnerability management and mechanisms to perform proactive and reactive remediations of findings that Amazon Inspector generates. Also learn considerations for managing vulnerabilities across multiple AWS accounts and Regions in an AWS Organizations environment.

TDR332: Response preparation with ransomware tabletop exercises
Many organizations do not validate their critical processes prior to an event such as a ransomware attack. Through a security tabletop exercise, customers can use simulations to provide a realistic training experience for organizations to test their security resilience and mitigate risk. In this chalk talk, learn about Amazon Managed Services (AMS) best practices through a live, interactive tabletop exercise to demonstrate how to execute a simulation of a ransomware scenario. Attendees will leave with a deeper understanding of incident response preparation and how to use AWS security tools to better respond to ransomware events.

Workshops

These are interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

TDR271: Detecting and remediating security threats with Amazon GuardDuty
This workshop walks through scenarios covering threat detection and remediation using Amazon GuardDuty, a managed threat detection service. The scenarios simulate an incident that spans multiple threat vectors, representing a sample of threats related to Amazon EC2, AWS IAM, Amazon S3, and Amazon EKS, that GuardDuty is able to detect. Learn how to view and analyze GuardDuty findings, send alerts based on the findings, and remediate findings.

TDR371: Building an AWS incident response runbook using Jupyter notebooks
This workshop guides you through building an incident response runbook for your AWS environment using Jupyter notebooks. Walk through an easy-to-follow sample incident using a ready-to-use runbook. Then add new programmatic steps and documentation to the Jupyter notebook, helping you discover and respond to incidents.
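As a hedged illustration of the kind of programmatic step such a runbook notebook might contain (the severity filter and printed fields below are assumptions, not workshop content), a cell could pull recent high-severity GuardDuty findings with boto3:

import boto3

guardduty = boto3.client("guardduty")

# Assume a single GuardDuty detector in this Region
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# List findings with severity 7 or higher for triage
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for finding in findings["Findings"]:
        print(finding["Severity"], finding["Type"], finding["Title"])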

TDR372: Detecting and managing vulnerabilities with Amazon Inspector
Join this workshop to get hands-on experience using Amazon Inspector to scan Amazon EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR) for software vulnerabilities. Learn how to manage findings by creating prioritization and suppression rules, and learn how to understand the details found in example findings.

TDR373: Industrial IoT hands-on threat detection
Modern organizations understand that enterprise and industrial IoT (IIoT) yields significant business benefits. However, unaddressed security concerns can expose vulnerabilities and slow down companies looking to accelerate digital transformation by connecting production systems to the cloud. In this workshop, use a case study to detect and remediate a compromised device in a factory using security monitoring and incident response techniques. Use an AWS multilayered security approach and top ten IIoT security golden rules to improve the security posture in the factory.

TDR374: You’ve received an Amazon GuardDuty EC2 finding: What’s next?
You’ve received an Amazon GuardDuty finding drawing your attention to a possibly compromised Amazon EC2 instance. How do you respond? In part one of this workshop, perform an Amazon EC2 incident response using proven processes and techniques for effective investigation, analysis, and lessons learned. Use the AWS CLI to walk step-by-step through a prescriptive methodology for responding to a compromised Amazon EC2 instance that helps effectively preserve all available data and artifacts for investigations. In part two, implement a solution that automates the response and forensics process within an AWS account, so that you can use the lessons learned in your own AWS environments.
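The workshop's prescriptive steps aren't reproduced here, but a common pattern it alludes to, isolating the instance and preserving evidence, can be sketched with boto3. The instance ID and isolation security group are placeholders, and this is not the workshop's actual solution:

import boto3

ec2 = boto3.client("ec2")

instance_id = "i-0123456789abcdef0"    # placeholder: instance from the GuardDuty finding
isolation_sg = "sg-0123456789abcdef0"  # placeholder: security group with no inbound or outbound rules

# Move the instance into the isolation security group to cut off network access
ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[isolation_sg])

# Snapshot every attached EBS volume to preserve evidence for later analysis
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)["Volumes"]
for volume in volumes:
    ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description="Forensic snapshot of " + volume["VolumeId"] + " from " + instance_id,
    )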

If any of the sessions look interesting, consider joining us by registering for re:Inforce 2022. Use code SALXTDVaB7y to save $150 off the price of registration. For a limited time only and while supplies last. Also stay tuned for additional sessions being added to the catalog soon. We look forward to seeing you in Boston!

Celeste Bishop

Celeste is a Product Marketing Manager in AWS Security, focusing on threat detection and incident response solutions. Her background is in experience marketing and also includes event strategy at Fortune 100 companies. Passionate about soccer, you can find her on any given weekend cheering on Liverpool FC, and her local home club, Austin FC.

Charles Goldberg

Charles leads the Security Services product marketing team at AWS. He is based in Silicon Valley and has worked with networking, data protection, and cloud companies. His mission is to help customers understand solution best practices that can reduce the time and resources required for improving their company’s security and compliance outcomes.

New AWS whitepaper: AWS User Guide to Financial Services Regulations and Guidelines in New Zealand

Post Syndicated from Julian Busic original https://aws.amazon.com/blogs/security/new-aws-whitepaper-aws-user-guide-to-financial-services-regulations-and-guidelines-in-new-zealand/

Amazon Web Services (AWS) has released a new whitepaper to help financial services customers in New Zealand accelerate their use of the AWS Cloud.

The new AWS User Guide to Financial Services Regulations and Guidelines in New Zealand—along with the existing AWS Workbook for the RBNZ’s Guidance on Cyber Resilience—continues our efforts to help AWS customers navigate the regulatory expectations of the Reserve Bank of New Zealand (RBNZ) in a shared responsibility environment.

This whitepaper is intended for RBNZ-regulated institutions that are looking to run material workloads in the AWS Cloud, and is particularly useful for leadership, security, risk, and compliance teams that need to understand RBNZ requirements and guidance.

The whitepaper summarizes RBNZ requirements and guidance related to outsourcing, cyber resilience, and the cloud. It also gives RBNZ-regulated institutions information they can use to commence their due diligence and assess how to implement the appropriate programs for their use of AWS cloud services.

This document joins existing guides for other jurisdictions in the Asia Pacific region, such as Australia, India, Singapore, and Hong Kong. As the regulatory environment continues to evolve, we’ll provide further updates on the AWS Security Blog and the AWS Compliance page. You can find more information on cloud-related regulatory compliance at the AWS Compliance Center. You can also reach out to your AWS account manager for help finding the resources you need.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Julian Busic

Julian is a Security Solutions Architect with a focus on regulatory engagement. He works with our customers, their regulators, and AWS teams to help customers raise the bar on secure cloud adoption and usage. Julian has over 15 years of experience working in risk and technology across the financial services industry in Australia and New Zealand.