Tag Archives: announcements

Protect Sensitive Data with Amazon CloudWatch Logs

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/protect-sensitive-data-with-amazon-cloudwatch-logs/

Today we are announcing Amazon CloudWatch Logs data protection, a new set of capabilities for Amazon CloudWatch Logs that leverage pattern matching and machine learning (ML) to detect and protect sensitive log data in transit.

While developers try to prevent logging sensitive information such as Social Security numbers, credit card details, email addresses, and passwords, it sometimes gets logged anyway. Until today, customers relied on manual investigation or third-party solutions to detect sensitive information that had been logged and mitigate the exposure. If sensitive data is not redacted during ingestion, it is visible in plain text in the logs and in any downstream system that consumes those logs.

Enforcing prevention across an organization is challenging, which is why quickly detecting sensitive data in logs and preventing access to it is important from a security and compliance perspective. Starting today, you can enable Amazon CloudWatch Logs data protection to detect and mask sensitive log data as it is ingested into CloudWatch Logs.

Customers from all industries that want to take advantage of native data protection capabilities can benefit from this feature. But in particular, it is useful for industries under strict regulations that need to make sure that no personal information gets exposed. Also, customers building payment or authentication services where personal and sensitive information may be captured can use this new feature to detect and mask sensitive information as it’s logged.

Getting Started
You can enable a data protection policy for new or existing log groups from the AWS Management Console, AWS Command Line Interface (CLI), or AWS CloudFormation. From the console, select any log group and create a data protection policy in the Data protection tab.

Enable data protection policy

When you create the policy, you can specify the data you want to protect. Choose from over 100 managed data identifiers, a repository of common sensitive data patterns spanning financial, health, and personal information. This gives you the flexibility to select the identifiers that are specific to your use cases or geographic region.

Configure data protection policy

You can also enable audit reports and send them to another log group, an Amazon Simple Storage Service (Amazon S3) bucket, or Amazon Kinesis Data Firehose. These reports contain a detailed log of data protection findings.
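
If you manage log groups with the AWS CLI, you can attach a policy with put-data-protection-policy. Here is a minimal sketch that audits and masks email addresses; the log group names are placeholders, and the exact policy schema is documented in the CloudWatch Logs User Guide:

$ aws logs put-data-protection-policy \
--log-group-identifier my-log-group \
--policy-document file://policy.json

where policy.json might look something like this:

{
  "Name": "data-protection-policy",
  "Version": "2021-06-01",
  "Statement": [
    {
      "Sid": "audit-policy",
      "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation": { "Audit": { "FindingsDestination": { "CloudWatchLogs": { "LogGroup": "data-protection-findings" } } } }
    },
    {
      "Sid": "redact-policy",
      "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation": { "Deidentify": { "MaskConfig": {} } }
    }
  ]
}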

If you want to monitor and get notified when sensitive data is detected, you can create an alarm on the LogEventsWithFindings metric. This metric shows how many findings there are in a particular log group, which allows you to quickly understand which application is logging sensitive data.
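
For example, the following AWS CLI call creates an alarm that fires whenever at least one finding is detected. This is a sketch that assumes the metric is published in the AWS/Logs namespace with a LogGroupName dimension; the SNS topic ARN is a placeholder:

$ aws cloudwatch put-metric-alarm \
--alarm-name sensitive-data-detected \
--namespace AWS/Logs \
--metric-name LogEventsWithFindings \
--dimensions Name=LogGroupName,Value=my-log-group \
--statistic Sum \
--period 300 \
--threshold 0 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 1 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:security-alerts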

When sensitive information is logged, CloudWatch Logs data protection automatically masks it per your configured policy. This is designed so that none of the downstream services that consume these logs can see the unmasked data. From the AWS Management Console, the AWS CLI, or any third-party tool, the sensitive information in the logs appears masked.

Example of log file with masked data

Only users with elevated privileges in their IAM policy (the logs:Unmask action) can view unmasked data in CloudWatch Logs Insights, in log stream search, or via the FilterLogEvents and GetLogEvents APIs.

You can use the following query in CloudWatch Logs Insights to unmask data for a particular log group:

fields @timestamp, @message, unmask(@message)
| sort @timestamp desc
| limit 20

Available Now
Data protection is available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo) AWS Regions.

Amazon CloudWatch Logs data protection pricing is based on the amount of data that is scanned for masking. You can check the CloudWatch Logs pricing page to learn more about the pricing of this feature in your Region.

Learn more about data protection in the CloudWatch Logs User Guide.

Marcia

New – Amazon CloudWatch Cross-Account Observability

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-cloudwatch-cross-account-observability/

Deploying applications using multiple AWS accounts is a good practice to establish security and billing boundaries between teams and reduce the impact of operational events. When you adopt a multi-account strategy, you have to analyze telemetry data that is scattered across several accounts. To give you the flexibility to monitor all the components of your applications from a centralized view, today we are introducing Amazon CloudWatch cross-account observability, a new capability to search, analyze, and correlate cross-account telemetry data stored in CloudWatch, such as metrics, logs, and traces.

You can now set up a central monitoring AWS account and connect your other accounts as sources. Then, you can search, audit, and analyze logs across your applications to drill down into operational issues in a matter of seconds. You can discover and visualize metrics from many accounts in a single place and create alarms that evaluate metrics belonging to other accounts. You can start with an aggregated cross-account view of your application to visually identify the resources exhibiting errors and dive deep into correlated traces, metrics, and logs to find the root cause. This seamless cross-account data access and navigation helps reduce the time and effort required to troubleshoot issues.

Let’s see how this works in practice.

Configuring CloudWatch Cross-Account Observability
To enable cross-account observability, CloudWatch has introduced the concept of monitoring and source accounts:

  • A monitoring account is a central AWS account that can view and interact with observability data shared by other accounts.
  • A source account is an individual AWS account that shares observability data and resources with one or more monitoring accounts.

You can configure multiple monitoring accounts with the level of visibility you need. CloudWatch cross-account observability is also integrated with AWS Organizations. For example, I can have a monitoring account with wide access to all accounts in my organization for central security and operational teams and then configure other monitoring accounts with more restricted visibility across a business unit for individual service owners.

First, I configure the monitoring account. In the CloudWatch console, I choose Settings in the navigation pane. In the Monitoring account configuration section, I choose Configure.

Console screenshot.

Now I can choose which telemetry data can be shared with the monitoring account: Logs, Metrics, and Traces. I leave all three enabled.

Console screenshot.

To list the source accounts that will share data with this monitoring account, I can use account IDs, organization IDs, or organization paths. I can use an organization ID to include all the accounts in the organization or an organization path to include all the accounts in a department or business unit. In my case, I have only one source account to link, so I enter the account ID.

Console screenshot.

When using the CloudWatch console in the monitoring account to search and display telemetry data, I see the account ID that shared that data. Because account IDs are not easy to remember, I can display a more descriptive “account label.” When configuring the label via the console, I can choose between the account name or the email address used to identify the account. When using an email address, I can also choose whether to include the domain. For example, if all the emails used to identify my accounts use the same domain, I can use the email addresses without that domain as labels.

There is a quick reminder that cross-account observability only works in the selected Region. If I have resources in multiple Regions, I can configure cross-account observability in each Region. To complete the configuration of the monitoring account, I choose Configure.

Console screenshot.

The monitoring account is now enabled, and I choose Resources to link accounts to determine how to link my source accounts.

Console screenshot.

To link source accounts in an AWS organization, I can download an AWS CloudFormation template to be deployed in a CloudFormation delegated administration account.

To link individual accounts, I can either download a CloudFormation template to be deployed in each account or copy a URL that helps me use the console to set up the accounts. I copy the URL and paste it into another browser where I am signed in as the source account. Then, I can configure which telemetry data to share (logs, metrics, or traces). The Amazon Resource Name (ARN) of the monitoring account configuration is pre-filled because I copy-pasted the URL in the previous step. If I don’t use the URL, I can copy the ARN from the monitoring account and paste it here. I confirm the label used to identify my source account and choose Link.

In the Confirm monitoring account permission dialog, I type Confirm to complete the configuration of the source account.
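
Cross-account observability is built on CloudWatch Observability Access Manager (OAM) sinks (in the monitoring account) and links (in the source accounts), so the same setup can be scripted. Here is a minimal AWS CLI sketch under that assumption; the resource-type strings, label template, and ARNs are illustrative placeholders:

# In the monitoring account: create the sink that source accounts will link to,
# then attach a sink policy (aws oam put-sink-policy) that allows those accounts.
$ aws oam create-sink --name monitoring-sink

# In each source account: create a link to the sink and choose the telemetry to share.
$ aws oam create-link \
--label-template '$AccountName' \
--resource-types AWS::CloudWatch::Metric AWS::Logs::LogGroup AWS::XRay::Trace \
--sink-identifier arn:aws:oam:us-east-1:111122223333:sink/EXAMPLE-SINK-ID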

Using CloudWatch Cross-Account Observability
To see how things work with cross-account observability, I deploy a simple cross-account application using two AWS Lambda functions, one in the source account (multi-account-function-a) and one in the monitoring account (multi-account-function-b). When triggered, the function in the source account publishes an event to an Amazon EventBridge event bus in the monitoring account. There, an EventBridge rule triggers the execution of the function in the monitoring account. This is a simplified setup using only two accounts. You’d probably have your workloads running in multiple source accounts.

Architectural diagram.

In the Lambda console, the two Lambda functions have Active tracing and Enhanced monitoring enabled. To collect telemetry data, I use the AWS Distro for OpenTelemetry (ADOT) Lambda layer. The Enhanced monitoring option turns on Amazon CloudWatch Lambda Insights to collect and aggregate Lambda function runtime performance metrics.

Console screenshot.

I prepare a test event in the Lambda console of the source account. Then, I choose Test and run the function a few times.

Console screenshot.

Now, I want to understand what the components of my application, running in different accounts, are doing. I start with logs and then move to metrics and traces.

In the CloudWatch console of the monitoring account, I choose Log groups in the Logs section of the navigation pane. There, I search for and find the log groups created by the two Lambda functions running in different AWS accounts. As expected, each log group shows the account ID and label originating the data. I select both log groups and choose View in Logs Insights.

Console screenshot.

I can now search and analyze logs from different AWS accounts using the CloudWatch Logs Insights query syntax. For example, I run a simple query to see the last twenty messages in the two log groups. I include the @log field to see the account ID that the log belongs to.
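
A query along these lines does that:

fields @timestamp, @log, @message
| sort @timestamp desc
| limit 20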

Console screenshot.

I can now also create Contributor Insights rules on cross-account log groups. This enables me, for example, to have a holistic view of what security events are happening across accounts or identify the most expensive Lambda requests in a serverless application running in multiple accounts.

Then, I choose All metrics in the Metrics section of the navigation pane. To see the Lambda function runtime performance metrics collected by CloudWatch Lambda Insights, I choose LambdaInsights and then function_name. There, I search for multi-account and memory to see the memory metrics. Again, I see the account IDs and labels that tell me that these metrics are coming from two different accounts. From here, I can just select the metrics I am interested in and create cross-account dashboards and alarms. With the metrics selected, I choose Add to dashboard in the Actions dropdown.

Console screenshot.

I create a new dashboard and choose the Stacked area widget type. Then, I choose Add to dashboard.

Console screenshot.

I do the same for the CPU and memory metrics (but using different widget types) to quickly create a cross-account dashboard where I can keep my multi-account setup under control. Well, there isn’t a lot of traffic yet, but I am hopeful.

Console screenshot.

Finally, I choose Service map from the X-Ray traces section of the navigation pane to see the flow of my multi-account application. In the service map, the client triggers the Lambda function in the source account. Then, an event is sent to the other account to run the other Lambda function.

Console screenshot.

In the service map, I select the gear icon for the function running in the source account (multi-account-function-a) and then View traces to look at the individual traces. The traces contain data from multiple AWS accounts. I can search for traces coming from a specific account using a syntax such as:

service(id(account.id: "123412341234"))

Console screenshot.

The service map stitches together telemetry from multiple accounts in a single place, delivering a consolidated view of my cross-account application. This helps me pinpoint issues quickly and reduces resolution time.

Availability and Pricing
Amazon CloudWatch cross-account observability is available today in all commercial AWS Regions using the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. AWS CloudFormation support is coming in the next few days. Cross-account observability in CloudWatch comes with no extra cost for logs and metrics, and the first trace copy is free. See the Amazon CloudWatch pricing page for details.

Having a central point of view to monitor all the AWS accounts that you use gives you a better understanding of your overall activities and helps solve issues for applications that span multiple accounts.

Start using CloudWatch cross-account observability to monitor all your resources.

Danilo

New – A Fully Managed Schema Conversion in AWS Database Migration Service

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-a-fully-managed-schema-conversion-in-aws-database-migration-service/

Since we launched AWS Database Migration Service (AWS DMS) in 2016, customers have securely migrated more than 800,000 databases to AWS with minimal downtime. AWS DMS supports migration between 20+ database and analytics engines, such as Oracle to Amazon Aurora MySQL, MySQL to Amazon Relational Database Service (Amazon RDS) for MySQL, Microsoft SQL Server to Amazon Aurora PostgreSQL, MongoDB to Amazon DocumentDB, Oracle to Amazon Redshift, and to and from Amazon Simple Storage Service (Amazon S3).

Specifically, the AWS Schema Conversion Tool (AWS SCT) makes heterogeneous database and data warehouse migrations predictable and can automatically convert the source schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target engine. For example, it supports the conversion of Oracle PL/SQL and SQL Server T-SQL code to equivalent code in the Amazon Aurora MySQL dialect of SQL or the equivalent PL/pgSQL code in PostgreSQL. You can download the AWS SCT for your platform, including Windows or Linux (Fedora and Ubuntu).

Today we are announcing fully managed AWS DMS Schema Conversion, which streamlines database migrations by making schema assessment and conversion available inside AWS DMS. With DMS Schema Conversion, you can now plan, assess, convert, and migrate under one central DMS service. You can access the features of DMS Schema Conversion in the AWS Management Console without downloading and executing AWS SCT.

AWS DMS Schema Conversion automatically converts your source database schemas and a majority of the database code objects to a format compatible with the target database. This includes tables, views, stored procedures, functions, data types, synonyms, and so on, similar to AWS SCT. Any objects that cannot be converted automatically are clearly marked as action items, with prescriptive instructions on how to convert them manually.

In this launch, DMS Schema Conversion supports the following databases as sources for migration projects:

  • Microsoft SQL Server version 2008 R2 and higher
  • Oracle version 10.2 and later, 11g and up to 12.2, 18c, and 19c

DMS Schema Conversion supports the following databases as targets for migration projects:

  • Amazon RDS for MySQL version 8.x
  • Amazon RDS for PostgreSQL version 14.x

Setting Up AWS DMS Schema Conversion
To get started with DMS Schema Conversion, and if it is your first time using AWS DMS, complete the setup tasks: create a virtual private cloud (VPC) using the Amazon VPC service and set up your source and target databases. To learn more, see Prerequisites for AWS Database Migration Service in the AWS documentation.

In the AWS DMS console, you can see new menus to set up Instance profiles, add Data providers, and create Migration projects.

Before you create your migration project, set up an instance profile by choosing Instance profiles in the left pane. An instance profile specifies network and security settings for your DMS Schema Conversion instances. You can create multiple instance profiles and select an instance profile to use for each migration project.

Choose Create instance profile and specify your default VPC or a new VPC, an Amazon Simple Storage Service (Amazon S3) bucket to store your schema conversion metadata, and additional settings such as AWS Key Management Service (AWS KMS) keys.

You can create the simplest network configuration with a single VPC configuration. If your source or target data providers are in different VPCs, you can create your instance profile in one of the VPCs, and then link these two VPCs by using VPC peering.

Next, you can add data providers that store the data store type and location information about your source and target databases by choosing Data providers in the left pane. For each database, you can create a single data provider and use it in multiple migration projects.

Your data provider can be a fully managed Amazon RDS instance or a self-managed engine running either on-premises or on an Amazon Elastic Compute Cloud (Amazon EC2) instance.

Choose Create data provider to create a new data provider. You can manually set the database location information for your data provider, such as the database engine, domain name or IP address, port number, and database name. Here, I have selected an RDS database instance.

After you create a data provider, make sure that you add database connection credentials in AWS Secrets Manager. DMS Schema Conversion uses this information to connect to a database.
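
For example, a credentials secret can be created with the AWS CLI like this (a sketch; the secret name and JSON keys are illustrative, so check the DMS Schema Conversion documentation for the exact keys it expects):

$ aws secretsmanager create-secret \
--name dev/dms-schema-conversion/source-sqlserver \
--secret-string '{"username":"admin","password":"EXAMPLE-PASSWORD"}'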

Converting your database schema with AWS DMS Schema Conversion
Now, you can create a migration project for DMS Schema Conversion by choosing Migration projects in the left pane. A migration project describes your source and target data providers, your instance profile, and migration rules. You can also create multiple migration projects for different source and target data providers.

Choose Create migration project and select your instance profile and source and target data providers for DMS Schema Conversion.

After creating your migration project, you can use the project to create assessment reports and convert your database schema. Choose your migration project from the list, then choose the Schema conversion tab and click Launch schema conversion.

Migration projects in DMS Schema Conversion are always serverless. This means that AWS DMS automatically provisions the cloud resources for your migration projects, so you don’t need to manage schema conversion instances.

Of course, the first launch of DMS Schema Conversion requires starting a schema conversion instance, which can take up to 10–15 minutes. This process also reads the metadata from the source and target databases. After a successful first launch, you can access DMS Schema Conversion faster.

An important part of DMS Schema Conversion is that it generates a database migration assessment report that summarizes all of the schema conversion tasks. It also details the action items for schema objects that cannot be converted to the DB engine of your target database instance. You can view the report in the AWS DMS console or export it as a comma-separated values (.csv) file.

To create your assessment report, choose the source database schema or schema items that you want to assess. After you select the checkboxes, choose Assess in the Actions menu in the source database pane. The report is also archived as .csv files in your S3 bucket. To change the S3 bucket, edit the schema conversion settings in your instance profile.

Then, you can apply the converted code to your target database or save it as a SQL script. To apply the converted code, choose Convert in the Source data provider pane and then Apply changes in the Target data provider pane.

Once the schema has been converted successfully, you can move on to the database migration phase using AWS DMS. To learn more, see Getting started with AWS Database Migration Service in the AWS documentation.

Now Available
AWS DMS Schema Conversion is now available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions, and you can start using it today.

To learn more, see the AWS DMS Schema Conversion User Guide, give it a try, and please send feedback to AWS re:Post for AWS DMS or through your usual AWS support contacts.

Channy

AWS Application Migration Service Major Updates – New Migration Servers Grouping, Updated Launch, and Post-Launch Template

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-application-migration-service-major-updates-new-migration-servers-grouping-updated-launch-and-post-launch-template/

Last year, we introduced the general availability of AWS Application Migration Service, which simplifies and expedites your migration to AWS by automatically converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. Since the GA launch, we have made improvements, adding features such as agentless replication, MAP 2.0 auto-tagging, and support for optional post-launch modernization actions.

Today we announce three major updates of Application Migration Service to support your migration projects of any size:

  • New Migration Servers Grouping – You can group migration servers into “applications,” groups of servers that function together as a single application, and manage the migration stages in “waves,” which lay out your migration plan by grouping servers and applications.
  • Updated Launch Template – You can modify the general settings and default launch template, and this template is then used to generate the Amazon Elastic Compute Cloud (Amazon EC2) instance launch template of subsequently installed source servers.
  • Updated Post-Launch Template – You can configure custom modernization actions for the post-launch template. You can associate any AWS Systems Manager documents and their parameters with a post-launch custom action.

Let’s dive deep into each launch!

New Migration Servers Grouping – Applications and Waves
Customers have clusters of servers that comprise an application, with dependencies between them. The servers within an application share the same configurations, such as network, security policies, etc. Customers want to migrate complete applications and services, as well as set up and configure the application environment.

We introduce the new concept of “application,” representing a group of servers, and you can manage the migration of an application.

The new application feature groups source servers that belong to the same application for integrated migration jobs. It includes configuring the environment before migrating the application’s servers, creating the appropriate security groups, and performing bulk actions on all of the application’s servers.

You can track and monitor the status of application migration and data replication across the migration lifecycle of the source servers.

Also, customers with large migrations plan their migration, grouping servers and applications in waves. These are logical groups that describe the migration plan over time. Waves may include multiple servers and applications that do not necessarily have dependencies between them.

We introduce the new concept of “wave,” assisting customers in building their migration plan, as well as executing and monitoring it.

Application Migration Service supports actions on waves, such as launching all servers in a testing environment or performing cutover of a wave. Application Migration Service also provides reporting and monitoring information at the wave level so that customers will be able to manage their migration projects.

Updated Launch Template – Launch Settings and Default EC2 Launch Template
The launch template allows you to control the way Application Migration Service launches instances in the AWS Cloud. You can change the settings for existing and newly added servers individually. Previously, we only supported the AWS Migration Acceleration Program (MAP) option to add tags to launched migration instances.

We added two new options to modify the global launch template, and this template is then used to generate the EC2 launch templates of subsequently installed source servers. Customers start with a global Application Migration Service launch template that predefines the launch settings for new source servers. They would then potentially only have to modify a smaller subset of source servers, as opposed to all of them.

Here are default settings that will be used when launching target servers:

  • Activate instance type right-sizing – The service will determine the best match instance type. The default instance type defined in the EC2 template will be ignored.
  • Start instance upon launch – The service will launch instances automatically. If this option is not selected, the launched instance will need to be manually started after launch.
  • Copy private IP – This enables you to copy the private IP of your source server to the target.
  • Transfer server tags – Transfer the tags from the source server to the launched instances.
  • Operating system licensing – Specify whether to continue to use the Bring Your Own License model (BYOL) of the source server or use an AWS provided license.

Also, you can configure the default settings that will be applied to the EC2 launch template of every target server, such as default target subnet, additional security groups, default instance type, Amazon Elastic Block Store (Amazon EBS) volume type, IOPS, and throughput to associate with all instances launched by this service.

Updated Post-Launch Template – Custom Actions
Post-launch settings allow you to control and automate actions performed after the server has been launched in AWS. It includes four built-in actions: installing the AWS Systems Manager agent, installing the AWS Elastic Disaster Recovery agent and configuring replication, CentOS conversion, and SUSE subscription conversion.

We added a new option to configure custom actions in the post-launch template. You can associate any AWS Systems Manager document and its action parameters with a custom action. You can also set the order in which the actions are executed and the source server operating systems for which the custom action is configured.

Choose Add custom action to create a new post-launch custom action. For example, AWS-CopySnapshot, one of the Systems Manager Automation runbooks, copies a point-in-time snapshot of an EBS volume. You can copy the snapshot within the same AWS Region or from one Region to another.

In the Action parameters, you can assign SnapshotId and SourceRegion to run the AWS Systems Manager CopySnapshot runbook.
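
You can also test the runbook outside of Application Migration Service by starting it directly in Systems Manager, which is a quick way to validate the parameters before attaching them to the custom action (the snapshot ID and Region below are placeholders):

$ aws ssm start-automation-execution \
--document-name AWS-CopySnapshot \
--parameters "SnapshotId=snap-0123456789abcdef0,SourceRegion=us-east-1"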

You can create your own Systems Manager document to define the actions that Systems Manager performs on your managed instances. Systems Manager offers more than 100 preconfigured documents that you can use by specifying parameters as the post-launch actions. To learn more, see AWS Systems Manager Automation runbook reference in the AWS documentation.

Now Available
The new migration servers grouping, updates on the launch, and post-launch template are available now, and you can start using them today in all Regions where AWS Application Migration Service is supported.

To learn more, see the Application Migration Service User Guide, give it a try, and please send feedback to AWS re:Post for Application Migration Service or through your usual AWS support contacts.

Channy

New – Announcing Amazon EFS Elastic Throughput

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/new-announcing-amazon-efs-elastic-throughput/

Today, we are announcing the availability of Amazon EFS Elastic Throughput, a new throughput mode for Amazon EFS that is designed to provide your applications with as much throughput as they need with pay-as-you-use pricing. This new throughput mode enables you to further simplify running workloads and applications on AWS by providing shared file storage that doesn’t need provisioning or capacity management.

Elastic Throughput is ideal for spiky and unpredictable workloads with performance requirements that are difficult to forecast. When you enable Elastic Throughput on an Amazon EFS file system, you no longer need to think about actively managing your file system performance or overpaying for idle resources in order to ensure performance for your applications. You don’t specify or provision throughput capacity; Amazon EFS automatically delivers the throughput performance your application needs, and you pay only for the amount of data read or written.

Amazon EFS is built to provide serverless, fully elastic file storage that lets you share file data for your cloud-based applications without having to think about provisioning or managing storage capacity and performance. With Elastic Throughput, Amazon EFS now extends its simplicity and elasticity to performance, enabling you to run an even broader range of file workloads on Amazon EFS. Amazon EFS is well suited to support a broad spectrum of use cases that include analytics and data science, machine learning, CI/CD tools, content management and web serving, and SaaS applications.

A Quick Review
As you may already know, Amazon EFS already offers the Bursting Throughput mode, which is the default and supports bursting to higher levels for up to 12 hours a day. If your application is throughput-constrained in Bursting mode (for example, it utilizes more than 80 percent of permitted throughput or exhausts its burst credits), you should consider using the Provisioned Throughput mode (which we announced in 2018) or the new Elastic Throughput mode.

With this announcement of Elastic Throughput mode, in addition to the existing Provisioned Throughput mode, Amazon EFS now offers two options for workloads that require higher levels of throughput performance. You should use Provisioned Throughput if you know your workload’s performance requirements and you expect your workload to consume a higher share (more than 5 percent on average) of your application’s peak throughput capacity. You should use Elastic Throughput if you don’t know your application’s throughput requirements or if your workload is spiky.

To access Elastic Throughput mode (or any of the throughput modes) when creating a file system, select Customize (selecting Create instead will create your file system with the default Bursting mode).

Create File system

New – Elastic Throughput

You can also enable Elastic Throughput for new and existing General Purpose file systems using the Amazon EFS console or programmatically using the Amazon EFS CLI, Amazon EFS API, or AWS CloudFormation.
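
For example, switching an existing file system to Elastic Throughput is a single AWS CLI call (the file system ID below is a placeholder):

$ aws efs update-file-system \
--file-system-id fs-0123456789abcdef0 \
--throughput-mode elastic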

Elastic Throughput in Action
Once you have enabled Elastic Throughput mode, you will be able to monitor your cost and throughput usage using Amazon CloudWatch and set alerts on unplanned throughput charges using AWS Budgets.

I have a test file system elasticblog that I created previously using the Amazon EFS console, and now I cannot wait to see Elastic Throughput in action.

File system (elasticblog)

I have provisioned an Amazon Elastic Compute Cloud (Amazon EC2) instance on which I mounted my file system. This EC2 instance has data that I will add to the file system.

I have also created CloudWatch alarms that monitor throughput usage against thresholds on the ReadIOBytes, WriteIOBytes, TotalIOBytes, and MetadataIOBytes metrics.

CloudWatch for Throughput Usage

The CloudWatch dashboard for my test file system elasticblog looks like this.

CloudWatch Dashboard – TotalIOBytes for File System

Elastic Throughput allows you to drive throughput up to a limit of 3 GiB/s for read operations and 1 GiB/s for write operations per file system in all Regions.

Available Now
Amazon EFS Elastic Throughput is available in all Regions supporting EFS except for the AWS China Regions.

To learn more, see the Amazon EFS User Guide. Please send feedback to AWS re:Post for Amazon Elastic File System or through your usual AWS support contacts.

Veliswa x

New – Amazon Redshift Support in AWS Backup

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-redshift-support-in-aws-backup/

With Amazon Redshift, you can analyze data in the cloud at any scale. Amazon Redshift offers native data protection capabilities to protect your data using automatic and manual snapshots. This works great by itself, but when you’re using other AWS services, you have to configure more than one tool to manage your data protection policies.

To make this easier, I am happy to share that we added support for Amazon Redshift in AWS Backup. AWS Backup allows you to define a central backup policy to manage data protection of your applications and can now also protect your Amazon Redshift clusters. In this way, you have a consistent experience when managing data protection across all supported services. If you have a multi-account setup, the centralized policies in AWS Backup let you define your data protection policies across all your accounts within your AWS Organizations. To help you meet your regulatory compliance needs, AWS Backup now includes Amazon Redshift in its auditor-ready reports. You also have the option to use AWS Backup Vault Lock to have immutable backups and prevent malicious or inadvertent changes.

Let’s see how this works in practice.

Using AWS Backup with Amazon Redshift
The first step is to turn on the Redshift resource type for AWS Backup. In the AWS Backup console, I choose Settings in the navigation pane and then, in the Service opt-in section, Configure resources. There, I toggle the Redshift resource type on and choose Confirm.

Console screenshot.

Now, I can create or update a backup plan to include the backup of all, or some, of my Redshift clusters. In the backup plan, I can define how often these backups should be taken and for how long they should be kept. For example, I can have daily backups with one week of retention, weekly backups with one month of retention, and monthly backups with one year of retention.

I can also create on-demand backups. Let’s see this in more detail. I choose Protected resources in the navigation pane and then Create on-demand backup.

I select Redshift in the Resource type dropdown. In the Cluster identifier, I select one of my clusters. For this workload, I need two weeks of retention. Then, I choose Create on-demand backup.

Console screenshot.
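
The same on-demand backup can also be started with the AWS CLI (a sketch; the cluster ARN, IAM role, and vault name are placeholders, and the lifecycle setting gives the two weeks of retention):

$ aws backup start-backup-job \
--backup-vault-name Default \
--resource-arn arn:aws:redshift:us-east-1:123456789012:cluster:my-redshift-cluster \
--iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
--lifecycle DeleteAfterDays=14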

My data warehouse is not huge, so after a few minutes, the backup job has completed.

Console screenshot.

I now see my Redshift cluster in the list of the resources protected by AWS Backup.

Console screenshot.

In the Protected resources list, I choose the Redshift cluster to see the list of the available recovery points.

Console screenshot.

When I choose one of the recovery points, I have the option to restore the full data warehouse or just a table into a new Redshift cluster.

Console screenshot.

I now have the option to edit the cluster and database configuration, including security and networking settings. I just update the cluster identifier; otherwise, the restore would fail because the identifier must be unique. Then, I choose Restore backup to start the restore job.

After some time, the restore job has completed, and I see the old and the new clusters in the Amazon Redshift console. Using AWS Backup gives me a simple centralized way to manage data protection for Redshift clusters as well as many other resources in my AWS accounts.

Console screenshot.

Availability and Pricing
Amazon Redshift support in AWS Backup is available today in the AWS Regions where both AWS Backup and Amazon Redshift are offered, with the exception of the Regions based in China. You can use this capability via the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs.

There is no additional cost for using AWS Backup compared to the native snapshot capability of Amazon Redshift. Your overall costs depend on the amount of storage and retention you need. For more information, see AWS Backup pricing.

Danilo

New – Fully Managed Blue/Green Deployments in Amazon Aurora and Amazon RDS

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-fully-managed-blue-green-deployments-in-amazon-aurora-and-amazon-rds/

When updating databases, using a blue/green deployment technique is an appealing option for minimizing risk and downtime. This method of making database updates requires two database environments: your current production environment, or blue environment, and a staging environment, or green environment. You must then keep these two environments in sync with each other so you can safely test changes before promoting them to production.

Amazon Aurora and Amazon Relational Database Service (Amazon RDS) customers can use database cloning and promotable read replicas to help self-manage a blue/green deployment. However, self-managing a blue/green deployment can be costly and complex to build and manage. As a result, customers sometimes delay implementing database updates, choosing availability over the benefits that they would gain from updating their databases.

Today, we are announcing the general availability of Amazon RDS Blue/Green Deployments, a new feature for Amazon Aurora with MySQL compatibility, Amazon RDS for MySQL, and Amazon RDS for MariaDB that enables you to make database updates safer, simpler, and faster.

With just a few steps, you can use Blue/Green Deployments to create a separate, synchronized, fully managed staging environment that mirrors the production environment. The staging environment clones your production environment’s primary database and in-Region read replicas. Blue/Green Deployments keep these two environments in sync using logical replication.

In as little as a minute, you can promote the staging environment to be the new production environment with no data loss. During switchover, Blue/Green Deployments blocks writes on the blue and green environments so that the green environment catches up with the blue, ensuring no data loss. Then, Blue/Green Deployments redirects production traffic to the newly promoted staging environment, all without any code changes to your application.

With Blue/Green Deployments, you can make changes, such as major and minor version upgrades, schema modifications, and operating system or maintenance updates, to the staging environment without impacting the production workload.

Getting Started with Blue/Green Deployments for MySQL Clusters
You can start updating your databases with just a few clicks in the AWS Management Console. To get started, simply select the database that needs to be updated in the console and click Create Blue/Green Deployment under the Actions dropdown menu.

You can set a Blue/Green Deployment identifier and the attributes of your database to be modified, such as the engine version, DB cluster parameter group, and DB parameter group for green databases. To use a Blue/Green Deployment in your Aurora MySQL DB cluster, you should turn on binary logging, changing the value for the binlog_format parameter from OFF to MIXED in the DB cluster parameter group.

When you choose Create Blue/Green Deployment, it creates a new staging environment and runs automated tasks to prepare the database for production. Note that you will be charged for the cost of the green databases, including read replicas and DB instances in Multi-AZ deployments, as well as any other features, such as Amazon RDS Performance Insights, that you may have enabled on green.

You can also do the same job in the AWS Command Line Interface (AWS CLI). To perform an engine version upgrade, simply add the --target-engine-version parameter and specify the engine version you’d like to upgrade to. This parameter works with both minor and major version upgrades, and it accepts short versions like 5.7 for Amazon Aurora MySQL-Compatible Edition.

$ aws rds create-blue-green-deployment \
--blue-green-deployment-name my-bg-deployment \
--source arn:aws:rds:us-west-2:1234567890:db:my-aurora-mysql \
--target-engine-version 5.7 \
--region us-west-2

After creation is complete, you have a staging environment that is ready for testing and validation before you promote it to be the new production environment.

When testing and qualification of changes are complete, you can choose Switch over in the Actions dropdown menu to promote the staging environment marked as Green to be the new production system.

Now you are nearly ready to switch over your green databases to production. Check the settings of your green databases to verify that they are ready for the switchover. You can also set a timeout to determine the maximum time limit for your switchover. If Blue/Green Deployments’ switchover guardrails detect that it would take longer than the specified duration, the switchover is canceled, and no changes are made to the environments. We recommend that you identify times of low or moderate production traffic to initiate a switchover.
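
The switchover can also be initiated from the AWS CLI; here is a sketch (the deployment identifier is a placeholder, and the timeout is in seconds):

$ aws rds switchover-blue-green-deployment \
--blue-green-deployment-identifier bgd-1234567890abcdef \
--switchover-timeout 300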

After switchover, Blue/Green Deployments does not delete your old production environment. You may access it for additional validations and performance/regression testing, if needed. Please note that it is your responsibility to delete the old production environment when you no longer need it. Standard billing charges apply on old production instances until you delete them.

Now Available
Amazon RDS Blue/Green Deployments is available today on Amazon Aurora with MySQL Compatibility 5.6 or higher, Amazon RDS for MySQL major version 5.6 or higher, and Amazon RDS for MariaDB 10.2 and higher in all AWS commercial Regions, excluding China, and AWS GovCloud Regions.

To learn more, read the Amazon Aurora MySQL Developer Guide or the Amazon RDS for MySQL User Guide. Give it a try, and please send feedback to AWS re:Post for Amazon RDS or through your usual AWS support contacts.

Channy

New for AWS Backup – Protect and Restore Your CloudFormation Stacks

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-backup-protect-and-restore-your-cloudformation-stacks/

To define the data protection policy of an application, you have to look at its components and find which ones store data that needs to be protected. Those are the stateful components of your application, such as databases and file systems. Other components don’t store data but need to be restored as well in case of issues. These are stateless components, such as containers and their network configurations.

When you manage your application using infrastructure as code (IaC), you have a single repository where all these components are described. Can we use this information to help protect your applications? Yes! AWS Backup now supports attaching an AWS CloudFormation stack to your data protection policies.

When you use CloudFormation as a resource, all stateful components supported by AWS Backup are backed up around the same time. The backup also includes the stateless resources in the stack, such as AWS Identity and Access Management (IAM) roles and Amazon Virtual Private Cloud (Amazon VPC) security groups. This gives you a single recovery point that you can use to recover the application stack or the individual resources you need. In case of recovery, you don’t need to mix automated tools with custom scripts and manual activities to recover and put the whole application stack back together. As you modernize and update an application managed with CloudFormation, AWS Backup automatically keeps track of changes and updates the data protection policies for you.

CloudFormation support for AWS Backup also helps you prove compliance of your data protection policies. You can monitor your application resources in AWS Backup Audit Manager, a feature of AWS Backup that enables you to audit and report on the compliance of data protection policies. You can also use AWS Backup Vault Lock to manage the immutability of your backups as required by your compliance obligations.

Let’s see how this works in practice.

Using AWS Backup Support for CloudFormation Stacks
First, I need to turn on the CloudFormation resource type for AWS Backup. In the AWS Backup console, I choose Settings in the navigation pane and then, in the Service opt-in section, Configure resources. There, I toggle the CloudFormation resource type on and choose Confirm.

Console screenshot.

Now that CloudFormation support is enabled, I choose Dashboard in the navigation pane and then Create backup plan. I select the Start with a template option and then the Daily-35day-Retention template. As the name suggests, this template creates daily backups that are kept for 35 days before being automatically deleted. I enter a name for the backup plan and choose Create plan.

Console screenshot.

Now I can assign resources to my backup plan. I enter a resource assignment name and use the default IAM role that is automatically created with the correct permissions.

Console screenshot.

In the Resource selection, I can select Include all resource types to automatically protect all resource types that are enabled in my account. Because I’d like to show how CloudFormation support works, I select Include specific resource types and then CloudFormation in the Select resource types dropdown menu. In the Choose resources menu, I can use the All supported CloudFormation stacks option to have all my stacks protected. For simplicity, I choose to protect only one stack, the my-app stack.

Console screenshot.

I leave the other options at their default values and choose Assign resources. That’s all! Now the CloudFormation stack that I selected will be backed up daily with 35 days of retention. What does that mean? Let’s have a look at what happens when I create an on-demand backup of a CloudFormation stack.

Creating On-Demand Backups for CloudFormation Stacks
I choose Protected resources in the navigation pane and then Create on-demand backup. The next steps are similar to what I did before when assigning resources to a backup plan. I select the CloudFormation resource type and the my-app stack. I use the Create backup now option to start the backup within one hour. I choose 7 days of retention and the Default backup vault. Backup vaults are logical containers that store and organize your backups. I select the default IAM role and choose Create on-demand backup.

Console screenshot.

Within a few minutes, the backup job is running. I expand the Backup job ID in the Backup jobs list to see the resources being backed up. The stateful resources (such as Amazon DynamoDB tables and Amazon Relational Database Service (RDS) databases) are listed with the current state of the backup job. The stateless resources in my stack (such as IAM roles, AWS Lambda functions, and VPC configurations) are backed up by the job with the CloudFormation resource type.

Console screenshot.

When the backup job has completed, I go back to the Protected resources page to see the list of resources that I can now restore. In the list, I see the IDs of the stateful resources (in this case, two DynamoDB tables and an Aurora database) and of the CloudFormation stack. If I choose each of the stateful resources, I see the available recovery points corresponding to the different points in time when that resource has been backed up.

Console screenshot.

If I choose the CloudFormation stack, I get a list of composite recovery points. Each composite recovery point includes all stateless and stateful resources in the stack. More specifically, the stateless resources are included in the CloudFormation template recovery point (the last one in the following screenshot).

Console screenshot.

Restoring a CloudFormation Backup
Inside the composite recovery point, I select the recovery point of the CloudFormation stack and choose Restore. Restoring a CloudFormation stack backup creates a new stack with a change set that represents the backup. I enter the new stack and change set names and choose Restore backup. After a few minutes, the restore job is completed.

In the CloudFormation console, the new stack is under review. I need to apply the change set.

Console screenshot.

I choose the new stack and apply the change set created by the restore job.

Console screenshot.

After some time, the resources in my original stack have been recreated in the new stack. The stateful resources have been recreated empty. To recover the stateful resources, I can go back to the list of recovery points, select the recovery point I need, and initiate a restore.

Availability and Pricing
AWS Backup support for CloudFormation stacks is available today using the console, AWS Command Line Interface (CLI), and AWS SDKs in all AWS Regions where AWS Backup is offered. There is no additional cost for the stateless resources backed up and restored by AWS Backup. You only pay for the stateful resources such as databases, storage volumes, or file systems. For more information, see AWS Backup pricing.

You now have an automated solution to create and restore your applications with a simplified experience, eliminating the need to manage custom scripts.

Danilo

Amazon CloudWatch Internet Monitor Preview – End-to-End Visibility into Internet Performance for your Applications

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/cloudwatch-internet-monitor-end-to-end-visibility-into-internet-performance-for-your-applications/

How many times have you had monitoring dashboards show you a normal situation, and at the same time, you have received customer tickets reporting your app is “slow” or unavailable to them? How much time did it take to diagnose these customer reports?

You told us one of your challenges when monitoring internet-facing applications is to gather data outside of AWS to build a realistic picture of how your application behaves for your customers connected to multiple and geographically distant internet providers. Capturing and monitoring data about internet traffic before it reaches your infrastructure is either difficult or very expensive.

I am happy to announce the public preview of Amazon CloudWatch Internet Monitor, a new capability of CloudWatch that gives visibility into how an internet issue might impact the performance and availability of your applications. It allows you to reduce the time it takes to diagnose internet issues from days to minutes.

Internet Monitor uses the connectivity data that we capture from our global networking footprint to calculate a baseline of performance and availability for internet traffic. This is the same data that we use at AWS to monitor our own internet uptime and availability. With Internet Monitor, you can gain awareness of internet problems experienced by your end users in different geographic locations and networks.

There is no need to instrument your application code. You can enable the service in the CloudWatch section of the AWS Management Console and start to use it immediately.

Let’s See It in Action
Getting started with Internet Monitor is easy. Let’s imagine I want to monitor the network paths between my customers and my AWS resources. I open the AWS Management Console and navigate to CloudWatch. I select Internet Monitor on the left-side navigation menu. Then, I select Create monitor.

Internet Monitor - Create

On the Create monitor page, I enter a Monitor name, and I select Add resources to choose the resources to monitor. For this demo, I select the VPC and the CloudFront distribution hosting my customer-facing application.

Internet Monitor - Select resources

I have the opportunity to review my choices. Then, I select Create monitor.

Internet Monitor - Final screen

From that moment on, Internet Monitor starts to collect data based on my application’s resource logs behind the scenes. There is no need for you to activate (or pay for) VPC Flow Logs, CloudFront logs, or other log types.
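
The monitor can also be created programmatically. Here is a minimal AWS CLI sketch, assuming the internetmonitor commands are available in your CLI version; the VPC ARN is a placeholder:

$ aws internetmonitor create-monitor \
--monitor-name Monitor_example \
--resources arn:aws:ec2:us-east-1:123456789012:vpc/vpc-0123456789abcdef0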

After a while, I receive customer complaints about our application being slow. I open Internet Monitor again, I select the monitor I created earlier (Monitor_example), and I immediately see that the application suffers from internet performance issues.

The Health scores graph shows you performance and availability information for your global traffic. AWS has substantial historical data about internet performance and availability for network traffic between geographic locations for different network providers and services. By applying statistical analysis to the data, we can detect when the performance and availability towards your application have dropped, compared to an estimated baseline that we’ve calculated. To make it easier to see those drops, we report that information to you in the form of an estimated performance score and an availability score.

Internet Monitor - Health score

I scroll a bit down the page. The Internet traffic overview map shows the overall event status across all monitored locations. I look at the details in the Health events table. It also highlights other events that are happening globally, sorted by total traffic impact. I notice that a performance issue in Las Vegas, Nevada, US, is affecting my application traffic the most.

Internet Monitor - Internet Traffic Overview

Now that I have identified the issue, I am curious about the historical data. Has it happened before?

I select the Historical Explorer tab to understand trends and see earlier data related to this location and network provider. I can view aggregated metrics such as performance score, availability score, bytes transferred, and round-trip time at p50, p90, and p95 percentiles, for a customized timeframe, up to 18 months in the past.

Internet Monitor - Historical dataI can see today’s incident is not the first one. This specific client location and network provider has had multiple issues in the past few months.

Internet Monitor - Historical data detailsNow that I understand the context, I wonder what action I can take to mitigate the issue.

I switch to the Traffic insights tab. I see overall traffic data and top client locations that are being monitored based on total traffic (bytes). Apparently, Las Vegas, Nevada, US, is one of the top client locations.

Internet Monitor - Traffic insights 1

I select the graph to see traffic details for Las Vegas, Nevada, US. In the Lowest Time To First Byte (TTFB) column, I see AWS service and AWS Region setup recommendations for all of the top client location and network combinations. The Predicted Time To First Byte in the table shows the potential impact if I make the suggested architectural change.

In this example, Internet Monitor suggests having CloudFront distribute the traffic currently served directly by EC2, and serving some additional traffic from EC2 instances in us-east-1 in addition to us-east-2.

Internet Monitor - Traffic insights 2

Available Today
Internet Monitor is available in public preview today in 20 AWS Regions:

  • In the Americas: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), South America (São Paulo).
  • In Asia and Pacific: Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo).
  • In Europe, Middle East, and Africa: Africa (Cape Town), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain).

Note that AWS CloudFormation support is not available at the moment; it will be added soon.

There is no cost associated with the service during the preview period. Just keep in mind that Internet Monitor vends metrics and logs to CloudWatch; you will be charged for these additional CloudWatch logs and CloudWatch metrics.

Whether you work for a startup or a large enterprise, CloudWatch Internet Monitor helps you be proactive about your application performance and availability. Give it a try today!

— seb

New for Amazon Transcribe – Real-Time Analytics During Live Calls

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-transcribe-real-time-analytics-during-live-calls/

The experience customers have when interacting with a contact center can have a profound impact on them. For this reason, we launched Amazon Transcribe Call Analytics last year to help you analyze customer call recordings and get insights into issues and trends related to customer satisfaction and agent performance.

To assist agents in resolving live calls faster, today we are introducing real-time call analytics in Amazon Transcribe Call Analytics. Real-time call analytics provides APIs for developers to accurately transcribe live calls and, at the same time, identify customer experience issues and sentiment in real time. Transcribe Call Analytics uses state-of-the-art machine learning capabilities to automatically assess thousands of in-progress calls and detect customer experience issues, such as repeated requests to speak to a manager or to cancel a subscription.

With a few clicks, supervisors and analysts can create categories in the AWS console to identify customer experience issues using criteria such as specific terms like “not happy,” “poor quality,” and “cancel my subscription.” Transcribe Call Analytics analyzes in-progress calls in real time to detect when a category is matched. Developers can use those signals, along with sentiment trends from the API, to build a proactive system that alerts supervisors about emerging issues or assists agents with relevant information to solve customer issues.

Transcribe Call Analytics also provides a real-time transcript of the live conversation that supervisors can use to quickly get up to speed on the customer interaction and assess the appropriate action. The in-call transcript also eliminates the need for customers to repeat themselves if the call is transferred to another agent. Agents can focus all their attention on the customer during the call instead of taking notes for entry in a CRM system because Transcribe Call Analytics includes an automated call summarization capability, which identifies the issue, outcome, and action item associated with a call.

Transcribe Call Analytics is a foundational API for AWS Contact Center Intelligence solutions, such as the post-call analytics solution and the updated Live Call Analytics with Agent Assist solution, which uses the new real-time capabilities.

Let’s see how this works in practice.

Exploring Real-Time Call Analytics in the Console
To see how this works visually, I use the Amazon Transcribe console. First, I create a category to be notified if some terms are used in the call that would require an escalation. I choose Category Management from the navigation pane and then Create category.

I enter Escalation as the name for the category. I select REAL_TIME in the Category type dropdown. Then, I choose Create from scratch.

Console screenshot.

I only need one rule for this category. In the Rule type dropdown, I select Transcript content match. In the next three options, I choose to trigger the rule when any of the words are mentioned during the entire call, and the speaker is either the customer or the agent. Now, I can enter the words or phrases to look for in the transcript. In this case, I enter cancel, canceled, cancelled, manager, and supervisor. In your case, you might be more specific depending on your business. For example, if subscriptions are your business, you can look for the phrase cancel my subscription.

Console screenshot.
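
The same category can also be created programmatically. Here is a minimal sketch with the AWS CLI, reusing the category name and word list from this walkthrough (the --input-type value marks it as a real-time category); adjust the targets to your own business:

$ aws transcribe create-call-analytics-category \
    --category-name "Escalation" \
    --input-type REAL_TIME \
    --rules '[
        {
            "TranscriptFilter": {
                "TranscriptFilterType": "EXACT",
                "Targets": ["cancel", "canceled", "cancelled", "manager", "supervisor"],
                "Negate": false
            }
        }
    ]'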

Now that the category has been created, I use one of the sample calls in the console to test it. I choose Real-Time Analytics in the navigation pane. By choosing Configure advanced settings, I can configure the personally identifiable information (PII) identification and redaction settings. For example, I can choose to identify personal data such as email addresses or redact financial data like bank account numbers.

At no additional charge, I can enable Post-call Analytics so that, at the end of the call, I receive the output of the transcription job in an Amazon Simple Storage Service (Amazon S3) bucket. This output is in a format similar to what I’d receive if I were analyzing a call recording with Transcribe Call Analytics. In this way, I can feed the post-call analytics output derived from the audio stream into any process I already have in place for analytics generated from call recordings, for example, to update dashboards or generate periodic reports.

With Insurance complaints selected in Step 1: Specify input audio, I choose Start streaming. In the Transcription output section of the console, I receive the transcription of the call in real time. The words of the customer and agent appear as they are spoken. Each sentence is flagged with its recognized sentiment (positive, neutral, or negative). The Escalation category that I just configured is matched in two sentences: first, when the customer mentions that their insurance has been canceled, and then when the agent mentions their manager. Also, part of a sentence is underlined because an issue has been detected.

Console screenshot.

Using the Download dropdown, I download the full JSON transcript. If I am only interested in the transcription, I can download the text transcript. The JSON transcript contains an array where each item is similar to what I’d get in real time when using the real-time call analytics API.

Using the Live Call Analytics With Agent Assist (LCA) Solution
You can use the open-source real-time call analytics with agent assist solution for your contact center, or as inspiration for what Amazon Transcribe enables for developers. Let’s look at a couple of screenshots to understand how it works.

Here is a list of ongoing calls with the overall sentiment, the sentiment trend (is it improving or not?), and the categories matched in real time during the call, which can be used to trigger specific activities.

Screenshot from the real-time call analytics with agent assist solution.

When selecting a call from the list, you have access to more in-depth information, including the call transcript and the issues found during the ongoing call. This allows you to take action quickly to help resolve the call.

Screenshot from the real-time call analytics with agent assist solution.

Availability and Pricing
Amazon Transcribe Call Analytics with real-time capabilities is available today in US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo) and supports US English, British English, Australian English, US Spanish, Canadian French, French, German, Italian, and Brazilian Portuguese.

With Amazon Transcribe Call Analytics, you pay as you go and are billed monthly based on tiered pricing. For more information, see Amazon Transcribe pricing.

As part of the AWS Free Tier, you can get started with Amazon Transcribe Call Analytics for free, including the new real-time call analytics API. You can analyze up to 60 minutes of call audio monthly for free for the first 12 months. For more information, see the AWS Free Tier page.

If you’re at re:Invent, you can learn more about this new capability in session AIM307 – JPMorganChase real-time agent assist for contact center productivity. I will update this post when the recording of the session is publicly available.

Start analyzing contact center conversations in real time to improve your customers’ experience.

Danilo

Automated in-AWS Failback for AWS Elastic Disaster Recovery

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/automated-in-aws-failback-for-aws-elastic-disaster-recovery/

I first covered AWS Elastic Disaster Recovery (DRS) in a 2021 blog post. In that post, I described how DRS “enables customers to use AWS as an elastic recovery site for their on-premises applications without needing to invest in on-premises DR infrastructure that lies idle until needed. Once enabled, DRS maintains a constant replication posture for your operating systems, applications, and databases.” I’m happy to announce that DRS now also supports in-AWS failback, adding to the existing support for non-disruptive recovery drills and on-premises failback included in the original release.

I also wrote in my earlier post that drills are an important part of disaster recovery since, if you don’t test, you simply won’t know for sure that your disaster recovery solution will work properly when you need it to. However, customers rarely like to test because it’s a time-consuming activity and also disruptive. Automation and simplification encourage frequent drills, even at scale, enabling you to be better prepared for disaster, and now you can use them irrespective of whether your applications are on-premises or in AWS. Non-disruptive recovery drills provide confidence that you will meet your recovery time objectives (RTOs) and recovery point objectives (RPOs) should you ever need to initiate a recovery or failback. More information on RTOs and RPOs, and why they’re important to define, can be found in the recovery objectives documentation.

The new automated support provides a simplified and expedited experience to fail back Amazon Elastic Compute Cloud (Amazon EC2) instances to the original Region, and both failover and failback processes (for on-premises or in-AWS recovery) can be conveniently started from the AWS Management Console. Also, for customers that want to customize the granular steps that make up a recovery workflow, DRS provides three new APIs, linked at the bottom of this post.

Failover vs. Failback
Failover is switching the running application to another Availability Zone, or even a different Region, should outages or issues occur that threaten the availability of the application. Failback is the process of returning the application to the original on-premises location or Region. For failovers to another Availability Zone, customers who are agnostic to the zone may continue running the application in its new zone indefinitely if required. In this case, they will reverse the recovery replication so that the recovered instance is protected for future recovery. However, if the failover was to a different Region, it’s likely customers will want to eventually fail back and return to the original Region once the issues that caused the failover have been resolved.

The images below illustrate architectures for in-AWS applications protected by DRS. The architecture in the first image is for cross-Availability Zone scenarios.

Cross-Availability Zone architecture for DRS

The architecture diagram below is for cross-Region scenarios.

Cross-Region architecture for DRS

Let’s assume an incident occurs with an in-AWS application, so we initiate a failover to another AWS Region. When the issue has been resolved, we want to fail back to the original Region. The following animation illustrates the failover and failback processes.

Illustration of the failover and failback processes

Learn more about in-AWS failback with Elastic Disaster Recovery
As I mentioned earlier, three new APIs are also available for customers who want to customize the granular steps involved. The documentation for these can be found using the links below.

The new in-AWS failback support is available in all Regions where AWS Elastic Disaster Recovery is available. Learn more about AWS Elastic Disaster Recovery in the User Guide. For specific information on the new failback support, I recommend consulting this topic in the service User Guide.

— Steve

New – Amazon ECS Service Connect Enabling Easy Communication Between Microservices

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ecs-service-connect-enabling-easy-communication-between-microservices/

Microservices architectures are a well-known software development approach in which applications are composed of small, independent services that communicate over well-defined application programming interfaces (APIs). Customers face challenges when they start breaking down their monolithic applications into microservices because it requires specialized networking knowledge for the services to communicate with one another.

Amazon Elastic Container Service (Amazon ECS) customers have several options for service-to-service communication, but each one comes with challenges and complications: 1) Elastic Load Balancing (ELB) requires careful infrastructure planning for high availability and incurs additional infrastructure cost. 2) Amazon ECS Service Discovery often requires developers to write custom application code to collect traffic metrics and make network calls resilient. 3) Service mesh solutions such as AWS App Mesh offer advanced traffic monitoring and routing features between services but run outside of Amazon ECS.

Today, we are announcing the general availability of Amazon ECS Service Connect, a new capability that simplifies building and operating resilient distributed applications. ECS Service Connect provides easy network setup and seamless service communication for services deployed across multiple ECS clusters and virtual private clouds (VPCs). You can add a layer of resilience to your ECS service communication and get traffic insights with no changes to your application code.

With ECS Service Connect, you can refer to and connect to your services by logical names, using a namespace provided by AWS Cloud Map, and automatically distribute traffic between ECS tasks without deploying and configuring load balancers. You can set safe defaults for traffic resilience, such as health checking, automatic retries for 503 errors, and connection draining, for each of your ECS services. Additionally, the Amazon ECS console provides easy-to-use dashboards with real-time network traffic metrics for operational convenience and simplified debugging.

Getting Started with Amazon ECS Service Connect
To get started with ECS Service Connect, you can specify a namespace as part of creating an ECS cluster, or create one in AWS Cloud Map. A namespace is a way to group your services and can span multiple ECS clusters residing in different VPCs. All ECS services that belong to a specific namespace can communicate with the other services in that namespace, provided there is network-level connectivity between them.
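
If you prefer to create the namespace yourself ahead of time, a single AWS Cloud Map call should do it; Service Connect works with an HTTP namespace (services are discovered through API calls only), and the name below matches the one used in this walkthrough:

$ aws servicediscovery create-http-namespace \
    --name "svc-namespace" \
    --description "Namespace for ECS Service Connect services"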

You can also see the list of Cloud Map namespaces under Namespaces in the left navigation pane of the Amazon ECS console. When you select a namespace, the console shows the services that belong to it; in this example, they come from two different ECS clusters and include database services (db-mysql, db-redis) and backend services (webui, appserver).

When you create an ECS cluster, you can select one of the namespaces as the Default namespace in the Networking settings. ECS Service Connect is available for all new ECS services running on both AWS Fargate and Amazon EC2 instances. To enable it for existing services, you need to redeploy them with either a new version of the ECS-optimized Amazon Machine Image (AMI) or a new Fargate agent that supports ECS Service Connect.

Or, you can simply create a cluster via the AWS Command Line Interface (AWS CLI) with the --service-connect-defaults parameter, passing a default Cloud Map namespace name for service discovery purposes.

$ aws ecs create-cluster \
     --cluster-name "svc-cluster-2" \
     --service-connect-defaults namespace="svc-namespace"

This command will create an ECS cluster and a Cloud Map namespace with that name on your behalf. If you would like to use an already existing Cloud Map namespace, you can simply pass its name here.

Next, let’s create a service with a task definition and expose your web user-interface server using ECS Service Connect.

$ aws ecs create-service \
--cluster "svc-cluster-2" \
--service-name "webui" \
--task-definition "webui-svc-cluster" \
--service-connect-configuration '{
  "enabled": true,
  "namespace": "svc-namespace",
  "services": [
    {
      "portName": "webui-port",
      "discoveryName": "webui-svc",
      "clientAliases": [
        {
          "port": 80,
          "dnsName": "webui-svc-domain"
        }
      ]
    }
  ]
}'

In this command, portName is a reference to the named container port in the task definition, and clientAliases assigns the port number (required) and DNS name (optional), overriding the discovery name that is used in the endpoint. Each service has an endpoint URL that contains the protocol, a DNS name, and the port. You can select the protocol and port name in the task definition or the ECS service configuration. For example, an endpoint could be http://webui:80, grpc://appserver:8080, or http://db-redis:8888.
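
Once the service is running, a task in any other service that shares the svc-namespace namespace can reach it through the alias, without knowing task IP addresses or deploying a load balancer. For example, from inside such a task:

# Call the webui service through its Service Connect alias
$ curl http://webui-svc-domain:80/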

In the ECS console, you can see this configuration of ECS Service Connect for the webui service in the svc-cluster-2 cluster.

As you can see, you can run the same workloads across different clusters with the same clientAlias and namespace name for high availability. ECS Service Connect will intelligently load balance the traffic to the ECS tasks. To connect to services running in different ECS clusters, you need to specify the same namespace name for all your ECS services that need to talk to each other. ECS Service Connect will make your services discoverable to all other services in the same namespace.

Improving Service Resilience with Observability Data
You can collect traffic metrics with ECS Service Connect observability capabilities. By default, for each ECS service, you can see the number of healthy and unhealthy endpoints, along with inbound and outbound traffic volume.

ECS Service Connect supports the HTTP/1, HTTP/2, gRPC, and TCP protocols, so you can collect the number of requests, the number of HTTP errors, and the average call latency. For gRPC and TCP, you can see the total number of active connections. All of these metrics are pushed to Amazon CloudWatch, or to other AWS analytics services via custom log routing.

In the Advanced menu, you can publish ECS Service Connect Agent logs to help with debugging in case of issues.

These metrics are only visible in the original interface of the CloudWatch console; switch to that interface to see the additional metric dimensions “discovery name” and “target discovery name” under the ECS grouping.

The default settings provide you with a starting point for building resilient applications, and you can fine-tune parameters to limit the impact of failures, latency spikes, and network fluctuations on your application behavior using the AWS Management Console or dedicated ECS APIs.
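
For example, Service Connect settings can be adjusted later on an existing service with update-service; the minimal sketch below simply turns the feature off for the webui service created earlier, and the same --service-connect-configuration JSON shown above can be passed to change other settings:

$ aws ecs update-service \
    --cluster "svc-cluster-2" \
    --service "webui" \
    --service-connect-configuration '{"enabled": false}'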

Now Available
Amazon ECS Service Connect is available in all commercial Regions where Amazon ECS is available, except the AWS China Regions. ECS Service Connect is fully supported in AWS CloudFormation, AWS CDK, AWS Copilot, and AWS Proton for infrastructure provisioning, code deployments, and monitoring of your services. To learn more, see the Amazon ECS Service Connect Developer Guide.

My colleagues Hemanth AVS, Senior Container Specialist SA, and Satya Vajrapu, Senior DevOps Consultant, prepared a hands-on workshop to demonstrate an example of ECS Service Connect. Join CON303 – Networking, service mesh, and service discovery with Amazon ECS when you attend AWS re:Invent 2022.

Give it a try, and please send feedback to AWS re:Post for Amazon ECS or through your usual AWS support contacts.

Channy

AWS Digital Sovereignty Pledge: Control without compromise

Post Syndicated from Matt Garman original https://aws.amazon.com/blogs/security/aws-digital-sovereignty-pledge-control-without-compromise/

French | German | Italian | Japanese | Korean

We’ve always believed that for the cloud to realize its full potential it would be essential that customers have control over their data. Giving customers this sovereignty has been a priority for AWS since the very beginning when we were the only major cloud provider to allow customers to control the location and movement of their data. The importance of this foundation has only grown over the last 16 years as the cloud has become mainstream, and governments and standards bodies continue to develop security, data protection, and privacy regulations.

Today, having control over digital assets, or digital sovereignty, is more important than ever.

As we’ve innovated and expanded to offer the world’s most capable, scalable, and reliable cloud, we’ve continued to prioritize making sure customers are in control and able to meet regulatory requirements anywhere they operate. What this looks like varies greatly across industries and countries. In many places around the world, like in Europe, digital sovereignty policies are evolving rapidly. Customers are facing an incredible amount of complexity, and over the last 18 months, many have told us they are concerned that they will have to choose between the full power of AWS and a feature-limited sovereign cloud solution that could hamper their ability to innovate, transform, and grow. We firmly believe that customers shouldn’t have to make this choice.

This is why today we’re introducing the AWS Digital Sovereignty Pledge—our commitment to offering all AWS customers the most advanced set of sovereignty controls and features available in the cloud.

AWS already offers a range of data protection features, accreditations, and contractual commitments that give customers control over where they locate their data, who can access it, and how it is used. We pledge to expand on these capabilities to allow customers around the world to meet their digital sovereignty requirements without compromising on the capabilities, performance, innovation, and scale of the AWS Cloud. At the same time, we will continue to work to deeply understand the evolving needs and requirements of both customers and regulators, and rapidly adapt and innovate to meet them.

Sovereign-by-design

Our approach to delivering on this pledge is to continue to make the AWS Cloud sovereign-by-design—as it has been from day one. Early in our history, we received a lot of input from customers in industries like financial services and healthcare—customers who are among the most security- and data privacy-conscious organizations in the world—about what data protection features and controls they would need to use the cloud. We developed AWS encryption and key management capabilities, achieved compliance accreditations, and made contractual commitments to satisfy the needs of our customers. As customers’ requirements evolve, we evolve and expand the AWS Cloud. A couple of recent examples include the data residency guardrails we added to AWS Control Tower (a service for governing AWS environments) late last year, which give customers even more control over the physical location of where customer data is stored and processed. In February 2022, we announced AWS services that adhere to the Cloud Infrastructure Service Providers in Europe (CISPE) Data Protection Code of Conduct, giving customers an independent verification and an added level of assurance that our services can be used in compliance with the General Data Protection Regulation (GDPR). These capabilities and assurances are available to all AWS customers.

We pledge to continue to invest in an ambitious roadmap of capabilities for data residency, granular access restriction, encryption, and resilience:

1. Control over the location of your data

Customers have always controlled the location of their data with AWS. For example, currently in Europe, customers have the choice to deploy their data into any of eight existing Regions. We commit to deliver even more services and features to protect our customers’ data. We further commit to expanding on our existing capabilities to provide even more fine-grained data residency controls and transparency. We will also expand data residency controls for operational data, such as identity and billing information.

2. Verifiable control over data access

We have designed and delivered first-of-a-kind innovation to restrict access to customer data. The AWS Nitro System, which is the foundation of AWS computing services, uses specialized hardware and software to protect data from outside access during processing on Amazon Elastic Compute Cloud (Amazon EC2). By providing a strong physical and logical security boundary, Nitro is designed to enforce restrictions so that nobody, including anyone in AWS, can access customer workloads on EC2. We commit to continue to build additional access restrictions that limit all access to customer data unless requested by the customer or a partner they trust.

3. The ability to encrypt everything everywhere

Currently, we give customers features and controls to encrypt data, whether in transit, at rest, or in memory. All AWS services already support encryption, with most also supporting encryption with customer managed keys that are inaccessible to AWS. We commit to continue to innovate and invest in additional controls for sovereignty and encryption features so that our customers can encrypt everything everywhere with encryption keys managed inside or outside the AWS Cloud.

4. Resilience of the cloud

It is not possible to achieve digital sovereignty without resiliency and survivability. Control over workloads and high availability are essential in the case of events like supply chain disruption, network interruption, and natural disaster. Currently, AWS delivers the highest network availability of any cloud provider. Each AWS Region is comprised of multiple Availability Zones (AZs), which are fully isolated infrastructure partitions. To better isolate issues and achieve high availability, customers can partition applications across multiple AZs in the same AWS Region. For customers that are running workloads on premises or in intermittently connected or remote use cases, we offer services that provide specific capabilities for offline data and remote compute and storage. We commit to continue to enhance our range of sovereign and resilient options, allowing customers to sustain operations through disruption or disconnection.

Earning trust through transparency and assurances

At AWS, earning customer trust is the foundation of our business. We understand that protecting customer data is key to achieving this. We also know that trust must continue to be earned through transparency. We are transparent about how our services process and transfer data. We will continue to challenge requests for customer data from law enforcement and government agencies. We provide guidance, compliance evidence, and contractual commitments so that our customers can use AWS services to meet compliance and regulatory requirements. We commit to continuing to provide the transparency and business flexibility needed to meet evolving privacy and sovereignty laws.

Navigating changes as a team

Helping customers protect their data in a world with changing regulations, technology, and risks takes teamwork. We would never expect our customers to go it alone. Our trusted partners play a prominent role in bringing solutions to customers. For example, in Germany, T-Systems (part of Deutsche Telekom) offers Data Protection as a Managed Service on AWS. It provides guidance to ensure data residency controls are properly configured, offering services for the configuration and management of encryption keys and expertise to help guide their customers in addressing their digital sovereignty requirements in the AWS Cloud. We are doubling down with local partners that our customers trust to help address digital sovereignty requirements.

We are committed to helping our customers meet digital sovereignty requirements. We will continue to innovate sovereignty features, controls, and assurances within the global AWS Cloud and deliver them without compromise to the full power of AWS.

 

Matt Garman


Matt is currently the Senior Vice President of AWS Sales, Marketing and Global Services at AWS, and also sits on Amazon’s executive leadership S-Team. Matt joined Amazon in 2006, and has held several leadership positions in AWS over that time. Matt previously served as Vice President of the Amazon EC2 and Compute Services businesses for AWS for over 10 years. Matt was responsible for P&L, product management, and engineering and operations for all compute and storage services in AWS. He started at Amazon when AWS first launched in 2006 and served as one of the first product managers, helping to launch the initial set of AWS services. Prior to Amazon, he spent time in product management roles at early stage Internet startups. Matt earned a BS and MS in Industrial Engineering from Stanford University, and an MBA from the Kellogg School of Management at Northwestern University.

 


French version

AWS Digital Sovereignty Pledge: le contrôle sans compromis

Nous avons toujours pensé que pour que le cloud révèle son entier potentiel, il était essentiel que les clients aient le contrôle de leurs données. Garantir à nos clients cette souveraineté a été la priorité d’AWS depuis l’origine, lorsque nous étions le seul grand fournisseur de cloud à permettre aux clients de contrôler la localisation et le flux de leurs données. L’importance de cette démarche n’a cessé de croître ces 16 dernières années, à mesure que le cloud s’est démocratisé et que les gouvernements et les organismes de régulation ont développé des réglementations en matière de sécurité, de protection des données et de confidentialité.

Aujourd’hui, le contrôle des ressources numériques, ou souveraineté numérique, est plus important que jamais.

Tout en innovant et en nous développant pour offrir le cloud le plus performant, évolutif et fiable au monde, nous avons continué à ériger comme priorité la garantie que nos clients gardent le contrôle et soient en mesure de répondre aux exigences réglementaires partout où ils opèrent. Ces exigences varient considérablement selon les secteurs et les pays. Dans de nombreuses régions du monde, comme en Europe, les politiques de souveraineté numérique évoluent rapidement. Les clients sont confrontés à une incroyable complexité. Au cours des dix-huit derniers mois, ils nous ont rapporté qu’ils craignaient de devoir choisir entre les vastes possibilités offertes par les services d’AWS et une solution aux fonctionnalités limitées qui pourrait entraver leur capacité à innover, à se transformer et à se développer. Nous sommes convaincus que les clients ne devraient pas avoir à faire un tel choix.

C’est pourquoi nous présentons aujourd’hui l’AWS Digital Sovereignty Pledge – notre engagement à offrir à tous les clients AWS l’ensemble le plus avancé d’outils et de fonctionnalités de contrôle disponibles dans le cloud au service de la souveraineté.

AWS propose déjà une série de fonctionnalités de protection des données, de certifications et d’engagements contractuels qui donnent à nos clients le plein contrôle de la localisation de leurs données, de leur accès et de leur utilisation. Nous nous engageons à développer ces capacités pour permettre à nos clients du monde entier de répondre à leurs exigences en matière de souveraineté numérique, sans faire de compromis sur les capacités, les performances, l’innovation et la portée du cloud AWS. En parallèle, nous continuerons à travailler pour comprendre en profondeur l’évolution des besoins et des exigences des clients et des régulateurs, et nous nous adapterons et innoverons rapidement pour y répondre.

Souverain dès la conception

Pour respecter l’AWS Digital Sovereignty Pledge, notre approche est de continuer à rendre le cloud AWS souverain dès sa conception, comme il l’a été dès le premier jour. Aux débuts de notre histoire, nous recevions de nombreux commentaires de nos clients de secteurs tels que les services financiers et la santé – des clients qui comptent parmi les organisations les plus soucieuses de la sécurité et de la confidentialité des données dans le monde – sur les fonctionnalités et les contrôles de protection des données dont ils auraient besoin pour leur utilisation du cloud. Nous avons développé des compétences en matière de chiffrement et de gestion des données, obtenu des certifications de conformité et pris des engagements contractuels pour répondre aux besoins de nos clients. Nous développons le cloud AWS à mesure que les exigences des clients évoluent. Nous pouvons citer, parmi les exemples récents, les data residency guardrails (les garde-fous de la localisation des données) que nous avons ajoutés en fin d’année dernière à l’AWS Control Tower (un service de gestion des environnements AWS) et qui donnent aux clients davantage de contrôle sur l’emplacement physique de leurs données, où elles sont stockées et traitées. En février 2022, nous avons annoncé de nouveaux services AWS adhérant au Code de Conduite sur la protection des données de l’association CISPE (Cloud Infrastructure Service Providers in Europe (CISPE)). Ils apportent à nos clients une vérification indépendante et un niveau de garantie supplémentaire attestant que nos services peuvent être utilisés conformément au Règlement général sur la protection des données (RGPD). Ces capacités et ces garanties sont disponibles pour tous les clients d’AWS.

Nous nous engageons à poursuivre nos investissements conformément à une ambitieuse feuille de route pour le, , développement de capacités au service de la localisation des données, de la restriction d’accès granulaire (pratique consistant à accorder à des utilisateurs spécifiques différents niveaux d’accès à une ressource particulière), de chiffrement et de résilience :

1. Contrôle de l’emplacement de vos données

AWS a toujours permis à ses clients de contrôler l’emplacement de leurs données. Aujourd’hui en Europe, par exemple, les clients ont le choix de déployer leurs données dans l’une des huit Régions existantes. Nous nous engageons à fournir encore plus de services et de capacités pour protéger les données de nos clients. Nous nous engageons également à développer nos capacités existantes pour fournir des contrôles de localisation des données encore plus précis et transparents. Nous allons également étendre les contrôles de localisation des données pour les données opérationnelles, telles que les informations relatives à l’identité et à la facturation.

2. Contrôle fiable de l’accès aux données

Nous avons conçu et fourni une innovation unique en son genre pour restreindre l’accès aux données clients. Le système AWS Nitro, qui constitue la base des services informatiques d’AWS, utilise du matériel et des logiciels spécialisés pour protéger les données contre tout accès extérieur pendant leur traitement sur les serveurs EC2. En fournissant une solide barrière de sécurité physique et logique, Nitro est conçu pour empêcher, y compris au sein d’AWS, l’accès à ces données sur EC2. Nous nous engageons à continuer à développer des restrictions d’accès supplémentaires qui limitent tout accès aux données de nos clients, sauf indication contraire de la part du client ou de l’un de ses prestataires de confiance.

3. La possibilité de tout chiffrer, partout

Aujourd’hui, nous offrons à nos clients des fonctionnalités et des outils de contrôle pour chiffrer les données, qu’elles soient en transit, au repos ou en mémoire. Tous les services AWS prennent déjà en charge le chiffrement, la plupart permettant également le chiffrement sur des clés gérées par le client et inaccessibles à AWS. Nous nous engageons à continuer d’innover et d’investir dans des outils de contrôle au service de la souveraineté et des fonctionnalités de chiffrement supplémentaires afin que nos clients puissent chiffrer l’ensemble de leurs données partout, avec des clés de chiffrement gérées à l’intérieur ou à l’extérieur du cloud AWS.

4. La résilience du cloud

La souveraineté numérique est impossible sans résilience et sans capacités de continuité d’activité lors de crise majeure. Le contrôle des charges de travail et la haute disponibilité de réseau sont essentiels en cas d’événements comme une rupture de la chaîne d’approvisionnement, une interruption du réseau ou encore une catastrophe naturelle. Actuellement, AWS offre la plus haute disponibilité de réseau de tous les fournisseurs de cloud. Chaque Région AWS est composée de plusieurs zones de disponibilité (AZ), qui sont des portions d’infrastructure totalement isolées. Pour mieux isoler les difficultés et obtenir une haute disponibilité de réseau, les clients peuvent répartir les applications sur plusieurs zones dans la même Région AWS. Pour les clients qui exécutent des charges de travail sur place ou dans des cas d’utilisation à distance ou connectés par intermittence, nous proposons des services qui offrent des capacités spécifiques pour les données hors ligne, le calcul et le stockage à distance. Nous nous engageons à continuer d’améliorer notre gamme d’options souveraines et résilientes, permettant aux clients de maintenir leurs activités en cas de perturbation ou de déconnexion.

Gagner la confiance par la transparence et les garanties

Chez AWS, gagner la confiance de nos clients est le fondement de notre activité. Nous savons que la protection des données de nos clients est essentielle pour y parvenir. Nous savons également que leur confiance s’obtient par la transparence. Nous sommes transparents sur la manière dont nos services traitent et transfèrent leurs données. Nous continuerons à nous contester les demandes de données des clients émanant des autorités judiciaires et des organismes gouvernementaux. Nous fournissons des conseils, des preuves de conformité et des engagements contractuels afin que nos clients puissent utiliser les services AWS pour répondre aux exigences de conformité et de réglementation. Nous nous engageons à continuer à fournir la transparence et la flexibilité commerciale nécessaires pour répondre à l’évolution du cadre réglementaire relatif à la confidentialité et à la souveraineté des données.

Naviguer en équipe dans un monde en perpétuel changement

Aider les clients à protéger leurs données dans un monde où les réglementations, les technologies et les risques évoluent nécessite un travail d’équipe. Nous ne pouvons-nous résoudre à ce que nos clients relèvent seuls ces défis. Nos partenaires de confiance jouent un rôle prépondérant dans l’apport de solutions aux clients. Par exemple, en Allemagne, T-Systems (qui fait partie de Deutsche Telekom) propose la protection des données en tant que service géré sur AWS. L’entreprise fournit des conseils pour s’assurer que les contrôles de localisation des données sont correctement configurés, offrant des services pour la configuration en question, la gestion des clés de chiffrements et une expertise pour aider leurs clients à répondre à leurs exigences de souveraineté numérique dans le cloud AWS. Nous redoublons d’efforts avec les partenaires locaux en qui nos clients ont confiance pour les aider à répondre à ces exigences de souveraineté numérique.

Nous nous engageons à aider nos clients à répondre aux exigences de souveraineté numérique. Nous continuerons d’innover en matière de fonctionnalités, de contrôles et de garanties de souveraineté dans le cloud mondial d’AWS, tout en fournissant sans compromis et sans restriction la pleine puissance d’AWS.


German version

AWS Digital Sovereignty Pledge: Kontrolle ohne Kompromisse

Wir waren immer der Meinung, dass die Cloud ihr volles Potenzial nur dann erschließen kann, wenn Kunden die volle Kontrolle über ihre Daten haben. Diese Datensouveränität des Kunden genießt bei AWS schon seit den Anfängen der Cloud Priorität, als wir der einzige große Anbieter waren, bei dem Kunden sowohl Kontrolle über den Speicherort als auch über die Übertragung ihrer Daten hatten. Die Bedeutung dieser Grundsätze hat über die vergangenen 16 Jahre stetig zugenommen: Die Cloud ist im Mainstream angekommen, sowohl Gesetzgeber als auch Regulatoren entwickeln ihre Vorgaben zu IT-Sicherheit und Datenschutz stetig weiter.

Kontrolle bzw. Souveränität über digitale Ressourcen ist heute wichtiger denn je.

Unsere Innovationen und Entwicklungen haben stets darauf abgezielt, unseren Kunden eine Cloud zur Verfügung zu stellen, die skalierend und zugleich verlässlich global nutzbar ist. Dies beinhaltet auch unseren Kunden die Kontrolle zu gewährleisten die sie benötigen, damit sie alle ihre regulatorischen Anforderungen erfüllen können. Regulatorische Anforderungen sind länder- und sektorspezifisch. Vielerorts – wie auch in Europa – entstehen neue Anforderungen und Regularien zu digitaler Souveränität, die sich rasant entwickeln. Kunden sehen sich einer hohen Anzahl verschiedenster Regelungen ausgesetzt, die eine enorme Komplexität mit sich bringen. Innerhalb der letzten achtzehn Monate haben sich viele unserer Kunden daher mit der Sorge an uns gewandt, vor eine Wahl gestellt zu werden: Entweder die volle Funktionalität und Innovationskraft von AWS zu nutzen, oder auf funktionseingeschränkte „souveräne“ Cloud-Lösungen zurückzugreifen, deren Kapazität für Innovation, Transformation, Sicherheit und Wachstum aber limitiert ist. Wir sind davon überzeugt, dass Kunden nicht vor diese „Wahl“ gestellt werden sollten.

Deswegen stellen wir heute den „AWS Digital Sovereignty Pledge“ vor – unser Versprechen allen AWS Kunden, ohne Kompromisse die fortschrittlichsten Souveränitäts-Kontrollen und Funktionen in der Cloud anzubieten.

AWS bietet schon heute eine breite Palette an Datenschutz-Funktionen, Zertifizierungen und vertraglichen Zusicherungen an, die Kunden Kontrollmechanismen darüber geben, wo ihre Daten gespeichert sind, wer darauf Zugriff erhält und wie sie verwendet werden. Wir werden diese Palette so erweitern, dass Kunden überall auf der Welt, ihre Anforderungen an Digitale Souveränität erfüllen können, ohne auf Funktionsumfang, Leistungsfähigkeit, Innovation und Skalierbarkeit der AWS Cloud verzichten zu müssen. Gleichzeitig werden wir weiterhin daran arbeiten, unser Angebot flexibel und innovativ an die sich weiter wandelnden Bedürfnisse und Anforderungen von Kunden und Regulatoren anzupassen.

Sovereign-by-design

Wir werden den „AWS Digital Sovereignty Pledge“ so umsetzen, wie wir das seit dem ersten Tag machen und die AWS Cloud gemäß unseres „sovereign-by-design“ Ansatz fortentwickeln. Wir haben von Anfang an, durch entsprechende Funktions- und Kontrollmechanismen für spezielle IT-Sicherheits- und Datenschutzanforderungen aus den verschiedensten regulierten Sektoren Lösungen gefunden, die besonders sensiblen Branchen wie beispielsweise dem Finanzsektor oder dem Gesundheitswesen frühzeitig ermöglichten, die Cloud zu nutzen. Auf dieser Basis haben wir die AWS Verschlüsselungs- und Schlüsselmanagement-Funktionen entwickelt, Compliance-Akkreditierungen erhalten und vertragliche Zusicherungen gegeben, welche die Bedürfnisse unserer Kunden bedienen. Dies ist ein stetiger Prozess, um die AWS Cloud auf sich verändernde Kundenanforderungen anzupassen. Ein Beispiel dafür sind die Data Residency Guardrails, um die wir AWS Control Tower Ende letzten Jahres erweitert haben. Sie geben Kunden die volle Kontrolle über die physikalische Verortung ihrer Daten zu Speicherungs- und Verarbeitungszwecken. Dieses Jahr haben wir einen Katalog von AWS Diensten veröffentlicht, die den Cloud Infrastructure Service Providers in Europe (CISPE) erfüllen. Damit verfügen Kunden über eine unabhängige Verifizierung und zusätzliche Versicherung, dass unsere Dienste im Einklang mit der DSGVO verwendet werden können. Diese Instrumente und Nachweise stehen schon heute allen AWS Kunden zur Verfügung.

Wir haben uns ehrgeizige Ziele für unsere Roadmap gesetzt und investieren kontinuierlich in Funktionen für die Verortung von Daten (Datenresidenz), granulare Zugriffsbeschränkungen, Verschlüsselung und Resilienz:

1. Kontrolle über den Ort der Datenspeicherung

Bei AWS hatten Kunden immer schon die Kontrolle über Datenresidenz, also den Ort der Datenspeicherung. Aktuell können Kunden ihre Daten z.B. in 8 bestehenden Regionen innerhalb Europas speichern, von denen 6 innerhalb der Europäischen Union liegen. Wir verpflichten uns dazu, noch mehr Dienste und Funktionen zur Verfügung zustellen, die dem Schutz der Daten unserer Kunden dienen. Ebenso verpflichten wir uns, noch granularere Kontrollen für Datenresidenz und Transparenz auszubauen. Wir werden auch zusätzliche Kontrollen für Daten einführen, die insbesondere die Bereiche Identitäts- und Abrechnungs-Management umfassen.

2. Verifizierbare Kontrolle über Datenzugriffe

Mit dem AWS Nitro System haben wir ein innovatives System entwickelt, welches unberechtigte Zugriffsmöglichkeiten auf Kundendaten verhindert: Das Nitro System ist die Grundlage der AWS Computing Services (EC2). Es verwendet spezialisierte Hardware und Software, um den Schutz von Kundendaten während der Verarbeitung auf EC2 zu gewährleisten. Nitro basiert auf einer starken physikalischen und logischen Sicherheitsabgrenzung und realisiert damit Zugriffsbeschränkungen, die unautorisierte Zugriffe auf Kundendaten auf EC2 unmöglich machen – das gilt auch für AWS als Betreiber. Wir werden darüber hinaus für weitere AWS Services zusätzliche Mechanismen entwickeln, die weiterhin potentielle Zugriffe auf Kundendaten verhindern und nur in Fällen zulassen, die explizit durch Kunden oder Partner ihres Vertrauens genehmigt worden sind.

3. Möglichkeit der Datenverschlüsselung überall und jederzeit

Gegenwärtig können Kunden Funktionen und Kontrollen verwenden, die wir zur Verschlüsselung von Daten während der Übertragung, persistenten Speicherungen oder Verarbeitung in flüchtigem Speicher anbieten. Alle AWS Dienste unterstützen schon heute Datenverschlüsselung, die meisten davon auf Basis der Customer Managed Keys – d.h. Schlüssel, die von Kunden verwaltet werden und für AWS nicht zugänglich sind. Wir werden auch in diesem Bereich weiter investieren und Innovationen vorantreiben. Es wird zusätzliche Kontrollen für Souveränität und Verschlüsselung geben, damit unsere Kunden alles jederzeit und überall verschlüsseln können – und das mit Schlüsseln, die entweder durch AWS oder durch den Kunden selbst bzw. ausgewählte Partner verwaltet werden können.

4. Resilienz der Cloud

Digitale Souveränität lässt sich nicht ohne Ausfallsicherheit und Überlebensfähigkeit herstellen. Die Kontrolle über Workloads und hohe Verfügbarkeit z.B. in Fällen von Lieferkettenstörungen, Netzwerkausfällen und Naturkatastrophen ist essenziell. Aktuell bietet AWS die höchste Netzwerk-Verfügbarkeit unter allen Cloud-Anbietern. Jede AWS Region besteht aus mehreren Availability Zones (AZs), die jeweils vollständig isolierte Partitionen unserer Infrastruktur sind. Um Probleme besser zu isolieren und eine hohe Verfügbarkeit zu erreichen, können Kunden Anwendungen auf mehrere AZs in derselben Region verteilen. Kunden, die Workloads on-premises oder in Szenarien mit sporadischer Netzwerk-Anbindung betreiben, bieten wir Dienste an, welche auf Offline-Daten und Remote Compute und Storage Anwendungsfälle angepasst sind. Wir werden unser Angebot an souveränen und resilienten Optionen ausbauen und fortentwickeln, damit Kunden den Betrieb ihrer Workloads auch bei Trennungs- und Disruptionsszenarien aufrechterhalten können.

Vertrauen durch Transparenz und Zusicherungen

Der Aufbau eines Vertrauensverhältnisses mit unseren Kunden, ist die Grundlage unserer Geschäftsbeziehung bei AWS. Wir wissen, dass der Schutz der Daten unserer Kunden der Schlüssel dazu ist. Wir wissen auch, dass Vertrauen durch fortwährende Transparenz verdient und aufgebaut wird. Wir bieten schon heute transparenten Einblick, wie unsere Dienste Daten verarbeiten und übertragen. Wir werden auch in Zukunft Anfragen nach Kundendaten durch Strafverfolgungsbehörden und Regierungsorganisationen konsequent anfechten. Wir bieten Rat, Compliance-Nachweise und vertragliche Zusicherungen an, damit unsere Kunden AWS Dienste nutzen und gleichzeitig ihre Compliance und regulatorischen Anforderungen erfüllen können. Wir werden auch in Zukunft die Transparenz und Flexibilität an den Tag legen, um auf sich weiterentwickelnde Datenschutz- und Soveränitäts-Regulierungen passende Antworten zu finden.

Den Wandel als Team bewältigen

Regulatorik, Technologie und Risiken sind stetigem Wandel unterworfen: Kunden dabei zu helfen, ihre Daten in diesem Umfeld zu schützen, ist Teamwork. Wir würden nie erwarten, dass unsere Kunden das alleine bewältigen müssen. Unsere Partner genießen hohes Vertrauen und spielen eine wichtige Rolle dabei, Lösungen für Kunden zu entwickeln. Zum Beispiel bietet T-Systems in Deutschland Data Protection as a Managed Service auf AWS an. Das Angebot umfasst Hilfestellungen bei der Konfiguration von Kontrollen zur Datenresidenz, Zusatzdienste im Zusammenhang mit der Schlüsselverwaltung für kryptographische Verfahren und Rat bei der Erfüllung von Anforderungen zu Datensouveränität in der AWS Cloud. Wir werden die Zusammenarbeit mit lokalen Partnern, die besonderes Vertrauen bei unseren gemeinsamen Kunden genießen, intensivieren, um bei der Erfüllung der Digitalen Souveräntitätsanforderungen zu unterstützen.

Wir verpflichten uns dazu unseren Kunden bei der Erfüllung ihre Anforderungen an digitale Souveränität zu helfen. Wir werden weiterhin Souveränitäts-Funktionen, Kontrollen und Zusicherungen für die globale AWS Cloud entwickeln, die das gesamte Leistungsspektrum von AWS erschließen.


Italian version

AWS Digital Sovereignty Pledge: Controllo senza compromessi

Abbiamo sempre creduto che, affinché il cloud possa realizzare in pieno il potenziale, sia essenziale che i clienti abbiano il controllo dei propri dati. Offrire ai clienti questa “sovranità” è sin dall’inizio una priorità per AWS, da quando eravamo l’unico grande cloud provider a consentire ai clienti di controllare la localizzazione e lo spostamento dei propri dati. L’importanza di questo principio è aumentata negli ultimi 16 anni, man mano che il cloud ha iniziato a diffondersi e i governi e gli organismi di standardizzazione hanno continuato a sviluppare normative in materia di sicurezza, protezione dei dati e privacy.

Oggi, avere il controllo sulle risorse digitali, sulla sovranità digitale, è più importante che mai.

Nell’innovare ed espandere l’offerta cloud più completa, scalabile e affidabile al mondo, la nostra priorità è stata assicurarci che i clienti – ovunque operino – abbiano il controllo e siano in grado di soddisfare i requisiti normativi. Il contesto varia notevolmente tra i settori e i paesi. In molti luoghi del mondo, come in Europa, le politiche di sovranità digitale si stanno evolvendo rapidamente. I clienti stanno affrontando un crescente livello di complessità, e negli ultimi diciotto mesi molti di essi ci hanno detto di essere preoccupati di dover scegliere tra usare AWS in tutta la sua potenza e una soluzione cloud sovrana con funzionalità limitate che potrebbe ostacolare la loro capacità di innovazione, trasformazione e crescita. Crediamo fermamente che i clienti non debbano fare questa scelta.

Ecco perché oggi presentiamo l’AWS Digital Sovereignty Pledge, il nostro impegno a offrire a tutti i clienti AWS l’insieme più avanzato di controlli e funzionalità sulla sovranità digitale disponibili nel cloud.

AWS offre già una gamma di funzionalità di protezione dei dati, accreditamenti e impegni contrattuali che consentono ai clienti di controllare la localizzazione, l’accesso e l’utilizzo dei propri dati. Ci impegniamo a espandere queste funzionalità per consentire ai clienti di tutto il mondo di soddisfare i propri requisiti di sovranità digitale senza compromettere le capacità, le prestazioni, l’innovazione e la scalabilità del cloud AWS. Allo stesso tempo, continueremo a lavorare per comprendere a fondo le esigenze e i requisiti in evoluzione sia dei clienti che delle autorità di regolamentazione, adattandoci e innovando rapidamente per soddisfarli.

Sovereign-by-design

Our approach to keeping this commitment is to continue making the AWS Cloud sovereign-by-design, as it has been since day one. Early in our history, we received a lot of input from customers in industries such as financial services and healthcare (among the most security- and data-privacy-conscious organizations in the world) about the data protection capabilities and controls they need in order to use the cloud. We developed AWS encryption and key management capabilities, achieved compliance accreditations, and made contractual commitments to satisfy our customers' needs. As their requirements evolve, we evolve and expand the AWS Cloud.

A couple of recent examples include the data residency guardrails we added to AWS Control Tower (a service for managing AWS environments) late last year, which give customers even greater control over the physical location where their customer data is stored and processed. In February 2022, we announced the AWS services that adhere to the Cloud Infrastructure Service Providers in Europe (CISPE) Data Protection Code of Conduct, giving customers independent verification and an added level of assurance that our services can be used in compliance with the General Data Protection Regulation (GDPR). These capabilities and assurances are available to all AWS customers.

We pledge to keep investing in an ambitious roadmap of capabilities for data residency, granular access restriction, encryption, and resilience:

1. Control over the location of your data

Customers have always controlled the location of their data with AWS. For example, in Europe today, customers can choose to deploy their data into any of the eight existing AWS Regions. We commit to delivering even more services and features to protect our customers' data, and to expanding our existing capabilities to provide even more fine-grained controls and transparency over data residency. We will also expand these controls to operational data, such as identity and billing information.

2. Verifiable control over data access

We have designed and delivered first-of-a-kind innovation to restrict access to customer data. The AWS Nitro System, which underpins AWS computing services, uses specialized hardware and software to protect data from outside access during processing on EC2. By providing a strong physical and logical security boundary, Nitro is designed to enforce restrictions so that nobody, not even anyone in AWS, can access customer workloads on EC2. We commit to continuing to build further access restrictions that limit all access to customer data unless requested by the customer or a partner they trust.

3. The ability to encrypt everything everywhere

Today, we give customers features and controls to encrypt data, whether it is in transit, at rest, or in memory. All AWS services already support encryption, and most also support encryption with customer-managed keys that are not accessible to AWS. We commit to continuing to innovate and invest in additional sovereignty controls and encryption features so that our customers can encrypt everything everywhere with encryption keys managed inside or outside the AWS Cloud.

4. Resilience of the cloud

It is not possible to achieve digital sovereignty without resilience and reliability. Control over workloads and high availability are essential in the face of events such as supply chain disruption, network interruption, and natural disasters. Today, AWS delivers the highest network availability of any cloud provider. Each AWS Region is composed of multiple Availability Zones (AZs), which are fully isolated infrastructure partitions. To better isolate issues and achieve high availability, customers can partition applications across multiple AZs in the same AWS Region. For customers who run workloads on premises, or in intermittently connected or remote use cases, we offer services that provide specific capabilities for offline data and remote compute and storage. We commit to continuing to enhance our range of options for data sovereignty and resilience, allowing customers to sustain operations through disruption or disconnection.

Earning trust through transparency and assurances

At AWS, earning customer trust is the foundation of our business. We understand that protecting customer data is key to achieving this, and we know that trust is also earned through transparency. That is why we are transparent about how our services process and move data. We will continue to challenge requests for customer data from law enforcement and government agencies. We provide guidance, compliance evidence, and contractual commitments so that our customers can use AWS services to meet compliance and regulatory requirements. Finally, we commit to continuing to provide the transparency and business flexibility needed to meet evolving privacy and data sovereignty laws.

Navigating changes as a team

Helping customers protect their data in a world of changing regulations, technology, and risks takes teamwork. We would never expect our customers to go it alone. Our trusted partners play a prominent role in bringing solutions to customers. For example, in Germany, T-Systems (part of Deutsche Telekom) offers Data Protection as a Managed Service on AWS. It provides guidance to ensure that data residency controls are configured correctly, offers services for the configuration and management of encryption keys, and brings expertise to help customers address their digital sovereignty requirements in the AWS Cloud, where we are working with local partners that our customers trust.

We are committed to helping our customers meet digital sovereignty requirements. We will continue to innovate the sovereignty features, controls, and assurances within the global AWS Cloud, and deliver them without compromising the full power of AWS.


Japanese version

AWS Digital Sovereignty Pledge: Control without compromise

By Matt Garman, Senior Vice President, AWS Sales, Marketing and Global Services

We have always believed that for the cloud to reach its full potential, it is essential that customers have control over their data. Giving customers this control has been a top priority for Amazon Web Services (AWS) since the very beginning, when we were the only major cloud provider to allow customers to control the location and movement of their data. The importance of this control has only grown over the last 16 years as the cloud has become mainstream and governments and standards bodies continue to develop security, data protection, and privacy regulations.

Today, having control over digital assets, or digital sovereignty, is more important than ever.

As we have innovated and expanded to offer the world's most capable, scalable, and reliable cloud, we have always made it a top priority that customers can stay in control and meet regulatory requirements wherever they operate. How this plays out varies greatly by industry and country. In many regions around the world, such as Europe, digital sovereignty policies are evolving rapidly. Customers are facing enormous complexity, and over the last 18 months, many have told us they are concerned that they will have to choose between the full power of AWS and a feature-limited cloud solution that could hamper their ability to innovate, transform, and grow. We firmly believe that customers should not have to make this choice.

That is why today we are introducing the AWS Digital Sovereignty Pledge: our commitment to offering all AWS customers the most advanced set of sovereignty controls and features available in the cloud.

AWS already offers a range of data protection features, accreditations, and contractual commitments that give customers control over where their data is located, who can access it, and how it is used. We pledge to expand on these capabilities so that customers around the world can meet their digital sovereignty requirements without compromising on the capabilities, performance, innovation, and scale of the AWS Cloud. At the same time, we will continue to work to deeply understand the evolving needs and requirements of both customers and regulators, and to adapt and innovate quickly to meet them.

Sovereign-by-design

Our approach to delivering on this pledge is to continue to make the AWS Cloud sovereign-by-design, as it has been from day one. Early in our history, we received a lot of input from customers in industries such as financial services and healthcare (some of the most security- and data-privacy-conscious organizations in the world) about the data protection capabilities and controls they needed in order to use the cloud. We developed AWS encryption and key management capabilities, achieved compliance certifications, and made contractual commitments to satisfy the needs of our customers. As customer requirements evolve, the AWS Cloud evolves and expands as well. A recent example is the data residency guardrails we added to AWS Control Tower (a service for managing AWS environments) late last year, which give customers even more detailed control over the physical location where their data is stored and processed. In February 2022, we announced AWS services that adhere to the Cloud Infrastructure Service Providers in Europe (CISPE) Data Protection Code of Conduct, giving customers independent verification and added assurance that our services can be used in compliance with the EU General Data Protection Regulation (GDPR). These capabilities and assurances are available to all AWS customers.

We pledge to continue to invest in the rollout and expansion of capabilities for data residency, fine-grained access restriction, encryption, and resilience:

1. Control over the location of your data

Customers have always been able to control the location of their data with AWS. For example, in Europe today, customers can place their data in any of the eight existing Regions. We are committed to delivering even more services and features to protect our customers' data. We are also working to expand existing capabilities to provide even more fine-grained data residency controls and transparency, and we will expand data residency controls for operational data, such as identity and billing information.

2. Verifiable control over data access

We have designed and delivered first-of-a-kind innovation to restrict access to customer data. The AWS Nitro System, which is the foundation of AWS computing services, uses specialized hardware and software to protect data from outside access during processing on Amazon Elastic Compute Cloud (Amazon EC2). By providing a strong physical and logical security boundary, Nitro enforces restrictions so that nobody, including AWS employees, can access customer workloads on Amazon EC2. We will continue to build further access controls that restrict all access to customer data unless requested by the customer or a partner they trust.

3. The ability to encrypt everything everywhere

Today, we give customers features and controls to encrypt data, whether it is in transit, at rest, or in memory. All AWS services already support encryption, and most of them also support encryption with customer-managed keys that even AWS cannot access. We will continue to innovate and invest in additional sovereignty controls and encryption capabilities so that customers can encrypt everything everywhere with encryption keys managed inside or outside the AWS Cloud.

4. Resilience of the cloud

Digital sovereignty cannot be achieved without availability and resilience. Control of workloads and high availability are essential in the event of supply chain disruption, network interruption, or natural disaster. Today, AWS delivers higher network availability than any other cloud provider. Each AWS Region is composed of multiple Availability Zones (AZs), which are fully isolated infrastructure partitions. To better isolate potential issues and achieve high availability, customers can partition applications across multiple AZs in the same AWS Region. For customers running workloads on premises, or for intermittently connected or remote use cases, we offer services that provide specific capabilities for offline data and remote compute and storage. We will continue to expand our range of sovereign and resilient options so that customers can sustain operations through disruption or disconnection.

Earning trust through transparency and assurances

At AWS, earning customer trust is the foundation of our business. We understand that protecting customer data is essential to doing so, and we also know that maintaining trust requires transparency. We are transparent about how our services process and move data. We will continue to challenge requests for customer data from law enforcement and government agencies. We provide guidance, compliance evidence, and contractual commitments so that customers can use AWS services to meet their compliance and regulatory requirements. We will continue to provide the transparency and business flexibility needed to respond to evolving privacy and sovereignty laws.

Navigating changes as a team

Helping customers protect their data in a world of changing regulations, technology, and risks takes teamwork. We would never expect our customers to handle this alone. Trusted AWS partners play a prominent role in delivering solutions to customers. For example, in Germany, T-Systems (part of Deutsche Telekom) offers data protection as a managed service on AWS. It provides guidance to make sure data protection controls are configured correctly, along with services and expertise for configuring and managing encryption keys, helping customers address their digital sovereignty requirements in the AWS Cloud. We are strengthening our collaboration with local partners that our customers trust in order to help them address their digital sovereignty requirements.

We are committed to helping our customers meet their digital sovereignty requirements. We will continue to innovate the sovereignty features, controls, and assurances within the global AWS Cloud, and to deliver them without compromising the full power of AWS.


Korean version

AWS Digital Sovereignty Pledge: Control without compromise

By Matt Garman, Senior Vice President, AWS Sales, Marketing and Global Services

Amazon Web Services (AWS) has always believed that for the cloud to realize its full potential, customers must be able to control their data. From its earliest days, AWS was the only major cloud provider that allowed customers to control the location and movement of their data, and giving customers this control remains a top priority. Over the past 16 years, as the cloud has become mainstream and governments and standards bodies have continued to develop security, data protection, and privacy regulations, the importance of this principle has only grown.

Today, control over digital assets, or digital sovereignty, is more important than ever.

As AWS has focused on innovation and expansion to deliver the world's most capable, scalable, and reliable cloud, we have always made it a top priority that customers can stay in control and meet regulatory requirements everywhere they operate. How this plays out varies considerably by industry and country. Digital sovereignty policies are evolving rapidly in many parts of the world, including Europe. Customers face enormous complexity, and over the past 18 months many have told us they are concerned they may have to choose between a cloud service with the full capabilities of AWS and a feature-limited sovereign cloud solution that could hinder innovation, transformation, and growth. We firmly believe that customers should not have to make that choice.

That is why today we are introducing the AWS Digital Sovereignty Pledge, our commitment to offer all AWS customers the most advanced set of digital sovereignty controls and features available in the cloud.

AWS already offers a range of data protection capabilities, certifications, and contractual commitments that let customers control where their data is located, who can access it, and how it is used. We also pledge to keep expanding these capabilities so that customers around the world can meet their digital sovereignty requirements without compromising the capabilities, performance, innovation, or scale of the AWS Cloud. At the same time, we will continue to deeply understand the evolving needs of both customers and regulators, and to adapt and innovate quickly to meet them.

Sovereign-by-design

Our approach to delivering on this pledge is to design the AWS Cloud to be sovereign from the start, as it has been from the beginning. In the early days of the business, AWS gathered extensive input from customers in industries such as financial services and healthcare, organizations with some of the highest security and data privacy requirements in the world, about the data protection features and controls they needed in order to use the cloud. To meet those needs, AWS developed encryption and key management capabilities, achieved compliance certifications, and provided contractual commitments. As customer requirements have changed, the AWS Cloud has evolved and expanded. A few recent examples include the data residency guardrails added to AWS Control Tower (a service for managing AWS environments) late last year, which give customers far more control over the physical location where their customer data is stored and processed. In February 2022, we announced AWS services that comply with the Cloud Infrastructure Service Providers in Europe (CISPE) Data Protection Code of Conduct, providing customers with independent verification and additional assurance that our services can be used in compliance with the EU General Data Protection Regulation (GDPR). These capabilities and assurances apply to all AWS customers.

AWS pledges to continue investing in an ambitious roadmap for data residency, granular access restriction, encryption, and resilience capabilities, as follows:

1. Control over the location of your data

Customers have always controlled the location of their data with AWS. For example, in Europe today, customers can place their data in whichever of the eight existing Regions they prefer. AWS pledges to deliver more services and features to protect customer data, and to expand existing capabilities to provide even more granular data residency controls and transparency. In addition, we will expand data residency controls for operational data such as identity and billing information.

2. Verifiable control over data access

AWS has delivered a first-of-its-kind design that restricts access to customer data. The AWS Nitro System, the foundation of AWS computing services, uses specialized hardware and software to protect data from external access while it is being processed on EC2. By providing a strong physical and logical security boundary, Nitro is designed so that no one, including anyone at AWS, can access customer workloads on EC2. AWS pledges to continue building additional access restrictions that limit all access to customer data except when requested by the customer or a partner the customer trusts.

3. The ability to encrypt everything everywhere

Today, AWS gives customers the features and controls to encrypt data in transit, at rest, or in memory. All AWS services already support encryption, and most also support encryption with customer-managed keys that AWS cannot access. We pledge to keep innovating and investing in additional controls for digital sovereignty and encryption capabilities so that customers can encrypt everything everywhere with encryption keys managed inside or outside the AWS Cloud.

4. Resilience of the cloud

Digital sovereignty cannot be achieved without resilience and survivability. Workload control and high availability are critical in the event of supply chain failures, network outages, and natural disasters. Today, AWS offers the highest network availability of any cloud provider. Each AWS Region consists of multiple Availability Zones (AZs), which are fully isolated infrastructure partitions. To better isolate problems and achieve high availability, customers can partition applications across multiple AZs in the same AWS Region. For customers running workloads on premises, or in intermittently connected or remote use cases, we offer services that support specific capabilities for offline data and remote compute and storage. AWS pledges to continue strengthening its range of sovereign and resilient options so that customers can keep operating even through disruption or disconnection.

Earning trust through transparency and assurances

Earning customer trust is the foundation of the AWS business. We know that protecting customer data is key to doing so, and that trust must continue to be earned through transparency. AWS is transparent about how its services process and transfer data. We will continue to challenge requests for customer data from law enforcement and government agencies. AWS provides guidance, compliance evidence, and contractual commitments so that customers can use AWS services to meet compliance and regulatory requirements. AWS pledges to continue providing the transparency and business flexibility needed to comply with evolving privacy and digital sovereignty regulations in each region.

Navigating changes as a team

Helping customers protect their data in an environment of changing regulations, technology, and risks takes teamwork. We would never expect customers to do it alone. Trusted AWS partners play an important role in helping customers. For example, in Germany, T-Systems (part of Deutsche Telekom) offers data protection as a managed service on AWS. It provides guidance to ensure that data residency controls are configured correctly, along with services for configuring and managing encryption keys and the expertise customers need to address their digital sovereignty requirements in the AWS Cloud. AWS is working with trusted local partners so that customers can address their digital sovereignty requirements.

AWS is committed to supporting customers in meeting their digital sovereignty requirements. We will continue to innovate the sovereignty features, controls, and assurances within the global AWS Cloud, and deliver them without compromising the full capabilities of AWS.

Matt Garman

Matt is currently the Senior Vice President of Sales, Marketing and Global Services at AWS, and also sits on Amazon’s executive leadership S-Team. Matt joined Amazon in 2006, and has held several leadership positions in AWS over that time. Matt previously served as Vice President of the Amazon EC2 and Compute Services businesses for AWS for over 10 years. Matt was responsible for P&L, product management, and engineering and operations for all compute and storage services in AWS. He started at Amazon when AWS first launched in 2006 and served as one of the first product managers, helping to launch the initial set of AWS services. Prior to Amazon, he spent time in product management roles at early stage Internet startups. Matt earned a BS and MS in Industrial Engineering from Stanford University, and an MBA from the Kellogg School of Management at Northwestern University.

2022 Canadian Centre for Cyber Security Assessment Summary report available with 12 additional services

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/2022-canadian-centre-for-cyber-security-assessment-summary-report-available-with-12-additional-services/

We are pleased to announce the availability of the 2022 Canadian Centre for Cyber Security (CCCS) assessment summary report for Amazon Web Services (AWS). This assessment brings the total number of AWS services and features assessed in the Canada (Central) AWS Region to 132, including 12 additional AWS services. A copy of the summary assessment report is available for review and download on demand through AWS Artifact.

The full list of services in scope for the CCCS assessment is available on the AWS Services in Scope page. The 12 new services are:

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decisions to use cloud services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s Protected B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach). As of November 2022, 132 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the CCCS Medium cloud security profile. Meeting the CCCS Medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and reassesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and on customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; you can reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, ecommerce, and utilities industries.

AWS Security Profile: Sarah Currey, Delivery Practice Manager

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-sarah-currey-delivery-practice-manager/

In the weeks leading up to AWS re:Invent 2022, I’ll share conversations I’ve had with some of the humans who work in AWS Security and will be presenting at the conference, and get a sneak peek at their work and sessions. In this profile, I interviewed Sarah Currey, Delivery Practice Manager in World Wide Professional Services (ProServe).

How long have you been at AWS and what do you do in your current role?

I’ve been at AWS since 2019, and I’m a Security Practice Manager who leads a Security Transformation practice dedicated to helping customers build on AWS. I’m responsible for leading enterprise customers through a variety of transformative projects that involve adopting AWS services to help achieve and accelerate secure business outcomes.

In this capacity, I lead a team of awesome security builders, work directly with the security leadership of our customers, and—one of my favorite aspects of the job—collaborate with internal security teams to create enterprise security solutions.

How did you get started in security?

I come from a non-traditional background, but I’ve always had an affinity for security and technology. I started off learning HTML back in 2006 for my Myspace page (blast from the past, I know) and in college, I learned about offensive security by dabbling in penetration testing. I took an Information Systems class my senior year, but otherwise I wasn’t exposed to security as a career option. I’m from Nashville, TN, so the majority of people I knew were in the music or healthcare industries, and I took the healthcare industry path.

I started my career working at a government affairs firm in Washington, D.C. and then moved on to a healthcare practice at a law firm. I researched federal regulations and collaborated closely with staffers on Capitol Hill to educate them about controls to protect personal health information (PHI), and helped them to determine strategies to adhere to security, risk, and compliance frameworks such as HIPAA and NIST SP 800-53. Government regulations can lag behind technology, which creates interesting problems to solve. But in 2015, I was assigned to a project that was planned to last 20 years, and I decided I wanted to move into an industry that operated at a faster pace—and there was no better place than tech.

From there, I moved to a startup where I worked as a Project Manager responsible for securely migrating customers’ data to the software as a service (SaaS) environment they used and accelerating internal adoption of the environment. I often worked with software engineers and asked, “why is this breaking?” so they started teaching me about different aspects of the service. I interacted regularly with a female software engineer who inspired me to start teaching myself to code. After two years of self-directed learning, I took the leap and quit my job to do a software engineering bootcamp. After the course, I worked as a software engineer where I transformed my security assurance skills into the ability to automate security. The cloud kept coming up in conversations around migrations, so I was curious and achieved software engineering and AWS certifications, eventually moving to AWS. Here, I work closely with highly regulated customers, such as those in healthcare, to advise them on using AWS to operate securely in the cloud, and work on implementing security controls to help them meet frameworks like NIST and HIPAA, so I’ve come full circle.

How do you explain your job to non-technical friends and family?

The general public isn’t sure how to define the cloud, and that’s no different with my friends and family. I get questions all the time like “what exactly is the cloud?” Since I love storytelling, I use real-world examples to relate it to their profession or hobbies. I might talk about the predictive analytics used by the NFL or, for my friends in healthcare, I talk about securing PHI.

However, my favorite general example is describing the AWS Shared Responsibility Model as a house. Imagine a house—AWS is responsible for security of the house. We’re responsible for the physical security of the house, and we build a fence, we make sure there is a strong foundation and secure infrastructure. The customer is the tenant—they can pay as they go, leave when they need to—and they’re responsible for running the house and managing the items, or data, in the house. So it’s my job to help the customer implement new ideas or technologies in the house to help them live more efficiently and securely. I advise them on how to best lock the doors, where to store their keys, how to keep track of who is coming in and out of the house with access to certain rooms, and how to protect their items in the house from other risks.

And for my friends that love Harry Potter, I just say that I work in the Defense Against the Dark Arts.

What are you currently working on that you’re excited about?

There are a lot of things in different spaces that I’m excited about.

One is that I’m part of a ransomware working group to provide an offering that customers can use to prepare for a ransomware event. Many customers want to know what AWS services and features they can use to help them protect their environments from ransomware, and we take real solutions that we’ve used with customers and scale them out. Something that’s really cool about Professional Services is that we’re on the frontlines with customers, and we get to see the different challenges and how we can relate those back to AWS service teams and implement them in our products. These efforts are exciting because they give customers tangible ways to secure their environments and workloads. I’m also excited because we’re focusing not just on the technology but also on the people and processes, which sometimes get forgotten in the technology space.

I’m a huge fan of cross-functional collaboration, and I love working with all the different security teams that we have within AWS and in our customer security teams. I work closely with the Amazon Managed Services (AMS) security team, and we have some very interesting initiatives with them to help our customers operate more securely in the cloud, but more to come on that.

Another exciting project that’s close to my heart is the Inclusion, Diversity, and Equity (ID&E) workstream for the U.S. It’s really important to me to not only have diversity but also inclusion, and I’m leading a team that is helping to amplify diverse voices. I created an Amplification Flywheel to help our employees understand how they can better amplify diverse voices in different settings, such as meetings or brainstorming sessions. The flywheel helps illustrate a process in which 1) an idea is voiced by an underrepresented individual, 2) an ally then amplifies the idea by repeating it and giving credit to the author, 3) others acknowledge the contribution, 4) this creates a more equitable workplace, and 5) the flywheel continues where individuals feel more comfortable sharing ideas in the future.

Within this workstream, I’m also thrilled about helping underrepresented people who already have experience speaking but who may be having a hard time getting started with speaking engagements at conferences. I do mentorship sessions with them so they can get their foot in the door and amplify their own voice and ideas at conferences.

You’re presenting at re:Invent this year. Can you give us a sneak peek of your session?

I’m partnering with Johnny Ray, who is an AMS Senior Security Engineer, to present a session called SEC203: Revitalize your security with the AWS Security Reference Architecture. We’ll be discussing how the AWS SRA can be used as a holistic guide for deploying the full complement of AWS security services in a multi-account environment. The AWS SRA is a living document that we continuously update to help customers revitalize their security best practices as they grow, scale, and innovate.

What do you hope attendees take away from your session?

Technology is constantly evolving, and the security space is no exception. As organizations adopt AWS services and features, it’s important to understand how AWS security services work together to improve your security posture. Attendees will be able to take away tangible ways to:

  • Define the target state of your security architecture
  • Review the capabilities that you’ve already designed and revitalize them with the latest services and features
  • Bootstrap the implementation of your security architecture
  • Start a discussion about organizational governance and responsibilities for security

Johnny and I will also provide attendees with a roadmap at the end of the session that gives customers a plan for the first week after the session, one to three months after the session, and six months after the session, so they have different action items to implement within their organization.

You’ve written about the importance of ID&E in the workplace. In your opinion, what’s the most effective way leaders can foster an inclusive work environment?

I’m super passionate about ID&E, because it’s really important and it makes businesses more effective and a better place to work as a whole. My favorite Amazon Leadership Principle is Earn Trust. It doesn’t matter if you Deliver Results or Insist on the Highest Standards if no one is willing to listen to you because you don’t have trust built up. When it comes to building an inclusive work environment, a lot of earning trust comes from the ability to have empathy, vulnerability, and humility—being able to admit when you made a mistake—with your teammates as well as with your customers. I think we have a unique opportunity at AWS to work closely with customers and learn about what they’re doing and their best practices with ID&E, and share our best practices.

We all make mistakes, we’re all learning, and that’s okay, but having the ability to admit when you’ve made a mistake, apologize, and learn from it makes a much better place to work. When it comes to intent versus impact, I love to give the example—going back to storytelling—of walking down the street and accidentally bumping into someone, causing them to drop their coffee. You didn’t intend to hurt them or spill their coffee; your intent was to keep walking down the street. However, the impact that you had was maybe they’re burnt now, maybe their coffee is all down their clothes, and you had a negative impact on them. Now, you want to apologize and maybe look up more while you’re walking and be more observant of your surroundings. I think this is a good example because sometimes when it comes to ID&E, it can become a culture of blame and that’s not what we want to do—we want to call people in instead of calling them out. I think that’s a great way to build an inclusive team.

You can have a diverse workforce, but if you don’t have inclusion and you’re not listening to people who are underrepresented, that’s not going to help. You need to make sure you’re practicing transformative leadership and truly wanting to change how people behave and think when it comes to ID&E. You want to make sure people are more kind to each other, rather than only checking the box on arbitrary diversity goals. It’s important to be authentic and curious about how you learn from others and their experiences, and to respect them and implement that into different ideas and processes. This is important to make a more equitable workplace.

I love learning from different ID&E leaders like Camille Leak, Aiko Bethea, and Brené Brown. They are inspirational to me because they all approach ID&E with vulnerability and tackle the uncomfortable.

What’s the thing you’re most proud of in your career?

I have two different things—one from a technology standpoint and one from a personal impact perspective.

On the technology side, one of the coolest projects I’ve been on is Change Healthcare, which is an independent healthcare technology company that connects payers, providers, and patients across the United States. They have an important job of protecting a lot of PHI and personally identifiable information (PII) for American citizens. Change Healthcare needed to quickly migrate its ClaimsXten claims processing application to the cloud to meet the needs of a large customer, and it sought to move an internal demo and training application environment to the cloud to enable self-service and agility for developers. During this process, they reached out to AWS, and I took the lead role in advising Change Healthcare on security and how they were implementing their different security controls and technical documentation. I led information security meetings on AWS services, because the processes were new to a lot of the employees who were previously working in data centers. Through working with them, I was able to cut down their migration hours by 58% by using security automation and reduce the cost of resources, as well. I oversaw security for 94 migration cutovers where no security events occurred. It was amazing to see that process and build a great relationship with the company. I still meet with Change Healthcare employees for lunch even though I’m no longer on their projects. For this work, I was awarded the “Above and Beyond the Call of Duty” award, which only three Amazonians get a year, so that was an honor.

From a personal impact perspective, it was terrifying to quit my job and completely change careers, and I dealt with a lot of imposter syndrome—which I still have every day, but I work through it. Something impactful that resulted from this move was that it inspired a lot of people in my network from non-technical backgrounds, especially underrepresented individuals, to dive into coding and pursue a career in tech. Since completing my bootcamp, I’ve had more than 100 people reach out to me to ask about my experience, and about 30 of them quit their job to do a bootcamp and are now software engineers in various fields. So, it’s really amazing to see the life-changing impact of mentoring others.

You do a lot of volunteer work. Can you tell us about the work you do and why you’re so passionate about it?

Absolutely! The importance of giving back to the community cannot be overstated.

Over the last 13 years, I have fundraised, volunteered, and advocated in building over 40 different homes throughout the country with Habitat for Humanity. One of my most impactful volunteer experiences was in 2013. I volunteered with a nonprofit called Bike & Build, where we cycled across the United States to raise awareness and money for affordable housing efforts. From Charleston, South Carolina to Santa Cruz, California, the team raised over $158,000, volunteered 3,584 hours, and biked 4,256 miles over the course of three months. This was such an incredible experience to meet hundreds of people across the country and help empower them to learn about affordable housing and improve their lives. It also tested me so much emotionally, mentally, and physically that I learned a lot about myself in the process. Additionally, I was selected by Gap, Inc. to participate in an international Habitat build in Antigua, Guatemala in October of 2014.

I’m currently on the Associate Board of Gilda’s Club, which provides free cancer support to anyone in need. Corporate social responsibility is a passion of mine, and so I helped organize AWS Birthday Boxes and Back to School Bags volunteer events with Gilda’s Club of Middle Tennessee. We purchased and assembled birthday and back-to-school boxes for children whose caregiver was experiencing cancer, so their caregiver would have one less thing to worry about and make sure the child feels special during this tough time. During other AWS team offsites, I’ve organized volunteering through Nashville Second Harvest food bank and created 60 shower and winter kits for individuals experiencing homelessness through ShowerUp.

I also mentor young adult women and non-binary individuals with BuiltByGirls to help them navigate potential career paths in STEM, and I recently joined the Cyversity organization, so I’m excited to give back to the security community.

If you had to pick an industry outside of security, what would you want to do?

History is one of my favorite topics, and I’ve always gotten to know people by having an inquisitive mind. I love listening and asking curious questions to learn more about people’s experiences and ideas. Since I’m drawn to the art of storytelling, I would pick a career as a podcast host where I bring on different guests to ask compelling questions and feature different, rarely heard stories throughout history.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Maddie Bacon

Maddie (she/her) is a technical writer for Amazon Security with a passion for creating meaningful content that focuses on the human side of security and encourages a security-first mindset. She previously worked as a reporter and editor, and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and staunchly defending the Oxford comma.

Sarah Currey

Sarah (she/her) is a Security Practice Manager with AWS Professional Services, who is focused on accelerating customers’ business outcomes through security. She leads a team of expert security builders who deliver a variety of transformative projects that involve adopting AWS services and implementing security solutions. Sarah is an advocate of mentorship and passionate about building an inclusive, equitable workplace for all.

Introducing payload-based message filtering for Amazon SNS

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-payload-based-message-filtering-for-amazon-sns/

This post is written by Prachi Sharma (Software Development Manager, Amazon SNS), Mithun Mallick (Principal Solutions Architect, AWS Integration Services), and Otavio Ferreira (Sr. Software Development Manager, Amazon SNS).

Amazon Simple Notification Service (SNS) is a messaging service for Application-to-Application (A2A) and Application-to-Person (A2P) communication. The A2A functionality provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications. These applications include Amazon Simple Queue Service (SQS), Amazon Kinesis Data Firehose, AWS Lambda, and HTTP/S endpoints. The A2P functionality enables you to communicate with your customers via mobile text messages (SMS), mobile push notifications, and email notifications.

Today, we’re introducing the payload-based message filtering option of SNS, which augments the existing attribute-based option, enabling you to offload additional filtering logic to SNS and further reduce your application integration costs. For more information, see Amazon SNS Message Filtering.

Overview

You use SNS topics to fan out messages from publisher systems to subscriber systems, addressing your application integration needs in a loosely-coupled way. Without message filtering, subscribers receive every message published to the topic, and require custom logic to determine whether an incoming message needs to be processed or filtered out. This results in undifferentiating code, as well as unnecessary infrastructure costs. With message filtering, subscribers set a filter policy to their SNS subscription, describing the characteristics of the messages in which they are interested. Thus, when a message is published to the topic, SNS can verify the incoming message against the subscription filter policy, and only deliver the message to the subscriber upon a match. For more information, see Amazon SNS Subscription Filter Policies.

However, up until now, the message characteristics that subscribers could express in subscription filter policies were limited to metadata in message attributes. As a result, subscribers could not benefit from message filtering when the messages were published without attributes. Examples of such messages include AWS events published to SNS from 60+ other AWS services, like Amazon Simple Storage Service (S3), Amazon CloudWatch, and Amazon CloudFront. For more information, see Amazon SNS Event Sources.
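As a concrete contrast with the payload-based option introduced below, here is a minimal sketch of the attribute-based approach using the AWS CLI. The topic and subscription ARNs and the insurance_type attribute are hypothetical placeholders; the point is that the subscriber can only match on metadata that the publisher explicitly attaches as message attributes.

# Attribute-based filtering (hypothetical ARNs and attribute name): the subscription
# matches on message attributes, so the publisher must attach them explicitly.
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:sa-east-1:123456789012:insurance-events-topic:1a2b3c4d-5678-90ab-cdef-111122223333 \
  --attribute-name FilterPolicy \
  --attribute-value '{"insurance_type": ["auto"]}'

# A message published without the insurance_type attribute would not match the policy above.
aws sns publish \
  --topic-arn arn:aws:sns:sa-east-1:123456789012:insurance-events-topic \
  --message '{"lead_id": "2314"}' \
  --message-attributes '{"insurance_type": {"DataType": "String", "StringValue": "auto"}}'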

The new payload-based message filtering option in SNS empowers subscribers to express their SNS subscription filter policies in terms of the contents of the message. This new capability further enables you to use SNS message filtering for your event-driven architectures (EDA) and cross-account workloads, specifically where subscribers may not be able to influence a given publisher to have its events sent with attributes. With payload-based message filtering, you have a simple, no-code option to further prevent unwanted data from being delivered to and processed by subscriber systems, thereby simplifying the subscribers’ code as well as reducing costs associated with downstream compute infrastructure. This new message filtering option is available across SNS Standard and SNS FIFO topics, for JSON message payloads.

Applying payload-based filtering in a use case

Consider an insurance company moving their lead generation platform to a serverless architecture based on microservices, adopting enterprise integration patterns to help them develop and scale these microservices independently. The company offers a variety of insurance types to its customers, including auto and home insurance. The lead generation and processing workflow for each insurance type is different, and entails notifying different backend microservices, each designed to handle a specific type of insurance request.

Payload filtering example

The company uses multiple frontend apps to interact with customers and receive leads from them, including a web app, a mobile app, and a call center app. These apps submit the customer-generated leads to an internal lead storage microservice, which then uploads the leads as XML documents to an S3 bucket. Next, the S3 bucket publishes events to an SNS topic to notify that lead documents have been created. Based on the contents of each lead document, the SNS topic forks the workflow by delivering the auto insurance leads to an SQS queue and the home insurance leads to another SQS queue. These SQS queues are respectively polled by the auto insurance and the home insurance lead processing microservices. Each processing microservice applies its business logic to validate the incoming leads.

The following S3 event, in JSON format, refers to a lead document uploaded with key auto-insurance-2314.xml to the S3 bucket. S3 automatically publishes this event to SNS, which in turn matches the S3 event payload against the filter policy of each subscription in the SNS topic. If the event matches the subscription filter policy, SNS delivers the event to the subscribed SQS queue. Otherwise, SNS filters the event out.

{
  "Records": [{
    "eventVersion": "2.1",
    "eventSource": "aws:s3",
    "awsRegion": "sa-east-1",
    "eventTime": "2022-11-21T03:41:29.743Z",
    "eventName": "ObjectCreated:Put",
    "userIdentity": {
      "principalId": "AWS:AROAJ7PQSU42LKEHOQNIC:demo-user"
    },
    "requestParameters": {
      "sourceIPAddress": "177.72.241.11"
    },
    "responseElements": {
      "x-amz-request-id": "SQCC55WT60XABW8CF",
      "x-amz-id-2": "FRaO+XDBrXtx0VGU1eb5QaIXH26tlpynsgaoJrtGYAWYRhfVMtq/...dKZ4"
    },
    "s3": {
      "s3SchemaVersion": "1.0",
      "configurationId": "insurance-lead-created",
      "bucket": {
        "name": "insurance-bucket-demo",
        "ownerIdentity": {
          "principalId": "A1ATLOAF34GO2I"
        },
        "arn": "arn:aws:s3:::insurance-bucket-demo"
      },
      "object": {
        "key": "auto-insurance-2314.xml",
        "size": 17,
        "eTag": "1530accf30cab891d759fa3bb8322211",
        "sequencer": "00737AF379B2683D6C"
      }
    }
  }]
}

To express its interest in auto insurance leads only, the SNS subscription for the auto insurance lead processing microservice sets the following filter policy. Note that, unlike attribute-based policies, payload-based policies support property nesting.

{
  "Records": {
    "s3": {
      "object": {
        "key": [{
          "prefix": "auto-"
        }]
      }
    },
    "eventName": [{
      "prefix": "ObjectCreated:"
    }]
  }
}

Likewise, to express its interest in home insurance leads only, the SNS subscription for the home insurance lead processing microservice sets the following filter policy.

{
  "Records": {
    "s3": {
      "object": {
        "key": [{
          "prefix": "home-"
        }]
      }
    },
    "eventName": [{
      "prefix": "ObjectCreated:"
    }]
  }
}

Note that each filter policy uses the string prefix matching capability of SNS message filtering. In this use case, this matching capability enables the filter policy to match only the S3 objects whose key property value starts with the insurance type it’s interested in (either auto- or home-). Note as well that each filter policy matches only the S3 events whose eventName property value starts with ObjectCreated, as opposed to ObjectRemoved. For more information, see Amazon S3 Event Notifications.
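If you manage subscriptions outside of infrastructure as code, the same policies can also be applied to an existing subscription with the AWS CLI. The following is a minimal sketch, assuming a hypothetical subscription ARN; the filter policy scope has to be switched to the message body for the payload-based policy to take effect.

# Switch the subscription to payload-based filtering, then attach the policy
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:sa-east-1:123456789012:insurance-events-topic:1a2b3c4d-5678-90ab-cdef-111122223333 \
  --attribute-name FilterPolicyScope \
  --attribute-value MessageBody

aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:sa-east-1:123456789012:insurance-events-topic:1a2b3c4d-5678-90ab-cdef-111122223333 \
  --attribute-name FilterPolicy \
  --attribute-value '{"Records":{"s3":{"object":{"key":[{"prefix":"auto-"}]}},"eventName":[{"prefix":"ObjectCreated:"}]}}'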

Deploying the resources and filter policies

To deploy the AWS resources for this use case, you need an AWS account with permissions to use SNS, SQS, and S3. On your development machine, install the AWS Serverless Application Model (SAM) Command Line Interface (CLI). You can find the complete SAM template for this use case in the aws-sns-samples repository in GitHub.

The SAM template has a set of resource definitions, as presented below. The first resource definition creates the SNS topic that receives events from S3.

InsuranceEventsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: insurance-events-topic

The next resource definition creates the S3 bucket where the insurance lead documents are stored. This S3 bucket publishes an event to the SNS topic whenever a new lead document is created.

InsuranceEventsBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    DependsOn: InsuranceEventsTopicPolicy
    Properties:
      BucketName: insurance-doc-events
      NotificationConfiguration:
        TopicConfigurations:
          - Topic: !Ref InsuranceEventsTopic
            Event: 's3:ObjectCreated:*'

The next resource definitions create the SQS queues to be subscribed to the SNS topic. As presented in the architecture diagram, there’s one queue for auto insurance leads, and another queue for home insurance leads.

AutoInsuranceEventsQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: auto-insurance-events-queue
      
HomeInsuranceEventsQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: home-insurance-events-queue

The next resource definitions create the SNS subscriptions and their respective filter policies. Note that, in addition to setting the FilterPolicy property, you need to set the FilterPolicyScope property to MessageBody in order to enable the new payload-based message filtering option for each subscription. The default value for the FilterPolicyScope property is MessageAttributes.

AutoInsuranceEventsSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Protocol: sqs
      Endpoint: !GetAtt AutoInsuranceEventsQueue.Arn
      TopicArn: !Ref InsuranceEventsTopic
      FilterPolicyScope: MessageBody
      FilterPolicy:
        '{"Records":{"s3":{"object":{"key":[{"prefix":"auto-"}]}}
        ,"eventName":[{"prefix":"ObjectCreated:"}]}}'
  
HomeInsuranceEventsSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Protocol: sqs
      Endpoint: !GetAtt HomeInsuranceEventsQueue.Arn
      TopicArn: !Ref InsuranceEventsTopic
      FilterPolicyScope: MessageBody
      FilterPolicy:
        '{"Records":{"s3":{"object":{"key":[{"prefix":"home-"}]}}
        ,"eventName":[{"prefix":"ObjectCreated:"}]}}'

Once you download the full SAM template from GitHub to your local development machine, run the following command in your terminal to build the deployment artifacts.

sam build -t SNS-Payload-Based-Filtering-SAM.template

Once SAM has finished building the deployment artifacts, run the following command to deploy the AWS resources and the SNS filter policies. The command guides you through the process of setting deployment preferences, which you can answer based on your requirements. For more information, refer to the SAM Developer Guide.

sam deploy --guided

Once SAM has finished deploying the resources, you can start testing the solution in the AWS Management Console.

Testing the filter policies

Go to the AWS CloudFormation console, choose the stack created by the SAM template, then choose the Outputs tab. Note the name of the S3 bucket created.

S3 bucket name

Now switch to the S3 console, and choose the bucket with the corresponding name. Once on the bucket details page, upload a test file whose name starts with the auto- prefix. For example, you can name your test file auto-insurance-7156.xml. The upload triggers an S3 event, typed as ObjectCreated, which is then routed through the SNS topic to the SQS queue that stores auto insurance leads.

Insurance bucket contents
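If you prefer the command line, you can create and upload the test object with the AWS CLI. This is a minimal sketch that assumes the insurance-doc-events bucket name from the SAM template and a throwaway local file:

# Create a small placeholder lead document and upload it with the auto- prefix
echo '<lead/>' > auto-insurance-7156.xml
aws s3 cp auto-insurance-7156.xml s3://insurance-doc-events/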

Now switch to the SQS console, and choose to receive messages for the SQS queue that stores auto insurance leads. Note that the SQS queue for home insurance leads is empty.

SQS home insurance queue empty
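You can also confirm the routing from the AWS CLI by polling both queues. Here is a sketch, assuming the queue names defined in the SAM template:

# The auto insurance queue should return the S3 event; the home insurance queue should return nothing
AUTO_QUEUE_URL=$(aws sqs get-queue-url --queue-name auto-insurance-events-queue --query QueueUrl --output text)
HOME_QUEUE_URL=$(aws sqs get-queue-url --queue-name home-insurance-events-queue --query QueueUrl --output text)
aws sqs receive-message --queue-url "$AUTO_QUEUE_URL" --wait-time-seconds 10
aws sqs receive-message --queue-url "$HOME_QUEUE_URL" --wait-time-seconds 10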

If you want to check the filter policy configured, you may switch to the SNS console, choose the SNS topic created by the SAM template, and choose the SNS subscription for auto insurance leads. Once on the subscription details page, you can view the filter policy, in JSON format, alongside the filter policy scope set to “Message body”.

SNS filter policy

You may repeat the testing steps above, now with another file whose name starts with the home- prefix, and see how the S3 event is routed through the SNS topic to the SQS queue that stores home insurance leads.

Monitoring the filtering activity

CloudWatch provides visibility into your SNS message filtering activity, with dedicated metrics that also enable you to create alarms. You can use the NumberOfNotificationsFilteredOut-MessageBody metric to monitor the number of messages filtered out due to payload-based filtering, as opposed to attribute-based filtering. For more information, see Monitoring Amazon SNS topics using CloudWatch.

Moreover, you can use the NumberOfNotificationsFilteredOut-InvalidMessageBody metric to monitor the number of messages filtered out due to having malformed JSON payloads. You can have these messages with malformed JSON payloads moved to a dead-letter queue (DLQ) for troubleshooting purposes. For more information, see Designing Durable Serverless Applications with DLQ for Amazon SNS.
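For example, a CloudWatch alarm on the invalid-payload metric could look like the following sketch. The alarm name, threshold, and notification topic are illustrative placeholders, and it assumes the metric is emitted in the AWS/SNS namespace with a TopicName dimension:

# Alarm whenever any message is dropped for having a malformed JSON payload
aws cloudwatch put-metric-alarm \
  --alarm-name insurance-events-invalid-payloads \
  --namespace AWS/SNS \
  --metric-name NumberOfNotificationsFilteredOut-InvalidMessageBody \
  --dimensions Name=TopicName,Value=insurance-events-topic \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:sa-east-1:123456789012:ops-alerts-topic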

Cleaning up

To delete all the AWS resources that you created as part of this use case, run the following command from the project root directory.

sam delete

Conclusion

In this blog post, we introduce the use of payload-based message filtering for SNS, which provides event routing for JSON-formatted messages. This enables you to write filter policies based on the contents of the messages published to SNS. This also removes the message parsing overhead from your subscriber systems, as well as any custom logic from your publisher systems to move message properties from the payload to the set of attributes. Lastly, payload-based filtering can facilitate your event-driven architectures (EDA) by enabling you to filter events published to SNS from 60+ other AWS event sources.

For more information, see Amazon SNS Message Filtering, Amazon SNS Event Sources, and Amazon SNS Pricing. For more serverless learning resources, visit Serverless Land.

AWS achieves Spain’s ENS High certification across 166 services

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-certification-across-166-services/

Amazon Web Services (AWS) is committed to bringing additional services and AWS Regions into the scope of our Esquema Nacional de Seguridad (ENS) High certification to help customers meet their regulatory needs.

ENS is Spain’s National Security Framework. The ENS certification is regulated under the Spanish Royal Decree 3/2010 and is a compulsory requirement for central government customers in Spain. ENS establishes security standards that apply to government agencies and public organizations in Spain, and service providers on which Spanish public services depend. Updating and achieving this certification every year demonstrates our ongoing commitment to meeting the heightened expectations for cloud service providers set forth by the Spanish government.

We are happy to announce the addition of 17 services to the scope of our ENS High certification, for a new total of 166 services in scope. The certification now covers 25 Regions. Some of the additional security services in scope for ENS High include the following:

  • AWS CloudShell – a browser-based shell that makes it simpler to securely manage, explore, and interact with your AWS resources. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs by using the AWS SDKs, or use a range of other tools for productivity.
  • AWS Cloud9 – a cloud-based integrated development environment (IDE) that you can use to write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal.
  • Amazon DevOps Guru – a service that uses machine learning to detect abnormal operating patterns so that you can identify operational issues before they impact your customers.
  • Amazon HealthLake – a HIPAA-eligible service that offers healthcare and life sciences companies a complete view of individual or patient population health data for query and analytics at scale.
  • AWS IoT SiteWise – a managed service that simplifies collecting, organizing, and analyzing industrial equipment data.

AWS’s achievement of the ENS High certification is verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at their highest level.

For more information about ENS High, see the AWS Compliance page Esquema Nacional de Seguridad High. To view the complete list of services included in the scope, see the AWS Services in Scope by Compliance Program – Esquema Nacional de Seguridad (ENS) page. You can download the ENS High Certificate from AWS Artifact in the AWS Management Console or from the Compliance page Esquema Nacional de Seguridad High.

As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Daniel Fuertes

Daniel is a Security Audit Program Manager at AWS based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain and other EMEA countries. Daniel has 8 years of experience in security assurance and previously worked as an auditor for the PCI DSS security framework.

Now Open the 30th AWS Region – Asia Pacific (Hyderabad) Region in India

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/now-open-the-30th-aws-region-asia-pacific-hyderabad-region-in-india/

In November 2020, Jeff announced the upcoming AWS Asia Pacific (Hyderabad) as the second Region in India. Yes! Today we are announcing the general availability of the 30th AWS Region, Asia Pacific (Hyderabad) Region, with three Availability Zones and the ap-south-2 API name.

The Asia Pacific (Hyderabad) Region is located in the state of Telangana. As the capital and the largest city in Telangana, Hyderabad is already an important talent hub for IT professionals and entrepreneurs. For example, AWS Hyderabad User Groups has more than 4,000 community members and holds active meetups, including an upcoming Community Day in December 2022.

The new Hyderabad Region gives customers an additional option for running their applications and serving end users from data centers located in India. Customers with data-residency requirements arising from statutes, regulations, and corporate policy can run workloads and securely store data in India while serving end users with even lower latency.

AWS Services in the Asia Pacific (Hyderabad) Region
In the new Hyderabad Region, you can use C5, C5d, C6g, M5, M5d, M6gd, R5, R5d, R6g, I3, I3en, T3, and T4g instances, and use a long list of AWS services including: Amazon API Gateway, AWS AppConfig, AWS Application Auto Scaling, Amazon Aurora, Amazon EC2 Auto Scaling, AWS Config, AWS Certificate Manager, AWS Cloud Control API, AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, Amazon CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon Elastic Block Store (Amazon EBS), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon ElastiCache, Amazon EMR, Elastic Load Balancing, Elastic Load Balancing – Network (NLB), Amazon EventBridge, AWS Fargate, AWS Health Dashboard, AWS Identity and Access Management (IAM), Amazon Kinesis Data Streams, AWS Key Management Service (AWS KMS), AWS Lambda, AWS Marketplace, Amazon OpenSearch Service, Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon Route 53, AWS Secrets Manager, Amazon Simple Storage Service (Amazon S3), Amazon S3 Glacier, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), AWS Step Functions, AWS Support API, Amazon Simple Workflow Service (Amazon SWF), AWS Systems Manager, AWS Trusted Advisor, VM Import/Export, Amazon Virtual Private Cloud (Amazon VPC), AWS VPN, and AWS X-Ray.
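To start working with the new Region from the AWS CLI, you can target it by its ap-south-2 API name. A minimal check is sketched below; it assumes your credentials can call EC2 and that the Region, which may need to be enabled for your account first, is available to you.

# List the three Availability Zones in the Asia Pacific (Hyderabad) Region
aws ec2 describe-availability-zones --region ap-south-2 --query 'AvailabilityZones[].ZoneName'

# Or make ap-south-2 the default Region for subsequent CLI calls
aws configure set region ap-south-2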

AWS in India
AWS has a long-standing history of helping drive digital transformation in India. AWS first established a presence in the country in 2011, with the opening of an office in Delhi. In 2016, AWS launched the Asia Pacific (Mumbai) Region giving enterprises, public sector organizations, startups, and SMBs access to state-of-the-art public cloud infrastructure. In May 2019, AWS expanded the Region to include a third Availability Zone to support rapid customer growth and provide more choice, flexibility, the ability to replicate workloads across more Availability Zones, and even higher availability.

There are currently 33 Amazon CloudFront edge locations: Mumbai, India (10), New Delhi (7), Chennai (7), Bangalore (4), Hyderabad (3), and Kolkata (2) in India. The edge locations work in concert with a CloudFront Regional edge cache in Mumbai to speed delivery of content. There are six AWS Direct Connect locations, all of which connect to the Asia Pacific (Mumbai) Region: two in Mumbai, one in Chennai, one in Hyderabad, one in Delhi, and one in Bangalore. Finally, the first AWS Local Zones launched in Delhi, India for bringing selected AWS services very close to a particular geographic area. We announced plans to launch three more AWS Local Zones in India, in the cities of Chennai, Bengaluru, and Kolkata.

AWS is also investing in the future of the Indian technology community and workforce, training tech professionals to expand their skillset and cloud knowledge. In fact, since 2017, AWS has trained over three million individuals in India on cloud skills. AWS has worked with government officials, educational institutes, and corporate organizations to achieve this milestone, which has included first-time learners and mid-career professionals alike.

AWS continues to invest in upskilling local developers, students, and the next generation of IT leaders in India through programs such as AWS Academy, AWS Educate, AWS re/Start, and other Training and Certification programs.

AWS Customers in India
We have many amazing customers in India that are doing incredible things with AWS, for example:

  • SonyLIV is the first Over the top (OTT) service in India born on the AWS Cloud. SonyLIV launched Kaun Banega Crorepati (KBC) interactive game show to allow viewers to submit answers to questions on the show in real time via their mobile devices. SonyLIV uses Amazon ElastiCache to support real-time, in-memory caching at scale, Amazon CloudFront as a low-latency content delivery network, and Amazon SQS as a highly available message queuing service.
  • DocOnline is a digital healthcare platform that provides video or phone doctor consultations to over 3.5 million families in 10 specialties and 14 Indian languages. DocOnline delivers over 100,000 consultations, diagnostic tests, and medicines every year. DocOnline has built its entire business on AWS to power its online consultation services 24-7 and to continuously measure and improve health outcomes. Being in the Healthcare domain, DocOnline needs to comply with regulatory guidelines, including Data Residency, PII security, and Disaster Recovery in seismic zones. With the AWS Asia Pacific (Hyderabad) Region, DocOnline can ensure critical patient data is hosted in India on the most secure, extensive, and reliable cloud platform while serving customers with even faster response times.
  • ICICI Lombard General Insurance is one of the first among the large insurance companies in India to move more than 140 applications, including its core application, to AWS. The rapid advances in technology and computing power delivered by cloud computing are poised to radically change the way insurance is delivered and consumed. ICICI Lombard has launched new-generation products such as cyber insurance, telehealth, cashless homecare, and IoT-based risk management solutions for marine transit insurance. It also provides seamless integration with various digital partners for digital distribution of insurance products and virtual motor claims inspection solutions, which have seen adoption increase from 61 percent last year to 80 percent this year. ICICI Lombard was able to process group health endorsements for its corporate customers in less than a day, compared to 10–12 days earlier, and is looking at the cloud for further transformative possibilities in real-time inspection of risk and personalized underwriting.
  • Ministry of Health and Family Welfare (MoHFW), Government of India, needed a highly reliable, scalable, and resilient technical infrastructure to power a large-scale COVID-19 vaccination drive for India’s more than 1.3 billion citizens in 2021. To facilitate the required performance and speed, the MoHFW engaged India’s Ministry of Electronics and Information Technology to build and launch the Co-WIN application powered by AWS, which scales in seconds to handle user registrations and consistently supports 10 million vaccinations daily.

You can find more customer stories in India.

Available Now
The new Hyderabad Region is ready to support your business. You can find a detailed list of the services available in this Region on the AWS Regional Services List.

With this launch, AWS now spans 96 Availability Zones within 30 geographic Regions around the world, with three new Regions launched in 2022, including the AWS Middle East (UAE) Region, the AWS Europe (Zurich) Region, and the AWS Europe (Spain) Region. We have also announced plans for 15 more Availability Zones and five more AWS Regions in Australia, Canada, Israel, New Zealand, and Thailand.

To learn more, see the Global Infrastructure page, and please send feedback through your usual AWS support contacts in India.

— Channy

AWS Week in Review – November 21, 2022

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-november-21-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

A new week starts, and the News Blog team is getting ready for AWS re:Invent! Many of us will be there next week and it would be great to meet in person. If you’re coming, do you know about PeerTalk? It’s an onsite networking program for re:Invent attendees available through the AWS Events mobile app (which you can get on Google Play or Apple App Store) to help facilitate connections among the re:Invent community.

If you’re not coming to re:Invent, no worries, you can get a free online pass to watch keynotes and leadership sessions.

Last Week’s Launches
It was a busy week for our service teams! Here are the launches that got my attention:

AWS Region in Spain – The AWS Region in Aragón, Spain, is now open. The official name is Europe (Spain), and the API name is eu-south-2.

Amazon Athena – You can now apply AWS Lake Formation fine-grained access control policies with all table and file formats supported by Amazon Athena to centrally manage permissions and access data catalog resources in your Amazon Simple Storage Service (Amazon S3) data lake. With fine-grained access control, you can restrict access to data in query results using data filters to achieve column-level, row-level, and cell-level security.
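
As a hedged sketch of how such a data filter might be defined with the AWS SDK for Python (Boto3), the example below creates a Lake Formation data cells filter that limits results to two columns and to rows matching a filter expression. The database, table, filter name, and account ID are illustrative placeholders, not values from the announcement.

```python
# A hedged sketch: a Lake Formation data cells filter for column- and row-level security.
import boto3

lakeformation = boto3.client("lakeformation")

lakeformation.create_data_cells_filter(
    TableData={
        "TableCatalogId": "111122223333",                     # account ID that owns the catalog (placeholder)
        "DatabaseName": "sales_db",                           # hypothetical database
        "TableName": "orders",                                # hypothetical table
        "Name": "orders-india-only",
        "RowFilter": {"FilterExpression": "country = 'IN'"},  # row-level security
        "ColumnNames": ["order_id", "amount"],                # column-level security
    }
)
```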

Amazon EventBridge – With new additional filtering capabilities, you can now filter events by suffix, ignore case, and match if at least one condition is true. This makes it easier to write complex rules when building event-driven applications.
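
Here is a minimal Boto3 sketch of a rule that combines the new operators: suffix matching, case-insensitive matching, and $or. The event source and field names are hypothetical.

```python
# A minimal sketch: an EventBridge rule using the suffix, equals-ignore-case, and $or operators.
import json
import boto3

events = boto3.client("events")

event_pattern = {
    "source": ["demo.orders"],                                     # hypothetical custom event source
    "detail": {
        "$or": [                                                   # match if at least one condition is true
            {"fileName": [{"suffix": ".png"}]},                    # suffix matching
            {"status": [{"equals-ignore-case": "shipped"}]},       # case-insensitive matching
        ]
    },
}

events.put_rule(
    Name="orders-image-or-shipped",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)
```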

AWS Controllers for Kubernetes (ACK) – The ACK for Amazon Elastic Compute Cloud (Amazon EC2) is now generally available and lets you provision and manage EC2 networking resources, such as VPCs, security groups, and internet gateways, using the Kubernetes API. Also, the ACK for Amazon EMR on EKS is now generally available to allow you to declaratively define and manage EMR on EKS resources such as virtual clusters and job runs as Kubernetes custom resources. Learn more about ACK for Amazon EMR on EKS in this blog post.

Amazon HealthLake – New analytics capabilities make it easier to query, visualize, and build machine learning (ML) models. HealthLake now transforms customer data into an analytics-ready format in near real time so that you can query it and use the results to build visualizations or ML models. Also new is Amazon HealthLake Imaging (preview), a new HIPAA-eligible capability that enables you to easily store, access, and analyze medical images at any scale. More on HealthLake Imaging can be found in this blog post.

Amazon RDS – You can now transfer files between Amazon Relational Database Service (RDS) for Oracle and an Amazon Elastic File System (Amazon EFS) file system. You can use this integration to stage files like Oracle Data Pump export files when you import them. You can also use EFS to share a file system between an application and one or more RDS Oracle DB instances to address specific application needs.

Amazon ECS and Amazon EKS – We added centralized logging support for Windows containers to help you easily process and forward container logs to various AWS and third-party destinations such as Amazon CloudWatch, S3, Amazon Kinesis Data Firehose, Datadog, and Splunk. See these blog posts for how to use this new capability with ECS and with EKS.

AWS SAM CLI – You can now use the Serverless Application Model CLI to locally test and debug an AWS Lambda function defined in a Terraform application. You can see a walkthrough in this blog post.

AWS Lambda – Now supports Node.js 18 as both a managed runtime and a container base image, which you can learn more about in this blog post. Also check out this interesting article on why and how you should use AWS SDK for JavaScript V3 with Node.js 18. And last but not least, there is new tooling support to build and deploy native AOT compiled .NET 7 applications to AWS Lambda. With this tooling, you can enable faster application starts and benefit from reduced costs through the faster initialization times and lower memory consumption of native AOT applications. Learn more in this blog post.
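
As a small illustrative sketch (the function name is a placeholder), switching an existing function to the new managed runtime is a one-line configuration change with Boto3:

```python
# A minimal sketch: move an existing Lambda function to the Node.js 18 managed runtime.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-service-handler",   # hypothetical function name
    Runtime="nodejs18.x",                # new managed runtime identifier
)

# Wait until the configuration update completes before deploying code that relies on it.
lambda_client.get_waiter("function_updated").wait(FunctionName="my-service-handler")
```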

AWS Step Functions – Now supports cross-account access for more than 220 AWS services to process data, automate IT and business processes, and build applications across multiple accounts. Learn more in this blog post.
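
The sketch below shows the general shape of the cross-account pattern with Boto3: a Task state that assumes a role in a second account before calling a service API. The role ARNs, account IDs, and table name are placeholders, and the exact Credentials field should be checked against the Step Functions documentation for this launch.

```python
# A hedged sketch: a state machine whose Task state assumes a role in another account.
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "WriteToOtherAccount",
    "States": {
        "WriteToOtherAccount": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",
            # Assumed role in the target account (placeholder ARN); this is the cross-account hook.
            "Credentials": {"RoleArn": "arn:aws:iam::222233334444:role/CrossAccountDDBRole"},
            "Parameters": {
                "TableName": "orders",
                "Item": {"pk": {"S": "order-1"}},
            },
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="cross-account-writer",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StateMachineExecutionRole",  # placeholder
)
```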

AWS Fargate – Adds the ability to monitor the utilization of the ephemeral storage attached to an Amazon ECS task. You can track the storage utilization with Amazon CloudWatch Container Insights and ECS Task Metadata endpoint.

AWS Proton – Now has a centralized dashboard for all resources deployed and managed by AWS Proton, which you can learn more about in this blog post. You can now also specify custom commands to provision infrastructure from templates. In this way, you can manage templates defined using the AWS Cloud Development Kit (AWS CDK) and other templating and provisioning tools. More on CDK support and AWS CodeBuild provisioning can be found in this blog post.

AWS IAM – You can now use more than one multi-factor authentication (MFA) device for root account users and IAM users in your AWS accounts. More information is available in this post.
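
For illustration, a minimal Boto3 sketch that registers an additional virtual MFA device for an existing IAM user and lists the devices now associated with that user (the user name and one-time codes are placeholders):

```python
# A minimal sketch: add a second virtual MFA device to an IAM user and list all devices.
import boto3

iam = boto3.client("iam")

device = iam.create_virtual_mfa_device(VirtualMFADeviceName="alice-backup-mfa")

iam.enable_mfa_device(
    UserName="alice",                                            # hypothetical user
    SerialNumber=device["VirtualMFADevice"]["SerialNumber"],
    AuthenticationCode1="123456",                                # first TOTP code from the authenticator app
    AuthenticationCode2="654321",                                # second consecutive TOTP code
)

for mfa in iam.list_mfa_devices(UserName="alice")["MFADevices"]:
    print(mfa["SerialNumber"])
```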

Amazon ElastiCache – You can now use IAM authentication to access Redis clusters. With this new capability, IAM users and roles can be associated with ElastiCache for Redis users to manage their cluster access.
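
A hedged sketch of what creating an IAM-authenticated Redis user might look like with Boto3; the user ID, access string, and the exact shape of the AuthenticationMode parameter are assumptions to verify against the ElastiCache API reference.

```python
# A hedged sketch: an ElastiCache for Redis user authenticated through IAM instead of a password.
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_user(
    UserId="app-reader",                    # hypothetical user; for IAM auth the ID and name typically match
    UserName="app-reader",
    Engine="redis",
    AccessString="on ~app:* +@read",        # Redis ACL-style permissions (illustrative)
    AuthenticationMode={"Type": "iam"},     # assumed parameter shape for IAM authentication
)
```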

Amazon WorkSpaces – You can now use version 2.0 of the WorkSpaces Streaming Protocol (WSP) host agent, which offers significant streaming quality and performance improvements; you can learn more in this blog post. Also, with Amazon WorkSpaces Multi-Region Resilience, you can implement business continuity solutions that keep users online and productive with a recovery time objective (RTO) of less than 30 minutes in another AWS Region during disruptive events. More on multi-region resilience is available in this post.

Amazon CloudWatch RUM – You can now send custom events (in addition to predefined events) for better troubleshooting and application-specific monitoring. In this way, you can monitor specific functions of your application and troubleshoot issues that impact end users and are unique to your application components.

AWS AppSync – You can now define GraphQL API resolvers using JavaScript. You can also mix functions written in JavaScript and Velocity Template Language (VTL) inside a single pipeline resolver. To simplify local development of resolvers, AppSync released two new NPM libraries and a new API command. More info can be found in this blog post.

AWS SDK for SAP ABAP – This new SDK makes it easier for ABAP developers to modernize and transform SAP-based business processes and connect to AWS services natively using the SAP ABAP language. Learn more in this blog post.

AWS CloudFormation – CloudFormation can now send event notifications via Amazon EventBridge when you create, update, or delete a stack set.
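
To react to these notifications, you could create an EventBridge rule on the aws.cloudformation event source, as in the hedged Boto3 sketch below; the detail-type filter and SNS topic are assumptions for illustration.

```python
# A hedged sketch: route CloudFormation stack set events to an SNS topic via EventBridge.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="stackset-change-notifications",
    EventPattern=json.dumps({
        "source": ["aws.cloudformation"],
        "detail-type": [{"prefix": "StackSet"}],   # assumed naming; verify against the real event samples
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="stackset-change-notifications",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:111122223333:stackset-events"}],  # placeholder topic
)
```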

AWS Console – With the new Applications widget on the Console home, you have one-click access to applications in AWS Systems Manager Application Manager and their resources, code, and related data. From Application Manager, you can view the resources that power your application and your costs using AWS Cost Explorer.

AWS Amplify – Expands Flutter support (developer preview) to Web and Desktop for the API, Analytics, and Storage use cases. You can now build cross-platform Flutter apps with Amplify that target iOS, Android, Web, and Desktop (macOS, Windows, Linux) using a single codebase. Learn more about Flutter Web and Desktop support for AWS Amplify in this post. Amplify Hosting now supports fully managed CI/CD deployments and hosting for server-side rendered (SSR) apps built using Next.js 12 and 13. Learn more in this blog post and see how to deploy a Next.js 13 app with the AWS CDK here.

Amazon SQS – With attribute-based access control (ABAC), you can define permissions based on tags attached to users and AWS resources. With this release, you can now use tags to configure access permissions and policies for SQS queues. More details can be found in this blog.
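
A hedged sketch of the ABAC pattern with Boto3: tag a queue with a team name, then allow SQS actions only when the caller's principal tag matches the queue's resource tag. The queue name, tag key, and policy scope are illustrative.

```python
# A hedged sketch: tag-based (ABAC) access control for SQS queues.
import json
import boto3

sqs = boto3.client("sqs")
iam = boto3.client("iam")

# Tag the queue at creation time; the tag is what the policy condition matches on.
sqs.create_queue(QueueName="payments-events", tags={"team": "payments"})

abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sqs:SendMessage", "sqs:ReceiveMessage"],
        "Resource": "arn:aws:sqs:*:111122223333:*",   # placeholder account scope
        "Condition": {
            # Allow only when the principal's 'team' tag equals the queue's 'team' tag.
            "StringEquals": {"aws:ResourceTag/team": "${aws:PrincipalTag/team}"}
        },
    }],
}

iam.create_policy(PolicyName="sqs-team-abac", PolicyDocument=json.dumps(abac_policy))
```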

AWS Well-Architected Framework – The latest version of the Data Analytics Lens is now available. The Data Analytics Lens is a collection of design principles, best practices, and prescriptive guidance to help you run analytics on AWS.

AWS Organizations – You can now manage accounts, organizational units (OUs), and policies within your organization using CloudFormation templates.
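
The hedged sketch below deploys a JSON CloudFormation template that creates an organizational unit and attaches a service control policy to it; the property names reflect my reading of the new AWS::Organizations::* resource types, and the root ID and policy content are placeholders to verify against the resource reference.

```python
# A hedged sketch: manage an OU and an SCP with CloudFormation from the management account.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WorkloadsOU": {
            "Type": "AWS::Organizations::OrganizationalUnit",
            "Properties": {"Name": "Workloads", "ParentId": "r-examplerootid"},  # placeholder root ID
        },
        "DenyLeaveOrgPolicy": {
            "Type": "AWS::Organizations::Policy",
            "Properties": {
                "Name": "DenyLeaveOrganization",
                "Type": "SERVICE_CONTROL_POLICY",
                # Content is shown here as inline JSON; verify the expected format for this property.
                "Content": {
                    "Version": "2012-10-17",
                    "Statement": [{"Effect": "Deny",
                                   "Action": "organizations:LeaveOrganization",
                                   "Resource": "*"}],
                },
                "TargetIds": [{"Ref": "WorkloadsOU"}],
            },
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="org-structure",
    TemplateBody=json.dumps(template),
)
```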

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
A few more things you might have missed:

Introducing our final AWS Heroes of the year – As the end of 2022 approaches, we are recognizing individuals whose enthusiasm for knowledge-sharing has a real impact within the AWS community. Please meet them here!

The Distributed Computing Manifesto – Werner Vogels, VP & CTO at Amazon.com, shared the Distributed Computing Manifesto, a canonical document from the early days of Amazon that transformed the way we built architectures, and one that highlights the challenges faced at the end of the 20th century.

AWS re:Post – To make this community more accessible globally, we expanded the user experience to support five additional languages. You can now interact with AWS re:Post also using Traditional Chinese, Simplified Chinese, French, Japanese, and Korean.

For AWS open-source news and updates, here’s the latest newsletter curated by Ricardo to bring you the most recent updates on open-source projects, posts, events, and more.

Upcoming AWS Events
As usual, there are many opportunities to meet:

AWS re:Invent – Our yearly event is next week from November 28 to December 2. If you can’t be there in person, get your free online pass to watch the keynotes and leadership sessions live.

AWS Community Days – AWS Community Day events are community-led conferences where you can share and learn together. Join us in Sri Lanka (December 6-7), Dubai, UAE (December 10), Pune, India (December 10), and Ahmedabad, India (December 17).

That’s all from me for this week. Next week we’ll focus on re:Invent, and then we’ll take a short break. We’ll be back with the next Week in Review on December 12!

Danilo

AWS Security Profile: Jonathan “Koz” Kozolchyk, GM of Certificate Services

Post Syndicated from Roger Park original https://aws.amazon.com/blogs/security/aws-security-profile-jonathan-koz-kozolchyk-gm-of-certificate-services/

In the AWS Security Profile series, we interview AWS thought leaders who help keep our customers safe and secure. This interview features Jonathan “Koz” Kozolchyk, GM of Certificate Services, PKI Systems. Koz shares his insights on the current certificate landscape, his career at Amazon and within the security space, what he’s excited about for the upcoming AWS re:Invent 2022, his passion for home roasting coffee, and more.

How long have you been at AWS and what do you do in your current role?
I’ve been with Amazon for 21 years and in AWS for 6. I run our Certificate Services organization. This includes managing services such as AWS Certificate Manager (ACM), AWS Private Certificate Authority (AWS Private CA), AWS Signer, and managing certificates and trust stores at scale for Amazon. I’ve been in charge of the internal PKI (public key infrastructure, our mix of public and private certs) for Amazon for nearly 10 years. This has given me lots of insight into how certificates work at scale, and I’ve enjoyed applying those learnings to our customer offerings.

How did you get started in the certificate space? What about it piqued your interest?
Certificates were designed to solve two key problems: provide a secure identity and enable encryption in transit. These are both critical needs that are foundational to the operation of the internet. They also come with a lot of sharp edges. When a certificate expires, systems tend to fail. This can cause problems for Amazon and our customers. It’s a hard problem when you’re managing over a million certificates, and I enjoy the challenge that comes with that. I like turning hard problems into a delightful experience. I love the feedback we get from customers on how hands-free ACM is and how it just solves their problems.

How do you explain your job to your non-tech friends?
I tell them I do two things. I run the equivalent of a department of motor vehicles for the internet, where I validate the identity of websites and issue secure documentation to prove the websites’ validity to others (the certificate). I’m also a librarian. I keep track of all of the certificates we issue and ensure that they never expire and that the private keys are always safe.

What are you currently working on that you’re excited about?
I’m really excited about our AWS Private CA offering and the places we’re planning to grow the service. Running a certificate authority is hard—it requires careful planning and tight security controls. I love that AWS Private CA has turned this into a simple-to-use and secure system for customers. We’ve seen the number of customers expand over time as we’ve added more versatility for customers to customize certificates to meet a wide range of applications—including Kubernetes, Internet of Things, IAM Roles Anywhere (which provides a secure way for on-premises servers to obtain temporary AWS credentials and removes the need to create and manage long-term AWS credentials), and Matter, a new industry standard for connecting smart home devices. We’re also working on code signing and software supply chain security. Finally, we have some exciting features coming to ACM in the coming year that I think customers will really appreciate.

What’s been the most dramatic change you’ve seen in the industry?
The biggest change has been the way that certificate pricing and infrastructure as code have changed how we think about certificates. It used to be that a company would have a handful of certificates that they tracked in spreadsheets and calendar invites. Issuance processes could take days, and that was okay. Now, every individual host and every run of an integration test may be provisioning a new certificate. Certificate validity used to last three years, and now customers want one-day certificates. This brings a new element of scale not only to our underlying architecture, but also to the ways we have to interact with our customers in terms of management controls and visibility. We’re also at the beginning of a new push for increased PKI agility. In the old days, PKI was brittle and slow to change. We’re seeing the industry move toward the ability to rapidly change roots and intermediates. You can see we’re pushing some of this now with our dynamic intermediate certificate authorities.

What would you say is the coolest AWS service or feature in the PKI space?
Our customers love the way AWS Certificate Manager makes certificate management a hands-off automated affair. If you request a certificate with DNS validation, we’ll renew and deploy that certificate on AWS for as long as you’re using it and you’ll never lose sleep about that certificate.
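
As a small illustrative sketch (the domain name is a placeholder), requesting a DNS-validated public certificate with Boto3 and reading back the CNAME record that ACM uses for validation and renewal looks roughly like this:

```python
# A minimal sketch: request a DNS-validated ACM certificate and print its validation record.
import boto3

acm = boto3.client("acm")

cert_arn = acm.request_certificate(
    DomainName="www.example.com",   # placeholder domain
    ValidationMethod="DNS",
)["CertificateArn"]

# The DNS validation record usually appears shortly after the request is created.
details = acm.describe_certificate(CertificateArn=cert_arn)
for option in details["Certificate"]["DomainValidationOptions"]:
    record = option.get("ResourceRecord")
    if record:
        # Add this CNAME to your DNS zone, or let ACM create it if the zone is in Route 53.
        print(record["Name"], record["Type"], record["Value"])
```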

Is there something you wish customers would ask you about more often?
I’m always happy to talk about PKI design and how to best plan your private CAs and design. We like to say that PKI is the land of one-way doors. It’s easy to make a decision that you can’t reverse, and it could be years before you realize you’ve made a mistake. Helping customers avoid those mistakes is something we like to do.

I understand you’ll be at re:Invent 2022. What are you most looking forward to?
Hands down it’s the customer meetings; we take customer feedback very seriously, and hearing what their needs are helps us define our solutions. We also have several talks in this space, including CON316 – Container Image Signing on AWS, SEC212 – Data Protection Grand Tour: Locks, Keys, Certs, and Sigs, and SEC213 – Understanding the evolution of cloud-based PKI. I encourage folks to check out these sessions as well as the re:Invent 2022 session catalog.

Do you have any tips for first-time re:Invent attendees?
Wear comfortable shoes! It’s amazing how many steps you’ll put in.

How about outside of work, any hobbies? I understand you’re passionate about home coffee roasting. How did you get started?
I do roast my own coffee—it’s a challenging hobby because you always have to be thinking 30 to 60 seconds ahead of what your data is showing you. You’re working off of sight and sound, listening to the beans and checking their color. When you make an adjustment to the roaster, you have to do it thinking where the beans will be in the future and not where they are now. I love the challenge that comes with it, and it gives me access to interesting coffee beans you wouldn’t normally see on store shelves. I got started with a used small home roaster because I thought I would enjoy it. I’ve since upgraded to a commercial “sample” roaster that lets me do larger batches.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Roger Park

Roger is a Senior Security Content Specialist at AWS Security focusing on data protection. He has worked in cybersecurity for almost ten years as a writer and content producer. In his spare time, he enjoys trying new cuisines, gardening, and collecting records.

Jonathan Kozolchyk

Jonathan is GM, Certificate Services, PKI Systems at AWS.