Tag Archives: Management & Governance

Simplify risk and compliance assessments with the new common control library in AWS Audit Manager

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/simplify-risk-and-compliance-assessments-with-the-new-common-control-library-in-aws-audit-manager/

With AWS Audit Manager, you can map your compliance requirements to AWS usage data and continually audit your AWS usage as part of your risk and compliance assessment. Today, Audit Manager introduces a common control library that provides common controls with predefined and pre-mapped AWS data sources.

The common control library is based on extensive mapping and reviews conducted by AWS certified auditors, verifying that the appropriate data sources are identified for evidence collection. Governance, Risk, and Compliance (GRC) teams can use the common control library to save time when mapping enterprise controls into Audit Manager for evidence collection, reducing their dependence on information technology (IT) teams.

Using the common control library, you can view the compliance requirements for multiple frameworks (such as PCI or HIPAA) associated with the same common control in one place, making it easier to understand your audit readiness across multiple frameworks simultaneously. In this way, you don’t need to implement different compliance standard requirements individually and then review the resulting data multiple times for different compliance regimes.

Additionally, by using controls from this library, you automatically inherit improvements as Audit Manager updates or adds new data sources, such as additional AWS CloudTrail events, AWS API calls, AWS Config rules, or maps additional compliance frameworks to common controls. This eliminates the efforts required by GRC and IT teams to constantly update and manage evidence sources and makes it easier to benefit from additional compliance frameworks that Audit Manager adds to its library.

Let’s see how this works in practice with an example.

Using AWS Audit Manager common control library
A common scenario for an airline is to implement a policy so that their customer payments, including in-flight meals and internet access, can only be taken via credit card. To implement this policy, the airline develops an enterprise control for IT operations that says that “customer transactions data is always available.” How can they monitor whether their applications on AWS meet this new control?

Acting as their compliance officer, I open the Audit Manager console and choose Control library from the navigation pane. The control library now includes the new Common category. Each common control maps to a group of core controls that collect evidence from AWS managed data sources, making it easier to demonstrate compliance with a range of overlapping regulations and standards. I look through the common control library and search for “availability.” Here, I realize the airline’s expected requirements map to the common control High availability architecture in the library.


I expand the High availability architecture common control to see the underlying core controls. There, I notice this control doesn’t adequately meet all the company’s needs because Amazon DynamoDB is not in this list. DynamoDB is a fully managed database, but given extensive usage of DynamoDB in their application architecture, they definitely want their DynamoDB tables to be available when their workload grows or shrinks. This might not be the case if they configured a fixed throughput for a DynamoDB table.

I look again through the common control library and search for “redundancy.” I expand the Fault tolerance and redundancy common control to see how it maps to core controls. There, I see the Enable Auto Scaling for Amazon DynamoDB tables core control. This core control is relevant for the architecture that the airline has implemented but the whole common control is not needed.


Additionally, common control High availability architecture already includes a couple of core controls that check that Multi-AZ replication on Amazon Relational Database Service (RDS) is enabled, but these core controls rely on an AWS Config rule. This rule doesn’t work for this use case because the airline does not use AWS Config. One of these two core controls also uses a CloudTrail event, but that event does not cover all scenarios.


As the compliance officer, I would like to collect the actual resource configuration. To collect this evidence, I briefly consult with an IT partner and create a custom control using a Customer managed source. I select the RDS DescribeDBInstances API call and set a weekly collection frequency to optimize costs.
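As a sketch of what this custom control could look like when created programmatically, the following Python snippet builds the request parameters for Audit Manager’s CreateControl API (callable via boto3 as auditmanager.create_control). The control name and description are illustrative, and the exact keywordValue format for API-call data sources is an assumption; verify the field values against the console or API reference before use.

```python
import json

def build_custom_control_request():
    """Build request parameters for Audit Manager's CreateControl API.

    Field names follow the boto3 auditmanager.create_control request shape;
    the keywordValue format for API-call data sources is an assumption.
    """
    return {
        "name": "RDS instance configuration evidence",  # illustrative name
        "description": "Collects RDS DB instance configuration weekly to "
                       "evidence the 'customer transactions data is always "
                       "available' enterprise control.",
        "controlMappingSources": [
            {
                "sourceName": "RDS DescribeDBInstances",  # customer managed source
                "sourceSetUpOption": "Procedural_Controls_Mapping",
                "sourceType": "AWS_API_Call",
                "sourceKeyword": {
                    "keywordInputType": "SELECT_FROM_LIST",
                    "keywordValue": "rds_DescribeDBInstances",  # assumed keyword format
                },
                "sourceFrequency": "WEEKLY",  # weekly collection to optimize cost
            }
        ],
    }

params = build_custom_control_request()
# To create the control for real:
#   import boto3
#   boto3.client("auditmanager").create_control(**params)
print(json.dumps(params, indent=2))
```

The weekly frequency mirrors the cost-optimization choice described above; daily or monthly collection are the other options the service offers.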


Implementing the custom control can be handled by the compliance team with minimal interaction from the IT team. If the compliance team wants to reduce their reliance on IT further, they can implement the entire second common control (Fault tolerance and redundancy) instead of selecting only the core control related to DynamoDB. It might be more than what they need based on their architecture, but the increased velocity and reduced time and effort for both the compliance and IT teams are often a bigger benefit than precisely optimizing the controls in place.

I now choose Framework library in the navigation pane and create a custom framework that includes these controls. Then, I choose Assessments in the navigation pane and create an assessment that includes the custom framework. After I create the assessment, Audit Manager starts collecting evidence about the selected AWS accounts and their AWS usage.
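These last two steps can also be sketched programmatically. The following snippet builds parameters for Audit Manager’s CreateAssessmentFramework API under the assumed boto3 request shape; the framework name is illustrative, and the control IDs are hypothetical placeholders for the IDs returned when you create or list controls.

```python
def build_framework_request(control_ids):
    """Build parameters for Audit Manager's CreateAssessmentFramework API.

    The control-set layout follows the assumed boto3
    create_assessment_framework request shape; names are illustrative.
    """
    return {
        "name": "Airline availability framework",  # illustrative name
        "description": "Custom framework for the 'customer transactions "
                       "data is always available' enterprise control.",
        "controlSets": [
            {
                "name": "Availability controls",
                "controls": [{"id": cid} for cid in control_ids],
            }
        ],
    }

# Placeholder IDs; real IDs come from create_control / list_controls.
req = build_framework_request(["ctl-dynamodb-autoscaling", "ctl-rds-describe"])
# boto3.client("auditmanager").create_assessment_framework(**req)
print(req["name"])
```

Once the framework exists, an assessment referencing it starts evidence collection for the selected accounts, as described above.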

By following these steps, a compliance team can precisely report on the enterprise control “customer transactions data is always available” using an implementation in line with their system design and their existing AWS services.

Things to know
The common control library is available today in all AWS Regions where AWS Audit Manager is offered. There is no additional cost for using the common control library. For more information, see AWS Audit Manager pricing.

This new capability streamlines the compliance and risk assessment process, reducing the workload for GRC teams and simplifying the way they can map enterprise controls into Audit Manager for evidence collection. To learn more, see the AWS Audit Manager User Guide.


New Amazon CloudWatch log class for infrequent access logs at a reduced price

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/new-amazon-cloudwatch-log-class-for-infrequent-access-logs-at-a-reduced-price/

Today, Amazon CloudWatch Logs announces a new log class called Infrequent Access. This new log class offers a tailored set of capabilities at a lower cost for infrequently accessed logs, enabling customers to consolidate all their logs in one place in a cost-effective manner.

As customers’ applications continue to scale and grow, so does the volume of logs generated. To limit the increase of logging costs, many customers are forced to make hard trade-offs. For example, some customers limit the logs generated by their applications, which can hinder the visibility of the application, or choose a different solution for different log types, which adds complexity and inefficiencies in managing different logging solutions. For instance, customers may send logs needed for real-time analytics and alerting to CloudWatch Logs and send more detailed logs needed for debugging and troubleshooting to a lower-cost solution that doesn’t have as many features as CloudWatch. In the end, these workarounds can impact the observability of the application, because customers need to navigate across multiple solutions to see their logs.

The Infrequent Access log class allows you to build a holistic observability solution using CloudWatch by centralizing all your logs in one place to ingest, query, and store your logs in a cost-efficient way. Infrequent Access offers a 50 percent lower per-GB ingestion price than the Standard log class. It provides a tailored set of capabilities for customers that don’t need advanced features like Live Tail, metric extraction, alarming, or data protection that the Standard log class provides. With Infrequent Access, you still get the benefits of fully managed ingestion and storage, and the ability to deep dive using CloudWatch Logs Insights.

The following table shows a side-by-side comparison of the features that the new Infrequent Access and the Standard log classes have.

Feature | Infrequent Access log class | Standard log class
Fully managed ingestion and storage | Available | Available
Cross-account | Available | Available
Encryption with KMS | Available | Available
Logs Insights | Available | Available
Subscription filters / Export to S3 | Not available | Available
GetLogEvents / FilterLogEvents | Not available | Available
Contributor, Container, and Lambda Insights | Not available | Available
Metric filter and alerting | Not available | Available
Data protection | Not available | Available
Embedded metric format (EMF) | Not available | Available
Live Tail | Not available | Available

When to use the new Infrequent Access log class
Use the Infrequent Access log class when you have a new workload that doesn’t require the advanced features provided by the Standard log class. One important consideration is that when you create a log group with a specific log class, you cannot change the log class of that log group afterward.

The Infrequent Access log class is suitable for debug logs or web server logs because they are quite verbose and rarely require any of the advanced functionality that the Standard log class provides.

Another good workload for the Infrequent Access log class is an Internet of Things (IoT) fleet sending detailed logs that are only accessed for after-the-fact forensic analysis. In addition, the Infrequent Access log class is a good choice for workloads where logs need to be stored for compliance, because those logs will be queried infrequently.

Getting started
To get started using the new Infrequent Access log class, create a new log group in the CloudWatch Logs console and select the new Infrequent Access log class. You can create log groups with the new Infrequent Access log class not only from the AWS Management Console but also from the AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), and AWS SDKs.
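Using an SDK, the log class is selected with the logGroupClass parameter of the CreateLogGroup API. The following minimal Python sketch builds the request; the log group name is illustrative.

```python
def build_create_log_group_request(name):
    """Parameters for CloudWatch Logs CreateLogGroup with the
    Infrequent Access log class. logGroupClass accepts 'STANDARD' or
    'INFREQUENT_ACCESS', and the class cannot be changed after the
    log group is created."""
    return {
        "logGroupName": name,
        "logGroupClass": "INFREQUENT_ACCESS",
    }

params = build_create_log_group_request("/demo/webapp/debug")  # illustrative name
# To create the log group for real:
#   import boto3
#   boto3.client("logs").create_log_group(**params)
print(params["logGroupClass"])
```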

Create log group

Once you have created the new log group, you can start using it in your workloads. For this example, I configure a web application to send debug logs to this log group. After the web application has been running for a while, you can go back to the log group, where you see a new log stream.

View log group

When you select a log stream, you will be directed to CloudWatch Logs Insights.

Log insights

Using the same familiar CloudWatch Logs Insights experience you get with the Standard log class, you can create queries to search those logs for relevant information and analyze all your logs quickly in one place.
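A query like the one described above can also be started through the StartQuery API (boto3: logs.start_query). The following sketch builds the request parameters; the log group name and query string are illustrative examples that filter debug logs for errors.

```python
import time

def build_insights_query(log_group, minutes=60):
    """Parameters for CloudWatch Logs Insights StartQuery over the
    last `minutes` minutes of the given log group."""
    now = int(time.time())
    return {
        "logGroupName": log_group,
        "startTime": now - minutes * 60,
        "endTime": now,
        "queryString": (
            "fields @timestamp, @message "
            "| filter @message like /ERROR/ "
            "| sort @timestamp desc "
            "| limit 20"
        ),
    }

q = build_insights_query("/demo/webapp/debug")  # illustrative log group
# To run the query for real:
#   import boto3
#   logs = boto3.client("logs")
#   query_id = logs.start_query(**q)["queryId"]
#   results = logs.get_query_results(queryId=query_id)
print(q["queryString"])
```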

Available now
The new Infrequent Access log class is now available in all AWS Regions except the China and GovCloud Regions. You can start using it and enjoy a more cost-effective way to collect, store, and analyze your logs in a fully managed experience.

To learn more about the new log class, you can check the CloudWatch Logs user guide dedicated page for the Infrequent Access log class.


New – Manage Planned Lifecycle Events on AWS Health

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/new-manage-planned-lifecycle-events-on-aws-health/

We are announcing new features in AWS Health to help you manage planned lifecycle events for your AWS resources and dynamically track the completion of actions that your team takes at the resource-level to ensure continued smooth operations of your applications. Some examples of planned lifecycle events are an Amazon Elastic Kubernetes Service (Amazon EKS) Kubernetes version end of standard support, Amazon Relational Database Service (Amazon RDS) certificate rotations, and end of support for other open source software, to name a few.

These features include:

  • The ability to dynamically track the completion of actions at the resource level where possible, to minimize disruption to applications.
  • Timely visibility into upcoming planned lifecycle events, using notifications at least 90 days in advance for minor changes, and 180 days in advance for major changes, whenever possible.
  • A standardized data format that helps you prepare and take actions. It integrates AWS Health events programmatically with your preferred operational tools, using AWS Health API.
  • Organization-wide visibility into planned lifecycle events for teams that manage workloads across the company, using a delegated administrator. This means that central teams, such as Cloud Center of Excellence (CCoE) teams, no longer need to use the management account to access the organizational view.
  • A single feed of AWS Health events from all accounts in your organization on Amazon EventBridge. This provides a centralized way to automate the management of AWS Health events across your organization by creating rules on EventBridge to take actions. Depending on the type of event, you can capture event information, initiate additional events, send notifications, take corrective action, or perform other actions. For example, you can use AWS Health to receive email, AWS Chatbot, or push notifications to the AWS Console Mobile Application if you have AWS resources in your AWS account that are scheduled for updates, such as Amazon Elastic Compute Cloud (Amazon EC2) instances.

How it Works
Planned lifecycle events are available through the AWS Health Dashboard, AWS Health API, and EventBridge. You can automate the management of AWS Health events across your organization by creating rules on EventBridge that include the "source": ["aws.health"] value to receive AWS Health notifications or initiate actions based on the rules you create. For example, if AWS Health publishes an event about your EC2 instances, then you can use these notifications to take action and update or replace your resources as needed. You can view the planned lifecycle events for your AWS resources in the Scheduled changes tab.
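Such a rule can be created with the EventBridge PutRule API (boto3: events.put_rule). The following sketch builds a rule matching all AWS Health events; the rule name is illustrative, and you can narrow the pattern with detail-type or service filters as needed.

```python
import json

def build_health_rule():
    """An EventBridge rule that matches all AWS Health events,
    for use with events.put_rule."""
    pattern = {"source": ["aws.health"]}
    return {
        "Name": "capture-aws-health-events",  # illustrative rule name
        "EventPattern": json.dumps(pattern),
        "State": "ENABLED",
    }

rule = build_health_rule()
# To create the rule and route matching events for real:
#   import boto3
#   events = boto3.client("events")
#   events.put_rule(**rule)
#   events.put_targets(Rule=rule["Name"], Targets=[...])  # e.g. SNS, Lambda
print(rule["EventPattern"])
```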


Table View – Organizational level

To prioritize events, you can now see scheduled changes in a calendar view. The event has a start time to indicate when the change commences. The status remains as Upcoming until the change occurs or all of the affected resources have been actioned. The event status changes to Completed when all of the affected resources have been actioned. You can also deselect event statuses that you don’t want to focus on. To show more specific event details, select an event to open the split panel view to the right or the bottom of the screen.


Calendar event selected – Organizational level (Affected resources)

When you select the Affected resources tab in the detailed view of an event, you can see relevant account information that can help you reach out to the right people to resolve impaired resources.


Affected resources view – Account level

Integration with Other AWS Services
Using EventBridge integrations that already exist in AWS Health, you can send these change events and their full lifecycle updates to other tools such as JIRA, ServiceNow, and AWS Systems Manager OpsCenter. EventBridge sends all updates to events (for example, timestamps, resource status, and more) to these tools, allowing you to track the status of events in your preferred tooling.


EventBridge integrations

Now Available
Planned lifecycle events for AWS Health are available in all AWS Regions where AWS Health is available except China and GovCloud Regions.
To learn more, visit the AWS Health user guide. You can submit your questions to AWS re:Post for AWS Health, or through your usual AWS Support contacts.


Let’s Architect! Governance best practices

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-governance-best-practices/

Governance plays a crucial role in AWS environments, as it ensures compliance, security, and operational efficiency.

In this Let’s Architect!, we aim to provide valuable insights and best practices on how to configure governance appropriately within a company’s AWS infrastructure. By implementing these best practices, you can establish robust controls, enhance security, and maintain compliance, enabling your organization to fully leverage the power of AWS services while mitigating risks and maximizing operational efficiency.

If you are hungry for more information on governance, check out the Architecture Center’s management and governance page, where you can find a collection of AWS solutions, blueprints, and best practices on this topic.

How Global Payments scales on AWS with governance and controls

As global financial and regulated industry organizations increasingly turn to AWS for scaling their operations, they face the critical challenge of balancing growth with stringent governance and control regulatory requirements.

During this re:Invent 2022 session, Global Payments sheds light on how they leverage AWS cloud operations services to address this challenge head-on. By utilizing AWS Service Catalog, they streamline the deployment of pre-approved, compliant resources and services across their AWS accounts. This not only expedites the provisioning process but also ensures that all resources meet the required regulatory standards.

Take me to this re:Invent 2022 video!


The combination of AWS Service Catalog and AWS Organizations empowers Global Payments to establish robust governance and control mechanisms

Governance and security with infrastructure as code

Maintaining security and compliance throughout the entire deployment process is critical.

In this video, you will discover how cfn-guard can be utilized to validate your deployment pipelines built using AWS CloudFormation. By defining and applying custom rules, cfn-guard empowers you to enforce security policies, prevent misconfigurations, and ensure compliance with regulatory requirements. Moreover, by leveraging cdk-nag, you can catch potential security vulnerabilities and compliance risks early in the development process.

Take me to this governance video!


Learn how to use AWS CloudFormation and the AWS Cloud Development Kit to deploy cloud applications in regulated environments while enforcing security controls

Get more out of service control policies in a multi-account environment

AWS customers often utilize AWS Organizations to effectively manage multiple AWS accounts. There are numerous advantages to employing this approach within an organization, including grouping workloads with shared business objectives, ensuring compliance with regulatory frameworks, and establishing robust isolation barriers based on ownership. Customers commonly utilize separate accounts for development, testing, and production purposes. However, as the number of these accounts grows, the need arises for a centralized approach to establish control mechanisms and guidelines.

In this AWS Security Blog post, we will guide you through various techniques that can enhance the utilization of AWS Organizations’ service control policies (SCPs) in a multi-account environment.

Take me to this AWS Security Blog post!


A sample organization showing the maximum number of SCPs applicable at each level (root, OU, account)

Centralized Logging on AWS

Having a single pane of glass where all Amazon CloudWatch Logs are displayed is crucial for effectively monitoring and understanding the overall performance and health of a system or application.

The AWS Centralized Logging solution facilitates the aggregation, examination, and visualization of CloudWatch Logs through a unified dashboard.

This solution streamlines the consolidation, administration, and analysis of log files originating from diverse sources, including access audit logs, configuration change records, and billing events. Furthermore, it enables the collection of CloudWatch Logs from numerous AWS accounts and Regions.

Take me to this AWS solution!


The Centralized Logging on AWS solution contains the following components: log ingestion, indexing, and visualization

See you next time!

Thanks for joining our discussion on governance best practices! We’ll be back in 2 weeks, when we’ll explore DevOps best practices.

To find all the blogs from this series, you can check the Let’s Architect! list of content on the AWS Architecture Blog.

Architecting for data residency with AWS Outposts rack and landing zone guardrails

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/architecting-for-data-residency-with-aws-outposts-rack-and-landing-zone-guardrails/

This blog post was written by Abeer Naffa’, Sr. Solutions Architect, Solutions Builder AWS, David Filiatrault, Principal Security Consultant, AWS and Jared Thompson, Hybrid Edge SA Specialist, AWS.

In this post, we will explore how organizations can use AWS Control Tower landing zone and AWS Organizations custom guardrails to enable compliance with data residency requirements on AWS Outposts rack. We will discuss how custom guardrails can limit the ability to store, process, and access data so that it remains isolated in specific geographic locations, how they can be used to enforce security and compliance controls, and which prerequisites organizations should consider before implementing these guardrails.

Data residency is a critical consideration for organizations that collect and store sensitive information, such as Personal Identifiable Information (PII), financial, and healthcare data. With the rise of cloud computing and the global nature of the internet, it can be challenging for organizations to make sure that their data is being stored and processed in compliance with local laws and regulations.

One potential solution for addressing data residency challenges with AWS is to use Outposts rack, which allows organizations to run AWS infrastructure on premises and in their own data centers. This lets organizations store and process data in a location of their choosing. An Outpost is seamlessly connected to an AWS Region, where it has access to the full suite of AWS services managed from a single pane of glass, the AWS Management Console or the AWS Command Line Interface (AWS CLI). Outposts rack can be configured to utilize a landing zone to further adhere to data residency requirements.

A landing zone is a set of tools and best practices that helps organizations establish a secure and compliant multi-account structure within a cloud provider. A landing zone can also include Organizations to set policies – guardrails – at the root level, known as Service Control Policies (SCPs), across all member accounts. These can be configured to enforce certain data residency requirements.

When leveraging Outposts rack to meet data residency requirements, it is crucial to have control over the in-scope data movement from the Outposts. This can be accomplished by implementing landing zone best practices and the suggested guardrails. The main focus of this blog post is on the custom policies that restrict data snapshots, prohibit data creation within the Region, and limit data transfer to the Region.


Landing zone best practices and custom guardrails can help when data needs to remain in a specific locality where the Outposts rack is also located.  This can be completed by defining and enforcing policies for data storage and usage within the landing zone organization that you set up. The following prerequisites should be considered before implementing the suggested guardrails:

1. AWS Outposts rack

AWS has installed your Outpost and handed it off to you. An Outpost may comprise one or more racks connected together at the site. This means that you can start using AWS services on the Outpost, and you can manage the Outposts rack using the same tools and interfaces that you use in AWS Regions.

2. Landing Zone Accelerator on AWS

We recommend using Landing Zone Accelerator on AWS (LZA) to deploy a landing zone for your organization. Make sure that the accelerator is configured for the appropriate Region and industry. To do this, you must meet the following prerequisites:

    • A clear understanding of your organization’s compliance requirements, including the specific Region and industry rules in which you operate.
    • Knowledge of the available LZA configurations and their capabilities, such as the compliance frameworks with which you align.
    • The necessary permissions to deploy the LZA and configure it for your organization’s specific requirements.

Note that LZAs are designed to help organizations quickly set up a secure, compliant multi-account environment. However, it’s not a one-size-fits-all solution, and you must align it with your organization’s specific requirements.

3. Set up the data residency guardrails

Using Organizations, you must make sure that the Outpost is ordered within a workload account in the landing zone.


Figure 1: Landing Zone Accelerator – Outposts workload on AWS high level Architecture

Utilizing Outposts rack for regulated components

When local regulations require regulated workloads to stay within a specific boundary, or when an AWS Region or AWS Local Zone isn’t available in your jurisdiction, you can still choose to host your regulated workloads on Outposts rack for a consistent cloud experience. When opting for Outposts rack, note that, as part of the shared responsibility model, customers are responsible for attesting to physical security, access controls, and compliance validation regarding the Outposts, as well as environmental requirements for the facility, networking, and power. Utilizing Outposts rack requires that you procure and manage the data center within the city, state, province, or country boundary for your applications’ regulated components, as required by local regulations.

Procuring two or more racks in diverse data centers can help with high availability for your workloads, because it provides redundancy in case of a single rack or server failure. Additionally, having redundant network paths between the Outposts rack and the parent Region can help make sure that your application remains connected and continues to operate even if one network path fails.

However, for regulated workloads with strict service level agreements (SLA), you may choose to spread Outposts racks across two or more isolated data centers within regulated boundaries. This helps make sure that your data remains within the designated geographical location and meets local data residency requirements.

In this post, we consider a scenario with one data center, but consider the specific requirements of your workloads and the regulations that apply to determine the most appropriate high availability configurations for your case.

Outposts rack workload data residency guardrails

Organizations provide central governance and management for multiple accounts. Central security administrators use SCPs with Organizations to establish controls to which all AWS Identity and Access Management (IAM) principals (users and roles) adhere.

Now, you can use SCPs to set permission guardrails. Suggested preventative controls for data residency on Outposts rack, implemented through SCPs, are shown in the following sections. SCPs enable you to set permission guardrails by defining the maximum available permissions for IAM entities in an account. If an SCP denies an action for an account, then none of the entities in the account can take that action, even if their IAM permissions allow it. The guardrails set in SCPs apply to all IAM entities in the account, including all users, roles, and the account root user.

Upon finalizing these prerequisites, you can create the guardrails for the Outposts organizational unit (OU).

Note that while the following guidelines serve as helpful guardrails – SCPs – for data residency, you should consult internally with legal and security teams for specific organizational requirements.

To exercise better control over workloads on the Outposts rack and prevent data transfer from Outposts to the Region or data storage outside the Outposts, consider implementing the following guardrails. Local regulations may also dictate that you set up these additional guardrails.

  1. When your data residency requirements require restricting data transfer or storage to the Region, consider the following guardrails:

a. Deny copying data from Outposts to the Region for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), Amazon ElastiCache, and AWS DataSync (“DenyCopyToRegion”).

b. Deny Amazon Simple Storage Service (Amazon S3) put actions in the Region (“DenyPutObjectToRegionalBuckets”).

If your data residency requirements mandate restrictions on data storage in the Region, consider implementing this guardrail to prevent the use of S3 in the Region.

Note: You can use Amazon S3 for Outposts.

c. If your data residency requirements mandate restrictions on data storage in the Region, consider implementing the “DenyDirectTransferToRegion” guardrail.

Metadata, such as tags, and operational data, such as AWS KMS keys, are out of scope.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCopyToRegion",
      "Action": [ ... ],
      "Resource": "*",
      "Effect": "Deny"
    },
    {
      "Sid": "DenyDirectTransferToRegion",
      "Action": [ ... ],
      "Resource": "*",
      "Effect": "Deny"
    },
    {
      "Sid": "DenyPutObjectToRegionalBuckets",
      "Action": [ ... ],
      "Resource": ["arn:aws:s3:::*"],
      "Effect": "Deny"
    }
  ]
}
  2. If your data residency requirements require limitations on data storage in the Region, consider implementing the “DenySnapshotsToRegion” and “DenySnapshotsNotOutposts” guardrails to restrict the use of snapshots in the Region.

a. Deny creating snapshots of your Outpost data in the Region “DenySnapshotsToRegion”

 Make sure to update the Outposts “<outpost_arn_pattern>”.

b. Deny copying or modifying Outposts Snapshots “DenySnapshotsNotOutposts”

Make sure to update the Outposts “<outpost_arn_pattern>”.

Note: “<outpost_arn_pattern>” default is arn:aws:outposts:*:*:outpost/*

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenySnapshotsToRegion",
      ...
    },
    {
      "Sid": "DenySnapshotsNotOutposts",
      ...
    }
  ]
}

  3. This guardrail helps prevent the launch of Amazon EC2 instances or the creation of network interfaces in non-Outposts subnets. It is advisable to keep data residency workloads within the Outposts rather than the Region, which helps your organization maintain better control over regulated workloads and improves governance over your AWS organization.

Make sure to update the Outposts subnets “<outpost_subnet_arns>”.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNotOutpostSubnet",
      "Action": [ ... ],
      "Resource": [ ... ],
      "Effect": "Deny",
      "Condition": {
        "ForAllValues:ArnNotEquals": {
          "ec2:Subnet": ["<outpost_subnet_arns>"]
        }
      }
    }
  ]
}

Additional considerations

When implementing data residency guardrails on Outposts rack, consider backup and disaster recovery strategies so that your data is protected in the event of an outage or other unexpected events. This may include creating regular backups of your data, implementing disaster recovery plans and procedures, and using redundancy and failover systems to minimize the impact of any potential disruptions. Your backup and disaster recovery systems should also comply with any relevant data residency regulations and requirements, and you should test them regularly to confirm that they function as intended.

Additionally, the SCPs for Outposts rack provided in the example above do not block the “logs:PutLogEvents” action. Therefore, even if you implement data residency guardrails on an Outpost, the application may still log data to CloudWatch Logs in the Region.


By default, application-level logs on Outposts rack are not automatically sent to Amazon CloudWatch Logs in the Region. You can configure the CloudWatch Logs agent on Outposts rack to collect and send your application-level logs to CloudWatch Logs.

logs:PutLogEvents does transmit data to the Region, but it is not blocked by the provided SCPs because most use cases still need this logging API. However, if blocking is desired, add the action to the first recommended guardrail. If you want specific roles to be allowed, combine it with the ArnNotLike condition example referenced in the previous highlight.
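A minimal sketch of that combination follows. The role name CentralLoggingRole and the account ID are illustrative placeholders, not values from this post:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPutLogEventsToRegion",
      "Effect": "Deny",
      "Action": ["logs:PutLogEvents"],
      "Resource": "*",
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/CentralLoggingRole"
        }
      }
    }
  ]
}
```

Attached as an SCP, this denies logs:PutLogEvents for every principal except the listed role.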


The combined use of Outposts rack and the suggested guardrails via AWS Organizations policies enables you to exercise better control over the movement of the data. By creating a landing zone for your organization, you can apply SCPs to your Outposts racks that will help make sure that your data remains within a specific geographic location, as required by the data residency regulations.

Note that, while custom guardrails can help you manage data residency on Outposts rack, it’s critical to thoroughly review your policies, procedures, and configurations to make sure that they are compliant with all relevant data residency regulations and requirements. Regularly testing and monitoring your systems can help make sure that your data is protected and your organization stays compliant.


The most visited AWS DevOps blogs in 2022

Post Syndicated from original https://aws.amazon.com/blogs/devops/the-most-visited-aws-devops-blogs-in-2022/

As we kick off 2023, I wanted to take a moment to highlight the top posts from 2022. Without further ado, here are the top 10 AWS DevOps Blog posts of 2022.

#1: Integrating with GitHub Actions – CI/CD pipeline to deploy a Web App to Amazon EC2

Coming in at #1, Mahesh Biradar, Solutions Architect, and Suresh Moolya, Cloud Application Architect, use GitHub Actions and AWS CodeDeploy to deploy a sample application to Amazon Elastic Compute Cloud (Amazon EC2).

Architecture diagram from the original post.

#2: Deploy and Manage GitLab Runners on Amazon EC2

Sylvia Qi, Senior DevOps Architect, and Sebastian Carreras, Senior Cloud Application Architect, guide us through utilizing infrastructure as code (IaC) to automate GitLab Runner deployment on Amazon EC2.

Architecture diagram from the original post.

#3: Multi-Region Terraform Deployments with AWS CodePipeline using Terraform Built CI/CD

Lerna Ekmekcioglu, Senior Solutions Architect, and Jack Iu, Global Solutions Architect, demonstrate best practices for multi-Region deployments using HashiCorp Terraform, AWS CodeBuild, and AWS CodePipeline.

Architecture diagram from the original post.

#4: Use the AWS Toolkit for Azure DevOps to automate your deployments to AWS

Mahmoud Abid, Senior Customer Delivery Architect, leverages the AWS Toolkit for Azure DevOps to deploy AWS CloudFormation stacks.

Architecture diagram from the original post.

#5: Deploy and manage OpenAPI/Swagger RESTful APIs with the AWS Cloud Development Kit

Luke Popplewell, Solutions Architect, demonstrates using AWS Cloud Development Kit (AWS CDK) to build and deploy Amazon API Gateway resources using the OpenAPI specification.

Architecture diagram from the original post.

#6: How to unit test and deploy AWS Glue jobs using AWS CodePipeline

Praveen Kumar Jeyarajan, Senior DevOps Consultant, and Vaidyanathan Ganesa Sankaran, Sr Modernization Architect, discuss unit testing Python-based AWS Glue Jobs in AWS CodePipeline.

Architecture diagram from the original post.

#7: Jenkins high availability and disaster recovery on AWS

James Bland, APN Global Tech Lead for DevOps, and Welly Siauw, Sr. Partner Solutions Architect, discuss the challenges of architecting Jenkins for scale and high availability (HA).

Architecture diagram from the original post.

#8: Monitor AWS resources created by Terraform in Amazon DevOps Guru using tfdevops

Harish Vaswani, Senior Cloud Application Architect, and Rafael Ramos, Solutions Architect, explain how you can configure and use tfdevops to easily enable Amazon DevOps Guru for your existing AWS resources created by Terraform.

Architecture diagram from the original post.

#9: Manage application security and compliance with the AWS Cloud Development Kit and cdk-nag

Arun Donti, Senior Software Engineer with Twitch, demonstrates how to integrate cdk-nag into an AWS Cloud Development Kit (AWS CDK) application to provide continual feedback and help align your applications with best practices.

Featured image from the original post.

#10: Smithy Server and Client Generator for TypeScript (Developer Preview)

Adam Thomas, Senior Software Development Engineer, demonstrates how you can use Smithy to define services and SDKs and deploy them to AWS Lambda using a generated client.

Architecture diagram from the original post.

A big thank you to all our readers! Your feedback and collaboration are appreciated and help us produce better content.



About the author:

Brian Beach

Brian Beach has over 20 years of experience as a Developer and Architect. He is currently a Principal Solutions Architect at Amazon Web Services. He holds a Computer Engineering degree from NYU Poly and an MBA from Rutgers Business School. He is the author of “Pro PowerShell for Amazon Web Services” from Apress. He is a regular author and has spoken at numerous events. Brian lives in North Carolina with his wife and three kids.

New ML Governance Tools for Amazon SageMaker – Simplify Access Control and Enhance Transparency Over Your ML Projects

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/new-ml-governance-tools-for-amazon-sagemaker-simplify-access-control-and-enhance-transparency-over-your-ml-projects/

As companies increasingly adopt machine learning (ML) for their business applications, they are looking for ways to improve governance of their ML projects with simplified access control and enhanced visibility across the ML lifecycle. A common challenge in that effort is managing the right set of user permissions across different groups and ML activities. For example, a data scientist in your team that builds and trains models usually requires different permissions than an MLOps engineer that manages ML pipelines. Another challenge is improving visibility over ML projects. For example, model information, such as intended use, out-of-scope use cases, risk rating, and evaluation results, is often captured and shared via emails or documents. In addition, there is often no simple mechanism to monitor and report on your deployed model behavior.

That’s why I’m excited to announce a new set of ML governance tools for Amazon SageMaker.

As an ML system or platform administrator, you can now use Amazon SageMaker Role Manager to define custom permissions for SageMaker users in minutes, so you can onboard users faster. As an ML practitioner, business owner, or model risk and compliance officer, you can now use Amazon SageMaker Model Cards to document model information from conception to deployment and Amazon SageMaker Model Dashboard to monitor all your deployed models through a unified dashboard.

Let’s dive deeper into each tool, and I’ll show you how to get started.

Introducing Amazon SageMaker Role Manager
SageMaker Role Manager lets you define custom permissions for SageMaker users in minutes. It comes with a set of predefined policy templates for different personas and ML activities. Personas represent the different types of users that need permissions to perform ML activities in SageMaker, such as data scientists or MLOps engineers. ML activities are a set of permissions to accomplish a common ML task, such as running SageMaker Studio applications or managing experiments, models, or pipelines. You can also define additional personas, add ML activities, and attach your own managed policies to match your specific needs. Once you have selected the persona type and the set of ML activities, SageMaker Role Manager automatically creates the required AWS Identity and Access Management (IAM) role and policies that you can assign to SageMaker users.

A Primer on SageMaker and IAM Roles
A role is an IAM identity that has permissions to perform actions with AWS services. Besides user roles that are assumed by a user via federation from an Identity Provider (IdP) or the AWS Console, Amazon SageMaker requires service roles (also known as execution roles) to perform actions on behalf of the user. SageMaker Role Manager helps you create these service roles:

  • SageMaker Compute Role – Gives SageMaker compute resources the ability to perform tasks such as training and inference, typically used via PassRole. You can select the SageMaker Compute Role persona in SageMaker Role Manager to create this role. Depending on the ML activities you select in your SageMaker service roles, you will need to create this compute role first.
  • SageMaker Service Role – Some AWS services, including SageMaker, require a service role to perform actions on your behalf. You can select the Data Scientist, MLOps, or Custom persona in SageMaker Role Manager to start creating service roles with custom permissions for your ML practitioners.
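What makes these service roles assumable by SageMaker is their trust policy. A minimal sketch of the trust relationship such a role carries, using the standard SageMaker service principal (shown for illustration rather than copied from Role Manager output):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "sagemaker.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

SageMaker Role Manager generates this trust relationship for you along with the permission policies derived from the selected ML activities.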

Now, let me show you how this works in practice.

There are two ways to get to SageMaker Role Manager, either through Getting started in the SageMaker console or when you select Add user in the SageMaker Studio Domain control panel.

I start in the SageMaker console. Under Configure role, select Create a role. This opens a workflow that guides you through all required steps.

Amazon SageMaker Admin Hub - Getting Started

Let’s assume I want to create a SageMaker service role with a specific set of permissions for my team of data scientists. In Step 1, I select the predefined policy template for the Data Scientist persona.

Amazon SageMaker Role Manager - Select persona

I can also define the network and encryption settings in this step by selecting Amazon Virtual Private Cloud (Amazon VPC) subnets, security groups, and encryption keys.

In Step 2, I select what ML activities data scientists in my team need to perform.

Amazon SageMaker Admin Hub - Configure ML activities

Some of the selected ML activities might require you to specify the Amazon Resource Name (ARN) of the SageMaker Compute Role so SageMaker compute resources have the ability to perform the tasks.

In Step 3, you can attach additional IAM policies and add tags to the role if needed. Tags help you identify and organize your AWS resources. You can use tags to add attributes such as project name, cost center, or location information to a role. After a final review of the settings in Step 4, select Submit, and the role is created.

In just a few minutes, I set up a SageMaker service role, and I’m now ready to onboard data scientists in SageMaker with custom permissions in place.

Introducing Amazon SageMaker Model Cards
SageMaker Model Cards helps you streamline model documentation throughout the ML lifecycle by creating a single source of truth for model information. For models trained on SageMaker, SageMaker Model Cards discovers and autopopulates details such as training jobs, training datasets, model artifacts, and inference environment. You can also record model details such as the model’s intended use, risk rating, and evaluation results. For compliance documentation and model evidence reporting, you can export your model cards to a PDF file and easily share them with your customers or regulators.

To start creating SageMaker Model Cards, go to the SageMaker console, select Governance in the left navigation menu, and select Model cards.

Amazon SageMaker Model Cards

Select Create model card to document your model information.

Amazon SageMaker Model Card

Amazon SageMaker Model Cards

Introducing Amazon SageMaker Model Dashboard
SageMaker Model Dashboard lets you monitor all your models in one place. With this bird’s-eye view, you can now see which models are used in production, view model cards, visualize model lineage, track resources, and monitor model behavior through an integration with SageMaker Model Monitor and SageMaker Clarify. The dashboard automatically alerts you when models are not being monitored or deviate from expected behavior. You can also drill deeper into individual models to troubleshoot issues.

To access SageMaker Model Dashboard, go to the SageMaker console, select Governance in the left navigation menu, and select Model dashboard.

Amazon SageMaker Model Dashboard

Note: The risk rating shown above is for illustrative purposes only and may vary based on input provided by you.

Now Available
Amazon SageMaker Role Manager, SageMaker Model Cards, and SageMaker Model Dashboard are available today at no additional charge in all the AWS Regions where Amazon SageMaker is available except for the AWS GovCloud and AWS China Regions.

To learn more, visit ML governance with Amazon SageMaker and check the developer guide.

Start building your ML projects with our new governance tools for Amazon SageMaker today.

— Antje

New – AWS Config Rules Now Support Proactive Compliance

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-aws-config-rules-now-support-proactive-compliance/

When operating a business, you have to find the right balance between speed and control for your cloud operations. On one side, you want to have the ability to quickly provision the cloud resources you need for your applications. At the same time, depending on your industry, you need to maintain compliance with regulatory, security, and operational best practices.

AWS Config provides rules, which you can run in detective mode to evaluate if the configuration settings of your AWS resources are compliant with your desired configuration settings. Today, we are extending AWS Config rules to support proactive mode so that they can be run at any time before provisioning, saving the time you would otherwise spend implementing custom pre-deployment validations.

When creating standard resource templates, platform teams can run AWS Config rules in proactive mode to validate that the templates are compliant before sharing them across your organization. When implementing a new service or new functionality, development teams can run rules in proactive mode as part of their continuous integration and continuous delivery (CI/CD) pipeline to identify noncompliant resources.

You can also use AWS CloudFormation Guard in your deployment pipelines to check for compliance proactively and ensure that a consistent set of policies is applied both before and after resources are provisioned.

Let’s see how this works in practice.

Using Proactive Compliance with AWS Config
In the AWS Config console, I choose Rules in the navigation pane. In the rules table, I see the new Enabled evaluation mode column that specifies whether the rule is proactive or detective. Let’s set up my first rule.

Console screenshot.

I choose Add rule, and then I enter rds-storage in the AWS Managed Rules search box to find the rds-storage-encrypted rule. This rule checks whether storage encryption is enabled for your Amazon Relational Database Service (RDS) DB instances and can be added in proactive or detective evaluation mode. I choose Next.

Console screenshot.

In the Evaluation mode section, I turn on proactive evaluation. Now both the proactive and detective evaluation switches are enabled.

Console screenshot.

I leave all the other settings to their default values and choose Next. In the next step, I review the configuration and add the rule.

Console screenshot.

Now, I can use proactive compliance via the AWS Config API (including the AWS Command Line Interface (CLI) and AWS SDKs) or with CloudFormation Guard. In my CI/CD pipeline, I can use the AWS Config API to check the compliance of a resource before creating it. When deploying using AWS CloudFormation, I can set up a CloudFormation hook to proactively check my configuration before the actual deployment happens.

Let’s do an example using the AWS CLI. First, I call the StartResourceEvaluation API, passing as input the resource ID (for reference only), the resource type, and its configuration using the CloudFormation schema. For simplicity, in the database configuration, I only use the StorageEncrypted option and set it to true to pass the evaluation. I use an evaluation timeout of 60 seconds, which is more than enough for this rule.

aws configservice start-resource-evaluation --evaluation-mode PROACTIVE \
    --resource-details '{"ResourceId":"myDB",
                         "ResourceType":"AWS::RDS::DBInstance",
                         "ResourceConfiguration":"{\"StorageEncrypted\":true}",
                         "ResourceConfigurationSchemaType":"CFN_RESOURCE_SCHEMA"}' \
    --evaluation-timeout 60
{
    "ResourceEvaluationId": "be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d"
}

The output includes the ResourceEvaluationId, which I use to check the evaluation status with the GetResourceEvaluationSummary API. In the beginning, the evaluation is IN_PROGRESS. It usually takes a few seconds to get a COMPLIANT or NON_COMPLIANT result.

aws configservice get-resource-evaluation-summary \
    --resource-evaluation-id be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d
{
    "ResourceEvaluationId": "be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d",
    "EvaluationMode": "PROACTIVE",
    "EvaluationStatus": {
        "Status": "SUCCEEDED"
    },
    "EvaluationStartTimestamp": "2022-11-15T19:13:46.029000+00:00",
    "Compliance": "COMPLIANT",
    "ResourceDetails": {
        "ResourceId": "myDB",
        "ResourceType": "AWS::RDS::DBInstance",
        "ResourceConfiguration": "{\"StorageEncrypted\":true}"
    }
}

As expected, the Amazon RDS configuration is compliant with the rds-storage-encrypted rule. If I repeat the previous steps with StorageEncrypted set to false, I get a noncompliant result.

If more than one rule is enabled for a resource type, all applicable rules are run in proactive mode for the resource evaluation. To find out individual rule-level compliance for the resource, I can call the GetComplianceDetailsByResource API:

aws configservice get-compliance-details-by-resource \
    --resource-evaluation-id be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d
{
    "EvaluationResults": [
        {
            "EvaluationResultIdentifier": {
                "EvaluationResultQualifier": {
                    "ConfigRuleName": "rds-storage-encrypted",
                    "ResourceType": "AWS::RDS::DBInstance",
                    "ResourceId": "myDB",
                    "EvaluationMode": "PROACTIVE"
                },
                "OrderingTimestamp": "2022-11-15T19:14:42.588000+00:00",
                "ResourceEvaluationId": "be2a915a-540d-4595-ac7b-e105e39b7980-1847cb6320d"
            },
            "ComplianceType": "COMPLIANT",
            "ResultRecordedTime": "2022-11-15T19:14:55.588000+00:00",
            "ConfigRuleInvokedTime": "2022-11-15T19:14:42.588000+00:00"
        }
    ]
}

If, when looking at these details, your desired rule is not invoked, be sure to check that proactive mode is turned on.

Availability and Pricing
Proactive compliance will be available in all commercial AWS Regions where AWS Config is offered, but it might take a few days to deploy this new capability across all these Regions. I’ll update this post when this deployment is complete. To see which AWS Config rules can be run in proactive mode, see the Developer Guide.

You are charged based on the number of AWS Config rule evaluations recorded. A rule evaluation is recorded every time a resource is evaluated for compliance against an AWS Config rule. Rule evaluations can be run in detective mode and/or in proactive mode, if available. If you are running a rule in both detective mode and proactive mode, you will be charged for only the evaluations in detective mode. For more information, see AWS Config pricing.

With this new feature, you can use AWS Config to check your resource configurations against your rules before provisioning and avoid implementing your own custom validations.


New for AWS Control Tower – Comprehensive Controls Management (Preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-comprehensive-controls-management-preview/

Today, customers in regulated industries face the challenge of defining and enforcing controls needed to meet compliance and security requirements while empowering engineers to make their design choices. In addition to addressing risk, reliability, performance, and resiliency requirements, organizations may also need to comply with frameworks and standards such as PCI DSS and NIST 800-53.

Building controls that account for service relationships and their dependencies is time-consuming and expensive. Sometimes customers restrict engineering access to AWS services and features until their cloud architects identify risks and implement their own controls.

To make that easier, today we are launching comprehensive controls management in AWS Control Tower. You can use it to apply managed preventative, detective, and proactive controls to accounts and organizational units (OUs) by service, control objective, or compliance framework. AWS Control Tower does the mapping between them on your behalf, saving time and effort.

With this new capability, you can now also use AWS Control Tower to turn on AWS Security Hub detective controls across all accounts in an OU. In this way, Security Hub controls are enabled in every AWS Region that AWS Control Tower governs.

Let’s see how this works in practice.

Using AWS Control Tower Comprehensive Controls Management
In the AWS Control Tower console, there is a new Controls library section. There, I choose All controls. There are now more than three hundred controls available. For each control, I see which AWS service it is related to, the control objective this control is part of, the implementation (such as AWS Config rule or AWS CloudFormation Guard rule), the behavior (preventive, detective, or proactive), and the frameworks it maps to (such as NIST 800-53 or PCI DSS).

Console screenshot.

In the Find controls search box, I look for a preventive control called CT.CLOUDFORMATION.PR.1. This control uses a service control policy (SCP) to protect controls that use CloudFormation hooks and is required by the control that I want to turn on next. Then, I choose Enable control.

Console screenshot.

Then, I select the OU for which I want to enable this control.

Console screenshot.

Now that I have set up this control, let’s see how controls are presented in the console in categories. I choose Categories in the navigation pane. There, I can browse controls grouped as Frameworks, Services, and Control objectives. By default, the Frameworks tab is selected.

Console screenshot.

I select a framework (for example, PCI DSS version 3.2.1) to see all the related controls and control objectives. To implement a control, I can select the control from the list and choose Enable control.

Console screenshot.

I can also manage controls by AWS service. When I select the Services tab, I see a list of AWS services and the related control objectives and controls.

Console screenshot.

I choose Amazon DynamoDB to see the controls that I can turn on for this service.

Console screenshot.

I select the Control objectives tab. When I need to assess a control objective, this is where I have access to the list of related controls to turn on.

Console screenshot.

I choose Encrypt data at rest to see and search through the available controls for that control objective. I can also check which services are covered in this specific case. I type RDS in the search bar to find the controls related to Amazon Relational Database Service (RDS) for this control objective.

I choose CT.RDS.PR.16 – Require an Amazon RDS database cluster to have encryption at rest configured and then Enable control.

Console screenshot.

I select the OU for which I want to enable the control, and I proceed. All the AWS accounts in this organization’s OU will have this control enabled in all the Regions that AWS Control Tower governs.

Console screenshot.

After a few minutes, the AWS Control Tower setup is updated. Now, the accounts in this OU have proactive control CT.RDS.PR.16 turned on. When an account in this OU deploys a CloudFormation stack, any Amazon RDS database cluster has to have encryption at rest configured. Because this control is proactive, it’ll be checked by a CloudFormation hook before the deployment starts. This saves a lot of time compared to a detective control that would find the issue only when the CloudFormation deployment is in progress or has terminated. This also improves my security posture by preventing something that’s not allowed as opposed to reacting to it after the fact.

Availability and Pricing
Comprehensive controls management is available in preview today in all AWS Regions where AWS Control Tower is offered. These enhanced control capabilities reduce the time it takes you to vet AWS services from months or weeks to minutes. They help you use AWS by taking on the heavy burden of defining, mapping, and managing the controls required to meet the most common control objectives and regulations.

There is no additional charge to use these new capabilities during the preview. However, when you set up AWS Control Tower, you will begin to incur costs for AWS services configured to set up your landing zone and mandatory controls. For more information, see AWS Control Tower pricing.

Simplify how you implement compliance and security requirements with AWS Control Tower.


New – Amazon CloudWatch Cross-Account Observability

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-cloudwatch-cross-account-observability/

Deploying applications using multiple AWS accounts is a good practice to establish security and billing boundaries between teams and reduce the impact of operational events. When you adopt a multi-account strategy, you have to analyze telemetry data that is scattered across several accounts. To give you the flexibility to monitor all the components of your applications from a centralized view, today we are introducing Amazon CloudWatch cross-account observability, a new capability to search, analyze, and correlate cross-account telemetry data stored in CloudWatch, such as metrics, logs, and traces.

You can now set up a central monitoring AWS account and connect your other accounts as sources. Then, you can search, audit, and analyze logs across your applications to drill down into operational issues in a matter of seconds. You can discover and visualize metrics from many accounts in a single place and create alarms that evaluate metrics belonging to other accounts. You can start with an aggregated cross-account view of your application to visually identify the resources exhibiting errors and dive deep into correlated traces, metrics, and logs to find the root cause. This seamless cross-account data access and navigation helps reduce the time and effort required to troubleshoot issues.

Let’s see how this works in practice.

Configuring CloudWatch Cross-Account Observability
To enable cross-account observability, CloudWatch has introduced the concept of monitoring and source accounts:

  • A monitoring account is a central AWS account that can view and interact with observability data shared by other accounts.
  • A source account is an individual AWS account that shares observability data and resources with one or more monitoring accounts.

You can configure multiple monitoring accounts with the level of visibility you need. CloudWatch cross-account observability is also integrated with AWS Organizations. For example, I can have a monitoring account with wide access to all accounts in my organization for central security and operational teams and then configure other monitoring accounts with more restricted visibility across a business unit for individual service owners.

First, I configure the monitoring account. In the CloudWatch console, I choose Settings in the navigation pane. In the Monitoring account configuration section, I choose Configure.

Console screenshot.

Now I can choose which telemetry data can be shared with the monitoring account: Logs, Metrics, and Traces. I leave all three enabled.

Console screenshot.

To list the source accounts that will share data with this monitoring account, I can use account IDs, organization IDs, or organization paths. I can use an organization ID to include all the accounts in the organization or an organization path to include all the accounts in a department or business unit. In my case, I have only one source account to link, so I enter the account ID.

Console screenshot.

When using the CloudWatch console in the monitoring account to search and display telemetry data, I see the account ID that shared that data. Because account IDs are not easy to remember, I can display a more descriptive “account label.” When configuring the label via the console, I can choose between the account name or the email address used to identify the account. When using an email address, I can also choose whether to include the domain. For example, if all the emails used to identify my accounts are using the same domain, I can use as labels the email addresses without that domain.

There is a quick reminder that cross-account observability only works in the selected Region. If I have resources in multiple Regions, I can configure cross-account observability in each Region. To complete the configuration of the monitoring account, I choose Configure.

Console screenshot.

The monitoring account is now enabled, and I choose Resources to link accounts to determine how to link my source accounts.

Console screenshot.

To link source accounts in an AWS organization, I can download an AWS CloudFormation template to be deployed in a CloudFormation delegated administration account.

To link individual accounts, I can either download a CloudFormation template to be deployed in each account or copy a URL that helps me use the console to set up the accounts. I copy the URL and paste it into another browser where I am signed in as the source account. Then, I can configure which telemetry data to share (logs, metrics, or traces). The Amazon Resource Name (ARN) of the monitoring account configuration is pre-filled because I copy-pasted the URL in the previous step. If I don’t use the URL, I can copy the ARN from the monitoring account and paste it here. I confirm the label used to identify my source account and choose Link.
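Under the hood, linking a source account creates a CloudWatch Observability Access Manager link to the monitoring account’s sink. A minimal CloudFormation sketch of such a link, assuming the AWS::Oam::Link resource type (the SinkIdentifier ARN is a placeholder, not a value from this post):

```json
{
  "Resources": {
    "SourceAccountLink": {
      "Type": "AWS::Oam::Link",
      "Properties": {
        "LabelTemplate": "$AccountName",
        "ResourceTypes": [
          "AWS::CloudWatch::Metric",
          "AWS::Logs::LogGroup",
          "AWS::XRay::Trace"
        ],
        "SinkIdentifier": "arn:aws:oam:us-east-1:111122223333:sink/EXAMPLE-SINK-ID"
      }
    }
  }
}
```

The ResourceTypes list mirrors the telemetry choices made in the console: metrics, logs, and traces.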

In the Confirm monitoring account permission dialog, I type Confirm to complete the configuration of the source account.

Using CloudWatch Cross-Account Observability
To see how things work with cross-account observability, I deploy a simple cross-account application using two AWS Lambda functions, one in the source account (multi-account-function-a) and one in the monitoring account (multi-account-function-b). When triggered, the function in the source account publishes an event to an Amazon EventBridge event bus in the monitoring account. There, an EventBridge rule triggers the execution of the function in the monitoring account. This is a simplified setup using only two accounts. You’d probably have your workloads running in multiple source accounts.

Architectural diagram.

In the Lambda console, the two Lambda functions have Active tracing and Enhanced monitoring enabled. To collect telemetry data, I use the AWS Distro for OpenTelemetry (ADOT) Lambda layer. The Enhanced monitoring option turns on Amazon CloudWatch Lambda Insights to collect and aggregate Lambda function runtime performance metrics.

Console screenshot.

I prepare a test event in the Lambda console of the source account. Then, I choose Test and run the function a few times.

Console screenshot.

Now, I want to understand what the components of my application, running in different accounts, are doing. I start with logs and then move to metrics and traces.

In the CloudWatch console of the monitoring account, I choose Log groups in the Logs section of the navigation pane. There, I search for and find the log groups created by the two Lambda functions running in different AWS accounts. As expected, each log group shows the account ID and label originating the data. I select both log groups and choose View in Logs Insights.

Console screenshot.

I can now search and analyze logs from different AWS accounts using the CloudWatch Logs Insights query syntax. For example, I run a simple query to see the last twenty messages in the two log groups. I include the @log field to see the account ID that the log belongs to.
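As a sketch, a query for those last twenty messages could look like this (@timestamp, @message, and @log are the standard Logs Insights discovered fields):

```
fields @timestamp, @log, @message
| sort @timestamp desc
| limit 20
```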

Console screenshot.

I can now also create Contributor Insights rules on cross-account log groups. This enables me, for example, to have a holistic view of what security events are happening across accounts or identify the most expensive Lambda requests in a serverless application running in multiple accounts.

Then, I choose All metrics in the Metrics section of the navigation pane. To see the Lambda function runtime performance metrics collected by CloudWatch Lambda Insights, I choose LambdaInsights and then function_name. There, I search for multi-account and memory to see the memory metrics. Again, I see the account IDs and labels that tell me that these metrics are coming from two different accounts. From here, I can just select the metrics I am interested in and create cross-account dashboards and alarms. With the metrics selected, I choose Add to dashboard in the Actions dropdown.

Console screenshot.

I create a new dashboard and choose the Stacked area widget type. Then, I choose Add to dashboard.

Console screenshot.

I do the same for the CPU and memory metrics (but using different widget types) to quickly create a cross-account dashboard where I can keep my multi-account setup under control. Well, there isn’t a lot of traffic yet, but I am hopeful.

Console screenshot.

Finally, I choose Service map from the X-Ray traces section of the navigation pane to see the flow of my multi-account application. In the service map, the client triggers the Lambda function in the source account. Then, an event is sent to the other account to run the other Lambda function.

Console screenshot.

In the service map, I select the gear icon for the function running in the source account (multi-account-function-a) and then View traces to look at the individual traces. The traces contain data from multiple AWS accounts. I can search for traces coming from a specific account using a syntax such as:

service(id(account.id: "123412341234"))

Console screenshot.

The service map stitches together telemetry from multiple accounts in a single place, delivering a consolidated view of my cross-account applications. This helps me pinpoint issues quickly and reduces resolution time.

Availability and Pricing
Amazon CloudWatch cross-account observability is available today in all commercial AWS Regions using the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. AWS CloudFormation support is coming in the next few days. Cross-account observability in CloudWatch comes with no extra cost for logs and metrics, and the first trace copy is free. See the Amazon CloudWatch pricing page for details.

Having a central point of view to monitor all the AWS accounts that you use gives you a better understanding of your overall activities and helps solve issues for applications that span multiple accounts.

Start using CloudWatch cross-account observability to monitor all your resources.


New – Amazon Redshift Support in AWS Backup

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-redshift-support-in-aws-backup/

With Amazon Redshift, you can analyze data in the cloud at any scale. Amazon Redshift offers native data protection capabilities to protect your data using automatic and manual snapshots. This works great by itself, but when you’re using other AWS services, you have to configure more than one tool to manage your data protection policies.

To make this easier, I am happy to share that we added support for Amazon Redshift in AWS Backup. AWS Backup allows you to define a central backup policy to manage data protection of your applications and can now also protect your Amazon Redshift clusters. In this way, you have a consistent experience when managing data protection across all supported services. If you have a multi-account setup, the centralized policies in AWS Backup let you define your data protection policies across all your accounts within your AWS Organizations. To help you meet your regulatory compliance needs, AWS Backup now includes Amazon Redshift in its auditor-ready reports. You also have the option to use AWS Backup Vault Lock to have immutable backups and prevent malicious or inadvertent changes.

Let’s see how this works in practice.

Using AWS Backup with Amazon Redshift
The first step is to turn on the Redshift resource type for AWS Backup. In the AWS Backup console, I choose Settings in the navigation pane and then, in the Service opt-in section, Configure resources. There, I toggle the Redshift resource type on and choose Confirm.

Console screenshot.

Now, I can create or update a backup plan to include the backup of all, or some, of my Redshift clusters. In the backup plan, I can define how often these backups should be taken and for how long they should be kept. For example, I can have daily backups with one week of retention, weekly backups with one month of retention, and monthly backups with one year of retention.
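As a sketch of how those three rules could look in a backup plan, here is a plan document using the field names from the AWS Backup CreateBackupPlan API (the rule names, cron schedules, and vault name are illustrative):

```json
{
  "BackupPlanName": "redshift-plan",
  "Rules": [
    { "RuleName": "Daily", "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "Lifecycle": { "DeleteAfterDays": 7 } },
    { "RuleName": "Weekly", "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 5 ? * SUN *)",
      "Lifecycle": { "DeleteAfterDays": 30 } },
    { "RuleName": "Monthly", "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 5 1 * ? *)",
      "Lifecycle": { "DeleteAfterDays": 365 } }
  ]
}
```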

I can also create on-demand backups. Let’s look at this in more detail. I choose Protected resources in the navigation pane and then Create on-demand backup.

I select Redshift in the Resource type dropdown. For Cluster identifier, I select one of my clusters. For this workload, I need two weeks of retention. Then, I choose Create on-demand backup.

Console screenshot.

My data warehouse is not huge, so after a few minutes, the backup job has completed.

Console screenshot.

I now see my Redshift cluster in the list of the resources protected by AWS Backup.

Console screenshot.

In the Protected resources list, I choose the Redshift cluster to see the list of the available recovery points.

Console screenshot.

When I choose one of the recovery points, I have the option to restore the full data warehouse or just a table into a new Redshift cluster.

Console screenshot.

I now have the option to edit the cluster and database configuration, including security and networking settings. I update only the cluster identifier; otherwise the restore would fail, because the identifier must be unique. Then, I choose Restore backup to start the restore job.

After some time, the restore job has completed, and I see the old and the new clusters in the Amazon Redshift console. Using AWS Backup gives me a simple centralized way to manage data protection for Redshift clusters as well as many other resources in my AWS accounts.

Console screenshot.

Availability and Pricing
Amazon Redshift support in AWS Backup is available today in the AWS Regions where both AWS Backup and Amazon Redshift are offered, with the exception of the Regions based in China. You can use this capability via the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs.

There is no additional cost for using AWS Backup compared to the native snapshot capability of Amazon Redshift. Your overall costs depend on the amount of storage and retention you need. For more information, see AWS Backup pricing.


Introducing AWS Resource Explorer – Quickly Find Resources in Your AWS Account

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/introducing-aws-resource-explorer-quickly-find-resources-in-your-aws-account/

Looking for a specific Amazon Elastic Compute Cloud (Amazon EC2) instance, Amazon Elastic Container Service (Amazon ECS) task, or Amazon CloudWatch log group can take some time, especially if you have many resources and use multiple AWS Regions.

Today, we’re making that easier. Using the new AWS Resource Explorer, you can search through the AWS resources in your account across Regions using metadata such as names, tags, and IDs. When you find a resource in the AWS Management Console, you can quickly go from the search results to the corresponding service console and Region to start working on that resource. In a similar way, you can use the AWS Command Line Interface (CLI) or any of the AWS SDKs to find resources in your automation tools.

Let’s see how this works in practice.

Using AWS Resource Explorer
To start using Resource Explorer, I need to turn it on so that it creates and maintains the indexes that will provide fast responses to my search queries. Usually, the administrator of the account is the one taking these steps so that authorized users in that account can start searching.

To run a query, I need a view that gives access to an index. If the view is using an aggregator index, then the query can search across all indexed Regions.

Aggregator index diagram.

If the view is using a local index, then the query has access only to the resources in that Region.

Local index diagram.

I can control the visibility of resources in my account by creating views that define what resource information is available for search and discovery. These controls are based not only on the resources themselves but also on the information that resources carry. For example, I can give access to the Amazon Resource Names (ARNs) of all resources but not to their tags, which might contain information that I want to keep confidential.

In the Resource Explorer console, I choose Enable Resource Explorer. Then, I select the Quick setup option to have visibility for all supported resources within my account. This option creates local indexes in all Regions and an aggregator index in the selected Region. A default view with a filter that includes all supported resources in the account is also created in the same Region as the aggregator index.

Console screenshot.

With the Advanced setup option, I have access to more granular controls that are useful when there are specific governance requirements. For example, I can select in which Regions to create indexes. I can choose not to replicate resource information to any other Region so that resources from each AWS Region are searchable only from within the same Region. I can also control what information is available in the default view or avoid the creation of the default view.

With the Quick setup option selected, I choose Go to Resource Explorer. A quick overview shows the progress of enabling Resource Explorer across Regions. After the indexes have been created, it can take up to 36 hours to index all supported resources, and search results might be incomplete until then. When resources are created or deleted, your indexes are automatically updated. These updates are asynchronous, so it can take some time (usually a few minutes) to see the changes.

Searching With AWS Resource Explorer
After resources have been indexed, I choose Proceed to resource search. In the Search criteria, I choose which View to use. Currently, I have the default view selected. Then, I start typing in the Query field to search through the resources in my AWS account across all Regions. For example, I have an application where I used the convention to start resource names with my-app. For the resources I created manually, I also added the Project tag with value MyApp.

To find the resources of this application, I start by searching for my-app.

Console screenshot.

The results include resources from multiple services and Regions, plus global resources from AWS Identity and Access Management (IAM). I have a service, tasks, and a task definition from Amazon ECS; roles and policies from AWS IAM; and log groups from CloudWatch. Optionally, I can filter results by Region or resource type. If I choose any of the listed resources, the link brings me to the corresponding service console and Region with the resource selected.

Console screenshot.

To look for something in a specific Region, such as Europe (Ireland), I can restrict the results by adding region:eu-west-1 to the query.

Console screenshot.

I can further restrict results to Amazon ECS resources by adding service:ecs to the query. Now I only see the ECS cluster, service, tasks, and task definition in Europe (Ireland). That’s the task definition I was looking for!

Console screenshot.

I can also search using tags. For example, I can see the resources where I added the MyApp tag by including tag.value:MyApp in a query. To specify the actual key-value pair of the tag, I can use tag:Project=MyApp.

Console screenshot.
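The same query syntax works from the CLI. As a sketch, a search combining the filters above (resource-explorer-2 is the service name the AWS CLI uses for Resource Explorer):

```shell
# Resources named my-app*, restricted to Amazon ECS and the Project=MyApp tag.
QUERY='my-app service:ecs tag:Project=MyApp'
aws resource-explorer-2 search --query-string "$QUERY" --max-results 50
```

The same query string syntax (free text plus service:, region:, and tag: filters) applies in the console and the SDKs.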

Creating a Custom View
Sometimes you need to control the visibility of the resources in your account. For example, all the EC2 instances used for development in my account are in US West (Oregon). I create a view for the development team by choosing a specific Region (us-west-2) and filtering the results with service:ec2 in the query. Optionally, I could further filter results based on resource names or tags. For example, I could add tag:Environment=Dev to only see resources that have been tagged to be in a development environment.

Console screenshot.

Now I allow access to this view to users and roles used by the development team. To do so, I can attach an identity-based policy to the users and roles of the development team. In this way, they can only explore and search resources using this view.

Console screenshot.
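A minimal identity-based policy along these lines could scope the development team to that view (the view ARN here is illustrative; the actions are from the Resource Explorer `resource-explorer-2` action namespace):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "resource-explorer-2:Search",
        "resource-explorer-2:GetView"
      ],
      "Resource": "arn:aws:resource-explorer-2:us-west-2:123456789012:view/dev-ec2-view/EXAMPLE8-90ab-cdef-fedc-EXAMPLE11111"
    }
  ]
}
```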

Unified Search in the AWS Management Console
After I turn on Resource Explorer, I can also search through my AWS resources in the search bar at the top of the Management Console. We call this capability unified search because it returns results that include AWS services, features, blogs, documentation, tutorials, events, and more.

To focus my search on AWS resources, I add /Resources at the beginning of my search.

Console screenshot.

Note that unified search automatically inserts a wildcard character (*) at the end of the first keyword in the string. This means that unified search results include resources that match any string that starts with the specified keyword.

Console screenshot.

The search performed by the Query text box on the Resource search page in the Resource Explorer console does not automatically append a wildcard character, but I can add one manually after any term in the search string to get similar results.

Unified search works when I have the default view in the same Region that contains the aggregator index. To check if unified search works for me, I look at the top of the Settings page.

Console screenshot.

Availability and Pricing
You can start using AWS Resource Explorer today with a global console and via the AWS Command Line Interface (CLI) and the AWS SDKs. AWS Resource Explorer is available at no additional charge. Using Resource Explorer makes it much faster to find the resources you need and use them in your automation processes and service consoles.

Discover and access your AWS resources across all the Regions you use with AWS Resource Explorer.


Microservice observability with Amazon OpenSearch Service part 2: Create an operational panel and incident report

Post Syndicated from Marvin Gersho original https://aws.amazon.com/blogs/big-data/microservice-observability-with-amazon-opensearch-service-part-2-create-an-operational-panel-and-incident-report/

In the first post in our series, we discussed setting up a microservice observability architecture and application troubleshooting steps using log and trace correlation with Amazon OpenSearch Service. In this post, we discuss using Piped Processing Language (PPL) to create visualizations in operational panels, and creating a simple incident report using notebooks.

To try out the solution yourself, start from part 1 of the series.

Piped Processing Language (PPL)

PPL is a new query language for OpenSearch. It’s simpler and more straightforward to use than query DSL (Domain Specific Language), and a better fit for DevOps than ODFE SQL. PPL handles semi-structured data and uses a sequence of commands delimited by pipes (|). For more information about PPL, refer to Using pipes to explore, discover and find data in Amazon OpenSearch Service with Piped Processing Language.

The following PPL query retrieves the same record as our search on the Discover page in our previous post. If you’re following along, use your trace ID in place of <Trace-ID>:

source = sample_app_logs | where stream = 'stderr' and locate('<Trace-ID>', `log`) > 0

The query has the following components:

  • | separates commands in the statement.
  • source = sample_app_logs means that we’re searching sample_app_logs.
  • where stream = 'stderr' matches records whose stream field (a field in sample_app_logs) equals stderr.
  • The locate function lets us search for a string in a field. For our query, we search for the trace ID in the log field. locate returns 0 if the string is not found; otherwise, it returns the character position where the string starts. Testing for a value greater than 0 finds the entry that has the payment trace ID with the error.

Note that log is a PPL keyword but also a field in our log file. We put backquotes around a field name that is also a keyword when we need to reference it in a PPL statement.
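A tiny helper makes the backquoting rule concrete. This is an illustrative sketch, not part of the post’s solution, and the keyword set below is deliberately partial rather than the full PPL grammar:

```python
# Illustrative subset of PPL keywords; `log` is included because it collides
# with our log file's field name.
PPL_KEYWORDS = {"source", "where", "fields", "stats", "sort", "head", "log"}

def ppl_field(name: str) -> str:
    """Backquote a field name when it collides with a PPL keyword."""
    return f"`{name}`" if name.lower() in PPL_KEYWORDS else name

def trace_query(trace_id: str, index: str = "sample_app_logs") -> str:
    """Build the stderr trace-ID query shown above for a given trace ID."""
    return (
        f"source = {index} "
        f"| where stream = 'stderr' and locate('{trace_id}', {ppl_field('log')}) > 0"
    )

print(trace_query("abc123"))
```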

To start using PPL, complete the following steps:

  1. On OpenSearch Dashboards, choose Observability in the navigation pane.
  2. Choose Event analytics.
  3. Choose the calendar icon, then choose the time period you want for your query (for this post, Year to date).
  4. Enter your PPL statement.

Note that results are shown in table format by default, but you can also choose to view them in JSON format.

Monitor your services using visualizations

We can use PPL on the Event analytics page to create real-time visualizations. We now use these visualizations to create a dashboard for real-time monitoring of our microservices on the Operational panels page.

Event analytics has two modes: events and visualizations. With events, we’re looking at the query results as a table or JSON. With visualizations, the results are shown as a graph. For this post, we create a PPL query that monitors a value over time, and see the results in a graph. We can then save the graph to use in our dashboard. See the following code:

source = sample_app_logs | where stream = 'stderr' and locate('payment',`log`) > 0 | stats count() by span(time, 5m)

This code is similar to the PPL we used earlier, with two key differences:

  • We specify the name of our service in the log field (for this post, payment).
  • We use the aggregation function stats count() by span(time, 5m). We take the count of matches in the log field and aggregate by 5-minute intervals.
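Since we repeat this query once per service, a small helper can generate each variant instead of hand-editing the query. This is a sketch; only payment is a service name from the post, and the other names in the loop are placeholders:

```python
def service_error_query(service: str, index: str = "sample_app_logs",
                        interval: str = "5m") -> str:
    """Count a service's stderr log lines, bucketed into fixed time spans."""
    return (
        f"source = {index} "
        f"| where stream = 'stderr' and locate('{service}', `log`) > 0 "
        f"| stats count() by span(time, {interval})"
    )

# One visualization query per microservice.
for svc in ["payment", "checkout", "catalog"]:
    print(service_error_query(svc))
```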

The following screenshot shows the visualization.

OpenSearch Service offers a choice of several different visualizations, such as line, bar, and pie charts.

We now save the results as a visualization, giving it the name Payment Service Errors.

We want to create and save a visualization for each of the five services. To create a new visualization, choose Add new, then modify the query by changing the service name.

We save this one and repeat the process by choosing Add new again for each of the five microservices. Each microservice is now available on its own tab.

Create an operational panel

Operational panels in OpenSearch Dashboards are collections of visualizations created using PPL queries. Now that we have created the visualizations in the Event analytics dashboard, we can create a new operational panel.

  1. On the Operational panels page, choose Create panel.
  2. For Name, enter e-Commerce Error Monitoring.
  3. Open that panel and choose Add Visualization.
  4. Choose Payment Service Errors.

The following screenshot shows our visualization.

We now repeat the process for our other four services. However, the layout isn’t good: the graphs are too big and laid out vertically, so they can’t all be seen at once.

We can choose Edit to adjust the size of each visualization and move them around. We end up with the layout in the following screenshot.

We can now monitor errors over time for all of our services. Notice that the y-axis of each service visualization adjusts based on the error count.

This will be a useful tool for monitoring our services in the future.

Next, we create an incident report on the error that we found.

Create an OpenSearch incident report

The e-Commerce Error Monitoring panel can help us monitor our application in the future. However, we want to send our developers an incident report about our current findings. We do this by using the OpenSearch PPL and Notebooks features introduced in OpenSearch Service 1.3. A notebook can be downloaded as a PDF, which makes an incident report easy to share with others.

First, we need to create a new notebook.

  1. Under Observability in the navigation pane, choose Notebooks.
  2. Choose Create notebook.
  3. For Name, enter e-Commerce Error Report.
  4. Choose Create.

    The following screenshot shows our new notebook page.

    A notebook consists of narrative, PPL, and SQL code blocks, as well as visualizations created on the Event analytics page with PPL.
  5. Choose Add code block.
    We can now write a new code block.

    We can use %md, %sql, or %ppl to add code. In this first block, we just enter text.
  6. Use %md to add narrative text.
  7. Choose Run to see the output.

    The following screenshot shows our code block.

    Now we want to add our PPL query to show the error we found earlier.
  8. On the Add paragraph menu, choose Code block.
  9. Enter our PPL query, then choose Run.

    The following screenshot shows our output.

    Let’s drill down on the log field to get details of the error.
    We could have many narrative and code blocks, as well as visualizations of PPL queries. Let’s add a visualization.
  10. On the Add paragraph menu, choose Visualization.
  11. Choose Payment Service Errors to view the report we created earlier.

    This visualization shows a pattern of payment service errors this afternoon. Note that we chose a date range because we’re focusing on today’s errors to communicate with the development team.

    Notebook visualizations can be refreshed to provide updated information. The following screenshot shows our visualization an hour later.
    We’re now going to take our completed notebook and export it as a PDF report to share with other teams.
  12. Choose Output only to make the view cleaner to share.
  13. On the Reporting actions menu, choose Download PDF.

We can send this PDF report to the developers supporting the payment service.


In this post, we used OpenSearch Service v1.3 to create a dashboard to monitor errors in our microservices application. We then created a notebook that uses a PPL query on a specific trace ID for a payment service error to provide details, along with a graph of payment service errors to visualize the pattern of errors. Finally, we saved our notebook as a PDF to share with the payment service development team. If you would like to explore these features further, check out the latest Amazon OpenSearch Observability documentation or, for open source, the latest OpenSearch Observability documentation. You can also contact your AWS Solutions Architects, who can help you along your innovation journey.

About the Authors

Marvin Gersho is a Senior Solutions Architect at AWS based in New York City. He works with a wide range of startup customers. He previously worked for many years in engineering leadership and hands-on application development, and now focuses on helping customers architect secure and scalable workloads on AWS with a minimum of operational overhead. In his free time, Marvin enjoys cycling and strategy board games.

Subham Rakshit is a Streaming Specialist Solutions Architect for Analytics at AWS based in the UK. He works with customers to design and build search and streaming data platforms that help them achieve their business objective. Outside of work, he enjoys spending time solving jigsaw puzzles with his daughter.

Rafael Gumiero is a Senior Analytics Specialist Solutions Architect at AWS. An open-source and distributed systems enthusiast, he provides guidance to customers who develop their solutions with AWS Analytics services, helping them optimize the value of their solutions.

Identification of replication bottlenecks when using AWS Application Migration Service

Post Syndicated from Tobias Reekers original https://aws.amazon.com/blogs/architecture/identification-of-replication-bottlenecks-when-using-aws-application-migration-service/

Enterprises frequently begin their journey by re-hosting (lift-and-shift) their on-premises workloads into AWS and running Amazon Elastic Compute Cloud (Amazon EC2) instances. A simpler way to re-host is by using AWS Application Migration Service (Application Migration Service), a cloud-native migration service.

To streamline and expedite migrations, it helps to automate reusable migration patterns that work for a wide range of applications. Application Migration Service is the recommended migration service for lift-and-shift moves of your applications to AWS.

In this blog post, we explore key variables that contribute to server replication speed when using Application Migration Service. We will also look at tests you can run to identify these bottlenecks and, where appropriate, include remediation steps.

Overview of migration using Application Migration Service

Figure 1 depicts the end-to-end data replication flow from source servers to a target machine hosted on AWS. The diagram is designed to help visualize potential bottlenecks within the data flow, which are denoted by a black diamond.

Figure 1. Data flow when using AWS Application Migration Service (black diamonds denote potential points of contention)

Baseline testing

To determine a baseline replication speed, we recommend performing a control test between your target AWS Region and the nearest Region to your source workloads. For example, if your source workloads are in a data center in Rome and your target Region is Paris, run a test between eu-south-1 (Milan) and eu-west-3 (Paris). This will give a theoretical upper bandwidth limit, as replication will occur over the AWS backbone. If the target Region is already the closest Region to your source workloads, run the test from within the same Region.

Network connectivity

There are several ways to establish connectivity between your on-premises location and AWS Region:

  1. Public internet
  2. VPN
  3. AWS Direct Connect

This section pertains to options 1 and 2. If facing replication speed issues, the first place to look is at network bandwidth. From a source machine within your internal network, run a speed test to calculate your bandwidth out to the internet; common test providers include Cloudflare, Ookla, and Google. This is your bandwidth to the internet, not to AWS.

Next, to confirm the data flow from within your data center, run tracert (Windows) or traceroute (Linux). Identify any network hops that are unusual or potentially throttling bandwidth (due to hardware limitations or configuration).

To measure the maximum bandwidth between your data center and the AWS subnet that is being used for data replication, while accounting for Secure Sockets Layer (SSL) encapsulation, use the CloudEndure SSL bandwidth tool (refer to Figure 1).

Source storage I/O

The next area to look for replication bottlenecks is source storage. The underlying storage for servers can be a point of contention: if the storage is maxing out its read speeds, or its I/O is otherwise heavily utilized, block replication by Application Migration Service will be slowed. To measure storage speeds, you can use the following tools:

  • Windows: WinSat (or other third-party tooling, like AS SSD Benchmark)
  • Linux: hdparm
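WinSAT and hdparm measure the device itself. As a rough, cross-platform sanity check, you can also time a large sequential read with nothing more than the standard library. This is a sketch, not a replacement for the tools above, and page-cache effects make it optimistic for recently read files:

```python
import os
import time

def read_throughput_mib_s(path: str, block: int = 1024 * 1024) -> float:
    """Sequentially read a file and return the observed throughput in MiB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):  # read until EOF in 1 MiB chunks
            pass
    elapsed = time.perf_counter() - start
    if elapsed == 0:
        return float("inf")
    return (size / (1024 * 1024)) / elapsed
```

Point it at a large, cold file on the source volume to get a meaningful number.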

We suggest reducing read/write operations on your source storage when starting your migration using Application Migration Service.

Application Migration Service EC2 replication instance size

The size of the EC2 replication server instance can also have an impact on the replication speed. Although it is recommended to keep the default instance size (t3.small), it can be increased if there are business requirements, such as speeding up the initial data sync. Note: using a larger instance can lead to increased compute costs.

Common replication instance changes include:

  • Servers with <26 disks: change the instance type to m5.large. Increase the instance type to m5.xlarge or higher, as needed.
  • Servers with <26 disks in AWS Regions that do not support m5 instance types: change the instance type to m4.large. Increase to m4.xlarge or higher, as needed.

Note: Changing the replication server instance type will not affect data replication. Data replication will automatically pick up where it left off, using the new instance type you selected.

Application Migration Service Elastic Block Store replication volume

You can customize the Amazon Elastic Block Store (Amazon EBS) volume type used by each disk within each source server in that source server’s settings (change staging disk type).

By default, disks <500 GiB use Magnetic HDD volumes. AWS best practice suggests not changing the default Amazon EBS volume type unless there is a business need for doing so. However, because we aim to speed up the replication, we actively change the default EBS volume type.

There are two options to choose from:

  1. The lower-cost Throughput Optimized HDD (st1) option utilizes slower, less expensive disks.

    • Consider this option if you:
      • Want to keep costs low
      • Have large disks that do not change frequently
      • Are not concerned with how long the initial sync process will take
  2. The faster General Purpose SSD (gp2) option utilizes faster, but more expensive, disks.

    • Consider this option if you:
      • Have a source server with disks that have a high write rate, or need faster performance in general
      • Want to speed up the initial sync process
      • Are willing to pay more for speed

Source server CPU

The Application Migration Service agent that is installed on the source machine for data replication uses a single core in most cases (agent threads can be scheduled to multiple cores). If core utilization reaches its maximum, this can limit replication speed. To check core utilization:

  • Windows: Launch the Task Manager application, and click the “CPU” tab. Right-click the CPU graph (which by default shows an average of all cores), select “Change graph to”, then “Logical processors”. This shows individual cores and their current utilization (Figure 2).

Figure 2. Logical processor CPU utilization

  • Linux: Install htop and run it from the terminal. The htop command displays the Application Migration Service/CE process and indicates its CPU and memory utilization percentage (relative to the entire machine). You can check the CPU bars to determine if a CPU is being maxed out (Figure 3).


Figure 3. AWS Application Migration Service/CE process to assess CPU utilization
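If htop is not available, the same per-core check can be scripted directly from /proc/stat. This is a portable sketch (the per_core_busy helper is ours, not part of Application Migration Service): it samples the per-core counters one second apart and prints the busy percentage for each logical core. A single core pinned near 100% suggests the replication agent is CPU-bound.

```shell
# Print busy% per logical core over a 1-second window, using only /proc/stat.
per_core_busy() {
  s1=$(grep '^cpu[0-9]' /proc/stat)
  sleep 1
  s2=$(grep '^cpu[0-9]' /proc/stat)
  { printf '%s\n' "$s1"; printf '%s\n' "$s2"; } | awk '{
    total = 0; for (i = 2; i <= NF; i++) total += $i   # jiffies across all states
    idle = $5 + $6                                     # idle + iowait jiffies
    if ($1 in t0)                                      # second sample: print delta
      printf "%s %.1f%% busy\n", $1, 100 * (1 - (idle - i0[$1]) / (total - t0[$1]))
    else { t0[$1] = total; i0[$1] = idle }             # first sample: remember
  }'
}
per_core_busy
```

Each output line pairs a core name (cpu0, cpu1, …) with its busy percentage over the sampling window.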


Conclusion

In this post, we explored several key variables that contribute to server replication speed when using Application Migration Service. We encourage you to explore these key areas during your migration to determine if your replication speed can be optimized.

Related information

Let’s Architect! Architecting for governance and management

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-governance-and-management/

As you develop next-generation cloud-native applications and modernize existing workloads by migrating to the cloud, you need cloud teams that can govern centrally with policies for security, compliance, operations, and spend management.

In this edition of Let’s Architect!, we gather content to help software architects and tech leaders explore new ideas, case studies, and technical approaches to help you support production implementations for large-scale migrations.

Seamless Transition from an AWS Landing Zone to AWS Control Tower

A multi-account AWS environment helps businesses migrate, modernize, and innovate faster. With the large number of design choices, setting up a multi-account strategy can take a significant amount of time, because it involves configuring multiple accounts and services and requires a deep understanding of AWS.

This blog post shows you how AWS Control Tower helps customers achieve their desired business outcomes by setting up a scalable, secure, and governed multi-account environment. The post describes a strategic migration of 140 AWS accounts from a customer Landing Zone to an AWS Control Tower-based solution.

Multi-account landing zone architecture that uses AWS Control Tower

Build a strong identity foundation that uses your existing on-premises Active Directory

How do you use your existing Microsoft Active Directory (AD) to reliably authenticate access for AWS accounts, infrastructure running on AWS, and third-party applications?

The architecture shown in this blog post is designed to be highly available and extends access to your existing AD to AWS, which enables your users to use their existing credentials to access authorized AWS resources and applications. This post highlights the importance of implementing a cloud authentication and authorization architecture that addresses the variety of requirements for an organization’s AWS Cloud environment.

Multi-account Complete AD architecture with trusts and AWS SSO using AD as the identity source

Migrate Resources Between AWS Accounts

AWS customers often start their cloud journey with one AWS account, and over time they deploy many resources within that account. Eventually though, they’ll need to use more accounts and migrate resources across AWS Regions and accounts to reduce latency or increase resiliency.

This blog post shows four approaches to migrate resources based on type, configuration, and workload needs across AWS accounts.

Migration infrastructure approach

Transform your organization’s culture with a Cloud Center of Excellence

As enterprises seek digital transformation, their efforts to use cloud technology within their organizations can be a bit disjointed. This video introduces you to the Cloud Center of Excellence (CCoE) and shows you how it can help transform your business via cloud adoption, migration, and operations. By using the CCoE, you’ll establish a cross-functional team that develops and manages your cloud strategy, governance, and best practices, which your organization can use to transform the business using the cloud.

Benefits of CCoE

See you next time!

Thanks for reading! If you want to dive into this topic even more, don’t miss the Management and Governance on AWS product page.

See you in a couple of weeks with novel ways to architect for front-end web and mobile!

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Field Notes: Monitor IBM Db2 for Errors Using Amazon CloudWatch and Send Notifications Using Amazon SNS

Post Syndicated from Sai Parthasaradhi original https://aws.amazon.com/blogs/architecture/field-notes-monitor-ibm-db2-for-errors-using-amazon-cloudwatch-and-send-notifications-using-amazon-sns/

Monitoring is a crucial function for detecting any unanticipated or unknown access to your data in an IBM Db2 database running on AWS. You also need to monitor any specific errors which might have an impact on system stability and be notified immediately in case such an event occurs. Depending on the severity of the events, either manual or automatic intervention is needed to avoid issues.

While it is possible to access the database logs directly from the Amazon EC2 instances on which the database is running, you may need additional privileges to access the instance in a production environment. Also, you need to write custom scripts to extract the required information from the logs and share it with the relevant team members.

In this post, we use the Amazon CloudWatch agent to export the logs to Amazon CloudWatch Logs and monitor errors and activities in the Db2 database. We provide email notifications for the configured metric alerts which may need attention using Amazon Simple Notification Service (Amazon SNS).

Overview of solution

This solution covers the steps to configure a Db2 audit policy on the database to capture the SQL statements which are being run on a particular table. This is followed by installing and configuring the Amazon CloudWatch agent to export the logs to Amazon CloudWatch Logs. We set up metric filters to identify any suspicious activity on the table, like unauthorized access from a user or permissions being granted on the table to any unauthorized user. We then use Amazon Simple Notification Service (Amazon SNS) to notify of events needing attention.

Similarly, we set up the notifications in case of any critical errors in the Db2 database by exporting the Db2 Diagnostics Logs to Amazon CloudWatch Logs.


Figure 1 – Solution Architecture


Prerequisites

You should have the following prerequisites:

  • Ensure you have access to the AWS console and CloudWatch
  • A Db2 database running on an Amazon EC2 Linux instance. Refer to the Db2 installation methods in the IBM documentation for more details.
  • A valid email address for notifications
  • A SQL client or Db2 Command Line Processor (CLP) to access the database
  • Amazon CloudWatch Logs agent installed on the EC2 instances

Refer to the Installing CloudWatch Agent documentation and install the agent. The setup shown in this post is based on a Red Hat Linux operating system. You can run the following commands as a root user to install the agent on the EC2 instance, if your OS is also based on the Red Hat Linux operating system.

cd /tmp
wget https://s3.amazonaws.com/amazoncloudwatch-agent/redhat/amd64/latest/amazon-cloudwatch-agent.rpm
sudo rpm -U ./amazon-cloudwatch-agent.rpm
  • Create an IAM role with policy CloudWatchAgentServerPolicy.

This IAM role/policy is required to run CloudWatch agent on EC2 instance. Refer to the documentation CloudWatch Agent IAM Role for more details. Once the role is created, attach it to the EC2 instance profile.

Setting up a Database audit

In this section, we set up and activate the Db2 audit at the database level. In order to run the db2audit command, the user running it needs SYSADM authority on the database.

First, let’s configure the data path, where the main audit file will be created, and the archive path, where it will be archived, using the following commands.

db2audit configure datapath /home/db2inst1/dbaudit/active/
db2audit configure archivepath /home/db2inst1/dbaudit/archive/

Now, let’s set the static configuration parameter audit_buf_sz to 64, allocating 64 pages of 4 KB each so that audit records are written asynchronously. This ensures that the statement generating the corresponding audit record does not wait until the record is written to disk.

db2 update dbm cfg using audit_buf_sz 64

Now, create an audit policy on the WORKER table, which contains sensitive employee data to log all the SQL statements being executed against the table and attach the policy to the table as follows.

db2 connect to testdb
db2 "create audit policy worktabpol categories execute status both error type audit"
db2 "audit table sample.worker using policy worktabpol"

Finally, create an audit policy to audit and log the SQL queries run by the admin user authorities. Attach this policy to dbadm, sysadm and secadm authorities as follows.

db2 "create audit policy adminpol categories execute status both,sysadmin status both error type audit"
db2 "audit dbadm,sysadm,secadm using policy adminpol"
db2 terminate

The following SQL statement can be issued to verify if the policies are created and attached to the WORKER table and the admin authorities properly.

Figure 2. Audit policy setup in the database

Figure 2. Audit policy setup in the database
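The statement itself is only visible in the screenshot, but one way to perform this check (a sketch, assuming the policy and object names created above) is to query the SYSCAT.AUDITUSE catalog view:

```sql
-- List each audit policy and the object or authority it is attached to;
-- expect rows for the SAMPLE.WORKER table and the admin authorities.
SELECT auditpolicyname, objecttype, objectschema, objectname
FROM syscat.audituse;
```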

Once the audit setup is complete, you need to extract the details into a readable file. This contains all the execution logs whenever the WORKER table is accessed by any user from any SQL statement. You can run the following bash script periodically using the cron scheduler. It extracts events where the WORKER table is accessed as part of any SQL statement by any user, and populates the worker_audit.log file in a readable format.

. /home/db2inst1/sqllib/db2profile
cd /home/db2inst1/dbaudit/archive
db2audit flush
db2audit archive database testdb
if ls db2audit.db.TESTDB.log* 1> /dev/null 2>&1; then
    latest_audit=$(ls db2audit.db.TESTDB.log* | tail -1)
    db2audit extract file worker_audit.log from files "$latest_audit"
    rm -f db2audit.db.TESTDB.log*
fi

Publish database audit and diagnostics Logs to CloudWatch Logs

The CloudWatch agent uses a JSON configuration file located at /opt/aws/amazon-cloudwatch-agent/bin/config.json. You can edit the JSON configuration as a root user and provide the following configuration details manually. For more information, refer to the CloudWatch Agent configuration file documentation. Based on the setup in your environment, modify the file_path location in the following configuration, along with any custom values for the log group and log stream names.

     "agent": {
         "run_as_user": "root"
     "logs": {
         "logs_collected": {
             "files": {
                 "collect_list": [
                         "file_path": "/home/db2inst1/dbaudit/archive/worker_audit.log",
                         "log_group_name": "db2-db-worker-audit-log",
                         "log_stream_name": "Db2 - {instance_id}"
                       "file_path": "/home/db2inst1/sqllib/db2dump/DIAG0000/db2diag.log",
                       "log_group_name": "db2-diagnostics-logs",
                       "log_stream_name": "Db2 - {instance_id}"

This configuration specifies the file paths that need to be published to CloudWatch Logs, along with the CloudWatch Logs group and stream name details. We publish the audit log created earlier, as well as the Db2 diagnostics log that is generated on the Db2 server and is generally used to troubleshoot database issues. Based on the config file setup, we publish the worker audit log file with the db2-db-worker-audit-log log group name and the Db2 diagnostics log with the db2-diagnostics-logs log group name, respectively.

You can now run the following command to start the CloudWatch agent that was installed on the EC2 instance as part of the prerequisites.

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s

Create SNS Topic and subscription

To create an SNS topic, complete the following steps:

  • On the Amazon SNS console, choose Topics in the navigation pane.
  • Choose Create topic.
  • For Type, select Standard.
  • Provide the Topic name and other necessary details as per your requirements.
  • Choose Create topic.

After you create your SNS topic, you can create a subscription. Following are the steps to create the subscription.

  • On the Amazon SNS console, choose Subscriptions in the navigation pane.
  • Choose Create subscription.
  • For Topic ARN, choose the SNS topic you created earlier.
  • For Protocol, choose Email. Other options are available, but for this post we create an email notification.
  • For Endpoint, enter the email address to receive event notifications.
  • Choose Create subscription. Refer to the following screenshot for an example:
Figure 3. Create SNS subscription

Figure 3. Create SNS subscription

Create metric filters and CloudWatch alarm

You can use a metric filter in CloudWatch to create a new metric in a custom namespace based on a filter pattern. Create a metric filter for the db2-db-worker-audit-log log group using the following steps.

To create a metric filter for the db2-db-worker-audit-log log group:

  • On the CloudWatch console, under CloudWatch Logs, choose Log groups.
  • Select the Log group db2-db-worker-audit-log.
  • Choose Create metric filter from the Actions drop down as shown in the following screenshot.
Figure 4. Create Metric filter

Figure 4. Create Metric filter

  • For Filter pattern, enter "worker -appusr" to filter any user access on the WORKER table in the database except the authorized user appusr.

This means that only the appusr user is allowed to query the WORKER table to access the data. If there is an attempt to grant permissions on the table to any other user, or if access is attempted by any other user, these actions are monitored. Choose Next to navigate to the Assign metric step as shown in the following screenshot.

Figure 5. Define Metric filter

Figure 5. Define Metric filter

  • Enter Filter name, Metric namespace, Metric name and Metric value as provided and choose Next as shown in the following screenshot.
Figure 6. Assign Metric

Figure 6. Assign Metric

  • Choose Create Metric Filter from the next page.
  • Now, select the new Metric filter you just created from the Metric filters tab and choose Create alarm as shown in the following screenshot.
Figure 7. Create alarm

Figure 7. Create alarm

  • Choose Minimum under Statistic, a Period of your choice (for example, 10 seconds), and a Threshold value of 0, as shown in the following screenshot.
Figure 8. Configure actions

Figure 8. Configure actions

  • Choose Next, and on the Notification screen, select In alarm under the Alarm state trigger option.
  • In the Send a notification to search box, select the SNS topic you created earlier to send notifications, and choose Next.
  • Enter the alarm name, choose Next, and then choose Create alarm.

To create a metric filter for the db2-diagnostics-logs log group:

Follow the same steps as earlier to create the metric filter and alarm for the CloudWatch Log group db2-diagnostics-logs. While creating the metric filter, enter the Filter pattern "?Error ?Severe" to monitor log entries whose level is ‘Error’ or ‘Severe’ in the diagnostics file and get notified during such events.
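For reference, both patterns use CloudWatch Logs filter syntax: a term prefixed with a minus sign excludes matching events, and terms prefixed with ? are ORed together. A sketch of the two patterns used in this post:

```
worker -appusr    match log events containing "worker" but not "appusr"
?Error ?Severe    match log events containing either "Error" or "Severe"
```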

Testing the solution

Let’s test the solution by running a few commands and validate if notifications are being sent appropriately.

To test audit notifications, run the following grant statement against the WORKER table as the system admin, database admin, or security admin user, depending on the user authorization setup in your environment. For this post, we use db2inst1 (system admin) to run the commands. Since the WORKER table has sensitive data, the DBA does not issue grants against the table to any user apart from the authorized appusr user.

Alternatively, you can also issue a SELECT SQL statement against the WORKER table from any user other than appusr for testing.

db2 "grant select on table sample.worker to user abcuser, role trole"
Figure 9. Email notification for audit monitoring

Figure 9. Email notification for audit monitoring

To test error notifications, we can simulate a Db2 database manager failure by issuing db2_kill from the instance owner login.

Figure 10. Issue db2_kill on the database server

Figure 10. Issue db2_kill on the database server

Figure 11. Email notification for error monitoring

Figure 11. Email notification for error monitoring

Clean up

Shut down the Amazon EC2 instance that was created as part of the setup outlined in this blog post to avoid incurring any future charges.


Conclusion

In this post, we showed you how to set up Db2 database auditing on AWS and how to set up metric filters and alarms to get notified in case of any unknown access or unauthorized actions. We monitored the audit logs and errors from the Db2 diagnostics logs by publishing them to CloudWatch Logs.

If you have a good understanding on specific error patterns in your system, you can use the solution to filter out specific errors and get notified to take the necessary action. Let us know if you have any comments or questions. We value your feedback!

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Field Notes: Clear Unused AWS SSO Mappings Automatically During AWS Control Tower Upgrades

Post Syndicated from Gaurav Gupta original https://aws.amazon.com/blogs/architecture/field-notes-clear-unused-aws-sso-mappings-automatically-during-aws-control-tower-upgrades/

Increasingly, organizations are using AWS Control Tower to manage their multiple accounts, as well as an external third-party identity source for their federation needs. Cloud architects who use these external identity sources need an automated way to clear the unused mappings created by the AWS Control Tower landing zone as part of the launch, or during update and repair operations. Though the AWS SSO mappings are inaccessible once the external identity source is configured, customers prefer to clear any unused mappings in the directory.

You can remove the permissions sets and mappings that AWS Control Tower deployment creates in AWS SSO. However, when the landing zone is updated or repaired, the default permission sets and mappings are recreated in AWS SSO. In this blog post, we show you how to use AWS Control Tower Lifecycle events to automatically remove these permission sets and mappings when AWS Control Tower is upgraded or repaired. An AWS Lambda function runs on every upgrade and automatically removes the permission sets and mappings.

Overview of solution

Using this CloudFormation template, you can deploy the solution that automatically removes the AWS SSO permission sets and mappings when you upgrade your AWS Control Tower environment. We use AWS CloudFormation, AWS Lambda, AWS SSO and Amazon CloudWatch services to implement this solution.

Figure 1 - Architecture showing how AWS services are used to automatically remove the AWS SSO permission sets and mappings when you upgrade your AWS Control Tower environment

Figure 1 – Architecture showing how AWS services are used to automatically remove the AWS SSO permission sets and mappings when you upgrade your AWS Control Tower environment

To clear the AWS SSO entities and leave the service enabled with no active mappings, we recommend the following steps. This is mainly for those who do not want to use the default AWS SSO deployed by AWS Control Tower.

  • Log in to the AWS Control Tower management account and make sure you are in the AWS Control Tower home Region.
  • Launch the AWS CloudFormation stack, which creates:
    • An AWS Lambda function that checks for and deletes the permission set mappings created by AWS Control Tower, and deletes the permission sets created by AWS Control Tower.
    • An AWS IAM role that is assigned to the preceding AWS Lambda function with the minimum required permissions.
    • An Amazon CloudWatch Events rule that is invoked upon the UpdateLandingZone API call and triggers the ClearMappingsLambda Lambda function.
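For illustration, the CloudWatch Events rule matches the Control Tower lifecycle event that CloudTrail delivers when the landing zone is updated. A minimal event pattern might look like this (a sketch; the exact pattern in the template may differ):

```json
{
  "source": ["aws.controltower"],
  "detail-type": ["AWS Service Event via CloudTrail"],
  "detail": {
    "eventName": ["UpdateLandingZone"]
  }
}
```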


Prerequisites

For this walkthrough, you should have the following prerequisites:

  • Administrator access to AWS Control Tower management account


Walkthrough

  1. Log in to the AWS account where AWS Control Tower is deployed.
  2. Make sure you are in the home Region of AWS Control Tower.
  3. Deploy the provided CloudFormation template.
    • Download the CloudFormation template.
    • Select AWS CloudFormation service in the AWS Console
    • Select Create Stack and select With new resources (standard)
    • Upload the template file downloaded in the previous step
    • Enter the stack name and choose Next
    • Use the default values in the next page and choose Next
    • Choose Create Stack

By default, in your AWS Control Tower Landing Zone you will see the permission sets and mappings in your AWS SSO service page as shown in the following screenshots:

Figure 2 – Permission sets created by AWS Control Tower

Figure 3 – Account to Permission set mapping created by AWS Control Tower

Now, you can update the AWS Control Tower Landing Zone which will invoke the Lambda function deployed using the CloudFormation template.

Steps to update/repair Control Tower:

  1. Log in to the AWS account where AWS Control Tower is deployed.
  2. Select Landing zone settings from the left-hand pane of the Control Tower dashboard
  3. Select the latest version as seen in the screenshot below.
  4. Select Repair or Update, whichever option is available.
  5. Select Update Landing Zone.

Figure 4 – Updating AWS Control Tower Landing zone

Once the update is complete, you can go to the AWS SSO service page and check that the permission sets and mappings have been removed, as shown in the following screenshots:

Figure 5 – Permission sets cleared automatically after Landing zone update

Figure 6 – Mappings cleared after Landing zone update

Cleaning up

If you are only testing this solution, make sure to delete the CloudFormation stack, which will remove the relevant resources and stop incurring charges.


Conclusion

In this post, we provided a solution to clear AWS SSO permission sets and mappings when you upgrade your AWS Control Tower Landing Zone. Remember, AWS SSO permission sets are added every time you upgrade your AWS Control Tower Landing Zone. With this solution, you don’t have to manage any settings, since the AWS Lambda function runs on every upgrade and removes the permission sets and mappings.

Give it a try and let us know your thoughts in the comments!

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Measure and Improve Your Application Resilience with AWS Resilience Hub

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/monitor-and-improve-your-application-resiliency-with-resilience-hub/

I am excited to announce the immediate availability of AWS Resilience Hub, a new AWS service designed to help you define, track, and manage the resilience of your applications.

You are building and managing resilient applications to serve your customers. Building distributed systems is hard; maintaining them in an operational state is even harder. The question is not if a system will fail, but when it will, and you want to be prepared for that.

Resilience targets are typically measured by two metrics: Recovery Time Objective (RTO), the time it takes to recover from a failure, and Recovery Point Objective (RPO), the maximum window of time in which data might be lost after an incident. Depending on your business and application, these can be measured in seconds, minutes, hours, or days.

AWS Resilience Hub lets you define your RTO and RPO objectives for each of your applications. Then it assesses your application’s configuration to ensure it meets your requirements. It provides actionable recommendations and a resilience score to help you track your application’s resiliency progress over time. Resilience Hub gives a customizable single dashboard experience, accessible through the AWS Management Console, to run assessments, execute prebuilt tests, and configure alarms to identify issues and alert the operators.

AWS Resilience Hub discovers applications deployed with AWS CloudFormation (this includes SAM and CDK applications), including cross-Region and cross-account stacks. Resilience Hub can also discover applications from Resource Groups and tags, or choose from applications already defined in AWS Service Catalog AppRegistry.

The term “application” here refers not just to your application software or code; it refers to the entire infrastructure stack to host the application: networking, virtual machines, databases, and so on.

Resilience assessment and recommendations
AWS Resilience Hub’s resilience assessment utilizes best practices from the AWS Well-Architected Framework to analyze the components of your application and uncover potential resilience weaknesses caused by incomplete infrastructure setup, misconfigurations, or opportunities for additional configuration improvements. Resilience Hub provides actionable recommendations to improve the application’s resilience.

For example, Resilience Hub validates that the backup schedule for the application’s Amazon Relational Database Service (RDS), Amazon Elastic Block Store (EBS), and Amazon Elastic File System (Amazon EFS) resources is sufficient to meet the RPO and RTO you defined in your resilience policy. When it is insufficient, Resilience Hub recommends improvements to meet your RPO and RTO objectives.

The resilience assessment generates code snippets that help you create recovery procedures as AWS Systems Manager documents for your applications, referred to as standard operating procedures (SOPs). In addition, Resilience Hub generates a list of recommended Amazon CloudWatch monitors and alarms to help you quickly identify any change to the application’s resilience posture once deployed.

Continuous resilience validation
After the application and SOPs have been updated to incorporate recommendations from the resilience assessment, you may use Resilience Hub to test and verify that your application meets its resilience targets before it is released into production. Resilience Hub is integrated with AWS Fault Injection Simulator (FIS), a fully managed service for running fault injection experiments on AWS. FIS provides fault injection simulations of real-world failures, such as network errors or having too many open connections to a database. Resilience Hub also provides APIs for development teams to integrate their resilience assessment and testing into their CI/CD pipelines for ongoing resilience validation. Integrating resilience validation into CI/CD pipelines helps ensure that every change to the application’s underlying infrastructure does not compromise its resilience.
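As a sketch of that CI/CD integration (the buildspec fragment, the APP_ARN variable, and the assessment name are hypothetical, not taken from this post), a build step could start an assessment with the AWS CLI after each deployment:

```yaml
version: 0.2
phases:
  post_build:
    commands:
      # Hypothetical step: run a Resilience Hub assessment after each deploy.
      - >
        aws resiliencehub start-app-assessment
        --app-arn "$APP_ARN"
        --app-version release
        --assessment-name "ci-$CODEBUILD_BUILD_NUMBER"
```

The pipeline can then fail the build or raise an alert if the assessment reports a policy breach.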

AWS Resilience Hub provides a comprehensive view of your overall application portfolio resilience status through its dashboard. To help you track the resilience of applications, Resilience Hub aggregates and organizes resilience events (for example, unavailable database or failed resilience validation), alerts, and insights from services like Amazon CloudWatch and AWS Fault Injection Simulator (FIS). Resilience Hub also generates a resilience score, a scale that indicates the level of implementation for recommended resilience tests, alarms, and recovery SOPs. This score can be used to measure resilience improvements over time.

The intuitive dashboard sends alerts for issues, recommends remediation steps, and provides a single place to manage application resilience. For example, when a CloudWatch alarm triggers, Resilience Hub alerts you and recommends recovery procedures to deploy.

AWS Resilience Hub in Action
I developed a non-resilient application made of a single EC2 instance and an RDS database. I’d like Resilience Hub to assess this application. The CDK script to deploy this application on your AWS Account is available on my GitHub repository. Just install CDK v2 (npm install -g aws-cdk@next) and deploy the stack (cdk bootstrap && cdk deploy --all).

There are four steps when using Resilience Hub:

  • I first add the application to assess. I can start with CloudFormation stacks, AppRegistry, Resource Groups, or another existing application.
  • Second, I define my resilience policy. The policy document describes my RTO and RPO objectives for incidents that might impact either my application, my infrastructure, an entire availability zone, or an entire AWS Region.
  • Third, I run an assessment against my application. The assessment lists policy breaches, if any, and provides a set of recommendations, such as creating CloudWatch alarms, standard operating procedures documents, or fault injection experiment templates.
  • Finally, I might set up any of the recommendations made, or run experiments on a regular basis to validate the application’s resilience posture.

To start, I open my browser and navigate to the AWS Management Console. I select AWS Resilience Hub and select Add application.

Resilience hub add application

My sample app is deployed with three CloudFormation stacks: a network, a database, and an EC2 instance. I select these three stacks and select Next at the bottom of the screen:

Resilience Hub add CloudFormation stack

Resilience Hub detects the resources created by these stacks that might affect the resilience of my applications and I select the ones I want to include or exclude from the assessments and click Next. In this example, I select the NAT gateway, the database instance, and the EC2 instance.

Resilience Hub Select resources

I create a resilience policy and associate it with this application. I can choose from policy templates or create a policy from scratch. A policy includes a name and the RTO and RPO values for four types of incidents: the ones affecting my application itself, like a deployment error or a bug at code level; the ones affecting my application infrastructure, like a crash of the EC2 instance; the ones affecting an availability zone; and the ones affecting an entire region. The values are expressed in seconds, minutes, hours, or days.

Resilience Hub Create Policy

Finally, I review my choices and select Publish.

Once this application and its policy are published, I start the assessment by selecting Assess resiliency.

Resilience Hub Assess resiliency

Without surprise, Resilience Hub reports my resilience policy is breached.

Resilience Hub Policy breach

I select the report to get the details. The dashboard shows how the expected RTO/RPO for Region, availability zone, infrastructure, and application-level incidents compare to my policy.

Resilience Hub Assessment dashboard

I have access to Resiliency recommendations and Operational recommendations.

In Resiliency recommendations, I see if components of my application are compliant with the resilience policy. I also discover recommendations to Optimize for availability zone RTO/RPO, Optimize for cost, or Optimize for minimal changes.

Resilience Hub Optimisation

In Operational recommendations, on the first tab, I see a list of proposed Alarms to create in CloudWatch.

Resilience Hub Alarms

The second tab lists recommended Standard operating procedures. These are Systems Manager documents I can run on my infrastructure, such as Restore from Backup.

Resilience Hub SOP

The third tab (Fault injection experiment templates) proposes experiments to run on my infrastructure to test its resilience. Experiments are run with AWS Fault Injection Simulator (FIS). Proposed experiments include Inject memory load and Inject process kill.

Resilience Hub - FIS

When I select Set up recommendations, Resilience Hub generates CloudFormation templates to create the alarms or to run the proposed SOPs and experiments.

Resilience Hub - Set up recommendations

The follow-up screens are quite self-explanatory. Once generated, templates are available to execute in the Templates tab. I apply the template and observe how it impacts the resilience score of the application.

Resilience Hub Resilience score

The CDK script you used to deploy the sample application also creates a highly available infrastructure for the same application, with a load balancer, an Auto Scaling group, and a database cluster with two nodes. As an exercise, run the same assessment report on this application stack and compare the results.

Pricing and Availability
AWS Resilience Hub is available today in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Frankfurt). We will add more regions in the future.

As usual, you pay only for what you use. There are no upfront costs or minimum fees. You are charged based on the number of applications you describe in Resilience Hub. You can try Resilience Hub free for 6 months, for up to 3 applications. After that, Resilience Hub's price is $15.00 per application per month. Metering begins once you run the first resilience assessment in Resilience Hub. Remember that Resilience Hub might provision services for you, such as CloudWatch alarms, so additional charges might apply. Visit the pricing page for details.
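Based on the numbers above ($15.00 per application per month, after a 6-month free trial covering up to 3 applications), a back-of-the-envelope estimate looks like this. How the trial interacts with applications beyond the third is my assumption here; check the pricing page for the authoritative rules:

```python
PRICE_PER_APP_MONTH = 15.00  # USD, from the pricing above
FREE_TRIAL_APPS = 3          # free-trial limit, from the pricing above

def monthly_cost(num_apps: int, in_free_trial: bool) -> float:
    """Estimated Resilience Hub charge for one month (assumption: apps beyond
    the trial limit are billed normally during the trial)."""
    billable = max(0, num_apps - FREE_TRIAL_APPS) if in_free_trial else num_apps
    return billable * PRICE_PER_APP_MONTH

print(monthly_cost(3, in_free_trial=True))   # 0.0  (fully covered by the trial)
print(monthly_cost(5, in_free_trial=True))   # 30.0 (2 apps over the trial limit)
print(monthly_cost(5, in_free_trial=False))  # 75.0
```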

Let us know your feedback and build your first resilience dashboard today.

— seb

Field Notes: Extending the Baseline in AWS Control Tower to Accelerate the Transition from AWS Landing Zone

Post Syndicated from MinWoo Lee original https://aws.amazon.com/blogs/architecture/field-notes-extending-the-baseline-in-aws-control-tower-to-accelerate-the-transition-from-aws-landing-zone/

Customers who adopted and operate the AWS Landing Zone solution as a scalable multi-account environment are starting to migrate to the AWS Control Tower service, to enjoy the added benefits of a managed service such as stability, feature enhancements, and operational efficiency. Customers who make full use of the governance baseline that AWS Landing Zone provides for their member accounts may want to carry every feature of that baseline over, without omission, when transitioning to AWS Control Tower. To baseline an account is to set up the common blueprints and guardrails an organization requires to enable governance from the start of the account.

As shown in Table 1, AWS Control Tower provides most of the features that are mapped with the baseline of the AWS Landing Zone solution through the baseline stacks, guardrails, and account factory, but some features are unique to AWS Landing Zone.

Table 1. AWS Landing Zone and AWS Control Tower baseline mapping

  AWS Landing Zone baseline stack                       AWS Control Tower baseline stack
  AWS-Landing-Zone-Baseline-EnableCloudTrail            AWSControlTowerBP-BASELINE-CLOUDTRAIL
  AWS-Landing-Zone-Baseline-SecurityRoles               AWSControlTowerBP-BASELINE-ROLES
  AWS-Landing-Zone-Baseline-EnableConfig                AWSControlTowerBP-BASELINE-CLOUDWATCH
  AWS-Landing-Zone-Baseline-ConfigRole                  AWSControlTowerBP-BASELINE-SERVICE-ROLES
  AWS-Landing-Zone-Baseline-EnableConfigRule            Guardrails – Enable guardrail on OU
  AWS-Landing-Zone-Baseline-EnableConfigRulesGlobal     Guardrails – Enable guardrail on OU
  AWS-Landing-Zone-Baseline-PrimaryVPC                  Account Factory – Network Configuration

The baselines uniquely provided by AWS Landing Zone are as follows:

  • AWS-Landing-Zone-Baseline-IamPasswordPolicy
    • An AWS Lambda function that configures an AWS Identity and Access Management (IAM) custom password policy (such as minimum password length, password expiration period, password complexity, and password history) in member accounts.
  • AWS-Landing-Zone-Baseline-EnableNotifications
    • Amazon CloudWatch alarms that deliver CloudTrail application programming interface (API) activity, such as security group changes, network ACL changes, and Amazon Elastic Compute Cloud (Amazon EC2) instance type changes, to the security administrator.
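For reference, the settings the IamPasswordPolicy baseline manages map onto IAM's UpdateAccountPasswordPolicy API. A minimal sketch of what such a Lambda might pass to boto3 (the API and parameter names are real; the values shown are illustrative, not the AWS Landing Zone defaults):

```python
import json

# Parameters accepted by iam.update_account_password_policy (boto3).
# Values are illustrative only; set them to your organization's policy.
password_policy = {
    "MinimumPasswordLength": 14,
    "RequireSymbols": True,
    "RequireNumbers": True,
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
    "MaxPasswordAge": 90,           # days until passwords expire
    "PasswordReusePrevention": 24,  # remembered password history
    "AllowUsersToChangePassword": True,
}

# With AWS credentials in place, the Lambda would apply this roughly as:
#   import boto3
#   boto3.client("iam").update_account_password_policy(**password_policy)

print(json.dumps(password_policy, indent=2))
```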

AWS provides the AWS Control Tower lifecycle events and Customizations for AWS Control Tower as a way to add features that are not included by default in AWS Control Tower. Customizations for AWS Control Tower is an AWS solution that allows you to easily add customizations using AWS CloudFormation templates and service control policies.

This blog post explains how to modify and deploy the code to apply AWS Landing Zone specific baselines such as IamPasswordPolicy and EnableNotifications into AWS Control Tower using Customizations for AWS Control Tower.

Overview of solution

Adhering to the package folder structure of Customizations for AWS Control Tower, modify the AWS Landing Zone IamPasswordPolicy, EnableNotifications template, parameter file, and manifest file to match the AWS Control Tower deployment environment.

When the modified package is uploaded to the source repository, contents of the package are validated and built by launching AWS CodePipeline. The AWS Landing Zone specific baseline is deployed in member accounts through AWS CloudFormation StackSets in the AWS Control Tower management account.

When a new or existing account is enrolled in AWS Control Tower, the same AWS Landing Zone specific baseline is automatically applied to that account by the lifecycle event (CreateManagedAccount status is SUCCEEDED).
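That lifecycle event can be matched with an Amazon EventBridge rule. The sketch below expresses the event pattern as a Python dict; the field names follow the documented Control Tower lifecycle event structure, but verify them against an actual CreateManagedAccount event in your CloudTrail logs before relying on this:

```python
import json

# EventBridge pattern matching a successful CreateManagedAccount lifecycle event.
event_pattern = {
    "source": ["aws.controltower"],
    "detail-type": ["AWS Service Event via CloudTrail"],
    "detail": {
        "eventName": ["CreateManagedAccount"],
        "serviceEventDetails": {
            "createManagedAccountStatus": {"state": ["SUCCEEDED"]}
        },
    },
}

# This JSON is what you would paste into an EventBridge rule definition.
print(json.dumps(event_pattern, indent=2))
```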

Figure 1 shows how the default baseline of AWS Control Tower and the specific baseline of AWS Landing Zone are applied to member accounts.

Figure 1. Default and custom baseline deployment in AWS Control Tower

This solution follows these steps:

  1. Download and extract the latest version of the AWS Landing Zone configuration source package. The package contains several functional components, including the IamPasswordPolicy and EnableNotifications baselines applied to accounts in an AWS Landing Zone environment. If you are transitioning from AWS Landing Zone to AWS Control Tower, you may use the AWS Landing Zone configuration source package that already exists in your management account.
  2. Download and extract the configuration source package of your Customizations for AWS Control Tower.
  3. Create the templates and parameters folder structure for customizing the configuration package source of Customizations for AWS Control Tower.
  4. Copy the template and parameter files of the IamPasswordPolicy baseline from the AWS Landing Zone configuration source to the Customizations for AWS Control Tower configuration source.
    1. Open the parameter file (JSON), and modify the parameter values to match your organization's password policy.
  5. Copy the template and parameter files of the EnableNotifications baseline from the AWS Landing Zone configuration source to the Customizations for AWS Control Tower configuration source.
    1. Open the parameter file (JSON), and change the LogGroupName parameter value to the CloudWatch log group name of your AWS Control Tower environment. Select whether or not to use each alarm in the parameter values.
    2. Open the template file (YAML), and modify the AlarmActions properties of all CloudWatch alarms to refer to the security topic of the Amazon Simple Notification Service (Amazon SNS) that exists in your AWS Control Tower environment.
  6. Open the manifest file (YAML) in the configuration source of Customizations for AWS Control Tower, and update it with the modified IamPasswordPolicy and EnableNotifications parameter and template file paths and the organizational units they apply to.
    1. If you have customizations which have already been deployed and operated through Customizations for AWS Control Tower, do not remove the existing contents; add the customized resources consecutively in the resources section.
  7. Compress the completed source package, and upload it to the source repository of Customizations for AWS Control Tower.
  8. Check the results of applying this solution in AWS Control Tower.
    1. In the management account, wait for all AWS CodePipeline stages in Customizations for AWS Control Tower to complete.
    2. In the management account, check that the CloudFormation IamPasswordPolicy and EnableNotifications StackSets are deployed.
    3. In a member account, check that the custom password policy is configured in Account settings of IAM.
    4. In a member account, check that alarms are created in All alarms of CloudWatch.


Prerequisites

For this walkthrough, you should have the following prerequisites:

  • AWS Control Tower deployment.
  • An AWS Control Tower member account.
  • Customizations for AWS Control Tower solution deployment.
  • IAM user and roles, and permission to allow use of ‘CustomControlTowerKMSKey’ in AWS Key Management Service Key Policy to access Amazon Simple Storage Service (Amazon S3) as the configuration source.
    • This is not required in case of using CodeCommit as source repository, but it assumes that Amazon S3 is used for this solution.
  • If the IamPasswordPolicy and EnableNotifications baselines of the AWS Landing Zone solution have been deployed in the AWS Control Tower environment, delete the stack instances and StackSets associated with the following CloudFormation StackSets:
    • AWS-Landing-Zone-Baseline-IamPasswordPolicy
    • AWS-Landing-Zone-Baseline-EnableNotifications
  • An IAM or AWS Single Sign-On (AWS SSO) account with the following settings:
    • Permission with AdministratorAccess
    • Access type with Programmatic access and AWS Management Console access
  • AWS Command Line Interface (AWS CLI) and Linux Zip package installation in work environment.
  • An IAM or AWS SSO user for member account (optional).

Prepare the work environment

Download the AWS Landing Zone configuration package and Customizations for AWS Control Tower configuration package, and create a folder structure.

  1. Open a terminal where the AWS Command Line Interface (AWS CLI) is available.

Note: Confirm that the AWS CLI configuration and credentials are properly set for the access method (IAM or AWS SSO user) you are using in the management account.

  2. Change to your home directory and download the aws-landing-zone-configuration.zip file.
cd ~
wget https://s3.amazonaws.com/solutions-reference/aws-landing-zone/latest/aws-landing-zone-configuration.zip
  3. Extract the AWS Landing Zone configuration file to a new directory (named alz).
unzip aws-landing-zone-configuration.zip -d ~/alz
  4. Download the _custom-control-tower-configuration.zip file from the Customizations for AWS Control Tower configuration S3 bucket. Use your AWS account ID and home Region in the S3 bucket name.

Note: If you have already used the Customizations for AWS Control Tower configuration package, or have the Auto Build parameter set to true, use custom-control-tower-configuration.zip instead of _custom-control-tower-configuration.zip.

aws s3 cp s3://custom-control-tower-configuration-<AWS Account Id>-<AWS Region>/_custom-control-tower-configuration.zip ~/

Figure 2. Downloading source package of Customizations for AWS Control Tower

  5. Extract the Customizations for AWS Control Tower configuration file to a new directory (named cfct).
unzip _custom-control-tower-configuration.zip -d ~/cfct
  6. Create the templates and parameters directories under the Customizations for AWS Control Tower configuration directory.
cd ~/cfct
mkdir templates parameters

Now you will see the directories and files under the Customizations for AWS Control Tower configuration directory.

Note: example-configuration is just an example, and will not be used in this blog post.

Figure 3. Directory structure of Customizations for AWS Control Tower

Customize to include AWS Landing Zone specific baseline

Start customization work by integrating the AWS Landing Zone IamPasswordPolicy and EnableNotifications baseline related files into the structure of Customizations for AWS Control Tower configuration package.

  1. Copy the IamPasswordPolicy baseline template and parameter file into the Customizations for AWS Control Tower configuration directory.
cp ~/alz/templates/aws_baseline/aws-landing-zone-iam-password-policy.template ~/cfct/templates/
cp ~/alz/parameters/aws_baseline/aws-landing-zone-iam-password-policy.json ~/cfct/parameters/
  2. Open the copied aws-landing-zone-iam-password-policy.json, then adjust it to comply with your organization's password policy requirements.
  3. Copy the EnableNotifications baseline template and parameter files into the Customizations for AWS Control Tower configuration directory.
cp ~/alz/templates/aws_baseline/aws-landing-zone-notifications.template ~/cfct/templates/
cp ~/alz/parameters/aws_baseline/aws-landing-zone-notifications.json ~/cfct/parameters/
  4. Open the copied aws-landing-zone-notifications.template.

Remove the SNSNotificationTopic parameter, that is, the following four lines:

    SNSNotificationTopic:
      Type: AWS::SSM::Parameter::Value<String>
      Default: /org/member/local_sns_arn
      Description: "Local Admin SNS Topic for Landing Zone"

Modify the AlarmActions property under Properties for each of the 11 CloudWatch alarms as follows:

      - !Sub 'arn:aws:sns:${AWS::Region}:${AWS::AccountId}:aws-controltower-SecurityNotifications'
  5. Open aws-landing-zone-notifications.json.

Remove the SNSNotificationTopic parameter object at the bottom of the file (the following five lines, including the comma that precedes the object):

,
{
    "ParameterKey": "SNSNotificationTopic",
    "ParameterValue": "/org/member/local_sns_arn"
}

Modify the parameter value of the LogGroupName parameter key as follows:

"ParameterKey": "LogGroupName",
"ParameterValue": "aws-controltower/CloudTrailLogs"
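These two JSON edits can also be scripted. A sketch follows; it assumes the parameter file is a JSON array of ParameterKey/ParameterValue objects, as in the AWS Landing Zone package (the original LogGroupName value shown is a placeholder, not the ALZ default):

```python
import json

# Example content mirroring the shape of aws-landing-zone-notifications.json.
params = [
    {"ParameterKey": "LogGroupName", "ParameterValue": "<original log group name>"},
    {"ParameterKey": "SNSNotificationTopic", "ParameterValue": "/org/member/local_sns_arn"},
]

# Drop the SNS topic parameter (JSON has no comments or trailing commas,
# so serializing the filtered list handles the preceding comma for us).
params = [p for p in params if p["ParameterKey"] != "SNSNotificationTopic"]

# Point LogGroupName at the Control Tower CloudTrail log group.
for p in params:
    if p["ParameterKey"] == "LogGroupName":
        p["ParameterValue"] = "aws-controltower/CloudTrailLogs"

print(json.dumps(params, indent=4))
```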

6. Open the manifest.yaml file under the root of the Customizations for AWS Control Tower configuration directory, then modify it to include the IamPasswordPolicy and EnableNotifications baselines. If the manifest file already contains customizations, keep them and add the new entries at the end.

7. Properly adjust region, resource_file, parameter_file, and organizational_units for your AWS Control Tower environment.

Note: Choose the proper organizational units because Customizations for AWS Control Tower will try to deploy customization resources to all AWS accounts within operational units defined in organizational_units property. If you want to select specific accounts, consider using accounts property instead of organizational_units property.

Review the following sample manifest file:

#Default region for deploying Custom Control Tower: Code Pipeline, Step functions, Lambda, SSM parameters, and StackSets
region: ap-northeast-2
version: 2021-03-15

# Control Tower Custom Resources (Service Control Policies or CloudFormation)
resources:
  - name: IamPasswordPolicy
    resource_file: templates/aws-landing-zone-iam-password-policy.template
    parameter_file: parameters/aws-landing-zone-iam-password-policy.json
    deploy_method: stack_set
    deployment_targets:
      organizational_units:
        - Security
        - Infrastructure
        - app-services
        - app-reports

  - name: EnableNotifications
    resource_file: templates/aws-landing-zone-notifications.template
    parameter_file: parameters/aws-landing-zone-notifications.json
    deploy_method: stack_set
    deployment_targets:
      organizational_units:
        - Security
        - Infrastructure
        - app-services
        - app-reports
  8. Compress all files under the root of the Customizations for AWS Control Tower configuration directory into the custom-control-tower-configuration.zip file.
cd ~/cfct/
zip -r custom-control-tower-configuration.zip ./
  9. Upload the custom-control-tower-configuration.zip file into the Customizations for AWS Control Tower configuration S3 bucket. Use your AWS account ID and home Region in the S3 bucket name.
aws s3 cp ~/cfct/custom-control-tower-configuration.zip s3://custom-control-tower-configuration-<AWS Account Id>-<AWS Region>/
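Before uploading, it can help to sanity-check the package layout: manifest.yaml must sit at the zip root, with the templates and parameters folders alongside it. A small sketch (my own helper, not part of the solution) that validates a package in memory:

```python
import io
import zipfile

# Entries Customizations for AWS Control Tower expects at the zip root.
REQUIRED = ["manifest.yaml", "templates/", "parameters/"]

def missing_entries(zip_bytes: bytes) -> list:
    """Return required entries absent from the configuration package."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = zf.namelist()
    return [
        r for r in REQUIRED
        if not any(n == r or n.startswith(r) for n in names)
    ]

# Build a minimal in-memory package to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.yaml", "region: ap-northeast-2\n")
    zf.writestr("templates/aws-landing-zone-iam-password-policy.template", "")
    zf.writestr("parameters/aws-landing-zone-iam-password-policy.json", "[]")

print(missing_entries(buf.getvalue()))  # [] means nothing is missing
```

In practice you would run `missing_entries` on the bytes of custom-control-tower-configuration.zip before the `aws s3 cp` upload; a common mistake is zipping the parent folder so everything ends up nested one level too deep.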

Figure 4. Uploading source package of Customizations for AWS Control Tower

Verify solution

Now, you can verify the results for applying this solution.

  1. Log in to your AWS Control Tower management account.
  2. Navigate to the AWS CodePipeline service, then select Custom-Control-Tower-CodePipeline.
  3. Wait for all pipeline stages to complete.
  4. Go to AWS CloudFormation, then choose StackSets.
  5. Search with the keyword custom. This will return two custom StackSets.

Figure 5. Custom StackSet of Customizations for AWS Control Tower

  6. Log in to your AWS Control Tower member account.

Note: You need an IAM or AWS SSO user, or simply switch the role to AWSControlTowerExecution in the member account.

  7. Go to IAM, then choose Account settings. You will see the configured custom password policy.

Figure 6. IAM custom password policy of member account

  8. Go to Amazon CloudWatch, then choose All alarms. You will see 11 configured alarms.

Figure 7. Amazon CloudWatch alarms of member account

Cleaning up

Resources deployed to member accounts by this solution can be removed through CloudFormation StackSets in the management account.

Run Delete stacks from StackSet, followed by Delete StackSet, for the following two StackSets:

  • CustomControlTower-IamPasswordPolicy
  • CustomControlTower-EnableNotifications


Conclusion

In this blog post, you learned how to extend the baseline in AWS Control Tower to include the baselines specific to AWS Landing Zone. The principal idea is to use Customizations for AWS Control Tower, which can also add guardrails, such as AWS Config rules and service control policies, that are not included by default in AWS Control Tower. This eases the transition from AWS Landing Zone to AWS Control Tower, and enhances the governance control capabilities of the enterprise.

Related reading: Seamless Transition from an AWS Landing Zone to AWS Control Tower

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.