Tag Archives: Intermediate (200)

How to integrate third-party IdP using developer authenticated identities

Post Syndicated from Andrew Lee original https://aws.amazon.com/blogs/security/how-to-integrate-third-party-idp-using-developer-authenticated-identities/

Amazon Cognito identity pools enable you to create and manage unique identifiers for your users and provide temporary, limited-privilege credentials to your application to access AWS resources. Amazon Cognito identity pools support several external identity providers (IdPs) out of the box, including Facebook, Google, and Apple. If your application’s primary users rely on another social network, such as Snapchat, and you want to make it easier for them to authenticate with your application, you need to use developer authenticated identities and interface with that third-party IdP. This blog post describes what is needed from the third-party IdP, how to build a scalable backend for authentication, and how to access AWS services from the client.

As an example, this post will use Snapchat’s Login Kit to integrate with Amazon Cognito. The overall authentication flow for the integration is shown in Figure 1.

Figure 1: Overall authentication flow of integration

Prerequisites

The following are the prerequisites for integrating third-party IdP using developer authenticated identities.

  1. The client SDK to authenticate with the third-party IdP. This handles client authentication and access token retrieval. In the example in this post, we use Snapchat’s Login Kit.
  2. A method to validate access tokens retrieved from the third-party IdP. For this blog post, we use an endpoint provided by Snapchat that returns user data when you pass in an access token. A successful query of user data indicates that the access token is valid.
  3. Developer authenticated identities (identity pool) configured in Amazon Cognito. You will need to note the identity pool ID and the developer provider name you specify.

Client SDK

Follow the third-party client SDK instructions for implementing authentication in your application. Snapchat’s Login Kit provides an SDK that mounts a login button in your app and lets users authenticate with their Snapchat account credentials. After a user clicks the login button, they are redirected to Snapchat to log in. After successfully logging in, they are redirected back to your application with an access token. The handleResponseCallback is where you implement an API call to your developer backend, passing the access token so that it can be exchanged for Amazon Cognito credentials that grant access to AWS services. The following code example mounts a login button in your application, to allow the user to authenticate with Snapchat and retrieve an access token.

var loginButtonIconId = "<HTML div id>";
// Mount Login Button
snap.loginkit.mountButton(loginButtonIconId, {
    clientId: "<Snapchat Client Id>",
    redirectURI: "<Developer backend url>",
    scopeList: [
        "user.display_name",
        "user.bitmoji.avatar",
        "user.external_id",
    ],
    handleResponseCallback: function (token) {
        // Implement an API call to the developer backend, passing the Snapchat access token
    }
});

Developer backend

The developer backend is responsible for authenticating access tokens from the third-party IdP and exchanging them for an OpenID Connect token that can be used to access AWS services. For this example, you will use an AWS Lambda function behind Amazon API Gateway, with the IAM permissions to call getOpenIdTokenForDeveloperIdentity.

The following is a code example to authenticate access tokens with Snapchat.

const axios = require('axios');

// Validate the Snapchat access token by requesting the user's data from
// Snapchat's /v1/me endpoint; a successful response means the token is valid.
let result = await axios({
    method: 'post',
    url: 'https://kit.snapchat.com/v1/me',
    headers: {'Content-Type': 'application/json',
              'Authorization': 'Bearer ' + body.access_token},
    data: {"query":"{me{displayName bitmoji{avatar} externalId}}"}
});

After successful authentication, you call getOpenIdTokenForDeveloperIdentity with the identity pool ID and a logins map. The logins map maps the developer provider name to an external ID from the IdP. The call returns an OpenID Connect token and the Amazon Cognito identifier (identity ID), which can be sent to the application. The identity ID and token can then be used to access AWS services. The following code example retrieves the identity ID and OpenID Connect token after authenticating with Snapchat.

const AWS = require('aws-sdk');
const cognitoidentity = new AWS.CognitoIdentity();

// Exchange the validated Snapchat identity for an Amazon Cognito identity ID
// and OpenID Connect token.
let returnbody;
if (result.status === 200) {
    returnbody = JSON.stringify(await cognitoidentity.getOpenIdTokenForDeveloperIdentity({
        IdentityPoolId: '<Identity Pool Id>',
        Logins: {
            '<Developer provider name>': result.data.data.me.externalId,
        }
    }).promise());
}
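
On the client, the application can then exchange the identity ID and OpenID Connect token returned by the developer backend for temporary AWS credentials by calling getCredentialsForIdentity. The following is a minimal sketch using the AWS SDK for JavaScript; the Region value is a placeholder, and identityId and token stand in for the values returned by your backend.

const AWS = require('aws-sdk');

// Exchange the identity ID and OpenID Connect token from the developer
// backend for temporary, limited-privilege AWS credentials.
const cognitoidentity = new AWS.CognitoIdentity({ region: '<AWS Region>' });

async function getAwsCredentials(identityId, token) {
    const response = await cognitoidentity.getCredentialsForIdentity({
        IdentityId: identityId,
        Logins: {
            // Developer authenticated identities use this fixed provider key.
            'cognito-identity.amazonaws.com': token
        }
    }).promise();
    // Contains AccessKeyId, SecretKey, SessionToken, and Expiration.
    return response.Credentials;
}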

Considerations

The following are considerations about Amazon Cognito identity pools that you should keep in mind when building your solution.

  1. The identity ID returned by getOpenIdTokenForDeveloperIdentity is mapped to the external ID provided in the logins map. This mapping is stored in Amazon Cognito. You can then use the identity ID to identify who is calling AWS services, which is especially useful for auditing purposes.
  2. This solution is suitable for multi-regional deployments. All that is required is that you replicate the identity pool configuration in another AWS Region. Note that the identity ID will be different in each Region, but that should not affect functionality.

Conclusion

By using developer authenticated identities, you can integrate your application with Amazon Cognito and a third-party IdP with the proper prerequisites. For more examples of using developer authenticated identities, see Developer Authenticated Identities (Identity Pools) in the Amazon Cognito Developer Guide. If you have feedback about this post, submit comments in the Comments section below or start a new thread on one of our forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Andrew Lee

Andrew is a Solutions Architect at AWS. He is passionate about projects that drive collaboration and partnerships between Amazon and Snapchat. A builder at heart, he believes that only through building something can a vision be truly realized.

How Banks Can Use AWS to Meet Compliance

Post Syndicated from Jiwan Panjiker original https://aws.amazon.com/blogs/architecture/how-banks-can-use-aws-to-meet-compliance/

Since the 2008 financial crisis, banking supervisory institutions such as the Basel Committee on Banking Supervision (BCBS) have strengthened regulations. There is now increased oversight over the financial services industry. For banks, making the necessary changes to comply with these rules is a challenging, multi-year effort.

Basel IV, a massive update to existing rules, is due for implementation in January 2023. Basel IV standardizes the approach to calculating credit risk, increases the impact of risk-weighted assets (RWAs) and emphasizes data transparency.

Given the complexity of data, modeling, and numerous assumptions that have to be made, compliance under Basel IV implementation will be challenging. Standardization omits nuances unique to your business, which can drive up costs, but violating guidelines will result in steep penalties.

This post will address these challenges by outlining a mechanism that facilitates a healthy, data-driven dialogue between banks and regulators to better achieve compliance objectives. The reference architecture will focus on enabling fast, iterative releases with the help of serverless AWS services.

There are four key actions to take in order to support this mechanism:

  1. Automate data management
  2. Establish a continuous integration/continuous delivery (CI/CD) pipeline
  3. Enable fast, point-in-time audit replays
  4. Set up proactive monitoring and notifications

Automate data management

Due to frequent merger activity, banks are typically composed of a web of integrated systems and siloed business units, making it difficult to consolidate data. Under Basel IV guidelines, auditors want banks to provide detailed data in a presentable way.

You can tackle this first challenge by establishing a data pipeline as shown in Figure 1. Take inventory of each data source as it is incorporated into the pipeline. Identify the critical internal and external data sources that will be used to populate the initial landing area. Amazon Simple Storage Service (S3) is a great choice for this.

Figure 1. Data pipeline that cleans, processes, and segments data

Amazon S3 is a highly available, durable service that is a popular data lake solution. S3 offers WORM storage capabilities like S3 Glacier Vault Lock and S3 Object Lock to protect the integrity of your archived data in accordance with U.S. SEC and FINRA rules.

Basel IV regulations also require banks to use many attributes to develop accurate credit risk models. The attributes can be a mix of datasets such as financial statements, internal balanced scorecards, macroeconomic data, and credit ratings. The risk models themselves can also be segmented by portfolio types, industry segments, asset types, and more.

You can split data into different domains and designate data owners with separate S3 buckets. Credit risk model developers, analysts, and data scientists can then use the structure of the S3 buckets to pull together relevant datasets. They can then store the outputs in S3 buckets.

To support fast, automated data retrieval, store object metadata in a highly scalable, queryable database. You can set up Amazon S3 so that an object event initiates a function that populates Amazon DynamoDB. Developers can use AWS Lambda to write these functions using popular languages like Python.
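
The following is a minimal sketch of such a function, shown here in Node.js; the DynamoDB table name and item attributes are illustrative assumptions rather than part of the reference architecture.

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

// Triggered by an S3 ObjectCreated event; records object metadata in DynamoDB
// so that datasets can be located quickly without scanning buckets.
exports.handler = async (event) => {
    for (const record of event.Records) {
        const bucket = record.s3.bucket.name;
        const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

        await dynamodb.put({
            TableName: 'DatasetCatalog',      // illustrative table name
            Item: {
                objectKey: key,
                bucket: bucket,
                sizeBytes: record.s3.object.size,
                ingestedAt: record.eventTime
            }
        }).promise();
    }
    return { processed: event.Records.length };
};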

With AWS Glue, you can automate extract, transform, and load (ETL) processes to clean and move data to the different S3 buckets. AWS Glue can also support data operations by automatically cataloging your various data sources.

Taking on a structured approach will simplify data governance and transparency as the business continues to grow and operate.

Establish a CI/CD pipeline

Adopt tools that machine learning teams can use to build a streamlined CI/CD solution as demonstrated in Figure 2.

Figure 2. An end-to-end machine learning development and deployment pipeline

Using tightly integrated AWS services, your teams can minimize time spent managing tools and deployment processes, and instead, focus on tuning the models and analyzing the results.

Amazon SageMaker brings together a powerful set of machine learning capabilities on the AWS Cloud. It helps data scientists and engineers build insightful models. Figure 2 depicts the high-level architecture and shows how Amazon SageMaker Pipelines helps teams orchestrate the automation and deployment processes.

The core of the pipeline uses a set of AWS deployment services so that your teams can collaborate and review effectively. With AWS CodeCommit, your teams can set up a Git-based repository to store and version models for data processing, training, and evaluation. The repository can also store code and AWS CloudFormation configuration files for deployment. You can use AWS CodePipeline and AWS CodeBuild to create and update a model endpoint based on the approved and reviewed changes.

Any updates detected in the AWS CodeCommit repository initiate a deployment whenever a new model version is added to the Model Registry. Amazon S3 can be used to store generated model artifacts, historical data, and models.

Enable fast, point-in-time audit replays

Figure 3. Containers offer a lightweight, powerful solution to run audits using historical assets

One of the main themes of Basel IV is transparency. Figure 3 illustrates a solution to build trust with regulators by allowing them to verify and understand modeling activity.

A lightweight application is hosted in AWS Fargate and enables auditors to re-run Basel credit risk models under specified conditions. With AWS Fargate, you don’t need to manually manage instances or container orchestration. Configure the CPU or memory specifications at the task level and set guidelines around scalability for your service. Your tasks then scale up and down automatically, based on demand, and will optimize cost efficiency and availability.

Figure 3 shows the following:

  1. The application takes inputs such as date, release version, and model type.
  2. It then queries DynamoDB with this information.
  3. The query will return the data necessary to retrieve model artifacts from previous CI/CD deployments and relevant datasets from historical S3 buckets.
  4. Using this information, it can spin up as many containers as needed to run the model.
  5. It then stores the outputs in a separate S3 bucket.
  6. Auditors will have a detailed trace of all the attributes, assumptions, and data that went into the modeling effort. To streamline this process, the app can also compare the outputs of the historical runs to the recent replay and highlight any significant deviations.

Though internal models will be de-emphasized under Basel IV, banks will continue to run internal models as a benchmark against the broader standards. Schedule AWS Fargate tasks to run these models regularly to capitalize on highly performant compute services while minimizing costs.
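
One way to schedule these runs is with an Amazon EventBridge rule that starts a Fargate task on a fixed schedule. The following is a sketch using the AWS SDK for JavaScript; the rule name, cluster, task definition, IAM role, and subnet values are placeholders for resources you would already have in place.

const AWS = require('aws-sdk');
const eventbridge = new AWS.EventBridge();

// Run the benchmark models once a day on Fargate without keeping any
// compute running between executions.
async function scheduleDailyModelRun() {
    await eventbridge.putRule({
        Name: 'daily-model-benchmark',
        ScheduleExpression: 'rate(1 day)'
    }).promise();

    await eventbridge.putTargets({
        Rule: 'daily-model-benchmark',
        Targets: [{
            Id: 'model-benchmark-task',
            Arn: '<ECS cluster ARN>',
            RoleArn: '<IAM role ARN that allows ecs:RunTask>',
            EcsParameters: {
                TaskDefinitionArn: '<task definition ARN for the model run>',
                TaskCount: 1,
                LaunchType: 'FARGATE',
                NetworkConfiguration: {
                    awsvpcConfiguration: {
                        Subnets: ['<private subnet ID>'],
                        AssignPublicIp: 'DISABLED'
                    }
                }
            }
        }]
    }).promise();
}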

Set up proactive monitoring and notifications

Figure 4. Scheduled jobs can send out notifications using Amazon SNS when certain thresholds are breached

The last principle is based on establishing an early warning system, enabling banks to take a more proactive role in maintaining compliance.

With automated monitoring and notifications, banks will be able to respond quickly to potential concerns. For instance, there can be a daily scheduled job that launches containers and runs the models against the latest data. If any thresholds are breached, alerts can be sent out via SMS or email. Operational teams can be subscribed to certain message topics using Amazon Simple Notification Service (SNS). They can then respond before actual compliance issues emerge.
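
A minimal sketch of the notification step is shown below; the topic ARN, metric name, and threshold are illustrative assumptions.

const AWS = require('aws-sdk');
const sns = new AWS.SNS();

// Called by the scheduled model-run job after comparing model outputs
// against compliance limits.
async function notifyIfBreached(metricName, value, threshold, topicArn) {
    if (value <= threshold) {
        return; // within tolerance, nothing to report
    }
    await sns.publish({
        TopicArn: topicArn, // for example, arn:aws:sns:us-east-1:111122223333:compliance-alerts
        Subject: `Threshold breached: ${metricName}`,
        Message: `Metric ${metricName} is ${value}, which exceeds the threshold of ${threshold}.`
    }).promise();
}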

Conclusion

With a Well-Architected approach, AWS helps you control your data, deploy new features, and embrace a serverless approach. This frees you to innovate quickly and address regulatory challenges.

You can iterate with new AWS services and bring machine learning to bear on various streams of data to identify high impact pools of value. You can get a clearer picture of the data to make it easier to identify areas where you can reduce RWAs. Using Amazon S3, you can turn on AWS analytics services such as Amazon QuickSight and Amazon Athena to visualize the data. You’ll be able to fulfill reporting requirements such as those found in regulatory studies like CCAR, DFAST, CECL, and IFRS9.

For more information about establishing a data pipeline, read Lake House Formation Architecture. It is a powerful pattern that combines a few concepts that will help bring your data together cohesively. To set up a robust CI/CD pipeline, explore the AWS Serverless CI/CD Reference Architecture.

Integrate AWS Network Firewall with your ISV Firewall Rulesets

Post Syndicated from Mony Kiem original https://aws.amazon.com/blogs/architecture/integrate-aws-network-firewall-with-your-isv-firewall-rulesets/

You may have requirements to use your existing on-premises firewall technology in AWS. As you move these workloads to AWS or launch new ones, you may replicate your existing on-premises firewall architecture. In this case, you can run partner appliances, such as Palo Alto Networks and Fortinet firewall appliances, on Amazon EC2 instances.

Ensure that the firewall and intrusion prevention system (IPS) rules that protect your on-premises data center will also protect your Amazon Virtual Private Cloud (VPC). These rules must be frequently updated to ensure protection against the latest security threats. Many enterprises do not want to manage multiple rulesets across their entire hybrid architecture.

AWS Network Firewall takes on this undifferentiated heavy lifting by providing a managed service that runs a fleet of firewall appliances and handles everything from patching to security updates. It uses Suricata, a free and open-source intrusion prevention system (IPS), for stateful inspection. Suricata is a network threat detection engine capable of real-time intrusion detection (IDS), inline intrusion prevention (IPS), network security monitoring (NSM), and offline packet capture processing. Customers can now import their existing IPS rules from firewall provider software that adheres to the open-source Suricata standard. This enables a network security model for your hybrid architecture that minimizes operational overhead while achieving consistent protection.

Overview of AWS services used

The following are AWS services that are used in our solution. These are the fundamental building blocks of a hybrid architecture on AWS.

  • AWS Network Firewall (ANFW): a stateful, managed network firewall and intrusion detection and prevention service. You can filter network traffic at your VPC using AWS Network Firewall. AWS Network Firewall pricing is based on the number of firewalls deployed and the amount of traffic inspected. There are no upfront commitments, and you pay only for what you use.
  • AWS Transit Gateway (TGW): a network transit hub that you use to interconnect your virtual private clouds (VPCs) and on-premises networks. Transit Gateway enables customers to connect thousands of VPCs. You can attach all your hybrid connectivity (VPN and Direct Connect connections) to a single Transit Gateway. This enables you to consolidate and control your organization’s entire AWS routing configuration in one place.
  • AWS Direct Connect, AWS Site-to-Site VPN, and Amazon VPC are other core components of this hybrid architecture.

Hybrid architecture with centralized network inspection

The example architecture in Figure 1 depicts the deployment model of a centralized network security architecture. It shows all inbound and outbound traffic flowing through a single VPC for inspection. The centralized inspection architecture incorporates the use of AWS Network Firewall deployed in an inspection VPC. All traffic is routed from other VPCs through AWS Transit Gateway (TGW). The threat intelligence rulesets are managed by a partner integration solution and can be automatically imported into AWS Network Firewall. This will allow you to use the same ruleset that is deployed on-premises. It will reduce inconsistent and manual processes to maintain and update the rules.

Figure 1. Centralized inspection architecture with AWS Network Firewall and imported rules

The partner integration with AWS Network Firewall (ANFW) works for both centralized and distributed inspection architectures. The AWS Network Firewall service houses the rulesets, and you only need to deploy a firewall endpoint in the Availability Zone of your VPC. In the centralized deployment, all traffic originating from the attached VPCs is routed to the TGW. On the TGW route table, all traffic is routed to the inspection VPC attachment ID. The route table associated with the subnet where the TGW elastic network interface is created in the inspection VPC has a default route via the ANFW endpoint, and return traffic from the ANFW endpoint is routed back to the TGW. If your VPC firewall endpoint is deployed across multiple Availability Zones (AZs), use TGW appliance mode to allow traffic flow symmetry. This ensures that return traffic is processed by the same AZ. For further details on how to set up your network routing, see the Deployment models for AWS Network Firewall blog post.

AWS Network Firewall partner integrations

Figure 1 depicts two partner integrations: Trend Micro and Fortinet. View the complete and latest list of partner integrations with AWS Network Firewall.

If you are already a user of Trend Micro for your threat intelligence, you can leverage this deployment model to standardize your hybrid cloud security. Trend Micro enables you to deploy your AWS managed network infrastructure and pair it with a partner-supported threat intelligence. This focuses on detecting and disrupting malware in your environments. You just need to enable the Sharing capability on Trend Micro Cloud One. For further information, see these detailed instructions.

If you are an existing user of Fortinet managed IPS rulesets, you can automatically deploy updated rulesets to AWS Network Firewall. This will ensure consistent protection across your application landscape. For more details on this integration, visit the partner page.

Getting started with AWS Network Firewall

You can get started with this pattern through the following high-level steps, with links to detailed instructions along the way.

  1. Determine your current networking architecture and cross reference it with the different deployment models supported by AWS Network Firewall. You can learn more about your different options in the blog Deployment models for AWS Network Firewall. The deployment model will determine how you set up your route tables and where you will deploy your AWS Network Firewall endpoint.
  2. Visit the AWS Network Firewall Partners page to confirm your provider’s integration with ANFW and follow the integration instructions from the partner’s documentation.
  3. Get started with AWS Network Firewall by visiting the Amazon VPC Console to create or import your firewall rules. You can group them into policies and apply them to the VPCs you want to protect per the developer guide. (A programmatic sketch of importing Suricata-compatible rules follows this list.)
  4. To start inspecting traffic, deploy your Network Firewall endpoint in your inspection VPC.
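
For illustration, the following sketch uses the AWS SDK for JavaScript to create a stateful rule group from a Suricata-compatible rules string. The rule group name, capacity, and the single example rule are assumptions; in practice, the rules string would be the ruleset exported from your partner integration.

const AWS = require('aws-sdk');
const networkfirewall = new AWS.NetworkFirewall();

// Example Suricata-compatible rule. In practice, this string would contain
// the ruleset exported from your partner integration.
const suricataRules =
    'drop tcp $EXTERNAL_NET any -> $HOME_NET 23 (msg:"Block inbound telnet"; sid:1000001; rev:1;)';

async function importRuleGroup() {
    const response = await networkfirewall.createRuleGroup({
        RuleGroupName: 'partner-ips-rules',   // illustrative name
        Type: 'STATEFUL',
        Capacity: 100,                        // size this for the ruleset you import
        RuleGroup: {
            RulesSource: {
                RulesString: suricataRules
            }
        }
    }).promise();
    return response.RuleGroupResponse.RuleGroupArn;
}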

Conclusion

You may need to operate a hybrid architecture using the same firewall and IPS rules for both your on-premises and cloud networks. For implementing these rules in the cloud, you can run partner firewall appliances on EC2 instances. This model of operation requires some heavy lifting.

Instead, you can set up AWS Network Firewall quickly, and not worry about deploying and managing any infrastructure. AWS Network Firewall automatically scales with your organization’s network traffic. AWS provides a flexible rules engine that enables you to define firewall rules to control this traffic. To simplify how organizations determine what rules to define, Fortinet and Trend Micro have made managed rulesets available through AWS Marketplace offerings. These can be deployed to your environment with a few clicks. These partners remove complexity for security teams so that they can easily create and maintain rules and take full advantage of AWS Network Firewall.

Customize requests and responses with AWS WAF

Post Syndicated from Kaustubh Phatak original https://aws.amazon.com/blogs/security/customize-requests-and-responses-with-aws-waf/

In March 2021, AWS introduced support for custom responses and request header insertion with AWS WAF. This blog post will demonstrate how you can use these new features to customize your AWS WAF solution to improve the user experience and security posture of your applications.

HTTP response codes are standard responses sent by a server in response to a client request. When AWS WAF blocks a request, the default response code sent back to the client is HTTP 403 (Forbidden). The HTTP 403 response code is associated with a default error page built by the web server engine. This page is typically generic and not user-friendly. With the Custom Response feature, AWS WAF now allows you to modify the status code from HTTP 403 to HTTP 2xx, 3xx, 4xx, or 5xx, and to return a custom body when the request is blocked by AWS WAF. Custom responses also allow you to differentiate between blocked requests generated by AWS WAF and those generated by your server.

When inspected HTTP requests are allowed by AWS WAF, the request is passed through to the associated resource. Now you have the ability to insert custom HTTP request headers for each rule inside your web access control list (web ACL) set to allow or count, and you can create additional logic with your application by tagging these requests with the headers.

We will be outlining three different use cases to show how you can use these AWS WAF features.

Use case 1: Custom response code

In this example, you will use the custom response code feature to redirect a viewer request to a different webpage. You use HTTP 3xx response codes to redirect the incoming request, and use the HTTP header Location to specify the website URL for redirection. Figure 1 shows an overview of this workflow.

Figure 1: Overview of using custom response code to redirect the request

Figure 1 illustrates the following steps:

  1. AWS WAF has a rate-based rule to allow 100 requests every 5 minutes.
  2. A user sends multiple requests and breaches AWS WAF rate-based rules threshold.
  3. AWS WAF blocks any further requests from the user.
  4. The AWS WAF custom response code feature modifies the response code from HTTP 403 to HTTP 302 (Found), with a Location header specifying the redirect URL.

Configure the AWS WAF web ACL and rule for custom response code

To create an Application Load Balancer and associate it with AWS WAF

  1. Follow the steps to configure a load balancer and a listener to create an internet-facing load balancer in the US East (N. Virginia) Region.
  2. After the load balancer is created, open the AWS WAF console.
  3. In the navigation pane, choose Web ACLs, and then choose Create web ACL in the US East (N. Virginia) Region.
  4. For Name, enter the name that you want to use to identify this web ACL.
  5. For Resource type, choose the Application Load Balancer that you created in Step 1 and choose Add.
  6. Choose Next.
  7. Choose Add rules and then choose Add my own rules and rule groups.
  8. For Name, enter the name that you want to use to identify this rule.
  9. For Rule type, choose Rate-based rule.
  10. For Rate limit, enter 100.
  11. Under Actions, keep the default action of Block and enable Custom response.
  12. Enter the response code as 302.
  13. Under Response headers, add a new custom header with Key as Location and Value as example.com
  14. Choose Add rule.
  15. Continue to choose Next to reach the summary page, and then choose Create new web ACL.
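
If you manage your web ACLs programmatically rather than through the console, the same configuration can be expressed through the AWS WAFV2 API. The following is a sketch using the AWS SDK for JavaScript; the web ACL and metric names are illustrative, and the Location value is the placeholder from this walkthrough.

const AWS = require('aws-sdk');
const wafv2 = new AWS.WAFV2();

// Create a regional web ACL with a rate-based rule that blocks excess requests
// using a custom 302 response and a Location header for the redirect target.
async function createRedirectWebAcl() {
    await wafv2.createWebACL({
        Name: 'rate-limit-redirect-acl',
        Scope: 'REGIONAL',                  // REGIONAL for Application Load Balancers
        DefaultAction: { Allow: {} },
        VisibilityConfig: {
            SampledRequestsEnabled: true,
            CloudWatchMetricsEnabled: true,
            MetricName: 'rate-limit-redirect-acl'
        },
        Rules: [{
            Name: 'rate-limit-redirect',
            Priority: 0,
            Statement: {
                RateBasedStatement: {
                    Limit: 100,
                    AggregateKeyType: 'IP'
                }
            },
            Action: {
                Block: {
                    CustomResponse: {
                        ResponseCode: 302,
                        ResponseHeaders: [
                            { Name: 'Location', Value: 'example.com' }
                        ]
                    }
                }
            },
            VisibilityConfig: {
                SampledRequestsEnabled: true,
                CloudWatchMetricsEnabled: true,
                MetricName: 'rate-limit-redirect'
            }
        }]
    }).promise();
}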

After the web ACL is created, you should see the web ACL configuration as shown in Figure 2.

Figure 2: Custom Response – Web ACL configuration

Now, the setup is complete. You have a web ACL with a rate-based rule configured to redirect blocked requests to a different URL. To verify that the setup is working as expected, you can enable and analyze the AWS WAF logs for a test user that is sending more than 100 requests in a period of 5 minutes.

In Figure 3, you can see the custom response code of 302 being sent to the test user instance.

Figure 3: Verifying the AWS WAF logs for custom response

In the example in Figure 3, we tested our configuration by having a user send more than 100 requests from a PC to trigger a block. To verify the Location header, we analyzed the network traffic by using the developer tools of the browser. As you can see in Figure 4, the response includes the custom header Location with the configured redirect URL.

Figure 4: Verifying response in the browser tools for custom response
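
If you prefer to verify the behavior from a script instead of the browser, a quick check along the following lines also works. This sketch uses axios with redirects disabled so that the 302 and its Location header remain visible; the load balancer DNS name is a placeholder.

const axios = require('axios');

// Send a request to the load balancer and inspect the raw response,
// without following the redirect.
async function checkRedirect() {
    const response = await axios.get('http://<load balancer DNS name>/', {
        maxRedirects: 0,
        // Treat 3xx as a valid response instead of throwing an error.
        validateStatus: (status) => status < 400
    });
    console.log('Status:', response.status);              // expect 302 once blocked
    console.log('Location:', response.headers.location);  // expect the redirect URL
}

checkRedirect();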

Use case 2: Custom error page

In this example, you will use the AWS WAF custom error page to route the request to a different error page, rather than the default web server error pages. As you can see in Figure 5, the workflow is similar to use case 1.

Figure 5: Overview of using custom error page to redirect the request

Figure 5 shows the following steps:

  1. AWS WAF has a rate-based rule to allow 100 requests every 5 minutes.
  2. A user sends multiple requests and breaches AWS WAF rate-based rules threshold.
  3. AWS WAF blocks any further requests from the user.
  4. AWS WAF custom response code feature modifies the response code to HTTP 307 – Temporary Redirect and responds with a custom error page with the message Too Many Requests.

To configure the AWS WAF web ACL and rule for custom error page

  1. In the AWS WAF console, in the navigation pane, choose Web ACLs, and then choose the web ACL that you created in use case 1.
  2. On the Rules tab, choose Add rules, and then choose Add my own rules and rule groups.
  3. For Name, enter the name that you want to use to identify this rule.
  4. For Rule type, choose Rate-based rule.
  5. For Rate limit, enter 100.
  6. Under Actions, keep the default action of Block and enable Custom response.
  7. For the response code, enter 307.
  8. For Choose how you would like to specify the response body, select Create a custom response body.
  9. A pop-up box will open. Enter a name for the Response body object name.
  10. For Content type, you can select JSON, HTML, or Plain Text. In this example, we select Plain Text.
  11. For Response body, enter any sample text. In this example, we enter This is a sample custom error page. Then choose Save.
  12. Choose Add Rule.
  13. For Set rule priority, move your new rule to the top so that this rule is processed first.

Figure 6 shows a summary of the rate based-rule created for use case 2.

Figure 6: Custom error page – Web ACL configuration

Now, the setup is complete. You have a web ACL with a rate-based rule configured to return a custom error page for blocked requests. To verify that the setup is working as expected, you can analyze the AWS WAF logs for a test user that is sending more than 100 requests in a period of 5 minutes. Figure 7 shows the custom response code of 307 being sent to our example test user instance.

Figure 7: Verifying responseCodeSent in the AWS WAF logs

When you access the load balancer URL from your browser, you should see the custom error page similar to Figure 8.

Figure 8: Verifying response using the browser

Use case 3: Header insertion for request tagging

This example demonstrates the AWS WAF header insertion capability to route the request based on geolocation. You will use the header country-check to notify the Application Load Balancer to route the request to a different target group, by using the Application Load Balancer advanced routing feature.

Figure 9: Overview of using request header insertion to tag the request to be processed downstream

Figure 9 shows the following steps:

  1. A user sends a request to the Application Load Balancer that has AWS WAF attached.
  2. AWS WAF applies a geographic location rule, set to Count mode, that matches (but does not block) requests from unexpected countries.
  3. AWS WAF adds a custom HTTP request header to tag this request.
  4. An Application Load Balancer listener rule is configured to route requests based on this header.
  5. Requests tagged by AWS WAF with the custom header are routed to a separate target group.

To add a geographical location rule for request header insertion

  1. In the AWS WAF console, in the navigation pane, choose Web ACLs, and then choose the web ACL that you created in use case 1.
  2. On the Rules tab, choose Add rules and then choose Add my own rules and rule groups.
  3. For Name, enter the name that you want to use to identify this rule.
  4. For Rule type, choose Regular rule.
  5. For If a request, select doesn’t match the statement (NOT).
  6. For Inspect, select Originates from a country in.
  7. In this example, normal traffic originates from the United States, so under Country codes, select United States – US.
  8. For IP address to use to determine the country of origin, choose Source IP address.
  9. For Action, choose Count. This will allow requests to be logged and tagged while processing other rules that follow.
  10. Expand Custom request, choose Add new custom header. For Key, choose country-check and for Value, choose true.

    Note: custom request headers are prefixed with x-amzn-waf-

  11. Choose Save rule.
  12. For Set rule priority, move your new rule to the top so that this rule is processed first.
  13. Choose Save.

 

Figure 10: Header insertion – Web ACL configuration

For this use case, you set up a geographic location rule to check for requests that originate from countries outside of the normal traffic flow of your application (in this example, the United States). You do not want to block these requests right away, but instead tag the requests triggered by this AWS WAF rule for further validation downstream by the application logic. To route the tagged requests differently, you use the Application Load Balancer (ALB) advanced request routing feature to send AWS WAF tagged traffic to a different target group.
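
As a sketch of the routing side, the following creates an Application Load Balancer listener rule that matches the inserted header (which arrives as x-amzn-waf-country-check, per the note above) and forwards matching requests to a separate target group. The listener ARN, target group ARN, and priority are placeholders.

const AWS = require('aws-sdk');
const elbv2 = new AWS.ELBv2();

// Route requests tagged by AWS WAF to a separate target group for
// additional validation by the application.
async function createHeaderRoutingRule() {
    await elbv2.createRule({
        ListenerArn: '<listener ARN>',
        Priority: 10,
        Conditions: [{
            Field: 'http-header',
            HttpHeaderConfig: {
                HttpHeaderName: 'x-amzn-waf-country-check',
                Values: ['true']
            }
        }],
        Actions: [{
            Type: 'forward',
            TargetGroupArn: '<target group ARN for tagged traffic>'
        }]
    }).promise();
}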

You can verify the header inserted by the rule by enabling AWS WAF full logs and looking at the requestHeadersInserted log field, as shown in Figure 11.

Figure 11: Verifying the AWS WAF logs for header insertion

Conclusion

AWS WAF provides the ability to create a custom response for blocked requests by changing the status code and response body. The header insertion capability allows you to tag requests allowed by AWS WAF for your application to perform another action.

In this post, we showed you three basic use cases to demonstrate how you can create a better user experience by redirecting users to another location instead of responding with a denied page. We also showed how you can create custom AWS WAF rules that tag requests so that your application logic knows they have been inspected and can make decisions based on that information.

If you’re new to AWS WAF, see Getting started with AWS WAF.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kaustubh Phatak

Kaustubh is a Solutions Architect at AWS. He’s passionate about helping customers build scalable, secure, and cost-effective applications to achieve business outcomes. Outside work, Kaustubh likes to play cricket and spend time with his wife and kid.

Author

EJ Chen

EJ is an Edge Specialist Solutions Architect at AWS.

Approaches to meeting Australian Government gateway requirements on AWS

Post Syndicated from John Hildebrandt original https://aws.amazon.com/blogs/security/approaches-to-meeting-australian-government-gateway-requirements-on-aws/

Australian Commonwealth Government agencies are subject to specific requirements set by the Protective Security Policy Framework (PSPF) for securing connectivity between systems that are running sensitive workloads, and for accessing less trusted environments, such as the internet. These agencies have often met the requirements by using some form of approved gateway solution that provides network-based security controls.

This post examines the types of controls you need to provide a gateway that can meet Australian Government requirements defined in the Protective Security Policy Framework (PSPF) and the challenges of using traditional deployment models to support cloud-based solutions. PSPF requirements are mandatory for non-corporate Commonwealth entities, and represent better practice for corporate Commonwealth entities, wholly-owned Commonwealth companies, and state and territory agencies. We discuss the ability to deploy gateway-style solutions in the cloud, and show how you can meet the majority of gateway requirements by using standard cloud architectures plus services. We provide guidance on deploying gateway solutions in the AWS Cloud, and highlight services that can support such deployments. Finally, we provide an illustrative AWS web architecture pattern to show how to meet the majority of gateway requirements through Well-Architected use of services.

Australian Government gateway requirements

The Australian Government Protective Security Policy Framework (PSPF) highlights the requirement to use secure internet gateways (SIGs) and references the Australian Information Security Manual (ISM) control framework to guide agencies. The ISM has a chapter on gateways, which includes the following recommendations for gateway architecture and operations:

  • Provide a central control point for traffic in and out of the system.
  • Inspect and filter traffic.
  • Log and monitor traffic and gateway operation to a secure location. Use appropriate security event alerting.
  • Use secure administration practices, including multi-factor authentication (MFA) access control, minimum privilege, separation of roles, and network segregation.
  • Perform appropriate authentication and authorization of users, traffic, and equipment. Use MFA when possible.
  • Use demilitarized zone (DMZ) patterns to limit access to internal networks.
  • Test security controls regularly.
  • Set up firewalls between security domains and public network infrastructure.

Since the PSPF references the ISM, the agency should apply the overall ISM framework to meet ISM requirements such as governance and security patching for the environment. The ISM is a risk-based framework, and the risk posture of the workload and organization should inform how to assess the controls. For example, requirements for authentication of users might be relaxed for a public-facing website.

In traditional on-premises environments, some Australian Government agencies have mandated centrally assessed and managed gateway capabilities in order to drive economies of scale across multiple government agencies. However, the PSPF does provide the option for gateways used only by a single government agency to undertake their own risk-based assessment for the single agency gateway solution.

Other government agencies also have specific requirements to connect with cloud providers. For example, the U.S. Government Office of Management and Budget (OMB) mandates that U.S. government users access the cloud through a specific agency connection.

Connecting to the cloud through on-premises gateways

Given the existence of centrally managed off-cloud gateways, one approach by customers has been to continue to use these off-cloud gateways and then connect to AWS through the on-premises gateway environment by using AWS Direct Connect, as shown in Figure 1.
 

Figure 1: Connecting to the AWS Cloud through an agency gateway and then through AWS Direct Connect

Although this approach does work, and makes use of existing gateway capability, it has a number of downsides:

  • A potential single point of failure: If the on-premises gateway capability is unavailable, the agency can lose connectivity to the cloud-based solution.
  • Bandwidth limitations: The agency is limited by the capacity of the gateway, which might not have been developed with dynamically scalable and bandwidth-intensive cloud-based workloads in mind.
  • Latency issues: The requirement to traverse multiple network hops, in addition to the gateway, will introduce additional latency. This can be particularly problematic with architectures that involve API communications being sent back and forth across the gateway environment.
  • Castle-and-moat thinking: Relying only on the gateway as the security boundary can discourage agencies from using and recognizing the cloud-based security controls that are available.

Some of these challenges are discussed in the context of US Trusted Internet Connection (TIC) programs in this whitepaper.

Moving gateways to the cloud

In response to the limitations discussed in the last section, both customers and AWS Partners have built gateway solutions on AWS to meet gateway requirements while remaining fully within the cloud environment. See this type of solution in Figure 2.
 

Figure 2: Moving the gateway to the AWS Cloud

With this approach, you can fully leverage the scalable bandwidth that is available from the AWS environment, and you can also reduce latency issues, particularly when multiple hops to and from the gateway are required. This blog post describes a pilot program in the US that combines AWS services and AWS Marketplace technologies to provide a cloud-based gateway.

You can use AWS Transit Gateway (released after the referenced pilot program) to provide the option to centralize such a gateway capability within an organization. This makes it possible to utilize the gateway across multiple cloud solutions that are running in their own virtual private clouds (VPCs) and accounts. This approach also facilitates the principle of the gateway being the central control point for traffic flowing in and out. For more information on using AWS Transit Gateway with security appliances, see the Appliance VPC topic in the Amazon VPC documentation.

More recently, AWS has released additional services and features that can assist with delivering government gateway requirements.

Elastic Load Balancing Gateway Load Balancer provides the capability to deploy third-party network appliances in a scalable fashion. With this capability, you can leverage existing investment in licensing, use familiar tooling, reuse intellectual property (IP) such as rule sets, and reuse skills, because staff are already trained in configuring and managing the chosen device. You have one gateway for distributing traffic across multiple virtual appliances, while scaling the appliances up and down based on demand. This reduces the potential points of failure in your network and increases availability. Gateway Load Balancer is a straightforward way to use third-party network appliances from industry leaders in the cloud. You benefit from the features of these devices, while Gateway Load Balancer makes them automatically scalable and easier to deploy. You can find an AWS Partner with Gateway Load Balancer expertise on AWS Marketplace. For more information on combining Transit Gateway and Gateway Load Balancer for a centralized inspection architecture, see this blog post. The post shows centralized architecture for East-West (VPC-to-VPC) and North-South (internet or on-premises bound) traffic inspection, plus processing.

To further simplify this area for customers, AWS has introduced the AWS Network Firewall service. Network Firewall is a managed service that you can use to deploy essential network protections for your VPCs. The service is simple to set up and scales automatically with your network traffic so you don’t have to worry about deploying and managing any infrastructure. You can combine Network Firewall with Transit Gateway to set up centralized inspection architecture models, such as those described in this blog post.

Reviewing a typical web architecture in the cloud

In the last section, you saw that SIG patterns can be created in the cloud. Now we can put that in context with the layered security controls that are implemented in a typical web application deployment. Consider a web application hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances, as shown in Figure 3, within the context of other services that will support the architecture.
 

Figure 3: Security controls in a web application hosted on EC2

Although this example doesn’t include a traditional SIG-type infrastructure that inspects and controls traffic before it’s sent to the AWS Cloud, the architecture has many of the technical controls that are called for in SIG solutions as a result of using the AWS Well-Architected Framework. We’ll now step through some of these services to highlight the relevant security functionality that each provides.

Network control services

Amazon Virtual Private Cloud (Amazon VPC) is a service you can use to launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. Amazon VPC lets you use multiple layers of security, including security groups and network access control lists (network ACLs), to help control access to Amazon EC2 instances in each subnet. Security groups act as a firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level. A network ACL is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups to add an additional layer of security to your VPC. Read about the specific differences between security groups and network ACLs.

Having this level of control throughout the application architecture has advantages over relying only on a central, border-style gateway pattern, because security groups for each tier of the application architecture can be locked down to only those ports and sources required for that layer. For example, in the architecture shown in Figure 3, only the application load balancer security group would allow web traffic (ports 80, 443) from the internet. The web-tier-layer security group would only accept traffic from the load-balancer layer, and the database-layer security group would only accept traffic from the web tier.
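
As an illustration of this layering, the following sketch authorizes the web tier to accept HTTP traffic only from the load balancer's security group; the security group IDs are placeholders.

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

// Allow the web tier to accept HTTP traffic only from the load balancer tier,
// rather than from the internet at large.
async function lockDownWebTier() {
    await ec2.authorizeSecurityGroupIngress({
        GroupId: '<web tier security group ID>',
        IpPermissions: [{
            IpProtocol: 'tcp',
            FromPort: 80,
            ToPort: 80,
            UserIdGroupPairs: [{
                GroupId: '<load balancer security group ID>'
            }]
        }]
    }).promise();
}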

If you need to provide a central point of control with this model, you can use AWS Firewall Manager, which simplifies the administration and maintenance of your VPC security groups across multiple accounts and resources. With Firewall Manager, you can configure and audit your security groups for your organization using a single, central administrator account. Firewall Manager automatically applies rules and protections across your accounts and resources, even as you add new resources. Firewall Manager is particularly useful when you want to protect your entire organization, or if you frequently add new resources that you want to protect via a central administrator account.

To support separation of management plane activities from data plane aspects in workloads, agencies can use multiple elastic network interface patterns on EC2 instances to provide a separate management network path.

Edge protection services

In the example in Figure 3, several services are used to provide edge-based protections in front of the web application. AWS Shield is a managed distributed denial of service (DDoS) protection service that safeguards applications that are running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there’s no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield: Standard and Advanced. When you use Shield Advanced, you can apply protections at the Amazon CloudFront, Amazon EC2, and Application Load Balancer layers. Shield Advanced also gives you 24/7 access to the AWS DDoS Response Team (DRT).

AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns that you define. Again, you can apply this protection at both the Amazon CloudFront and application load balancer layers in our illustrated solution. Agencies can also use managed rules for WAF to benefit from rules developed and maintained by AWS Marketplace sellers.

Amazon CloudFront is a fast content delivery network (CDN) service. CloudFront seamlessly integrates with AWS Shield, AWS WAF, and Amazon Route 53 to help protect against multiple types of unauthorized access, including network and application layer DDoS attacks.

Logging and monitoring services

The example application in Figure 3 shows several services that provide logging and monitoring of network traffic, application activity, infrastructure, and AWS API usage.

At the VPC level, the VPC Flow Logs feature provides you with the ability to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon Simple Storage Service (Amazon S3). Traffic Mirroring is a feature that you can use in a VPC to capture traffic if needed for inspection. This allows agencies to implement full packet capture on a continuous basis, or in response to a specific event within the application.
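
A sketch of enabling flow logs for a VPC, delivered to an S3 bucket, is shown below; the VPC ID and bucket name are placeholders.

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

// Capture accepted and rejected traffic for the whole VPC and deliver it to S3.
async function enableVpcFlowLogs() {
    await ec2.createFlowLogs({
        ResourceType: 'VPC',
        ResourceIds: ['<VPC ID>'],
        TrafficType: 'ALL',
        LogDestinationType: 's3',
        LogDestination: 'arn:aws:s3:::<flow log bucket>'
    }).promise();
}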

Amazon CloudWatch provides a monitoring service with alarms and analytics. In the example application, AWS WAF can also be configured to log activity as described in the AWS WAF Developer Guide.

AWS Config provides a timeline view of the configuration of the environment. You can also define rules to provide alerts and remediation when the environment moves away from the desired configuration.

AWS CloudTrail is a service that you can use to handle governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity that is related to actions across your AWS infrastructure.

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs. This blog post highlights a third-party assessment of GuardDuty that compares its performance to other intrusion detection systems (IDS).

Route 53 Resolver Query Logging lets you log the DNS queries that originate in your VPCs. With query logging turned on, you can see which domain names have been queried, the AWS resources from which the queries originated—including source IP and instance ID—and the responses that were received.

With Route 53 Resolver DNS Firewall, you can filter and regulate outbound DNS traffic for your VPCs. To do this, you create reusable collections of filtering rules in DNS Firewall rule groups, associate the rule groups to your VPC, and then monitor activity in DNS Firewall logs and metrics. Based on the activity, you can adjust the behavior of DNS Firewall accordingly.

Mapping services to control areas

Based on the above description of the use of additional services, we can summarize which services contribute to the control and recommendation areas in the gateway chapter in the Australian ISM framework.

Control and recommendation areas and their contributing services:

  • Inspect and filter traffic: AWS WAF, VPC Traffic Mirroring
  • Central control point: Infrastructure as code, AWS Firewall Manager
  • Authentication and authorization (MFA): AWS Identity and Access Management (IAM), solution and application IAM, VPC security groups
  • Logging and monitoring: Amazon CloudWatch, AWS CloudTrail, AWS Config, Amazon VPC (flow logs and mirroring), load balancer logs, Amazon CloudFront logs, Amazon GuardDuty, Route 53 Resolver Query Logging
  • Secure administration (MFA): IAM, directory federation (if used)
  • DMZ patterns: VPC subnet layout, security groups, network ACLs
  • Firewalls: VPC security groups, network ACLs, AWS WAF, Route 53 Resolver DNS Firewall
  • Web proxy; site and content filtering and scanning: AWS WAF, Firewall Manager

Note that the listed AWS service might not provide all relevant controls in each area, and it is part of the customer’s risk assessment and design to determine what additional controls might need to be implemented.

As you can see, many of the recommended practices and controls from the Australian Government gateway requirements are already encompassed in a typical Well-Architected solution. The implementing agency has the choice of two options: it can continue to place such a solution behind a gateway that runs either within or outside of AWS, leveraging the gateway controls that are inherent in the application architecture as additional layers of defense. Otherwise, the agency can conduct a risk assessment to understand which gateway controls can be supplied by means of the application architecture to reduce the gateway control requirements at any gateway layer in front of the application.

Summary

In this blog post, we’ve discussed the requirements for Australian Government gateways which provide network controls to secure workloads. We’ve outlined the downsides of using traditional on-premises solutions and illustrated how services such as AWS Transit Gateway, Elastic Load Balancing, Gateway Load Balancer, and AWS Network Firewall facilitate moving gateway solutions into the cloud. These are services you can evaluate against your network control requirements. Finally, we reviewed a typical web architecture running in the AWS Cloud with associated services to illustrate how many of the typical gateway controls can be met by using a standard Well-Architected approach.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on one of the AWS Security or Networking forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

John Hildebrandt

John is a Principal Solutions Architect in the Australian National Security team at AWS in Canberra, Australia. He is passionate about facilitating cloud adoption for customers to enable innovation. John has been working with government customers at AWS for over 8 years, as the first employee for the ANZ Public Sector team.

Hackathons with AWS Cloud9: Collaboration simplified for your next big idea

Post Syndicated from Mahesh Biradar original https://aws.amazon.com/blogs/devops/hackathons-with-aws-cloud9-collaboration-simplified-for-your-next-big-idea/

Many organizations host ideation events to innovate and prototype new ideas faster. These events usually run for a short duration and involve collaboration between members of participating teams. By the end of the event, a successful demonstration of a working prototype is expected, and the winner or the next steps are determined. Therefore, it’s important to build a working proof of concept quickly, and to do that, teams need to be able to share code and get it peer reviewed in real time.

In this post, you see how AWS Cloud9 can help teams collaborate, pair program, and track each other’s inputs in real time for a successful hackathon experience.

AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug code from any machine with just a browser. A shared environment is an AWS Cloud9 development environment that multiple users have been invited to participate in, where they can edit or view its shared resources.

Pair programming and mob programming are development approaches in which two or more developers collaborate simultaneously to design, code, or test solutions. At the core is the premise that two or more people collaborate on the same code at the same time, which allows for real-time code review and can result in higher quality software.

Hackathons are one of the best ways to collaboratively solve problems, often with code. Cross-functional two-pizza teams compete with limited resources under time constraints to solve a challenging business problem. Several companies have adopted the concept of hackathons to foster a culture of innovation, providing a platform for developers to showcase their creativity and acquire new skills. Teams are either provided a roster of ideas to choose from or come up with their own new idea.

Solution overview

In this post, you create an AWS Cloud9 environment shared with three AWS Identity and Access Management (IAM) users (the hackathon team). You also see how this team can code together to develop a sample serverless application using an AWS Serverless Application Model (AWS SAM) template.

 

The following diagram illustrates the deployment architecture.

Architecture diagram

Figure 1: Solution overview

Prerequisites

To complete the steps in this post, you need an AWS account with administrator privileges.

Set up the environment

To start setting up your environment, complete the following steps:

    1. Create an AWS Cloud9 environment in your AWS account.
    2. Create and attach an instance profile to AWS Cloud9 to call AWS services from an environment. For more information, see Create and store permanent access credentials in an environment.
    3. On the AWS Cloud9 console, select the environment you just created and choose View details.

      Screenshot of Cloud9 console

      Figure 2: Cloud9 View details

    4. Note the environment ID from the Environment ARN value; we use this ID in a later step.

      Screenshot of Cloud9 console showing ARN

      Figure 3: Environment ARN

    5. In your AWS Cloud9 terminal, create the file usersetup.sh with the following contents:
      #USAGE: 
      #STEP 1: Execute following command within Cloud9 terminal to retrieve environment id
      # aws cloud9 list-environments
      #STEP 2: Execute following command by providing appropriate parameters: -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3 
      # sh usersetup.sh -e 877f86c3bb80418aabc9956580436e9a -u User1,User2
      function usage() {
        echo "USAGE: sh usersetup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3"
      }
      while getopts ":e:u:" opt; do
        case $opt in
          e)  if ! aws cloud9 describe-environment-status --environment-id "$OPTARG" 2>&1 >/dev/null; then
                echo "Please provide valid cloud9 environmentid."
                usage
                exit 1
              fi
              environmentId="$OPTARG" ;;
          u)  if [ "$OPTARG" == "" ]; then
                echo "Please provide comma separated list of usernames."
                usage
                exit 1
              fi
              users="$OPTARG" ;;
          \?) echo "Incorrect arguments."
              usage
              exit 1;;
        esac
      done
      if [ "$OPTIND" -lt 5 ]; then
        echo "Missing required arguments."
        usage
        exit 1
      fi
      IFS=',' read -ra userNames <<< "$users"
      groupName='HackathonUsers'
      groupPolicy='arn:aws:iam::aws:policy/AdministratorAccess'
      userArns=()
      function createUsers() {
          userList=""    
          if aws iam get-group --group-name $groupName  > /dev/null 2>&1; then
            echo "$groupName group already exists."  
          else
            if aws iam create-group --group-name $groupName 2>&1 >/dev/null; then
              echo "Created user group - $groupName."  
            else
              echo "Error creating user group - $groupName."  
              exit 1
            fi
          fi
          if aws iam attach-group-policy --policy-arn $groupPolicy --group-name $groupName; then
            echo "Attached group policy."  
          else
            echo "Error attaching group policy to - $groupName."  
            exit 1
          fi
          
          for userName in "${userNames[@]}" ; do 
              
              randomPwd=`aws secretsmanager get-random-password \
              --require-each-included-type \
              --password-length 20 \
              --no-include-space \
              --output text`
          
              userList="$userList"$'\n'"Username: $userName, Password: $randomPwd"
              
              userArn=`aws iam create-user \
              --user-name $userName \
              --query 'User.Arn' | sed -e 's/\/.*\///g' | tr -d '"'`
              
              userArns+=( $userArn )
            
              aws iam wait user-exists \
              --user-name $userName
              
              echo "Successfully created user $userName."
              
              aws iam create-login-profile \
              --user-name $userName \
              --password $randomPwd \
              --password-reset-required 2>&1 >/dev/null
              
              aws iam add-user-to-group \
              --user-name $userName \
              --group-name $groupName
          done
          echo "Waiting for users profile setup..."
          sleep 8
          
          for arn in "${userArns[@]}" ; do 
            aws cloud9 create-environment-membership \
              --environment-id $environmentId \
              --user-arn $arn \
              --permissions read-write 2>&1 >/dev/null
          done
          echo "Following users have been created and added to $groupName group."
          echo "$userList"
      }
      createUsers
      
    6. Run the following command by replacing the following parameters:
        1. ENVIRONMENTID – The environment ID you saved earlier
        2. USERNAME1, USERNAME2… – A comma-separated list of users. In this example, we use three users.

      sh usersetup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3
      The script creates the following resources:

        • The IAM users that you specified
        • The IAM user group HackathonUsers, with the new users added as members and granted administrator access
        • A random password for each user, which must be changed before their first sign-in

      The user passwords are displayed in the AWS Cloud9 terminal output so that you can share them with your team.
    7. Instruct your team to sign in to the AWS Cloud9 console and open the shared environment by choosing Shared with you.

      Screenshot of Cloud9 console showing environments

      Figure 4: Shared environments

    8. Run the create-repository command, specifying a unique name, optional description, and optional tags:
      aws codecommit create-repository --repository-name hackathon-repo --repository-description "Hackathon repository" --tags Team=hackathon
    9. Note the cloneUrlHttp value from the output; we use this in a later step.
      Terminal showing environment metadata after running the command

      Figure 5: CodeCommit repo URL

      The environment is now ready for the hackathon team to start coding.

    10. Instruct your team members to open the shared environment from the AWS Cloud9 dashboard.
    11. For demo purposes, you can quickly create a sample Python-based Hello World application by using the AWS SAM CLI.
    12. Run the following commands to commit the files to the local repo:

      cd hackathon-repo
      git config --global init.defaultBranch main
      git init
      git add .
      git commit -m "Initial commit
    13. Run the following command to push the local repo to AWS CodeCommit by replacing CLONE_URL_HTTP with the cloneUrlHttp value you noted earlier:
      git push <CLONE_URL_HTTP> --all

For a sample collaboration scenario, watch the video Collaboration with Cloud9.

 

Clean up

The cleanup script deletes all the resources it created. Make a local copy of any files you want to save.

  1. Create a file named cleanup.sh with the following content:
    #USAGE: 
    #STEP 1: Execute following command within Cloud9 terminal to retrieve environment id
    # aws cloud9 list-environments
    #STEP 2: Execute following command by providing appropriate parameters: -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3 
    # sh cleanup.sh -e 877f86c3bb80418aabc9956580436e9a -u User1,User2
    function usage() {
      echo "USAGE: sh cleanup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3"
    }
    while getopts ":e:u:" opt; do
      case $opt in
        e)  if ! aws cloud9 describe-environment-status --environment-id "$OPTARG" 2>&1 >/dev/null; then
              echo "Please provide valid cloud9 environmentid."
              usage
              exit 1
            fi
            environmentId="$OPTARG" ;;
        u)  if [ "$OPTARG" == "" ]; then
              echo "Please provide comma separated list of usernames."
              usage
              exit 1
            fi
            users="$OPTARG" ;;
        \?) echo "Incorrect arguments."
            usage
            exit 1;;
      esac
    done
    if [ "$OPTIND" -lt 5 ]; then
      echo "Missing required arguments."
      usage
      exit 1
    fi
    IFS=',' read -ra userNames <<< "$users"
    groupName='HackathonUsers'
    groupPolicy='arn:aws:iam::aws:policy/AdministratorAccess'
    function cleanUp() {
        echo "Starting cleanup..."
        groupExists=false
        if aws iam get-group --group-name $groupName  > /dev/null 2>&1; then
          groupExists=true
        else
          echo "$groupName does not exist."  
        fi
        
        for userName in "${userNames[@]}" ; do 
            if ! aws iam get-user --user-name $userName >/dev/null 2>&1; then
              echo "$userName does not exist."  
            else
              userArn=$(aws iam get-user \
              --user-name $userName \
              --query 'User.Arn' | tr -d '"') 
              
              if $groupExists ; then 
                aws iam remove-user-from-group \
                --user-name $userName \
                --group-name $groupName
              fi
      
              aws iam delete-login-profile \
              --user-name $userName 
      
              if aws iam delete-user --user-name $userName ; then
                echo "Succesfully deleted $userName"
              fi
              
              aws cloud9 delete-environment-membership \
              --environment-id $environmentId --user-arn $userArn
              
            fi
        done
        if $groupExists ; then 
          aws iam detach-group-policy \
          --group-name $groupName \
          --policy-arn $groupPolicy
      
          if aws iam delete-group --group-name $groupName ; then
            echo "Succesfully deleted $groupName user group"
          fi
        fi
        
        echo "Cleanup complete."
    }
    cleanUp
  2. Run the script by passing the same parameters you passed when setting up the script:
    sh cleanup.sh -e ENVIRONMENTID -u USERNAME1,USERNAME2,USERNAME3
  3. Delete the CodeCommit repository by running the following commands in the root directory with the appropriate repository name:
    aws codecommit delete-repository --repository-name hackathon-repo
    rm -rf hackathon-repo
  4. You can delete the AWS Cloud9 environment when the event is over.

 

Conclusion

In this post, you saw how to use an AWS Cloud9 IDE to collaborate as a team and code together to develop a working prototype. For organizations looking to host hackathon events, these tools can be a powerful way to deliver a rich user experience. For more information about AWS Cloud9 capabilities, see the AWS Cloud9 User Guide. If you plan on using AWS Cloud9 for an ongoing collaboration, refer to the best practices for sharing environments in Working with shared environments in AWS Cloud9.

About the authors

Mahesh Biradar is a Solutions Architect at AWS. He is a DevOps enthusiast and enjoys helping customers implement cost-effective architectures that scale.
Guy Savoie is a Senior Solutions Architect at AWS working with SMB customers, primarily in Florida. In his role as a technical advisor, he focuses on unlocking business value through outcome-based innovation.
Ramesh Chidirala is a Solutions Architect focused on SMB customers in the Central region. He is passionate about helping customers solve challenging technical problems with AWS and helping them achieve their desired business outcomes.

 

Creating a notification workflow from sensitive data discovery with Amazon Macie, Amazon EventBridge, AWS Lambda, and Slack

Post Syndicated from Bruno Silviera original https://aws.amazon.com/blogs/security/creating-a-notification-workflow-from-sensitive-data-discover-with-amazon-macie-amazon-eventbridge-aws-lambda-and-slack/

Following the example of the EU in implementing the General Data Protection Regulation (GDPR), many countries are implementing similar data protection laws. In response, many companies are forming teams that are responsible for data protection. Considering the volume of information that companies maintain, it’s essential that these teams are alerted when sensitive data is at risk.

This post shows how to deploy a solution that uses Amazon Macie to discover sensitive data. This solution enables you to set up automatic notification to your company's designated data protection team via a Slack channel when Macie discovers sensitive data that needs to be protected, by using Amazon EventBridge and AWS Lambda.

The challenge

Let’s imagine that you’re part of a team that’s responsible for classifying your organization’s data but the data structure isn’t documented. Amazon Macie provides you the ability to run a scheduled classification job that examines your data, and you want to notify the data protection team when there’s new sensitive data to classify. Let’s build a solution to automatically notify the data protection team.

Solution overview

To be scalable and cost-effective, this solution uses serverless technologies and managed AWS services, including:

  • Macie – A fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in Amazon Web Services (AWS).
  • EventBridge – A serverless event bus that connects application data from your apps, SaaS, and AWS services. EventBridge can respond to specific events or run according to a schedule. The solution presented in this post uses EventBridge to initiate a custom Lambda function in response to a specific event (a minimal sketch of such a rule follows this list).
  • Lambda – Runs code in response to events such as changes in data, changes in application state, or user actions. In this solution, a Lambda function is initiated by EventBridge.
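To make the EventBridge piece concrete, the following boto3 sketch shows the kind of rule that matches Amazon Macie findings and targets a Lambda function. The CloudFormation template provided later in this post creates an equivalent rule for you; the function ARN and account ID below are placeholder assumptions, and the Lambda function also needs a resource-based permission (the template's LambdaInvokePermission) before EventBridge can invoke it.

import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match findings that Amazon Macie publishes to the default event bus.
events.put_rule(
    Name="MacieFindingsToSlack",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
    }),
    State="ENABLED",
)

# Route matched events to the notification Lambda function (placeholder ARN).
events.put_targets(
    Rule="MacieFindingsToSlack",
    Targets=[{
        "Id": "findings-to-slack",
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:findingsToSlack",
    }],
)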

Solution architecture

The architecture workflow is shown in Figure 1 and includes the following steps:

  1. Macie runs a classification job and publishes its findings to EventBridge as a JSON object.
  2. The EventBridge rule captures the findings and invokes a Lambda function as a target.
  3. The Lambda function parses the JSON object. The function then sends a custom message to a Slack channel with the sensitive data finding for the data protection team to evaluate and respond to (a minimal sketch of such a function follows Figure 1).

 

Figure 1: Solution architecture workflow

Figure 1: Solution architecture workflow
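The following is a minimal Python 3 sketch of what such a Lambda function might look like; it is not the exact code deployed by the CloudFormation template later in this post. The SLACK_WEBHOOK_URL environment variable name and the finding fields included in the message are assumptions for illustration.

import json
import os
import urllib.request

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # assumed environment variable name

def lambda_handler(event, context):
    # EventBridge delivers the Macie finding as the "detail" portion of the event.
    finding = event.get("detail", {})
    bucket = finding.get("resourcesAffected", {}).get("s3Bucket", {}).get("name", "unknown")
    message = {
        "text": "Macie finding: {} (severity: {}) in bucket {}".format(
            finding.get("type", "UNKNOWN"),
            finding.get("severity", {}).get("description", "n/a"),
            bucket,
        )
    }
    # Post the message to the Slack incoming webhook.
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return {"statusCode": response.status}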

Set up Slack

For this solution, you need a Slack workspace and an incoming webhook. The workspace must be in place before you create the webhook.

Create a Slack workspace

If you already have a Slack workspace in your environment, you can skip forward, to creating the webhook.

If you don’t have a Slack workspace, follow the steps in Create a Slack Workspace to create one.

Create an incoming webhook in Slack API

  1. Go to your Slack API.
  2. Choose Start Building to create an app.
  3. Enter the following details for your app:
    • App Name – macie-to-slack.
    • Development Slack Workspace – Choose the Slack workspace—either an existing workspace or one you created for this solution—to receive the Macie findings.
  4. Choose the Create App button.
  5. In the left menu, choose Incoming Webhooks.
  6. At the Activate Incoming Webhooks screen, move the slider from OFF to ON.
  7. Scroll down and choose Add New Webhook to Workspace.
  8. In the screen asking where your app should post, enter the name of the Slack channel from your workspace that you want to send notifications to, and then choose Authorize.
  9. On the next screen, scroll down to the Webhook URL section. Make a note of the URL to use later.

Deploy the CloudFormation template with the solution

The deployment of the CloudFormation template automatically creates the following resources:

  • A Lambda function with a name that begins with macie-to-slack-lambdafindingsToSlack-.
  • An EventBridge rule named MacieFindingsToSlack.
  • An IAM role named MacieFindingsToSlackkRole.
  • A permission to invoke the Lambda function named LambdaInvokePermission.

Note: Before you proceed, make sure you're deploying the template to the same Region in which your production Macie is running.

To deploy the CloudFormation template

  1. Download the YAML template to your computer.

    Note: To save the template, you can right-click the Raw button at the top of the code and then select Save link as if you're using Chrome, or the equivalent in your browser. This file is used in Step 4.

  2. Open CloudFormation in the AWS Management Console.
  3. On the Welcome page, choose Create stack and then choose With new resources.
  4. On Step 1 — Specify template, choose Upload a template file, select Choose file and then select the file template.yaml (the file extension might be .YML), then choose Next.
  5. On Step 2 — Specify stack details:
    1. Enter macie-to-slack as the Stack name.
    2. At the Slack Incoming Web Hook URL, paste the webhook URL you copied earlier.
    3. At Slack channel, enter the name of the channel in your workspace that will receive the alerts and choose Next.
    Figure 2: Defining stack details

    Figure 2: Defining stack details

  6. On Step 3 – Configure Stack options, you can leave the default settings, or change them for your environment. Choose Next to continue.
  7. At the bottom of Step 4 – Review, select I acknowledge that AWS CloudFormation might create IAM resources, and choose Create stack.

    Figure 3: Confirmation before stack creation

    Figure 3: Confirmation before stack creation

  8. Wait for the stack to reach status CREATE_COMPLETE.

Running the solution

At this point, you’ve deployed the solution and your resources are created.

To test the solution, you can schedule a Macie job targeting a bucket that contains a file with sensitive information that Macie can detect.

Note: You can check the Amazon Macie documentation to see the list of supported managed data identifiers.
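If you prefer to start the test from code rather than the console, a one-time classification job can be created with boto3 along the lines of the following sketch. The job name, account ID, and bucket name are placeholder assumptions, and Amazon Macie must already be enabled in the Region.

import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# Run a one-time classification job against a test bucket (placeholder values).
macie.create_classification_job(
    name="test-sensitive-data-job",
    jobType="ONE_TIME",
    s3JobDefinition={
        "bucketDefinitions": [{
            "accountId": "111111111111",
            "buckets": ["example-bucket-with-test-data"],
        }]
    },
)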

When the Macie job is complete, any findings are sent to the Slack channel.

Figure 4: Macie finding delivered to Slack channel

Figure 4: Macie finding delivered to Slack channel

Select the link in the message sent to the Slack channel to open that finding in the Macie console, as shown in Figure 5.

Figure 5: Finding details

Figure 5: Finding details

And you’re done!

Now your Macie finding results are delivered to your Slack channel where they can be easily monitored, reducing response time and risk exposure.

If you deployed this for testing purposes, or want to clean this up and move to your production account, you can delete the CloudFormation stack:

  1. Open the CloudFormation console.
  2. Select the stack and choose Delete.

Conclusion

In this blog post we walked through the steps to configure a notification workflow using Macie, Lambda, and EventBridge to send sensitive data findings to your data protection team via a Slack channel.

Your data protection team will appreciate the timely notifications of sensitive data findings, giving you the ability to focus on creating controls to improve data security and compliance with regulations related to protection and treatment of personal data.

For more information about data privacy on AWS, see Data Privacy FAQ.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Bruno Silveira

Bruno is a Solutions Architect Manager in the Public Sector team with focus on educational institutions in Brazil. His previous career was in government, financial services, utilities, and nonprofit institutions. Bruno is an enthusiast of cloud security and an appreciator of good rock’n roll with a good beer.

Author

Julio Carvalho

Julio is a Principal Security Solutions Architect at AWS for the Latin American financial market. As a security specialist, he helps customers solve protection and compliance challenges on their cloud journey.

Choosing a Well-Architected CI/CD approach: Open-source software and AWS Services

Post Syndicated from Brian Carlson original https://aws.amazon.com/blogs/devops/choosing-well-architected-ci-cd-open-source-software-aws-services/

This series of posts discusses making informed decisions when choosing to implement open-source tools on AWS services, adopt managed AWS services to satisfy the same needs, or use a combination of both.

We look at key considerations for evaluating open-source software and AWS services using the perspectives of a startup company and a mature company as examples. You can use these two different points of view to compare to your own organization. To make this comparison easier, we use Continuous Integration (CI) and Continuous Delivery (CD) capabilities as the target of our investigation.


In two related posts, we follow two AWS customers, Iponweb and BigHat Biosciences, as they share their CI/CD journeys, their perspectives, the decisions they made, and why. To end the series, we explore an example reference architecture showing the benefits AWS provides regardless of your emphasis on open-source tools or managed AWS services.

Why CI/CD?

Until your creations are in the hands of your customers, investment in development has provided no return. The faster valuable changes enter production, the greater positive impact you can have on your customer. In today’s highly competitive world, the ability to frequently and consistently deliver value is a competitive advantage. The Operational Excellence (OE) pillar of the AWS Well-Architected Framework recognizes this impact and focuses on the capabilities of CI/CD in two dedicated sections.

The concepts in CI/CD originate from software engineering but apply equally to any form of content. The goal is to support development, integration, testing, deployment, and delivery to production. For example, making changes to an application, updating your machine learning (ML) models, changing your multimedia assets, or referring to the AWS Well-Architected Framework.

Adopting CI/CD and the best practices from the Operational Excellence pillar can help you address risks in your environment, and limit errors from manual processes. More importantly, they help free your teams from the related manual processes, so they can focus on satisfying customer needs, differentiating your organization, and accelerating the flow of valuable changes into production.


How do you decide what you need?

The first question in the Operational Excellence pillar is about understanding needs and making informed decisions. To help you frame your own decision-making process, we explore key considerations from the perspective of a fictional startup company and a fictional mature company. In our two related posts, we explore these same considerations with Iponweb and BigHat.

The key considerations include:

  • Functional requirements – Providing specific features and capabilities that deliver value to your customers.
  • Non-functional requirements – Enabling the safe, effective, and efficient delivery of the functional requirements. Non-functional requirements include security, reliability, performance, and cost requirements.
    • Without security, you can’t earn customer trust. If your customers can’t trust you, you won’t have customers.
    • Without reliability you aren’t available to serve your customers. If you can’t serve your customers, you won’t have customers.
    • Performance is focused on timely and efficient delivery of value, not delivering as fast as possible.
    • Cost is focused on optimizing the value received for the resources spent (for example, money, time, or effort), not minimizing expense.
  • Operational requirements – Enabling you to effectively and efficiently support, maintain, sustain, and improve the delivery of value to your customers. When you “Design with Ops in Mind,” you’re enabling effective and efficient support for your business outcomes.

These non-feature-related key considerations are why Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization are the five pillars of the AWS Well-Architected Framework.

The startup company

Any startup begins as a small team of inspired people working together to realize the unique solution they believe solves an unsolved problem.

For our fictional small team, everyone knows each other personally and all speak frequently. We share processes and procedures in discussions, and everyone knows what needs to be done. Our team members bring their expertise and dedicate it, and the majority of their work time, to delivering our solution. The results of our efforts inform changes we make to support our next iteration.

However, our manual activities are error-prone and inconsistencies exist in the way we do them. Performing these tasks takes time away from delivering our solution. When errors occur, they have the potential to disrupt everyone’s progress.

We have capital available to make some investments. We would prefer to bring in more team members who can contribute directly to developing our solution. We need to iterate faster if we are to achieve a broadly viable product in time to qualify for our next round of funding. We need to decide what investments to make.

  • Goals – Reach the next milestone and secure funding to continue development
  • Needs – Reduce or eliminate the manual processes and associated errors
  • Priority – Rapid iteration
  • CI/CD emphasis – Baseline CI/CD capabilities and non-functional requirements are emphasized over a rich feature set

The mature company

Our second fictional company is a large and mature organization operating in a mature market segment. We’re focused on consistent, quality customer experiences to serve and retain our customers.

Our size limits the personal relationships between our service and development teams. The process to make requests, and the interfaces between teams and their systems, are well documented and understood.

However, the systems we have implemented over time, as needs were identified and addressed, aren’t well documented. Our existing tool chain includes some in-house scripting and both supported and unsupported versions of open-source tools. There are limited opportunities for us to acquire new customers.

When conditions change and new features are desired, we want to be able to rapidly implement and deploy those features as fast as possible. If we can differentiate our services, however briefly, we may be able to win customers away from our competitors. Our other path to improved profitability is to evolve our processes, maximizing integration and efficiencies, and capturing cost reductions.

  • Goals – Differentiate ourselves in the marketplace with desired new features
  • Needs – Address the risks of poorly documented systems and unsupported software
  • Priority – Evolve efficiency
  • CI/CD emphasis – Rich feature set and integrations are emphasized over improving the existing non-functional capabilities

Open-source tools on AWS vs. AWS services

The choice of open-source tools or AWS service is not binary. You can select the combination of solutions that provides the greatest value. You can implement open-source tools for their specific benefits where they outweigh the costs and operational burden, using underlying AWS services like Amazon Elastic Compute Cloud (Amazon EC2) to host them. You can then use AWS managed services, like AWS CodeBuild, for the undifferentiated features you need, without additional cost or operational burden.


Feature Set

Our fictional organizations both want to accelerate the flow of beneficial changes into production and are evaluating CI/CD alternatives to support that outcome. Our startup company wants a working solution—basic capabilities, author/code, build, and deploy, so that they can focus on development. Our mature company is seeking every advantage—a rich feature set, extensive opportunities for customization, integration capabilities, and fine-grained control.

Open-source tools

Open-source tools often excel at meeting functional requirements. When a new functionality, capability, or integration is desired, any developer can implement it for themselves and then contribute their code back to the project. As the user community for an open-source project expands, the number of use cases and identified features grows, and so does the number of potential solutions and potential contributors. Developers use these tools to support their efforts and to implement new features that provide value to them.

However, features may be released in unsupported versions and then later added to the supported feature set. Non-functional requirements take time and are less appealing because they don’t typically bring immediate value to the product. Non-functional capabilities may lag behind the feature set.

Consider the following:

  • Open-source tools may have more features and existing integrations to other tools
  • The pace of feature set delivery may be extremely rapid
  • The features delivered are those desired and created by the active members of the community
  • You are free to implement the features your company desires
  • There is no commitment to long-term support for the project or any given feature
  • You can implement open-source tools on multiple cloud providers or on premises
  • If the project is abandoned, you’re responsible for maintaining your implementation

AWS services

AWS services are driven by customer needs. Services and features are supported by dedicated teams. These customer-obsessed teams focus on all customer needs, with security being their top priority. Both functional and non-functional requirements are addressed with an emphasis on enabling customer outcomes while minimizing the effort they expend to achieve them.

Consider the following:

  • The pace of delivery of feature sets is consistent
  • The feature roadmap is driven by customer need and customer requests
  • The AWS service team is dedicated to support of the service
  • AWS services are available on the AWS Cloud and on premises through AWS Outposts


Cost Optimization

Why are we discussing cost after the feature set, when security and reliability are fundamentally more important? Because leadership naturally follows the operational excellence best practice of evaluating trade-offs: having looked at the potential benefits from the feature set, the next question is typically, "What is this going to cost?" Leadership defines the priorities and allocates the resources necessary (capital, time, effort). We review cost optimization second so that leadership can compare the expected benefits of CI/CD investments with investments in other efforts, and make an informed decision.

Our organizations are both cost conscious. Our startup is working with finite capital and time. In contrast, our mature company can plan to make investments over time and budget for the needed capital. Early investment in a robust and feature-rich CI/CD tool chain could provide significant advantages towards the startup’s long-term success, but if the startup fails early, the value of that investment will never be realized. The mature company can afford to realize the value of their investment over time and can make targeted investments to address specific short-term needs.

Open-source tools

Open-source software doesn't have to be purchased, but there are costs to adopt. Open-source tools require appropriate skills in order to be implemented, and to perform management and maintenance activities. Those skills must be gained through dedicated training of team members, team member self-study, or by hiring new team members with the existing skills. The availability of skilled practitioners of open-source tools varies with how popular a tool is and how long it has had an active community. Loss of skilled team members includes the loss of their institutional knowledge and intimacy with the implementation. Skills must be maintained with changes to the tools and as team members join or leave. Time is required from skilled team members to support management and maintenance activities. If commercial support for the tool is desired, it may be available through third parties at an additional cost.

The time to value of an open-source implementation includes the time to implement and configure the resources and software. Additional value may be realized through investment of time configuring or implementing desired integrations and capabilities. There may be existing community-supported integrations or capabilities that reduce the level of effort to achieve these.

Consider the following:

  • No cost to acquire the software.
  • The availability of skilled practitioners of open-source tools may be lower. Cost (capital and time) to acquire, establish, or maintain skill sets may be higher.
  • There is an ongoing cost to maintain the team member skills necessary to support the open-source tools.
  • There is an ongoing cost of time for team members to perform management and maintenance activities.
  • Additional commercial support for open-source tools may be available at additional cost
  • Time to value includes implementation and configuration of resources and the open-source software. There may be more predefined community integrations.

AWS services

AWS services are provided pay-as-you-go with no required upfront costs. As of August 2020, more than 400,000 individuals hold active AWS Certifications, a number that grew more than 85% between August 2019 and August 2020.

Time to value for AWS services is extremely short and limited to the time to instantiate or configure the service for your use. Additional value may be realized through the investment of time configuring or implementing desired integrations. Predefined integrations for AWS services are added as part of the service development roadmap. However, there may be fewer existing integrations to reduce your level of effort.

Consider the following:

  • No cost to acquire the software; AWS services are pay-as-you-go for use.
  • AWS skill sets are broadly available. Cost (capital and time) to acquire, establish, or maintain skill sets may be lower.
  • AWS services are fully managed, and service teams are responsible for the operation of the services.
  • Time to value is limited to the time to instantiate or configure the service. There may be fewer predefined integrations.
  • Additional support for AWS services is available through AWS Support. Cost for support varies based on level of support and your AWS utilization.

Open-source tools on AWS services

Open-source tools on AWS services don’t impact these cost considerations. Migration off of either of these solutions is similarly not differentiated. In either case, you have to invest time in replacing the integrations and customizations you wish to maintain.


Security

Both organizations are concerned about reputation and customer trust. They both want to act to protect their information systems and are focusing on confidentiality and integrity of data. They both take security very seriously. Our startup wants to be secure by default and wants to trust the vendor to address vulnerabilities within the service. Our mature company has dedicated resources that focus on security, and the company practices defense in depth across internal organizations.

The startup and the mature company both want to know whether a choice is safe, secure, and can validate the security of their choice. They also want to understand their responsibilities and the shared responsibility model that applies.

Open-source tools

Open-source tools are the product of the contributors and may contain flaws or vulnerabilities. The entire community has access to the code to test and validate. There are frequently many eyes evaluating the security of the tools. A company or individual may perform a validation for themselves. However, there may be limited guidance on secure configurations. Controls in the implementer’s environment may reduce potential risk.

Consider the following:

  • You’re responsible for the security of the open-source software you implement
  • You control the security of your data within your open-source implementation
  • You can validate the security of the code and act as desired

AWS services

AWS service teams make security their highest priority and are able to respond rapidly when flaws are identified. There is robust guidance provided to support configuring AWS services securely.

Consider the following:

  • AWS is responsible for the security of the cloud and the underlying services
  • You are responsible for the security of your data in the cloud and how you configure AWS services
  • You must rely on the AWS service team to validate the security of the code

Open-source tools on AWS services

Open-source tools on AWS services combine these considerations; the customer is responsible for the open-source implementation and the configuration of the AWS services it consumes. AWS is responsible for the security of the AWS Cloud and the managed AWS services.


Reliability

Everyone wants reliable capabilities. What varies between companies is their appetite for risk, and how much impact from non-availability they can tolerate. The startup emphasizes the need for its systems to be available to support its rapid iterations. The mature company is operating with some existing reliability risks, including unsupported open-source tools and in-house scripts.

The startup and the mature company both want to understand the expected reliability of a choice, meaning what percentage of the time it is expected to be available. They both want to know if a choice is designed for high availability and will remain available even if a portion of the systems fails or is in a degraded state. They both want to understand the durability of their data, how to perform backups of their data, and how to perform recovery in the event of a failure.

Both companies need to determine what an acceptable outage duration is, commonly referred to as the Recovery Time Objective (RTO), and how much elapsed time of lost transactions (including committed changes) is acceptable, commonly referred to as the Recovery Point Objective (RPO). They need to evaluate whether they can achieve their RTO and RPO objectives with each of the choices they are considering.

Open-source tools

Open-source reliability is dependent upon the effectiveness of the company’s implementation, the underlying resources supporting the implementation, and the reliability of the open-source software. Open-source tools are the product of the contributors and may or may not incorporate high availability features. Depending on the implementation and tool, there may be a requirement for downtime for specific management or maintenance activities. The ability to support RTO and RPO depends on the teams supporting the company system, the implementation, and the mechanisms implemented for backup and recovery.

Consider the following:

  • You are responsible for implementing your open-source software to satisfy your reliability needs and high availability needs
  • Open-source tools may have downtime requirements to support specific management or maintenance activities
  • You are responsible for defining, implementing, and testing the backup and recovery mechanisms and procedures
  • You are responsible for the satisfaction of your RTO and RPO in the event of a failure of your open-source system

AWS services

AWS services are designed to support customer availability needs. As managed services, the service teams are responsible for maintaining the health of the services.

Consider the following:

Open-source tools on AWS services

Open-source tools on AWS services combine these considerations; the customer is responsible for the open-source implementation (including data durability, backup, and recovery) and the configuration of the AWS services it consumes. AWS is responsible for the health of the AWS Cloud and the managed services.


Performance

What defines timely and efficient delivery of value varies between our two companies. Each is looking for results before an engineer becomes idle waiting for them. The startup iterates rapidly based on the results of each prior iteration, and there is little other work for our startup engineer to perform before they have to wait on actionable results. Our mature company is more likely to have an outstanding backlog of improvements that can be acted upon while changes move through the pipeline.

Open-source tools

Open-source performance is defined by the resources upon which it is deployed. Open-source tools that can scale out can dynamically improve their performance when resource constrained. Performance can also be improved by scaling up, which is required when performance is constrained by resources and scaling out isn’t supported. The performance of open-source tools may be constrained by characteristics of how they were implemented in code or the libraries they use. If this is the case, the code is available for community or implementer-created improvements to address the limitation.

Consider the following:

  • You are responsible for managing the performance of your open-source tools
  • The performance of open-source tools may be constrained by the resources they are implemented upon; the code and libraries used; their system, resource, and software configuration; and the code and libraries present within the tools

AWS services

AWS services are designed to be highly scalable. CodeCommit has a highly scalable architecture, and CodeBuild scales up and down dynamically to meet your build volume. CodePipeline allows you to run actions in parallel in order to increase your workflow speeds.

Consider the following:

  • AWS services are fully managed, and service teams are responsible for the performance of the services.
  • AWS services are designed to scale automatically.
  • Your configuration of the services you consume can affect the performance of those services.
  • AWS services quotas exist to prevent unexpected costs. You can make changes to service quotas that may affect performance and costs.

Open-source tools on AWS services

Open-source tools on AWS services combine these considerations; the customer is responsible for the open-source implementation (including the selection and configuration of the AWS Cloud resources) and the configuration of the AWS services it consumes. AWS is responsible for the performance of the AWS Cloud and the managed AWS services.


Operations

Our startup company wants to limit its operations burden as much as possible in order to focus on development efforts. Our mature company has an established and robust operations capability. In both cases, they perform the management and maintenance activities necessary to support their needs.

Open-source tools

Open-source tools are supported by their volunteer communities. That support is voluntary, without any obligation or commitment from the users. If either company adopts open-source tools, they’re responsible for the management and maintenance of the system. If they want additional support with an obligation and commitment to support their implementation, third parties may provide commercial support at additional cost.

Consider the following:

  • You are responsible for supporting your implementation.
  • The open-source community may provide volunteer support for the software.
  • There is no commitment to support the software by the open-source community.
  • There may be less documentation, or accepted best practices, available to support open-source tools.
  • Early adoption of open-source tools, or the use of development builds, includes the chance of encountering unidentified edge cases and unanticipated issues.
  • The complexity of an implementation and its integrations may increase the difficulty to support open-source tools. The time to identify contributing factors may be extended by the complexity during an incident. Maintaining a set of skilled team members with deep understanding of your implementation may help mitigate this risk.
  • You may be able to acquire commercial support through a third party.

AWS services

AWS services are committed to providing long-term support for their customers.

Consider the following:

  • There is long-term commitment from AWS to support the service
  • As a managed service, the service team maintains current documentation
  • Additional levels of support are available through AWS Support
  • Support for AWS is available through partners and third parties

Open-source tools on AWS services

Open-source tools on AWS services combine these considerations. The company is responsible for operating the open-source tools (for example, software configuration changes, updates, patching, and responding to faults). AWS is responsible for the operation of the AWS Cloud and the managed AWS services.

Conclusion

In this post, we discussed how to make informed decisions when choosing to implement open-source tools on AWS services, adopt managed AWS services, or use a combination of both. To do so, you must examine your organization and evaluate the benefits and risks.


Examine your organization

You can make an informed decision about the capabilities you adopt. The insight you need can be gained by examining your organization to identify your goals, needs, and priorities, and discovering what your current emphasis is. Ask the following questions:

  • What is your organization trying to accomplish and why?
  • How large is your organization and how is it structured?
  • How are roles and responsibilities distributed across teams?
  • How well defined and understood are your processes and procedures?
  • How do you manage development, testing, delivery, and deployment today?
  • What are the major challenges your organization faces?
  • What are the challenges you face managing development?
  • What problems are you trying to solve with CI/CD tools?
  • What do you want to achieve with CI/CD tools?

Evaluate benefits and risk

Armed with that knowledge, the next step is to explore the trade-offs between open-source options and managed AWS services. Then evaluate the benefits and risks in terms of the key considerations:

  • Features
  • Cost
  • Security
  • Reliability
  • Performance
  • Operations

When asked “What is the correct answer?” the answer should never be “It depends.” We need to change the question to “What is our use case and what are our needs?” The answer will emerge from there.

Make an informed decision

A Well-Architected solution can include open-source tools, AWS Services, or any combination of both! A Well-Architected choice is an informed decision that evaluates trade-offs, balances benefits and risks, satisfies your requirements, and most importantly supports the achievement of your business outcomes.

Read the other posts in this series and take this journey with BigHat Biosciences and Iponweb as they share their perspectives, the decisions they made, and why.

Resources

Want to learn more? Check out the following CI/CD and developer tools on AWS:

Continuous integration (CI)
Continuous delivery (CD)
AWS Developer Tools

For more information about the AWS Well-Architected Framework, refer to the following whitepapers:

AWS Well-Architected Framework
AWS Well-Architected Operational Excellence pillar
AWS Well-Architected Security pillar
AWS Well-Architected Reliability pillar
AWS Well-Architected Performance Efficiency pillar
AWS Well-Architected Cost Optimization pillar


Author bio

Brian is the global Operational Excellence lead for the AWS Well-Architected program. Formerly the technical lead for an international network, Brian works with customers and partners researching the operations best practices with the greatest positive impact and produces guidance to help you achieve your goals.

 

How to replicate secrets in AWS Secrets Manager to multiple Regions

Post Syndicated from Fatima Ahmed original https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/

On March 3, 2021, we launched a new feature for AWS Secrets Manager that makes it possible for you to replicate secrets across multiple AWS Regions. You can give your multi-Region applications access to replicated secrets in the required Regions and rely on Secrets Manager to keep the replicas in sync with the primary secret. In scenarios such as disaster recovery, you can read replicated secrets from your recovery Regions, even if your primary Region is unavailable. In this blog post, I show you how to automatically replicate a secret and access it from the recovery Region to support a disaster recovery plan.

With Secrets Manager, you can store, retrieve, manage, and rotate your secrets, including database credentials, API keys, and other secrets. When you create a secret using Secrets Manager, it’s created and managed in a Region of your choosing. Although scoping secrets to a Region is a security best practice, there are scenarios such as disaster recovery and cross-Regional redundancy that require replication of secrets across Regions. Secrets Manager now makes it possible for you to easily replicate your secrets to one or more Regions to support these scenarios.

With this new feature, you can create Regional read replicas for your secrets. When you create a new secret or edit an existing secret, you can specify the Regions where your secrets need to be replicated. Secrets Manager will securely create the read replicas for each secret and its associated metadata, eliminating the need to maintain a complex solution for this functionality. Any update made to the primary secret, such as a secret value updated through automatic rotation, will be automatically propagated by Secrets Manager to the replica secrets, making it easier to manage the life cycle of multi-Region secrets.

Note: Each replica secret is billed as a separate secret. For more details on pricing, see the AWS Secrets Manager pricing page.

Architecture overview

Suppose that your organization has a requirement to set up a disaster recovery plan. In this example, us-east-1 is the designated primary Region, where you have an application running on a simple AWS Lambda function (for the example in this blog post, I’m using Python 3). You also have an Amazon Relational Database Service (Amazon RDS) – MySQL DB instance running in the us-east-1 Region, and you’re using Secrets Manager to store the database credentials as a secret. Your application retrieves the secret from Secrets Manager to access the database. As part of the disaster recovery strategy, you set up us-west-2 as the designated recovery Region, where you’ve replicated your application, the DB instance, and the database secret.

To elaborate, the solution architecture consists of:

  • A primary Region for creating the secret, in this case us-east-1 (N. Virginia).
  • A replica Region for replicating the secret, in this case us-west-2 (Oregon).
  • An Amazon RDS – MySQL DB instance that is running in the primary Region and configured for replication to the replica Region. To set up read replicas or cross-Region replicas for Amazon RDS, see Working with read replicas.
  • A secret created in Secrets Manager and configured for replication for the replica Region.
  • AWS Lambda functions (running on Python 3) deployed in the primary and replica Regions acting as clients to the MySQL DBs.

This architecture is illustrated in Figure 1.

Figure 1: Architecture overview for a multi-Region secret replication with the primary Region active

Figure 1: Architecture overview for a multi-Region secret replication with the primary Region active

In the primary region us-east-1, the Lambda function uses the credentials stored in the secret to access the database, as indicated by the following steps in Figure 1:

  1. The Lambda function sends a request to Secrets Manager to retrieve the secret value by using the GetSecretValue API call. Secrets Manager retrieves the secret value for the Lambda function.
  2. The Lambda function uses the secret value to connect to the database in order to read/write data.

The replicated secret in us-west-2 points to the primary DB instance in us-east-1. This is because when Secrets Manager replicates the secret, it replicates the secret value and all the associated metadata, such as the database endpoint. The database endpoint details are stored within the secret because Secrets Manager uses this information to connect to the database and rotate the secret if it is configured for automatic rotation. The Lambda function can also use the database endpoint details in the secret to connect to the database.
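As a rough illustration of steps 1 and 2, a Python 3 Lambda function might retrieve the secret and connect along the following lines. This is a sketch rather than the application code used in this post; the secret name is a placeholder, the secret is assumed to follow the standard Amazon RDS secret structure (username, password, host, port, dbname), and a MySQL client library such as PyMySQL is assumed to be packaged with the function.

import json
import os

import boto3
import pymysql  # assumed to be packaged with the Lambda deployment

SECRET_NAME = os.environ.get("SECRET_NAME", "example/rds/mysql-credentials")  # placeholder

def lambda_handler(event, context):
    # Step 1: retrieve the secret value from Secrets Manager in the function's own Region.
    secrets = boto3.client("secretsmanager")
    secret = json.loads(secrets.get_secret_value(SecretId=SECRET_NAME)["SecretString"])

    # Step 2: connect to the database using the credentials and endpoint stored in the secret.
    connection = pymysql.connect(
        host=secret["host"],                  # the database endpoint (or CNAME) stored in the secret
        port=int(secret.get("port", 3306)),
        user=secret["username"],
        password=secret["password"],
        database=secret.get("dbname", "mysql"),
        connect_timeout=5,
    )
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        return {"result": cursor.fetchone()[0]}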

To simplify database failover during disaster recovery, as I’ll cover later in the post, you can configure an Amazon Route 53 CNAME record for the database endpoint in the primary Region. The database host associated with the secret is configured with the database CNAME record. When the primary Region is operating normally, the CNAME record points to the database endpoint in the primary Region. The requests to the database CNAME are routed to the DB instance in the primary Region, as shown in Figure 1.

During disaster recovery, you can failover to the replica Region, us-west-2, to make it possible for your application running in this Region to access the Amazon RDS read replica in us-west-2 by using the secret stored in the same Region. As part of your failover script, the database CNAME record should also be updated to point to the database endpoint in us-west-2. Because the database CNAME is used to point to the database endpoint within the secret, your application in us-west-2 can now use the replicated secret to access the database read replica in this Region. Figure 2 illustrates this disaster recovery scenario.

Figure 2: Architecture overview for a multi-Region secret replication with the replica Region active

Figure 2: Architecture overview for a multi-Region secret replication with the replica Region active

Prerequisites

The procedure described in this blog post requires that you complete the following steps before starting the procedure:

  1. Configure an Amazon RDS DB instance in the primary Region, with replication configured in the replica Region.
  2. Configure a Route 53 CNAME record for the database endpoint in the primary Region.
  3. Configure the Lambda function to connect with the Amazon RDS database and Secrets Manager by following the procedure in this blog post.
  4. Sign in to the AWS Management Console using a role that has SecretsManagerReadWrite permissions in the primary and replica Regions.

Enable replication for secrets stored in Secrets Manager

In this section, I walk you through the process of enabling replication in Secrets Manager for:

  1. A new secret that is created for your Amazon RDS database credentials
  2. An existing secret that is not configured for replication

For the first scenario, I show you the steps to create a secret in Secrets Manager in the primary Region (us-east-1) and enable replication for the replica Region (us-west-2).

To create a secret with replication enabled

  1. In the AWS Management Console, navigate to the Secrets Manager console in the primary Region (N. Virginia).
  2. Choose Store a new secret.
  3. On the Store a new secret screen, enter the Amazon RDS database credentials that will be used to connect with the Amazon RDS DB instance. Select the encryption key and the Amazon RDS DB instance, and then choose Next.
  4. Enter the secret name of your choice, and then enter a description. You can also optionally add tags and resource permissions to the secret.
  5. Under Replicate Secret – optional, choose Replicate secret to other regions.

    Figure 3: Replicate a secret to other Regions

  6. For AWS Region, choose the replica Region, US West (Oregon) us-west-2. For Encryption Key, choose Default to store your secret in the replica Region. Then choose Next.

    Figure 4: Configure secret replication

  7. In the Configure Rotation section, you can choose whether to enable rotation. For this example, I chose not to enable rotation, so I selected Disable automatic rotation. However, if you want to enable rotation, you can do so by following the steps in Enabling rotation for an Amazon RDS database secret in the Secrets Manager User Guide. When you enable rotation in the primary Region, any changes to the secret from the rotation process are also replicated to the replica Region. After you’ve configured the rotation settings, choose Next.
  8. On the Review screen, you can see the summary of the secret configuration, including the secret replication configuration.

    Figure 5: Review the secret before storing

  9. At the bottom of the screen, choose Store.
  10. At the top of the next screen, you’ll see two banners that provide status on:
    • The creation of the secret in the primary Region
    • The replication of the secret to the replica Region

    After the creation and replication of the secret is successful, the banners will provide you with confirmation details.

At this point, you’ve created a secret in the primary Region (us-east-1) and enabled replication in a replica Region (us-west-2). You can now use this secret in the replica Region as well as the primary Region.

Now suppose that you have a secret created in the primary Region (us-east-1) that hasn’t been configured for replication. You can also configure replication for this existing secret by using the following procedure.

To enable multi-Region replication for existing secrets

  1. In the Secrets Manager console, choose the secret name. At the top of the screen, choose Replicate secret to other regions.
    Figure 6: Enable replication for existing secrets

    This opens a pop-up screen where you can configure the replica Region and the encryption key for encrypting the secret in the replica Region.

  2. Choose the AWS Region and encryption key for the replica Region, and then choose Complete adding region(s).
    Figure 7: Configure replication for existing secrets

    This starts the process of replicating the secret from the primary Region to the replica Region.

  3. Scroll down to the Replicate Secret section. You can see that the replication to the us-west-2 Region is in progress.
    Figure 8: Review progress for secret replication

    After the replication is successful, you can look under Replication status to review the replication details that you’ve configured for your secret. You can also choose to replicate your secret to more Regions by choosing Add more regions.

    Figure 9: Successful secret replication to a replica Region
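
You can also script this step for an existing secret. The following boto3 sketch adds a replica Region to a secret that was created without replication; the secret ID is a placeholder.

import boto3

secretsmanager = boto3.client('secretsmanager', region_name='us-east-1')

secretsmanager.replicate_secret_to_regions(
    SecretId='prod/app/database',                  # placeholder secret name or ARN
    AddReplicaRegions=[
        {'Region': 'us-west-2', 'KmsKeyId': 'alias/aws/secretsmanager'}
    ],
)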

Update the secret with the CNAME record

Next, you can update the host value in your secret to the CNAME record of the DB instance endpoint. This makes it possible for you to use the secret in the replica Region without making changes to the replica secret. In the event of a failover to the replica Region, you can simply update the CNAME record to point to the DB instance endpoint in the replica Region as part of your failover script. You can make this change in the console by using the following steps, or script it as shown in the sketch after the steps.

To update the secret with the CNAME record

  1. Navigate to the Secrets Manager console, and choose the secret that you have set up for replication.
  2. In the Secret value section, choose Retrieve secret value, and then choose Edit.
  3. Update the secret value for the host with the CNAME record, and then choose Save.

    Figure 10: Edit the secret value

  4. After you choose Save, you'll see a banner at the top of the screen with a message that indicates that the secret was successfully edited. Because the secret is set up for replication, you can also review the status of the synchronization of your secret to the replica Region after you update the secret. To do so, scroll down to the Replicate Secret section and look under Region Replication Status.

    Figure 11: Successful secret replication for a modified secret
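
The same update can be scripted, which is useful if you manage the secret value outside the console. The following sketch reads the current secret value, replaces the host key with the CNAME record, and writes a new secret version; the secret ID and CNAME are placeholders.

import json

import boto3

secretsmanager = boto3.client('secretsmanager', region_name='us-east-1')

current = secretsmanager.get_secret_value(SecretId='prod/app/database')  # placeholder secret ID
secret = json.loads(current['SecretString'])

secret['host'] = 'db.example.internal'             # the database CNAME record

# Store the updated value as a new version; the change is also replicated to us-west-2
secretsmanager.put_secret_value(
    SecretId='prod/app/database',
    SecretString=json.dumps(secret),
)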

Access replicated secrets from the replica Region

Now that you’ve configured the secret for replication in the primary Region, you can access the secret from the replica Region. Here I demonstrate how to access a replicated secret from a simple Lambda function that is deployed in the replica Region (us-west-2).

To access the secret from the replica Region

  1. From the AWS Management Console, navigate to the Secrets Manager console in the replica Region (Oregon) and view the secret that you configured for replication in the primary Region (N. Virginia).

    Figure 12: View secrets that are configured for replication in the replica Region

  2. Choose the secret name and review the details that were replicated from the primary Region. A secret that is configured for replication will display a banner at the top of the screen stating the replication details.

    Figure 13: The replication status banner

  3. Under Secret Details, you can see the secret’s ARN. You can use the secret’s ARN to retrieve the secret value from the Lambda function or application that is deployed in your replica Region (Oregon). Make a note of the ARN.

    Figure 14: View secret details

During a disaster recovery scenario when the primary Region isn’t available, you can update the CNAME record to point to the DB instance endpoint in us-west-2 as part of your failover script. For this example, my application that is deployed in the replica Region is configured to use the replicated secret’s ARN.

Let’s suppose your sample Lambda function defines the secret name and the Region in the environment variables. The REGION_NAME environment variable contains the name of the replica Region; in this example, us-west-2. The SECRET_NAME environment variable is the ARN of your replicated secret in the replica Region, which you noted earlier.

Figure 15: Environment variables for the Lambda function

In the replica Region, you can now refer to the secret's ARN and Region in your Lambda function code to retrieve the secret value for connecting to the database. The following sample Lambda function code snippet reads the secret's ARN and the replica Region from the SECRET_NAME and REGION_NAME environment variables, and then uses them to retrieve the secret value.

import os

import boto3
from botocore.exceptions import ClientError

secret_name = os.environ['SECRET_NAME']   # ARN of the replicated secret
region_name = os.environ['REGION_NAME']   # replica Region, for example us-west-2

def openConnection():
    # Create a Secrets Manager client in the replica Region
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )
    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'DecryptionFailureException':
            # Secrets Manager can't decrypt the secret by using the provided KMS key
            raise e
        raise e
    # Use the returned secret value (a JSON string) to connect to the database
    ...

Alternatively, you can use the Python 3 sample code that Secrets Manager provides for the replicated secret to retrieve the secret value from the Lambda function in the replica Region. You can review the provided sample code by navigating to the secret details in the console, as shown in Figure 16.

Figure 16: Python 3 sample code for the replicated secret

Summary

When you plan for disaster recovery, you can configure replication of your secrets in Secrets Manager to provide redundancy for your secrets. This feature reduces the overhead of deploying and maintaining additional configuration for secret replication and retrieval across AWS Regions. In this post, I showed you how to create a secret and configure it for multi-Region replication. I also demonstrated how you can configure replication for existing secrets across multiple Regions.

I showed you how to use secrets from the replica Region and configure a sample Lambda function to retrieve a secret value. When replication is configured for secrets, you can use this technique to retrieve the secrets in the replica Region in a similar way as you would in the primary Region.

You can start using this feature through the AWS Secrets Manager console, AWS Command Line Interface (AWS CLI), AWS SDK, or AWS CloudFormation. To learn more about this feature, see the AWS Secrets Manager documentation. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Fatima Ahmed

Fatima is a Security Global Practice Lead at AWS. She is passionate about cybersecurity and helping customers build secure solutions in the AWS Cloud. When she is not working, she enjoys time with her cat or solving cryptic puzzles.

TLS 1.2 will be required for all AWS FIPS endpoints beginning March 31, 2021

Post Syndicated from Janelle Hopper original https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-fips-endpoints/

To help you meet your compliance needs, we're updating all AWS Federal Information Processing Standard (FIPS) endpoints to a minimum of Transport Layer Security (TLS) 1.2. We have already updated over 40 services to require TLS 1.2, removing support for TLS 1.0 and TLS 1.1. Beginning March 31, 2021, client applications that cannot support TLS 1.2 will experience connection failures. To avoid an interruption in service, we encourage you to act now to ensure that you connect to AWS FIPS endpoints with TLS version 1.2. This change does not affect non-FIPS AWS endpoints.

Amazon Web Services (AWS) continues to notify impacted customers directly via their Personal Health Dashboard and email. However, if you’re connecting anonymously to AWS shared resources, such as through a public Amazon Simple Storage Service (Amazon S3) bucket, then you would not have received a notification, as we cannot identify anonymous connections.

Why are you removing TLS 1.0 and TLS 1.1 support from FIPS endpoints?

At AWS, we’re continually expanding the scope of our compliance programs to meet the needs of customers who want to use our services for sensitive and regulated workloads. Compliance programs, including FedRAMP, require a minimum level of TLS 1.2. To help you meet compliance requirements, we’re updating all AWS FIPS endpoints to a minimum of TLS version 1.2 across all AWS Regions. Following this update, you will not be able to use TLS 1.0 and TLS 1.1 for connections to FIPS endpoints.

How can I detect if I am using TLS 1.0 or TLS 1.1?

To detect the use of TLS 1.0 or 1.1, we recommend that you perform code, network, or log analysis. If you are using an AWS Software Development Kit (AWS SDK) or Command Line Interface (CLI), we have provided hyperlinks to detailed guidance in our previous TLS blog post about how to examine your client application code and properly configure the TLS version used.

When the application source code is unavailable, you can use a network tool, such as TCPDump (Linux) or Wireshark (Linux or Windows), to analyze your network traffic and find the TLS versions you're using when connecting to AWS endpoints. For a detailed example of using these tools, see the example later in this post.
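
If you only need a quick spot check of what your current environment negotiates, you can also open a TLS connection yourself and print the protocol version. The following Python sketch uses only the standard library to connect to the FIPS endpoint used in the example later in this post; note that it reports what this Python runtime negotiates, not necessarily what your application's own TLS stack uses, so treat it as a supplement to code, network, or log analysis.

import socket
import ssl

# Example FIPS endpoint; replace with the endpoint your application uses
HOST = 'monitoring.us-gov-west-1.amazonaws.com'

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # Prints the negotiated protocol version, for example 'TLSv1.2'
        print(tls.version())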

If you’re using Amazon S3, you can also use your access logs to view the TLS connection information for these services and identify client connections that are not at TLS 1.2.

What is the most common use of TLS 1.0 or TLS 1.1?

The most common client applications that use TLS 1.0 or 1.1 are Microsoft .NET Framework versions earlier than 4.6.2. If you use the .NET Framework, please confirm you are using version 4.6.2 or later. For information on how to update and configure .NET Framework to support TLS 1.2, see How to enable TLS 1.2 on clients.

How do I know if I am using an AWS FIPS endpoint?

All AWS services offer TLS 1.2 encrypted endpoints that you can use for all API calls. Some AWS services also offer FIPS 140-2 endpoints for customers who need to use FIPS-validated cryptographic libraries to connect to AWS services. You can check our list of all AWS FIPS endpoints and compare the list to your application code, configuration repositories, DNS logs, or other network logs.

EXAMPLE: TLS version detection using a packet capture

To capture the packets, multiple online sources, such as this article, provide guidance for setting up TCPDump on a Linux operating system. On a Windows operating system, the Wireshark tool provides packet analysis capabilities and can be used to analyze packets captured with TCPDump, or to capture packets directly.

In this example, we assume there is a client application with the local IP address 10.25.35.243 that is making API calls to the CloudWatch FIPS API endpoint in the AWS GovCloud (US-West) Region. To analyze the traffic, first we look up the endpoint URL in the AWS FIPS endpoint list. In our example, the endpoint URL is monitoring.us-gov-west-1.amazonaws.com. Then we use NSLookup to find the IP addresses used by this FIPS endpoint.

Figure 1: Use NSLookup to find the IP addresses used by this FIPS endpoint

Wireshark is then used to open the captured packets, and filter to just the packets with the relevant IP address. This can be done automatically by selecting one of the packets in the upper section, and then right-clicking to use the Conversation filter/IPv4 option.

After the results are filtered to only the relevant IP addresses, the next step is to find the packet whose description in the Info column is Client Hello. In the lower packet details area, expand the Transport Layer Security section to find the version, which in this example is set to TLS 1.0 (0x0301). This indicates that the client only supports TLS 1.0 and must be modified to support a TLS 1.2 connection.

Figure 2: After the conversation filter has been applied, select the Client Hello packet in the top pane. Expand the Transport Layer Security section in the lower pane to view the packet details and the TLS version.

Figure 3 shows what it looks like after the client has been updated to support TLS 1.2. This second packet capture confirms we are sending TLS 1.2 (0x0303) in the Client Hello packet.

Figure 3: The client TLS has been updated to support TLS 1.2

Is there more assistance available?

If you have any questions or issues, you can start a new thread on one of the AWS forums, or contact AWS Support or your technical account manager (TAM). The AWS support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support doesn’t include code development for client applications.

Additionally, you can use AWS IQ to find, securely collaborate with, and pay AWS-certified third-party experts for on-demand assistance to update your TLS client components. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log in to your console and select Get Started with AWS IQ to start a request.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janelle Hopper

Janelle is a Senior Technical Program Manager in AWS Security with over 15 years of experience in the IT security field. She works with AWS services, infrastructure, and administrative teams to identify and drive innovative solutions that improve AWS’ security posture.

Author

Daniel Salzedo

Daniel is a Senior Specialist Technical Account Manager – Security. He has over 25 years of professional experience in IT in industries as diverse as video game development, manufacturing, banking and used car sales. He loves working with our wonderful AWS customers to help them solve their complex security challenges at scale.

How to set up a recurring Security Hub summary email

Post Syndicated from Justin Criswell original https://aws.amazon.com/blogs/security/how-to-set-up-a-recurring-security-hub-summary-email/

AWS Security Hub provides a comprehensive view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and best practices. In this post, we’ll show you how to set up weekly email notifications using Security Hub to provide account owners with a summary of the existing security findings to prioritize, new findings, and links to the Security Hub console for more information.

When you enable Security Hub, it collects and consolidates findings from AWS security services that you’re using, such as intrusion detection findings from Amazon GuardDuty, vulnerability scans from Amazon Inspector, Amazon Simple Storage Service (Amazon S3) bucket policy findings from Amazon Macie, publicly accessible and cross-account resources from IAM Access Analyzer, and resources lacking AWS WAF coverage from AWS Firewall Manager. Security Hub also consolidates findings from integrated AWS Partner Network (APN) security solutions.

Cloud security processes can differ from traditional on-premises security in that security is often decentralized in the cloud. With traditional on-premises security operations, security alerts are typically routed to centralized security teams operating out of security operations centers (SOCs). With cloud security operations, it’s often the application builders or DevOps engineers who are best situated to triage, investigate, and remediate the security alerts. This integration of security into DevOps processes is referred to as DevSecOps, and as part of this approach, centralized security teams look for additional ways to proactively engage application account owners in improving the security posture of AWS accounts.

This solution uses Security Hub custom insights, AWS Lambda, and the Security Hub API. A custom insight is a collection of findings that are aggregated by a grouping attribute, such as severity or status. Insights help you identify common security issues that might require remediation action. Security Hub includes several managed insights, or you can create your own custom insights. Amazon SNS topic subscribers will receive an email, similar to the one shown in Figure 1, that summarizes the results of the Security Hub custom insights.

Figure 1: Example email with a summary of security findings for an account

Solution overview

This solution assumes that Security Hub is enabled in your AWS account. If it isn’t enabled, set up the service so that you can start seeing a comprehensive view of security findings across your AWS accounts.

A recurring Security Hub summary email provides recipients with a proactive communication that summarizes the security posture and any recent improvements within their AWS accounts. The email message contains the following sections:

Here’s how the solution works:

  1. Seven Security Hub custom insights are created when you first deploy the solution.
  2. An Amazon CloudWatch time-based event invokes a Lambda function for processing.
  3. The Lambda function gets the results of the custom insights from Security Hub, formats the results for email, and sends a message to Amazon SNS.
  4. Amazon SNS sends the email notification to the address you provided during deployment.
  5. The email includes the summary and links to the Security Hub UI so that the recipient can follow the remediation workflow.

Figure 2 shows the solution workflow.

Figure 2: Solution overview, deployed through AWS CloudFormation

Security Hub custom insight

The finding results presented in the email are summarized by Security Hub custom insights. A Security Hub insight is a collection of related findings. Each insight is defined by a group by statement and optional filters. The group by statement indicates how to group the matching findings, and identifies the type of item that the insight applies to. For example, if an insight is grouped by resource identifier, then the insight produces a list of resource identifiers. The optional filters narrow down the matching findings for the insight. For example, you might want to see only the findings from specific providers or findings associated with specific types of resources. Figure 3 shows the seven custom insights that are created as part of deploying this solution.

Figure 3: Custom insights created by the solution

Sample custom insight

Security Hub offers several built-in managed (default) insights. You can’t modify or delete managed insights. You can view the custom insights created as part of this solution in the Security Hub console under Insights, by selecting the Custom Insights filter. From the email, follow the link for “Summary Email – 02 – Failed AWS Foundational Security Best Practices” to see the summarized finding counts, as well as graphs with related data, as shown in Figure 4.

Figure 4: Detail view of the email titled “Summary Email – 02 – Failed AWS Foundational Security Best Practices”

Let’s evaluate the filters that create this custom insight:

  • Type is “Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices”: captures all current and future findings created by the security standard AWS Foundational Security Best Practices.
  • Status is FAILED: captures findings where the compliance status of the resource doesn’t pass the assessment.
  • Workflow Status is not SUPPRESSED: captures findings where Security Hub users haven’t updated the finding to the SUPPRESSED status.
  • Record State is ACTIVE: captures findings that represent the latest assessment of the resource. Security Hub automatically archives control-based findings if the associated resource is deleted, the resource does not exist, or the control is disabled.
  • Group by SeverityLabel: creates the insight and populates the counts.
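
The solution's CloudFormation template creates this insight for you, but it can help to see what roughly the same definition looks like as an API call. The following boto3 sketch is an approximation of the filters above, expressed with the Security Hub CreateInsight API; the insight name matches the one used by this solution.

import boto3

securityhub = boto3.client('securityhub')

securityhub.create_insight(
    Name='Summary Email – 02 – Failed AWS Foundational Security Best Practices',
    GroupByAttribute='SeverityLabel',
    Filters={
        'Type': [{
            'Value': 'Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices',
            'Comparison': 'EQUALS',
        }],
        'ComplianceStatus': [{'Value': 'FAILED', 'Comparison': 'EQUALS'}],
        'WorkflowStatus': [{'Value': 'SUPPRESSED', 'Comparison': 'NOT_EQUALS'}],
        'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}],
    },
)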

Solution artifacts

The solution provided with this blog post consists of two files:

  1. An AWS CloudFormation template named security-hub-email-summary-cf-template.json.
  2. A zip file named sec-hub-email.zip for the Lambda function that generates the Security Hub summary email.

In addition to the Security Hub custom insights as discussed in the previous section, the solution also deploys the following artifacts:

  1. An Amazon Simple Notification Service (Amazon SNS) topic named SecurityHubRecurringSummary and an email subscription to the topic.
    Figure 5: SNS topic created by the solution

    The email address that subscribes to the topic is captured through a CloudFormation template input parameter. The subscriber is notified by email to confirm the subscription, and after confirmation, the subscription to the SNS topic is created.

    Figure 6: SNS email subscription

  2. Two Lambda functions:
    1. A Lambda function named *-CustomInsightsFunction-* is used only by the CloudFormation template to create the custom insights.
    2. A Lambda function named SendSecurityHubSummaryEmail queries the custom insights from the Security Hub API and uses the insights’ data to create the summary email message. The function then sends the email message to the SNS topic.

      Figure 7: Example of Lambda functions created by the solution

  3. Two IAM roles for the Lambda functions provide the following rights, respectively:
    1. The minimum rights required to create insights and to create CloudWatch log groups and logs.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": [
                      "logs:CreateLogGroup",
                      "logs:CreateLogStream",
                      "logs:PutLogEvents"
                  ],
                  "Resource": "arn:aws:logs:*:*:*",
                  "Effect": "Allow"
              },
              {
                  "Action": [
                      "securityhub:CreateInsight"
                  ],
                  "Resource": "*",
                  "Effect": "Allow"
              }
          ]
      }
      

    2. The minimum rights required to query Security Hub insights and to send email messages to the SNS topic named SecurityHubRecurringSummary.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": "sns:Publish",
                  "Resource": "arn:aws:sns:[REGION]:[ACCOUNT-ID]:SecurityHubRecurringSummary",
                  "Effect": "Allow"
              }
          ]
      }
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "securityhub:Get*",
                      "securityhub:List*",
                      "securityhub:Describe*"
                  ],
                  "Resource": "*"
              }
          ]
      }             
      

  4. A CloudWatch scheduled event named SecurityHubSummaryEmailSchedule for invoking the Lambda function that generates the summary email. The default schedule is every Monday at 8:00 AM GMT. This schedule can be overwritten by using a CloudFormation input parameter. Learn more about creating Cron expressions.

    Figure 8: Example of CloudWatch schedule created by the solution

Deploy the solution

The following steps demonstrate the deployment of this solution in a single AWS account and Region. Repeat these steps in each of the AWS accounts that are active with Security Hub, so that the respective application owners can receive the relevant data from their accounts.

To deploy the solution

  1. Download the CloudFormation template security-hub-email-summary-cf-template.json and the .zip file sec-hub-email.zip from https://github.com/aws-samples/aws-security-hub-summary-email.
  2. Copy security-hub-email-summary-cf-template.json and sec-hub-email.zip to an S3 bucket within your target AWS account and Region. Copy the object URL for the CloudFormation template .json file.
  3. In the AWS Management Console, open the CloudFormation console. Choose Create stack with new resources.

    Figure 9: Create stack with new resources

  4. Under Specify template, in the Amazon S3 URL textbox, enter the S3 object URL for the file security-hub-email-summary-cf-template.json that you uploaded in step 2.

    Figure 10: Specify S3 URL for CloudFormation template

  5. Choose Next. On the next page, under Stack name, enter a name for the stack.

    Figure 11: Enter stack name

  6. On the same page, enter values for the input parameters. These are the input parameters that are required for this CloudFormation template:
    1. S3 Bucket Name: The S3 bucket where the .zip file for the Lambda function (sec-hub-email.zip) is stored.
    2. S3 key name (with prefixes): The S3 key name (with prefixes) for the .zip file for the Lambda function.
    3. Email address: The email address of the subscriber to the Security Hub summary email.
    4. CloudWatch Cron Expression: The Cron expression for scheduling the Security Hub summary email. The default is every Monday 8:00 AM GMT. Learn more about creating Cron expressions.
    5. Additional Footer Text: Text that will appear at the bottom of the email message. This can be useful to guide the recipient on next steps or provide internal resource links. This is an optional parameter; leave it blank for no text.
    Figure 12: Enter CloudFormation parameters

  7. Choose Next.
  8. Keep all defaults in the screens that follow, and choose Next.
  9. Select the check box I acknowledge that AWS CloudFormation might create IAM resources, and then choose Create stack.

Test the solution

You can send a test email after the deployment is complete. To do this, navigate to the Lambda console and locate the Lambda function named SendSecurityHubSummaryEmail. Perform a manual invocation with any event payload to receive an email within a few minutes. You can repeat this procedure as many times as you wish.
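
If you prefer to trigger the test from a script instead of the Lambda console, a manual invocation can also be done with boto3, as in the following sketch. The function name shown here is the one described earlier in this post, but CloudFormation may prefix it with the stack name in your account, so verify the exact name first.

import json

import boto3

lambda_client = boto3.client('lambda')

response = lambda_client.invoke(
    FunctionName='SendSecurityHubSummaryEmail',    # verify the exact name in your account
    InvocationType='RequestResponse',
    Payload=json.dumps({}).encode('utf-8'),        # any event payload works for this function
)
print(response['StatusCode'])                      # 200 indicates a successful invocation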

Conclusion

We’ve outlined an approach for rapidly building a solution for sending a weekly summary of the security posture of your AWS account as evaluated by Security Hub. This solution makes it easier for you to be diligent in reviewing any outstanding findings and to remediate findings in a timely way based on their severity. You can extend the solution in many ways, including:

  1. Add links in the footer text to the remediation workflows, such as creating a ticket for ServiceNow or any Security Information and Event Management (SIEM) that you use.
  2. Add links to internal wikis for workflows like organizational exceptions to vulnerabilities or other internal processes.
  3. Extend the solution by modifying the custom insights content, email content, and delivery frequency.

To learn more about how to set up and customize Security Hub, see these additional blog posts.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Justin Criswell

Justin is a Senior Security Specialist Solutions Architect at AWS. His background is in cloud security and customer success. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS Security Services to increase visibility and reduce risk.

Author

Kavita Mahajan

Kavita is a Senior Partner Solutions Architect at AWS. She has a background in building software systems for the financial sector, specifically insurance and banking. Currently, she is focused on helping AWS partners develop capabilities, grow their practices, and innovate on the AWS platform to deliver the best possible customer experience.

How to continuously audit and limit security groups with AWS Firewall Manager

Post Syndicated from Jesse Lepich original https://aws.amazon.com/blogs/security/how-to-continuously-audit-and-limit-security-groups-with-aws-firewall-manager/

At AWS re:Invent 2019 and in a subsequent blog post, Stephen Schmidt, Chief Information Security Officer for Amazon Web Services (AWS), laid out the top 10 security items that AWS customers should pay special attention to if they want to improve their security posture. High on the list is the need to manage your network security and virtual private cloud (VPC) security groups. In this blog post, we’ll look at how you can use AWS Firewall Manager to address item number 4 on Stephen’s list: “Limit Security Groups.”

One fundamental security measure is to restrict network access to a server or service when connecting it to a network. In an on-premises scenario, you would use a firewall or similar technology to restrict network access to only approved IPs, ports, and protocols. When you migrate existing workloads or launch new workloads in AWS, the same basic security measures should be applied. Security groups, network access control lists, and AWS Network Firewall provide network security functionality in AWS. In this post, we’ll summarize the main use cases for managing security groups with Firewall Manager, and then we’ll take a step-by-step look at how you can configure Firewall Manager to manage protection of high-risk applications, such as Remote Desktop Protocol (RDP) and Secure Shell (SSH).

What are security groups?

Security groups are a powerful tool provided by AWS for use in enforcing network security and access control to your AWS resources and Amazon Elastic Compute Cloud (Amazon EC2) instances. Security groups provide stateful Layer 3/Layer 4 filtering for EC2 interfaces.

There are some things you need to know about configuring security groups:

  • A security group with no inbound rules denies all inbound traffic.
  • You need to create rules in order to allow traffic to flow.
  • You cannot create an explicit deny rule with a security group.
  • There are separate inbound and outbound rules for each security group.
  • Security groups are assigned to an EC2 instance, similar to a host-based firewall, and not to the subnet or VPC. By default, you can assign up to five security groups to each network interface.
  • Security groups can be built by referencing IP addresses, subnets, or by referencing another security group.
  • Security groups can be reused across different instances. This means that you don’t have to create long complex rulesets when dealing with multiple subnets.
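
The last two points above, referencing another security group and reusing groups across instances, are easiest to see in code. The following boto3 sketch is a minimal example that creates an application-tier security group and allows inbound HTTPS only from a web-tier security group; the VPC ID and web-tier group ID are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Placeholder IDs for illustration only
VPC_ID = 'vpc-0123456789abcdef0'
WEB_TIER_SG_ID = 'sg-0123456789abcdef0'

app_sg = ec2.create_security_group(
    GroupName='app-tier-sg',
    Description='App tier: allow HTTPS only from the web tier',
    VpcId=VPC_ID,
)

ec2.authorize_security_group_ingress(
    GroupId=app_sg['GroupId'],
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443,
        'ToPort': 443,
        # Reference the web-tier security group instead of an IP range
        'UserIdGroupPairs': [{'GroupId': WEB_TIER_SG_ID}],
    }],
)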

Best practices for security groups

AWS recommends that you follow these best practices when you work with security groups.

Remove unused or unattached security groups
Large numbers of unused or unattached security groups create confusion and invite misconfiguration. Remove any unused security groups. (PCI.EC2.3)

Limit modification to authorized roles only
AWS Identity and Access Management (IAM) roles with access can modify security groups. Limit the number of roles that have authorization to change security groups. (PCI DSS 7.2.1)

Monitor the creation or deletion of security groups
This best practice works hand in hand with the first two; you should always monitor for the attempted creation, modification, and deletion of security groups. (CIS AWS Foundations 3.10)

Don’t ignore the outbound or egress rules
Limit outbound access to only the subnets that are required. For example, in a three-tier web application, the app layer likely shouldn’t have unrestricted access to the internet, so configure the security group to allow access to only those hosts or subnets needed for correct functioning of the application. (PCI DSS 1.3.4)

Limit the ingress or inbound port ranges that are accessible
Limit the ports that are open in a security group to only those that are necessary for the application to function correctly. With large port ranges open, you may be exposed to any vulnerabilities or unintended access to services. This is especially important with high-risk applications. (CIS AWS Foundations 4.1, 4.2) (PCI DSS 1.2.1, 1.3.2)

Maintaining these best practices manually can be a challenge in large-scale AWS environments, or where developers and application owners might be deploying new applications often. Organizations can address this challenge by providing centrally configured guardrails. At AWS, we view security as an enabler to development velocity, making it possible for developers to move applications into production very quickly, but with the correct safeguards in place automatically.
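
Before you put centralized guardrails in place, you may want a quick snapshot of how far your current environment drifts from these best practices. The following boto3 sketch is one way to spot-check a single account and Region for security group rules that allow 0.0.0.0/0 on a few high-risk ports; Firewall Manager, described next, does this kind of auditing continuously and across accounts.

import boto3

HIGH_RISK_PORTS = {22, 3389, 445}   # SSH, RDP, SMB

ec2 = boto3.client('ec2')
paginator = ec2.get_paginator('describe_security_groups')

for page in paginator.paginate():
    for sg in page['SecurityGroups']:
        for perm in sg.get('IpPermissions', []):
            open_to_world = any(
                r.get('CidrIp') == '0.0.0.0/0' for r in perm.get('IpRanges', [])
            )
            if not open_to_world:
                continue
            from_port = perm.get('FromPort')
            to_port = perm.get('ToPort')
            if from_port is None:
                # Protocol '-1' (all traffic) rules have no port range; flag them too
                print(f"{sg['GroupId']} allows all traffic from 0.0.0.0/0")
            elif any(from_port <= port <= to_port for port in HIGH_RISK_PORTS):
                print(f"{sg['GroupId']} allows 0.0.0.0/0 on ports {from_port}-{to_port}")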

Manage security groups with Firewall Manager

Firewall Manager is a security management service that you can use to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. As new applications are created, Firewall Manager makes it easier to bring them into compliance by enforcing a common set of baseline security rules and ensuring that overly permissive rules generate compliance findings or are automatically removed. With Firewall Manager, you have a single service to build firewall rules, create security policies, and enforce rules and policies in a consistent, hierarchical way across your entire infrastructure. Learn more about the Firewall Manager prerequisites.

The security group capabilities of Firewall Manager fall into three broad categories:

  • Create and apply baseline security groups to AWS accounts and resources.
  • Audit and clean up unused or redundant security groups.
  • Audit and control security group rules to identify rules that are too permissive and high risk.

In the following sections, we’re going to show how you can use Firewall Manager to audit and limit security groups by identifying rules that are too permissive and expose high-risk applications to external threats.

Use Firewall Manager to help protect high-risk applications

In this example, we’ll show how customers can use Firewall Manager to improve their security posture by automatically limiting access to high-risk applications, such as RDP, SSH, and SMB, from anywhere on the internet. All too often, access to these applications is left open to the internet, where unauthorized parties can find them using automated scanning tools. It has become increasingly important for customers to work towards reducing their risk surface due to the decrease in technical difficulty these types of attacks require. In many cases, the overly permissive access begins as a temporary setting for testing, and then is inadvertently left open over the long term. With a simple-to-configure policy, Firewall Manager can find and even automatically fix this issue across all of your AWS accounts.

Let’s jump right into configuring Firewall Manager for this use case, where you’ll inventory where public IP addresses are allowed to access high-risk applications. Once you’ve evaluated all the occurrences, then you’ll automatically remediate them.

To use Firewall Manager to limit access to high-risk applications

  1. Sign in to the AWS Management Console using the Firewall Manager administrator account, then navigate to Firewall Manager in the Console and choose Security policies.
  2. Specify the correct AWS Region your policy should be deployed to, and then choose Create policy.

    Figure 1: Create Firewall Manager policy

  3. Under Policy type, choose Security group. Under Security group policy type, choose Auditing and enforcement of security group rules. Then confirm the Region is correct and choose Next.

    Figure 2: Firewall Manager policy type and Region

  4. Enter a policy name. Under Policy options, choose Configure managed audit policy rules. Under Policy rules, choose Inbound Rules, and then turn on the Audit high risk applications action.

    Figure 3: Firewall Manager managed audit policy

  5. Next, choose Applications that can only access local CIDR ranges, and then choose Add application list. As you can see from Figure 4 below, this setting looks for resources that allow non-RFC1918 private address ranges (publicly routable internet IP addresses) to connect to them. By listing these applications, you can focus on your highest risk scenarios (accessibility to these high-risk applications from the internet) first. As an information security practitioner, you always want to maximize your limited time and focus on the highest risk items first. Firewall Manager makes this easier to do at scale across all AWS resources.

    Figure 4: Firewall Manager audit high risk applications setting

  6. Under Add application list, choose Add an existing list. Then select FMS-Default-Public-Access-Apps-Denied, and choose Add application list. The default managed list includes SSH, RDP, NFS, SMB, and NetBIOS, but you can also create your own custom application lists in Firewall Manager.

    Figure 5: Firewall Manager list of applications denied public access

  7. Under Policy action, choose Identify resources that don’t comply with the policy rules, but don’t auto remediate, and then choose Next. This is where you can choose whether to have Firewall Manager provide alerts only, or to alert and automatically remove the specific risky security group rules. We recommend that customers start this process by only identifying noncompliant resources so that they can understand the full impact of eventually setting the auto remediation policy action.

    Figure 6: Firewall Manager policy action

  8. Under AWS accounts this policy applies to, choose Include all accounts under my AWS organization. Under Resource type, select all of the resource types. Under Resources, choose Include all resources that match the selected resource type to define the scope of this policy (what the policy will apply to), and then choose Next. This scope will give you a broad view of all resources that have high-risk applications exposed to the internet, but if you wanted, you could be much more targeted with how you apply your security policies using the other available scope options here. For now, let’s keep the scope broad so you can get a comprehensive view of your risk surface.

    Figure 7: Firewall Manager policy scope

  9. If you choose to, you can apply a tag to this specific Firewall Manager security policy for tracking and documentation purposes. Then choose Next.

    Figure 8: Firewall Manager policy tags

  10. The final page gives an overview of all the configuration settings so you can review and verify the correct configuration. Once you’re done reviewing the policy, choose Create policy to deploy this policy.

    Figure 9: Review and create policy in Firewall Manager

Now that you’ve created your Firewall Manager policy, you need to wait five minutes for Firewall Manager to inventory all of your AWS accounts and resources as it looks for noncompliant high-risk applications exposed to the internet.

Review policy findings to understand the risk surface

There are two main ways to review details about resources that are noncompliant with the Firewall Manager security policy you created: you can use Firewall Manager itself, or you can also use AWS Security Hub, since Firewall Manager sends all findings to Security Hub by default. Security Hub is a central location you can use to view findings from many security tools, including both native AWS security tools and third-party security tools. Security Hub can help you further focus your time in the highest value areas by, for example, showing you which resources have the largest number of security findings associated with them, and therefore represent a higher risk that should be addressed first. We won’t cover Security Hub here, but it’s helpful to know that Firewall Manager integrates with Security Hub.

Now that you’ve configured your Firewall Manager security policy and it has had time to inventory your environment to help identify noncompliant resources, you can review what Firewall Manager has found by viewing the Firewall Manager security policy.

To review policy findings, go to the Security policies page in the Firewall Manager console, where you can see an overview of the policy you just created. You can see that the policy isn’t set to auto remediate yet, and that there are seven accounts that have noncompliant resources in them.

Figure 10: Firewall Manager policy result overview

To view the specific details of each noncompliant resource, choose the name of your security policy. A list of accounts with noncompliant resources will be displayed.

Figure 11: Firewall Manager noncompliant accounts

Choose an account number to get more details about that account. Now you can see a list of noncompliant resources.

Figure 12: Firewall Manager noncompliant resources

To get further details regarding why a resource is noncompliant, choose the Resource ID. This will show you the specific noncompliant security group rule.

Here you can see that this security group resource violates the Firewall Manager security policy that you created because it allows a source of 0.0.0.0/0 (any) to access TCP/3389 (RDP).

Figure 13: Firewall Manager noncompliant security group rule

The recommended action is to remove this noncompliant rule from the security group. You can choose to do that manually. Alternatively, once you’ve reviewed all the findings and have a good understanding of all of the noncompliant resources, you can edit your existing “Protect high risk applications from the Internet” Firewall Manager security policy and set the policy action to Auto remediate non-compliant resources. This causes Firewall Manager to attempt to force compliance across all these resources automatically using its service-linked role. This level of automation can help security teams make sure that their organization’s resources aren’t being accidentally exposed to high-risk scenarios.
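
If you decide to remove a specific noncompliant rule yourself before enabling auto remediation, that cleanup can also be scripted. The following boto3 sketch revokes an inbound rule like the one in Figure 13, which allows RDP from 0.0.0.0/0; the security group ID is a placeholder.

import boto3

ec2 = boto3.client('ec2')

ec2.revoke_security_group_ingress(
    GroupId='sg-0123456789abcdef0',    # placeholder: the noncompliant security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3389,
        'ToPort': 3389,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)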

Use Firewall Manager to address other security group use cases

Firewall Manager has many other security group–related capabilities that I didn’t cover here. You can learn more about those here. This post was focused on helping customers start today to address high-risk scenarios that they may inadvertently have in their AWS environment. Firewall Manager can help you get continuous visibility into these scenarios, as well as automatically remediate them, even if these scenarios occur in the future. Here’s a quick overview of other use cases Firewall Manager can help you with. Keep in mind that these rules can be set to alert you only, or alert and auto remediate:

  • Deploy pre-approved security groups to AWS accounts and automatically associate them with resources
  • Deny the use of “ALL” protocol in security group rules, instead requiring that a specific protocol be selected
  • Deny the use of port ranges greater than n in security group rules
  • Deny the use of Classless Inter-Domain Routing (CIDR) ranges less than n in security group rules
  • Specify a list of applications that can be accessible from anywhere across the internet (and deny access to all other applications)
  • Identify security groups that are unused for n number of days
  • Identify redundant security groups

Firewall Manager has received many significant feature enhancements over the last year, but we’re not done yet. We have a robust roadmap of features we’re actively working on that will continue to make it easier for AWS customers to achieve security compliance of their resources.

Conclusion

In this post, we explored how Firewall Manager can help you more easily manage the VPC security groups in your AWS environments from a single central tool. Specifically, we showed how Firewall Manager can assist in implementing Stephen Schmidt’s best practice #4, “Limit Security Groups.” We focused on exactly how you can configure Firewall Manager to evaluate and get visibility into your external-facing risk surface of high-risk applications such as SSH, RDP, and SMB, and how you can use Firewall Manager to automatically remediate out-of-compliance security groups. We also summarized the other security group–related capabilities of Firewall Manager so that you can see there are many more use cases you can address with Firewall Manager. We encourage you to start using Firewall Manager today to protect your applications.

To learn more, see these AWS Security Blog posts on Firewall Manager.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jesse Lepich

Jesse is a Senior Security Solutions Architect at AWS based in Lake St. Louis, Missouri, focused on helping customers implement native AWS security services. Outside of cloud security, his interests include relaxing with family, barefoot waterskiing, snowboarding/snow skiing, surfing, boating/sailing, and mountain climbing.

Author

Michael Ingoldby

Michael is a Senior Security Solutions Architect at AWS based in Frisco, Texas. He provides guidance and helps customers to implement AWS native security services. Michael has been working in the security domain since 2006. When he is not working, he enjoys spending time outdoors.

Use new account assignment APIs for AWS SSO to automate multi-account access

Post Syndicated from Akhil Aendapally original https://aws.amazon.com/blogs/security/use-new-account-assignment-apis-for-aws-sso-to-automate-multi-account-access/

In this blog post, we’ll show how you can programmatically assign and audit access to multiple AWS accounts for your AWS Single Sign-On (SSO) users and groups, using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

With AWS SSO, you can centrally manage access and user permissions to all of your accounts in AWS Organizations. You can assign user permissions based on common job functions, customize them to meet your specific security requirements, and assign the permissions to users or groups in the specific accounts where they need access. You can create, read, update, and delete permission sets in one place to have consistent role policies across your entire organization. You can then provide access by assigning permission sets to multiple users and groups in multiple accounts all in a single operation.

AWS SSO recently added new account assignment APIs and AWS CloudFormation support to automate access assignment across AWS Organizations accounts. This release addressed feedback from our customers with multi-account environments who wanted to adopt AWS SSO, but faced challenges related to managing AWS account permissions. To automate the previously manual process and save your administration time, you can now use the new AWS SSO account assignment APIs, or AWS CloudFormation templates, to programmatically manage AWS account permission sets in multi-account environments.

With AWS SSO account assignment APIs, you can now build your automation that will assign access for your users and groups to AWS accounts. You can also gain insights into who has access to which permission sets in which accounts across your entire AWS Organizations structure. With the account assignment APIs, your automation system can programmatically retrieve permission sets for audit and governance purposes, as shown in Figure 1.
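
As a small illustration of the audit use case, the following boto3 sketch lists who is assigned a given permission set in a given account. The instance ARN, account ID, and permission set ARN are placeholders that you would retrieve from your own environment (this post shows how to retrieve the instance ARN with list-instances).

import boto3

sso_admin = boto3.client('sso-admin')

# Placeholder values; retrieve the real ARNs and IDs from your own environment
INSTANCE_ARN = 'arn:aws:sso:::instance/ssoins-EXAMPLE'
PERMISSION_SET_ARN = 'arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE'
ACCOUNT_ID = '111122223333'

response = sso_admin.list_account_assignments(
    InstanceArn=INSTANCE_ARN,
    AccountId=ACCOUNT_ID,
    PermissionSetArn=PERMISSION_SET_ARN,
)
for assignment in response['AccountAssignments']:
    # PrincipalType is USER or GROUP; PrincipalId is the identity store ID
    print(assignment['PrincipalType'], assignment['PrincipalId'])
# Large result sets are paginated; follow NextToken if it appears in the response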

Figure 1: Automating multi-account access with the AWS SSO API and AWS CloudFormation

Overview

In this walkthrough, we’ll illustrate how to create permission sets, assign permission sets to users and groups in AWS SSO, and grant access for users and groups to multiple AWS accounts by using the AWS Command Line Interface (AWS CLI) and AWS CloudFormation.

To grant user permissions to AWS resources with AWS SSO, you use permission sets. A permission set is a collection of AWS Identity and Access Management (IAM) policies. Permission sets can contain up to 10 AWS managed policies and a single custom policy stored in AWS SSO.

A policy is an object that defines a user’s permissions. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. AWS evaluates these policies when an IAM principal (a user or role) makes a request.

When you provision a permission set in the AWS account, AWS SSO creates a corresponding IAM role on that account, with a trust policy that allows users to assume the role through AWS SSO. With AWS SSO, you can assign more than one permission set to a user in the specific AWS account. Users who have multiple permission sets must choose one when they sign in through the user portal or the AWS CLI. Users will see these as IAM roles.

To learn more about IAM policies, see Policies and permissions in IAM. To learn more about permission sets, see Permission Sets.

Assume you have a company, Example.com, which has three AWS accounts: an organization management account (ExampleOrgMaster), a development account (ExampleOrgDev), and a test account (ExampleOrgTest). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has the IT security lead, Frank Infosec, who needs PowerUserAccess to the test account (ExampleOrgTest) and SecurityAudit access to the development account (ExampleOrgDev). Alice Developer, the developer, needs full access to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) through the development account (ExampleOrgDev). We’ll show you how to assign and audit the access for Alice and Frank centrally with AWS SSO, using the AWS CLI.

The flow includes the following steps:

  1. Create three permission sets:
    • PowerUserAccess, with the PowerUserAccess policy attached.
    • AuditAccess, with the SecurityAudit policy attached.
    • EC2-S3-FullAccess, with the AmazonEC2FullAccess and AmazonS3FullAccess policies attached.
  2. Assign permission sets to the AWS account and AWS SSO users:
    • Assign the PowerUserAccess and AuditAccess permission sets to Frank Infosec, to provide the required access to the ExampleOrgDev and ExampleOrgTest accounts.
    • Assign the EC2-S3-FullAccess permission set to Alice Developer, to provide the required permissions to the ExampleOrgDev account.
  3. Retrieve the assigned permissions by using Account Entitlement APIs for audit and governance purposes.

    Note: AWS SSO permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. In this blog post, we attach AWS managed policies to the permission sets for simplicity. To help secure your AWS resources, follow the standard security advice of granting least privilege by using an AWS SSO custom policy when you create a permission set.

Figure 2: AWS Organizations accounts access for Alice and Frank

To help simplify administration of access permissions, we recommend that you assign access directly to groups rather than to individual users. With groups, you can grant or deny permissions to groups of users, rather than having to apply those permissions to each individual. For simplicity, in this blog you’ll assign permissions directly to the users.
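
For reference, the group-based variant of step 2 looks like the following boto3 sketch, which assigns a permission set to a group in a target account; the ARNs, account ID, and group ID are placeholders. The rest of this walkthrough assigns access directly to Frank and Alice to keep the example simple.

import boto3

sso_admin = boto3.client('sso-admin')

response = sso_admin.create_account_assignment(
    InstanceArn='arn:aws:sso:::instance/ssoins-EXAMPLE',      # placeholder instance ARN
    TargetId='111122223333',                                  # target AWS account ID
    TargetType='AWS_ACCOUNT',
    PermissionSetArn='arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE',
    PrincipalType='GROUP',
    PrincipalId='<identity store group ID>',                  # placeholder group ID
)
# The call is asynchronous; the returned status object reports the provisioning progress
print(response['AccountAssignmentCreationStatus']['Status'])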

Prerequisites

Before you start this walkthrough, complete these steps:

Use the AWS SSO API from the AWS CLI

In order to call the AWS SSO account assignment API by using the AWS CLI, you need to install and configure AWS CLI v2. For more information about AWS CLI installation and configuration, see Installing the AWS CLI and Configuring the AWS CLI.

Step 1: Create permission sets

In this step, you learn how to create the EC2-S3-FullAccess, AuditAccess, and PowerUserAccess permission sets in AWS SSO from the AWS CLI.

Before you create the permission sets, run the following command to get the Amazon Resource Name (ARN) of the AWS SSO instance and the Identity Store ID, which you will need later in the process when you create and assign permission sets to AWS accounts and users or groups.

aws sso-admin list-instances

Figure 3 shows the results of running the command.

Figure 3: AWS SSO list instances

Next, create the permission set for the security team (Frank) and dev team (Alice), as follows.

Permission set for Alice Developer (EC2-S3-FullAccess)

Run the following command to create the EC2-S3-FullAccess permission set for Alice, as shown in Figure 4.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'EC2-S3-FullAccess' --description 'EC2 and S3 access for developers'
Figure 4: Creating the permission set EC2-S3-FullAccess

Permission set for Frank Infosec (AuditAccess)

Run the following command to create the AuditAccess permission set for Frank, as shown in Figure 5.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'AuditAccess' --description 'Audit Access for security team on ExampleOrgDev account'
Figure 5: Creating the permission set AuditAccess

Permission set for Frank Infosec (PowerUserAccess)

Run the following command to create the PowerUserAccess permission set for Frank, as shown in Figure 6.

aws sso-admin create-permission-set --instance-arn '<Instance ARN>' --name 'PowerUserAccess' --description 'Power User Access for security team on ExampleOrgTest account'
Figure 6: Creating the permission set PowerUserAccess

Copy the permission set ARN from these responses, which you will need when you attach the managed policies.
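
If you're automating this step with boto3 rather than the AWS CLI, the equivalent calls look like the following sketch. It assumes you already retrieved the instance ARN as described earlier; the loop and variable names are illustrative only.

import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = "<Instance ARN>"  # from the list-instances step

permission_set_arns = {}
for name, description in [
    ("EC2-S3-FullAccess", "EC2 and S3 access for developers"),
    ("AuditAccess", "Audit Access for security team on ExampleOrgDev account"),
    ("PowerUserAccess", "Power User Access for security team on ExampleOrgTest account"),
]:
    response = sso_admin.create_permission_set(
        InstanceArn=instance_arn,
        Name=name,
        Description=description,
    )
    # Keep the ARN of each permission set; you need it to attach managed policies.
    permission_set_arns[name] = response["PermissionSet"]["PermissionSetArn"]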

Step 2: Assign policies to permission sets

In this step, you learn how to assign managed policies to the permission sets that you created in step 1.

Attach policies to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonEC2FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 7.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
Figure 7: Attaching the AWS managed policy AmazonEC2FullAccess to the EC2-S3-FullAccess permission set

Run the following command to attach the AmazonS3FullAccess AWS managed policy to the EC2-S3-FullAccess permission set, as shown in Figure 8.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Figure 8: Attaching the AWS managed policy AmazonS3FullAccess to the EC2-S3-FullAccess permission set

Attach a policy to the AuditAccess permission set

Run the following command to attach the SecurityAudit managed policy to the AuditAccess permission set that you created earlier, as shown in Figure 9.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/SecurityAudit'
Figure 9: Attaching the AWS managed policy SecurityAudit to the AuditAccess permission set

Attach a policy to the PowerUserAccess permission set

The following command is similar to the previous command; it attaches the PowerUserAccess managed policy to the PowerUserAccess permission set, as shown in Figure 10.

aws sso-admin attach-managed-policy-to-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --managed-policy-arn 'arn:aws:iam::aws:policy/PowerUserAccess'
Figure 10: Attaching AWS managed policy PowerUserAccess to the PowerUserAccess permission set
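
The policy attachments can also be scripted with boto3, as in this sketch. The permission set ARNs are placeholders for the values you copied in step 1, and the mapping shown is illustrative.

import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = "<Instance ARN>"

attachments = {
    "<EC2-S3-FullAccess Permission Set ARN>": [
        "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
        "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    ],
    "<AuditAccess Permission Set ARN>": ["arn:aws:iam::aws:policy/SecurityAudit"],
    "<PowerUserAccess Permission Set ARN>": ["arn:aws:iam::aws:policy/PowerUserAccess"],
}

for permission_set_arn, policy_arns in attachments.items():
    for policy_arn in policy_arns:
        # Attach each AWS managed policy to its permission set.
        sso_admin.attach_managed_policy_to_permission_set(
            InstanceArn=instance_arn,
            PermissionSetArn=permission_set_arn,
            ManagedPolicyArn=policy_arn,
        )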

In the next step, you assign users (Frank Infosec and Alice Developer) to their respective permission sets and assign permission sets to accounts.

Step 3: Assign permission sets to users and groups and grant access to AWS accounts

In this step, you assign the AWS SSO permission sets that you created to users and AWS accounts, to grant each user the required access on their respective AWS accounts.

To assign access to an AWS account for a user or group, using a permission set you already created, you need the following:

  • The principal ID (the ID for the user or group)
  • The AWS account ID to which you need to assign this permission set

To obtain a user’s or group’s principal ID (UserID or GroupID), you need to use the AWS SSO Identity Store API. The AWS SSO Identity Store service enables you to retrieve all of your identities (users and groups) from AWS SSO. See AWS SSO Identity Store API for more details.

Use the following two commands to get the principal IDs for the two users, Alice Developer and Frank Infosec, by filtering on their user names.

Alice’s user ID

Run the following command to get Alice’s user ID, as shown in Figure 11.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='<Alice user name>'
Figure 11: Retrieving Alice’s user ID

Frank’s user ID

Run the following command to get Frank’s user ID, as shown in Figure 12.

aws identitystore list-users --identity-store-id '<Identity Store ID>' --filter AttributePath='UserName',AttributeValue='<Frank user name>'
Figure 12: Retrieving Frank’s user ID

Note: To get the principal ID for a group, use the following command.

aws identitystore list-groups --identity-store-id '<Identity Store ID>' --filter AttributePath='DisplayName',AttributeValue='<Group Name>'
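
In boto3, these lookups map to the identitystore client, as in the following sketch. The helper function and the user name placeholders are illustrative.

import boto3

identitystore = boto3.client("identitystore")
identity_store_id = "<Identity Store ID>"

def get_user_id(user_name):
    # Return the principal ID (UserId) for the given AWS SSO user name.
    response = identitystore.list_users(
        IdentityStoreId=identity_store_id,
        Filters=[{"AttributePath": "UserName", "AttributeValue": user_name}],
    )
    return response["Users"][0]["UserId"]

alice_user_id = get_user_id("<Alice user name>")
frank_user_id = get_user_id("<Frank user name>")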

Assign the EC2-S3-FullAccess permission set to Alice in the ExampleOrgDev account

Run the following command to assign Alice access to the ExampleOrgDev account using the EC2-S3-FullAccess permission set. This will give Alice full access to Amazon EC2 and S3 services in the ExampleOrgDev account.

Note: When you call the CreateAccountAssignment API, AWS SSO automatically provisions the specified permission set on the account in the form of an IAM policy attached to the AWS SSO–created IAM role. This role is immutable: it’s fully managed by AWS SSO, and it can’t be deleted or changed by users, even users who have full administrative rights on the account. If the permission set is subsequently updated, the corresponding IAM policies attached to roles in your accounts won’t be updated automatically. In that case, you need to call ProvisionPermissionSet to propagate the updates.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 13: Assigning the EC2-S3-FullAccess permission set to Alice on the ExampleOrgDev account

Assign the AuditAccess permission set to Frank Infosec in the ExampleOrgDev account

Run the following command to assign Frank access to the ExampleOrgDev account using the AuditAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 14: Assigning the AuditAccess permission set to Frank on the ExampleOrgDev account

Assign the PowerUserAccess permission set to Frank Infosec in the ExampleOrgTest account

Run the following command to assign Frank access to the ExampleOrgTest account using the PowerUserAccess permission set.

aws sso-admin create-account-assignment --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>' --principal-id '<user/group ID>' --principal-type '<USER/GROUP>' --target-id '<AWS Account ID>' --target-type AWS_ACCOUNT
Figure 15: Assigning the PowerUserAccess permission set to Frank on the ExampleOrgTest account
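
If you script these assignments with boto3, each one is a single create_account_assignment call, as in this sketch. The ARNs, principal IDs, and account IDs are placeholders for the values you gathered earlier.

import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = "<Instance ARN>"

assignments = [
    # (permission set ARN, principal ID, target AWS account ID)
    ("<EC2-S3-FullAccess Permission Set ARN>", "<Alice user ID>", "<ExampleOrgDev Account ID>"),
    ("<AuditAccess Permission Set ARN>", "<Frank user ID>", "<ExampleOrgDev Account ID>"),
    ("<PowerUserAccess Permission Set ARN>", "<Frank user ID>", "<ExampleOrgTest Account ID>"),
]

for permission_set_arn, principal_id, account_id in assignments:
    sso_admin.create_account_assignment(
        InstanceArn=instance_arn,
        PermissionSetArn=permission_set_arn,
        PrincipalId=principal_id,
        PrincipalType="USER",
        TargetId=account_id,
        TargetType="AWS_ACCOUNT",
    )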

To view the permission sets provisioned on the AWS account, run the following command, as shown in Figure 16.

aws sso-admin list-permission-sets-provisioned-to-account --instance-arn '<Instance ARN>' --account-id '<AWS Account ID>'
Figure 16: View the permission sets (AuditAccess and EC2-S3-FullAccess) assigned to the ExampleOrgDev account

To review the created resources in the AWS Management Console, navigate to the AWS SSO console. In the list of permission sets on the AWS accounts tab, choose the EC2-S3-FullAccess permission set. Under AWS managed policies, the policies attached to the permission set are listed, as shown in Figure 17.

Figure 17: Review the permission set in the AWS SSO console

To see the AWS accounts where the EC2-S3-FullAccess permission set is currently provisioned, navigate to the AWS accounts tab, as shown in Figure 18.

Figure 18: Review permission set account assignment in the AWS SSO console

Step 4: Audit access

In this step, you learn how to audit the access assigned to your users and groups by using the AWS SSO account assignment API. In this example, you’ll start from a permission set, review the permissions (AWS managed policies or a custom policy) attached to the permission set, get the users and groups associated with the permission set, and see which AWS accounts the permission set is provisioned to.

List the IAM managed policies for the permission set

Run the following command to list the IAM managed policies that are attached to a specified permission set, as shown in Figure 19.

aws sso-admin list-managed-policies-in-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 19: View the managed policies attached to the permission set

List the assignee of the AWS account with the permission set

Run the following command to list the assignee (the user or group with the respective principal ID) of the specified AWS account with the specified permission set, as shown in Figure 20.

aws sso-admin list-account-assignments --instance-arn '<Instance ARN>' --account-id '<Account ID>' --permission-set-arn '<Permission Set ARN>'
Figure 20: View the permission set and the user or group attached to the AWS account

List the accounts to which the permission set is provisioned

Run the following command to list the accounts that are associated with a specific permission set, as shown in Figure 21.

aws sso-admin list-accounts-for-provisioned-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<Permission Set ARN>'
Figure 21: View AWS accounts to which the permission set is provisioned
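
The same audit queries are available through boto3. The following sketch chains the three calls for a single permission set; the instance and permission set ARNs are placeholders.

import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = "<Instance ARN>"
permission_set_arn = "<Permission Set ARN>"

# Managed policies attached to the permission set.
policies = sso_admin.list_managed_policies_in_permission_set(
    InstanceArn=instance_arn, PermissionSetArn=permission_set_arn
)["AttachedManagedPolicies"]

# AWS accounts the permission set is provisioned to.
account_ids = sso_admin.list_accounts_for_provisioned_permission_set(
    InstanceArn=instance_arn, PermissionSetArn=permission_set_arn
)["AccountIds"]

# Users and groups assigned to each account through this permission set.
for account_id in account_ids:
    assignments = sso_admin.list_account_assignments(
        InstanceArn=instance_arn,
        AccountId=account_id,
        PermissionSetArn=permission_set_arn,
    )["AccountAssignments"]
    print(account_id, policies, assignments)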

In this section of the post, we’ve illustrated how to create a permission set, assign a managed policy to the permission set, and grant access for AWS SSO users or groups to AWS accounts by using this permission set. In the next section, we’ll show you how to do the same using AWS CloudFormation.

Use the AWS SSO API through AWS CloudFormation

In this section, you learn how to use CloudFormation templates to automate the creation of permission sets, attach managed policies, and use permission sets to assign access for a particular user or group to AWS accounts.

Sign in to your AWS Management Console and create a CloudFormation stack by using the following CloudFormation template. For more information on how to create a CloudFormation stack, see Creating a stack on the AWS CloudFormation console.

//start of Template//
{
    "AWSTemplateFormatVersion": "2010-09-09",
  
    "Description": "AWS CloudFormation template to automate multi-account access with AWS Single Sign-On (Entitlement APIs): Create permission sets, assign access for AWS SSO users and groups to AWS accounts using permission sets. Before you use this template, we assume you have enabled AWS SSO for your AWS Organization, added the AWS accounts to which you want to grant AWS SSO access to your organization, signed in to the AWS Management Console with your AWS Organizations management account credentials, and have the required permissions to use the AWS SSO console.",
  
    "Parameters": {
      "InstanceARN" : {
        "Type" : "String",
        "AllowedPattern": "arn:aws:sso:::instance/(sso)?ins-[a-zA-Z0-9-.]{16}",
        "Description" : "Enter AWS SSO InstanceARN. Ex: arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx",
        "ConstraintDescription": "must be the name of an existing AWS SSO InstanceARN associated with the management account."
      },
      "ExampleOrgDevAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit Developer AWS Account ID. Ex: 123456789012"
        },
      "ExampleOrgTestAccountId" : {
        "Type" : "String",
        "AllowedPattern": "\\d{12}",
        "Description" : "Enter 12-digit AWS Account ID. Ex: 123456789012"
        },
      "AliceDeveloperUserId" : {
        "Type" : "String",
        "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
        "Description" : "Enter Developer UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
        },
        "FrankInfosecUserId" : {
            "Type" : "String",
            "AllowedPattern": "^([0-9a-f]{10}-|)[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}$",
            "Description" : "Enter Test UserId. Ex: 926703446b-f10fac16-ab5b-45c3-86c1-xxxxxxxxxxxx"
            }
    },
    "Resources": {
        "EC2S3Access": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "EC2 and S3 access for developers",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : ["arn:aws:iam::aws:policy/amazonec2fullaccess","arn:aws:iam::aws:policy/amazons3fullaccess"],
                "Name" : "EC2-S3-FullAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "EC2S3Access"
                 } ]
              }
        },  
        "SecurityAuditAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Audit Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/SecurityAudit" ],
                "Name" : "AuditAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "SecurityAuditAccess"
                 } ]
              }
        },    
        "PowerUserAccess": {
            "Type" : "AWS::SSO::PermissionSet",
            "Properties" : {
                "Description" : "Power User Access for Infosec team",
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "ManagedPolicies" : [ "arn:aws:iam::aws:policy/PowerUserAccess"],
                "Name" : "PowerUserAccess",
                "Tags" : [ {
                    "Key": "Name",
                    "Value": "PowerUserAccess"
                 } ]
              }      
        },
        "EC2S3userAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "EC2S3Access",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "AliceDeveloperUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "SecurityAudituserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "SecurityAuditAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgDevAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          },
          "PowerUserAssignment": {
            "Type" : "AWS::SSO::Assignment",
            "Properties" : {
                "InstanceArn" : {
                    "Ref": "InstanceARN"
                },
                "PermissionSetArn" : {
                    "Fn::GetAtt": [
                        "PowerUserAccess",
                        "PermissionSetArn"
                     ]
                },
                "PrincipalId" : {
                    "Ref": "FrankInfosecUserId"
                },
                "PrincipalType" : "USER",
                "TargetId" : {
                    "Ref": "ExampleOrgTestAccountId"
                },
                "TargetType" : "AWS_ACCOUNT"
              }
          }
    }
}
//End of Template//

When you create the stack, provide the following information for setting the example permission sets for Frank Infosec and Alice Developer, as shown in Figure 22:

  • The Alice Developer and Frank Infosec user IDs
  • The ExampleOrgDev and ExampleOrgTest account IDs
  • The AWS SSO instance ARN

Then launch the CloudFormation stack.

Figure 22: User inputs to launch the CloudFormation template

AWS CloudFormation creates the resources that are shown in Figure 23.

Figure 23: Resources created from the CloudFormation stack
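
If you’d rather launch the same template programmatically than through the console, a minimal boto3 sketch follows. The stack name, the local template file name, and the parameter values are placeholders; the parameter keys come from the template above.

import boto3

cloudformation = boto3.client("cloudformation")

# Read the template shown above from a local file (file name is illustrative).
with open("sso-entitlements-template.json") as template_file:
    template_body = template_file.read()

cloudformation.create_stack(
    StackName="sso-account-assignments",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "InstanceARN", "ParameterValue": "<Instance ARN>"},
        {"ParameterKey": "ExampleOrgDevAccountId", "ParameterValue": "<ExampleOrgDev Account ID>"},
        {"ParameterKey": "ExampleOrgTestAccountId", "ParameterValue": "<ExampleOrgTest Account ID>"},
        {"ParameterKey": "AliceDeveloperUserId", "ParameterValue": "<Alice user ID>"},
        {"ParameterKey": "FrankInfosecUserId", "ParameterValue": "<Frank user ID>"},
    ],
)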

Cleanup

To delete the resources you created by using the AWS CLI, use these commands.

Run the following command to delete the account assignment.

aws sso-admin delete-account-assignment --instance-arn '<Instance ARN>' --target-id '<AWS Account ID>' --target-type 'AWS_ACCOUNT' --permission-set-arn '<PermissionSet ARN>' --principal-type '<USER/GROUP>' --principal-id '<user/group ID>'

After the account assignment is deleted, run the following command to delete the permission set.

aws sso-admin delete-permission-set --instance-arn '<Instance ARN>' --permission-set-arn '<PermissionSet ARN>'
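
The equivalent cleanup with boto3 is sketched below; all values are placeholders, and you repeat the calls for each assignment and permission set that you created.

import boto3

sso_admin = boto3.client("sso-admin")

# Remove the account assignment first, then delete the permission set.
sso_admin.delete_account_assignment(
    InstanceArn="<Instance ARN>",
    TargetId="<AWS Account ID>",
    TargetType="AWS_ACCOUNT",
    PermissionSetArn="<Permission Set ARN>",
    PrincipalType="USER",
    PrincipalId="<user ID>",
)

sso_admin.delete_permission_set(
    InstanceArn="<Instance ARN>",
    PermissionSetArn="<Permission Set ARN>",
)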

To delete the resources that you created by using the CloudFormation template, go to the AWS CloudFormation console, select the stack that you created, and then choose Delete. Deleting the CloudFormation stack cleans up the resources that it created.

Summary

In this blog post, we showed how to use the AWS SSO account assignment API to automate the deployment of permission sets, how to add managed policies to permission sets, and how to assign access for AWS users and groups to AWS accounts by using specified permission sets.

To learn more about the AWS SSO APIs available for you, see the AWS Single Sign-On API Reference Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Akhil Aendapally

Akhil is a Solutions Architect at AWS focused on helping customers with their AWS adoption. He holds a master’s degree in Network and Computer Security. Akhil has 8+ years of experience working with different cloud platforms, infrastructure automation, and security.

Author

Yuri Duchovny

Yuri is a New York-based Solutions Architect specializing in cloud security, identity, and compliance. He supports cloud transformations at large enterprises, helping them make optimal technology and organizational decisions. Prior to his AWS role, Yuri’s areas of focus included application and networking security, DoS, and fraud protection. Outside of work, he enjoys skiing, sailing, and traveling the world.

Author

Ballu Singh

Ballu is a principal solutions architect at AWS. He lives in the San Francisco Bay area and helps customers architect and optimize applications on AWS. In his spare time, he enjoys reading and spending time with his family.

Author

Nir Ozeri

Nir is a Solutions Architect Manager with Amazon Web Services, based out of New York City. Nir specializes in application modernization, application delivery, and mobile architecture.

Resource leak detection in Amazon CodeGuru Reviewer

Post Syndicated from Pranav Garg original https://aws.amazon.com/blogs/devops/resource-leak-detection-in-amazon-codeguru/

This post discusses the resource leak detector for Java in Amazon CodeGuru Reviewer. CodeGuru Reviewer automatically analyzes pull requests (created in supported repositories such as AWS CodeCommit, GitHub, GitHub Enterprise, and Bitbucket) and generates recommendations for improving code quality. For more information, see Automating code reviews and application profiling with Amazon CodeGuru. This blog does not describe the resource leak detector for Python programs that is now available in preview.

What are resource leaks?

Resources are objects with limited availability within a computing system. These typically include objects managed by the operating system, such as file handles, database connections, and network sockets. Because the number of such resources in a system is limited, an application must release them as soon as it has finished using them. Otherwise, the system runs out of resources and can’t allocate new ones. The paradigm of acquiring a resource and releasing it is also followed by other categories of objects, such as metric wrappers and timers.

Resource leaks are bugs that arise when a program doesn’t release the resources it has acquired. Resource leaks can lead to resource exhaustion. In the worst case, they can cause the system to slow down or even crash.

Starting with Java 7, most classes holding resources implement the java.lang.AutoCloseable interface and provide a close() method to release them. However, a close() call in source code doesn’t guarantee that the resource is released along all program execution paths. For example, in the following sample code, resource r is acquired by calling its constructor and is closed along the path corresponding to the if branch, shown using green arrows. To ensure that the acquired resource doesn’t leak, you must also close r along the path corresponding to the else branch (the path shown using red arrows).

A resource must be closed along all execution paths to prevent resource leaks

Often, resource leaks manifest themselves along code paths that aren’t frequently run, or under a heavy system load, or after the system has been running for a long time. As a result, such leaks are latent and can remain dormant in source code for long periods of time before manifesting themselves in production environments. This is the primary reason why resource leak bugs are difficult to detect or replicate during testing, and why automatically detecting these bugs during pull requests and code scans is important.

Detecting resource leaks in CodeGuru Reviewer

For this post, we consider the following Java code snippet. In this code, method getConnection() attempts to create a connection in the connection pool associated with a data source. Typically, a connection pool limits the maximum number of connections that can remain open at any given time. As a result, you must close connections after their use so as to not exhaust this limit.

 1     private Connection getConnection(final BasicDataSource dataSource, ...)
               throws ValidateConnectionException, SQLException {
 2         boolean connectionAcquired = false;
 3         // Retrying three times to get the connection.
 4         for (int attempt = 0; attempt < CONNECTION_RETRIES; ++attempt) {
 5             Connection connection = dataSource.getConnection();
 6             // validateConnection may throw ValidateConnectionException
 7             if (! validateConnection(connection, ...)) {
 8                 // connection is invalid
 9                 DbUtils.closeQuietly(connection);
10             } else {
11                 // connection is established
12                 connectionAcquired = true;
13                 return connection;
14             }
15         }
16         return null;
17     }

At first glance, it seems that the method getConnection() doesn’t leak connection resources. If a valid connection is established in the connection pool (else branch on line 10 is taken), the method getConnection() returns it to the client for use (line 13). If the connection established is invalid (if branch on line 7 is taken), it’s closed in line 9 before another attempt is made to establish a connection.

However, method validateConnection() at line 7 can throw a ValidateConnectionException. If this exception is thrown after a connection is established at line 5, the connection is neither closed in this method nor is it returned upstream to the client to be closed later. Furthermore, if this exceptional code path runs frequently, for instance, if the validation logic throws on a specific recurring service request, each new request causes a connection to leak in the connection pool. Eventually, the client can’t acquire new connections to the data source, impacting the availability of the service.

A typical recommendation to prevent resource leak bugs is to declare the resource objects in a try-with-resources statement block. However, we can’t use try-with-resources to fix the preceding method because this method is required to return an open connection for use in the upstream client. The CodeGuru Reviewer recommendation for the preceding code snippet is as follows:

“Consider closing the following resource: connection. The resource is referenced at line 7. The resource is closed at line 9. The resource is returned at line 13. There are other execution paths that don’t close the resource or return it, for example, when validateConnection throws an exception. To prevent this resource leak, close connection along these other paths before you exit this method.”

As mentioned in the Reviewer recommendation, to prevent this resource leak, you must close the established connection when method validateConnection() throws an exception. This can be achieved by inserting the validation logic (lines 7–14) in a try block. In the finally block associated with this try, the connection must be closed by calling DbUtils.closeQuietly(connection) if connectionAcquired == false. The method getConnection() after this fix has been applied is as follows:

private Connection getConnection(final BasicDataSource dataSource, ...) 
        throws ValidateConnectionException, SQLException {
    boolean connectionAcquired = false;
    // Retrying three times to get the connection.
    for (int attempt = 0; attempt < CONNECTION_RETRIES; ++attempt) {
        Connection connection = dataSource.getConnection();
        try {
            // validateConnection may throw ValidateConnectionException
            if (! validateConnection(connection, ...)) {
                // connection is invalid
                DbUtils.closeQuietly(connection);
            } else {
                // connection is established
                connectionAcquired = true;
                return connection;
            }
        } finally {
            if (!connectionAcquired) {
                DbUtils.closeQuietly(connection);
            }
        }
    }
    return null;
}

As shown in this example, resource leaks in production services can be very disruptive. Furthermore, leaks that manifest along exceptional or less frequently run code paths can be hard to detect or replicate during testing and can remain dormant in the code for long periods of time before manifesting themselves in production environments. With the resource leak detector, you can detect such leaks on objects belonging to a large number of popular Java types such as file streams, database connections, network sockets, timers and metrics, etc.

Combining static code analysis with machine learning for accurate resource leak detection

In this section, we dive deep into the inner workings of the resource leak detector. The resource leak detector in CodeGuru Reviewer uses static analysis algorithms and techniques. Static analysis algorithms perform code analysis without running the code. These algorithms are generally prone to high false positives (the tool might report correct code as having a bug). If the number of these false positives is high, it can lead to alarm fatigue and low adoption of the tool. As a result, the resource leak detector in CodeGuru Reviewer prioritizes precision over recall: the findings it surfaces are resource leaks with high accuracy, though CodeGuru Reviewer could potentially miss some resource leak findings.

The main reason for false positives in static code analysis is incomplete information available to the analysis. CodeGuru Reviewer requires only the Java source files and doesn’t require all dependencies or the build artifacts. Not requiring the external dependencies or the build artifacts reduces the friction to perform automated code reviews. As a result, static analysis only has access to the code in the source repository and doesn’t have access to its external dependencies. The resource leak detector in CodeGuru Reviewer combines static code analysis with a machine learning (ML) model. This ML model is used to reason about external dependencies to provide accurate recommendations.

To understand the use of the ML model, consider again the code above for method getConnection() that had a resource leak. In the code snippet, a connection to the data source is established by calling BasicDataSource.getConnection() method, declared in the Apache Commons library. As mentioned earlier, we don’t require the source code of external dependencies like the Apache library for code analysis during pull requests. Without access to the code of external dependencies, a pure static analysis-driven technique doesn’t know whether the Connection object obtained at line 5 will leak, if not closed. Similarly, it doesn’t know that DbUtils.closeQuietly() is a library function that closes the connection argument passed to it at line 9. Our detector combines static code analysis with ML that learns patterns over such external function calls from a large number of available code repositories. As a result, our resource leak detector knows that the connection doesn’t leak along the following code path:

  • A connection is established on line 5
  • Method validateConnection() returns false at line 7
  • DbUtils.closeQuietly() is called on line 9

This suppresses the possible false warning. At the same time, the detector knows that there is a resource leak when the connection is established at line 5, and validateConnection() throws an exception at line 7 that isn’t caught.

When we run CodeGuru Reviewer on this code snippet, it surfaces only the second leak scenario and makes an appropriate recommendation to fix this bug.

The ML model used in the resource leak detector has been trained on a large number of internal Amazon and GitHub code repositories.

Responses to the resource leak findings

Although closing an open resource in code isn’t difficult, doing so properly along all program paths is important to prevent resource leaks. This can easily be overlooked, especially along exceptional or less frequently run paths. As a result, the resource leak detector in CodeGuru Reviewer surfaces findings at a relatively high frequency, and has alerted developers within Amazon to thousands of resource leaks before they hit production.

The resource leak detections have witnessed a high developer acceptance rate, and developer feedback towards the resource leak detector has been very positive. Some of the feedback from developers includes “Very cool, automated finding,” “Good bot :),” and “Oh man, this is cool.” Developers have also concurred that the findings are important and need to be fixed.

Conclusion

Resource leak bugs are difficult to detect or replicate during testing. They can impact the availability of production services. As a result, it’s important to automatically detect these bugs early on in the software development workflow, such as during pull requests or code scans. The resource leak detector in CodeGuru Reviewer combines static code analysis algorithms with ML to surface only the high confidence leaks. It has a high developer acceptance rate and has alerted developers within Amazon to thousands of leaks before those leaks hit production.

How to approach threat modeling

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/

In this post, I’ll provide my tips on how to integrate threat modeling into your organization’s application development lifecycle. There are many great guides on how to perform the procedural parts of threat modeling, and I’ll briefly touch on these and their methodologies. However, the main aim of this post is to augment the existing guidance with some additional tips on how to handle the people and process components of your threat modeling approach, which in my experience goes a long way to improving the security outcomes, security ownership, speed to market, and general happiness of all involved. Furthermore, I’ll also provide some guidance specific to when you’re using Amazon Web Services (AWS).

Let’s start with a primer on threat modeling.

Why use threat modeling

IT systems are complex, and are becoming increasingly more complex and capable over time, delivering more business value and increased customer satisfaction and engagement. This means that IT design decisions need to account for an ever-increasing number of use cases, and be made in a way that mitigates potential security threats that may lead to business-impacting outcomes, including unauthorized access to data, denial of service, and resource misuse.

This complexity and number of use-case permutations typically makes it ineffective to use unstructured approaches to find and mitigate threats. Instead, you need a systematic approach to enumerate the potential threats to the workload, and to devise mitigations and prioritize these mitigations to make sure that the limited resources of your organization have the maximum impact in improving the overall security posture of the workload. Threat modeling is designed to provide this systematic approach, with the aim of finding and addressing issues early in the design process, when the mitigations have a low relative cost compared to later in the lifecycle.

The AWS Well-Architected Framework calls out threat modeling as a specific best practice within the Security Pillar, under the area of foundational security, under the question SEC 1: How do you securely operate your workload?:

“Identify and prioritize risks using a threat model: Use a threat model to identify and maintain an up-to-date register of potential threats. Prioritize your threats and adapt your security controls to prevent, detect, and respond. Revisit and maintain this in the context of the evolving security landscape.”

Threat modeling is most effective when done at the workload (or workload feature) level, in order to ensure that all context is available for assessment. AWS Well-Architected defines a workload as:

“A set of components that together deliver business value. The workload is usually the level of detail that business and technology leaders communicate about. Examples of workloads are marketing websites, e-commerce websites, the back-ends for a mobile app, analytic platforms, etc. Workloads vary in levels of architectural complexity, from static websites to architectures with multiple data stores and many components.”

The core steps of threat modeling

In my experience, all threat modeling approaches are similar; at a high level, they follow these broad steps:

  1. Identify assets, actors, entry points, components, use cases, and trust levels, and include these in a design diagram.
  2. Identify a list of threats.
  3. Per threat, identify mitigations, which may include security control implementations.
  4. Create and review a risk matrix to determine if the threat is adequately mitigated.

To go deeper into the general practices associated with these steps, I would suggest that you read the SAFECode Tactical Threat Modeling whitepaper and the Open Web Application Security Project (OWASP) Threat Modeling Cheat Sheet. These guides are great resources for you to consider when adopting a particular approach. They also reference a number of tools and methodologies that are helpful to accelerate the threat modeling process, including creating threat model diagrams with the OWASP Threat Dragon project and determining possible threats with the OWASP Top 10, OWASP Application Security Verification Standard (ASVS) and STRIDE. You may choose to adopt some combination of these, or create your own.

When to do threat modeling

Threat modeling is a design-time activity. It’s typical that during the design phase you would go beyond creating a diagram of your architecture, and that you may also be building in a non-production environment—and these activities are performed to inform and develop your production design. Because threat modeling is a design-time activity, it occurs before code review, code analysis (static or dynamic), and penetration testing; these all come later in the security lifecycle.

Always consider potential threats when designing your workload from the earliest phases—typically when people are still on the whiteboard (whether physical or virtual). Threat modeling should be performed during the design phase of a given workload feature or feature change, as these changes may introduce new threats that require you to update your threat model.

Threat modeling tips

Ultimately, threat modeling requires thought, brainstorming, collaboration, and communication. The aim is to bridge the gap between application development, operations, business, and security. There is no shortcut to success. However, there are things I’ve observed that have meaningful impacts on the adoption and success of threat modeling—I’ll be covering these areas in the following sections.

1. Assemble the right team

Threat modeling is a “team sport,” because it requires the knowledge and skill set of a diverse team where all inputs can be viewed as equal in value. For all listed personas in this section, the suggested mindset is to start from your end-customers’ expectations, and work backwards. Think about what your customers expect from this workload or workload feature, both in terms of its security properties and maintaining a balance of functionality and usability.

I recommend that the following perspectives be covered by the team, noting that a single individual can bring more than one of these perspectives to the table:

The Business persona – First, to keep things grounded, you’ll want someone who represents the business outcomes of the workload or feature that is part of the threat modeling process. This person should have an intimate understanding of the functional and non-functional requirements of the workload—and their job is to make sure that these requirements aren’t unduly impacted by any proposed mitigations to address threats. Meaning that if a proposed security control (that is, mitigation) renders an application requirement unusable or overly degraded, then further work is required to come to the right balance of security and functionality.

The Developer persona – This is someone who understands the current proposed design for the workload feature, and has had the most depth of involvement in the design decisions made to date. They were involved in design brainstorming or whiteboarding sessions leading up to this point, when they would typically have been thinking about threats to the design and possible mitigations to include. In cases where you aren’t developing the application in-house (for example, a COTS application), bring in the internal application owner instead.

The Adversary persona – Next, you need someone to play the role of the adversary. The aim of this persona is to put themselves in the shoes of an attacker, and to critically review the workload design and look for ways to take advantage of a design flaw in the workload to achieve a particular objective (for example, unauthorized sharing of data). The “attacks” they perform are a mental exercise, not actual hands-on-keyboard exploitation. If your organization has a so-called Red Team, then they could be a great fit for this role; if not, you may want to have one or more members of your security operations or engineering team play this role. Or alternately, bring in a third party who is specialized in this area.

The Defender persona – Then, you need someone to play the role of the defender. The aim of this persona is to see the possible “attacks” designed by the adversary persona as potential threats, and to devise security controls that mitigate the threats. This persona also evaluates whether the possible mitigations are reasonably manageable in terms of on-going operational support, monitoring, and incident response.

The AppSec SME persona – The Application Security (AppSec) subject matter expert (SME) persona should be the most familiar with the threat modeling process and discussion moderation methods, and should have a depth of IT security knowledge and experience. Discussion moderation is crucial for the overall exercise process to make sure that the overall objectives of the process are kept on-track, and that the appropriate balance between security and delivery of the customer outcome is maintained. Ultimately, it’s this persona who endorses the threat model and advises the scope of the actions beyond the threat modeling exercise—for example, penetration testing scope.

2. Have a consistent approach

In the earlier section, I listed some of the popular threat modeling approaches, and which one you select is not as important as using it consistently both within and across your teams.

By using a consistent approach and format, teams can move faster and estimate effort more accurately. Individuals can learn from examples, by looking at threat models developed by other team members or other teams—saving them from having to start from scratch.

When your team estimates the effort and time required to create a threat model, the experience and time taken from previous threat models can be used to provide more accurate estimations of delivery timelines.

Beyond using a consistent approach and format, consistency in the granularity and relevance of the threats being modeled is key. Later in this post I describe a recommendation for creating a catalog of threats for reuse across your organization.

Finally, and importantly, this approach allows for scalability: if a given workload feature that’s undergoing a threat modeling exercise is using components that have an existing threat model, then the threat model (or individual security controls) of those components can be reused. With this approach, you can effectively take a dependency on a component’s existing threat model, and build on that model, eliminating re-work.

3. Align to the software delivery methodology

Your application development teams already have a particular workflow and delivery style. These days, Agile-style delivery is most popular. Ensure that the approach you take for threat modeling integrates well with both your delivery methodology and your tools.

Just like for any other deliverable, capture the user stories related to threat modeling as part of the workload feature’s sprint, epic, or backlog.

4. Use existing workflow tooling

Your application development teams are already using a suite of tools to support their delivery methodology. This would typically include collaboration tools for documentation (for example, a team wiki), and an issue-tracking tool to track work products through the software development lifecycle. Aim to use these same tools as part of your security review and threat modeling workflow.

Existing workflow tools can provide a single place to provide and view feedback, assign actions, and view the overall status of the threat modeling deliverables of the workload feature. Being part of the workflow reduces the friction of getting the project done and allows threat modeling to become as commonplace as unit testing, QA testing, or other typical steps of the workflow.

By using typical workflow tools, team members working on creating and reviewing the threat model can work asynchronously. For example, when the threat model reviewer adds feedback, the author is notified, and then the author can address the feedback when they have time, without having to set aside dedicated time for a meeting. Also, this allows the AppSec SME to more effectively work across multiple threat model reviews that they may be engaged in.

Having a consistent approach and language as described earlier is an important prerequisite to make this asynchronous process feasible, so that each participant can read and understand the threat model without having to re-learn the correct interpretation each time.

5. Break the workload down into smaller parts

It’s advisable to decompose (break down) the workload into features and perform the threat modeling exercise at the feature level, rather than create a single threat model for an entire workload. This approach has a number of key benefits:

  1. Having smaller chunks of work allows more granular tracking of progress, which aligns well with development teams that are following Agile-style delivery, and gives leadership a constant view of progress.
  2. This approach tends to create threat models that are more detailed, which results in more findings being identified.
  3. Decomposing also opens up the opportunity for the threat model to be reused as a dependency for other workload features that use the same components.
  4. Because you consider threat mitigations for each component, a single threat at the overall workload level may end up with multiple mitigations, which improves resilience against that threat.
  5. An issue with a single threat model, for example a critical threat that isn’t yet mitigated, blocks the launch of only the affected feature rather than the entire workload.

The question then becomes, how far should you decompose the workload?

As a general rule, in order to create a threat model, the following context is required, at a minimum:

  • One asset. For example, credentials, customer records, and so on.
  • One entry point. For example, Amazon API Gateway REST API deployment.
  • Two components. For example, a web browser and an API Gateway REST API; or API Gateway and an AWS Lambda function.

Creating a threat model for a given AWS service (for example, API Gateway) in isolation wouldn’t fully meet this criteria—given that the service is a single component, there is no movement of the data from one component to another. Furthermore, the context of all the possible use cases of the service within a workload isn’t known, so you can’t comprehensively derive the threats and mitigations. AWS performs threat modeling of the multiple features that make up a given AWS service. Therefore, for your workload feature that leverages a given AWS service, you wouldn’t need to threat model the AWS service, but instead consider the various AWS service configuration options and your own workload-specific mitigations when you look to mitigate the threats you’ve identified. I go into more depth on this in the “Identify and evaluate mitigations” section, where I go into the concept of baseline security controls.

6. Distribute ownership

Having a central person or department responsible for creation of threat models doesn’t work in the long run. These central entities become bottlenecks and can only scale up with additional head count. Furthermore, centralized ownership doesn’t empower those who are actually designing and implementing your workload features.

Instead, what scales well is distributed ownership of threat model creation by the team that is responsible for designing and implementing each workload feature. Distributed ownership scales and drives behavior change, because now the application teams are in control, and importantly they’re taking security learnings from the threat modeling process and putting those learnings into their next feature release, and therefore constantly improving the security of their workload and features.

This creates the opportunity for the AppSec SME (or team) to effectively play the moderator and security advisor role to all the various application teams in your organization. The AppSec SME will be in a position to drive consistency, adoption, and communication, and to set and raise the security bar among teams.

7. Identify entry points

When you look to identify entry points for AWS services that are components within your overall threat model, it’s important to understand that, depending on the type of AWS service, the entry points may vary based on the architecture of the workload feature included in the scope of the threat model.

For example, with Amazon Simple Storage Service (Amazon S3), the possible types of entry-points to an S3 bucket are limited to what is exposed through the Amazon S3 API, and the service doesn’t offer the capability for you, as a customer, to create additional types of entry points. In this Amazon S3 example, as a customer you make choices about how these existing types of endpoints are exposed—including whether the bucket is private or publicly accessible.

On the other end of the spectrum, Amazon Elastic Compute Cloud (Amazon EC2) allows customers to create additional types of entry-points to EC2 instances (for example, your application API), besides the entry-point types that are provided by the Amazon EC2 API and those native to the operating system running on the EC2 instance (for example, SSH or RDP).

Therefore, make sure that you’re applying the entry points that are specific to the workload feature, in addition to the native entry points for AWS services, as part of your threat model.

8. Come up with threats

Your aim here is to try to come up with answers to the question “What can go wrong?” There isn’t any canonical list that lists all the possible threats, because determining threats depends on the context of the workload feature that’s under assessment, and the types of threats that are unique to a given industry, geographical area, and so on.

Coming up with threats requires brainstorming. The brainstorming exercise can be facilitated by using a mnemonic like STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege), or by looking through threat lists like the OWASP Top 10 or HiTrust Threat Catalog to get the ideas flowing.

Through this process, it’s recommended that you develop and contribute to a threat catalog that is contextual to your organization and will accelerate the brainstorming process going forward, as well as drive consistency in the granularity of threats that you model.
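
To make that granularity concrete, here is one hypothetical way a catalog entry could be captured in code; the structure, fields, and values below are purely illustrative and not a prescribed schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatCatalogEntry:
    # Illustrative shape for a reusable threat catalog entry.
    identifier: str                     # internal ID, for example "TC-0042"
    stride_category: str                # for example "Information Disclosure"
    description: str                    # what can go wrong, from the adversary's point of view
    example_mitigations: List[str] = field(default_factory=list)

entry = ThreatCatalogEntry(
    identifier="TC-0042",
    stride_category="Information Disclosure",
    description="An actor with network access reads data in transit between two components.",
    example_mitigations=["Enforce TLS between components", "Restrict network paths"],
)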

9. Identify and evaluate mitigations

Here, your aim is to identify the mitigations (security controls) within the workload design and evaluate whether threats have been adequately addressed. Keep in mind that there are typically multiple layers of controls and multiple responsibilities at play.

For your own in-house applications and code, you would want to review the mitigations you’ve included in your design—including, but not limited to, input validation, authentication, session handling, and bounds handling.

Consider all other components of your workload (for example, software as a service (SaaS), infrastructure supporting your COTS applications, or components hosted within your on-premises data centers) and determine the security controls that are part of the workload design.

When you use AWS services, Security and Compliance is a shared responsibility between AWS and you as our customer. This is described on the AWS Shared Responsibility Model page.

This means, for the portions of the AWS services that you’re using that are the responsibility of AWS (Security of the Cloud), the security controls are managed by AWS, along with threat identification and mitigation. The distribution of responsibility between AWS (Security of the Cloud) and you (Security in the Cloud) depends on which AWS service you use. Below, I provide examples of infrastructure, container, and abstracted AWS services to show how your responsibility for identifying and mitigating threats can vary:

  • Amazon EC2 is a good example of an infrastructure service, where you are able to access a virtual server in the cloud, you get to choose the operating system, and you have control of the service and all aspects you run on it—so you would be responsible for mitigating the identified threats.
  • Amazon Relational Database Service (Amazon RDS) is a representative example of a container service, where there is no operating system exposed for you, and instead AWS exposes the selected database engine to you (for example, MySQL). AWS is responsible for the security of the operating system in this example, and you don’t need to devise mitigations. However, the database engine is under your control as well as all aspects above it, so you would need to consider mitigations for these areas. Here, AWS is taking on a larger portion of the responsibility compared to infrastructure services.
  • Amazon S3, AWS Key Management Service (AWS KMS), and Amazon DynamoDB are examples of an abstracted service where AWS exposes the entire service control plane and data plane to you through the service API. Again, here there are no operating systems, database engines, or platforms exposed to you—these are an AWS responsibility. However, the API actions and associated policies are under your control and so are all aspects above the API level, so you should be considering mitigations for these. For this type of service, AWS takes a larger portion of responsibility compared to container and infrastructure types of services.

While these examples do not encompass all types of AWS services that may be in your workload, they demonstrate how your Security and Compliance responsibilities under the Shared Responsibility Model will vary in this context. Understanding the balance of responsibilities between AWS and yourself for the types of AWS services in your workload helps you scope your threat modeling exercise to the mitigations that are under your control, which are typically a combination of AWS service configuration options and your own workload-specific mitigations. For the AWS portion of the responsibility, you will find that AWS services are in-scope of many compliance programs, and the audit reports are available for download for AWS customers (at no cost) from AWS Artifact.

Regardless of which AWS services you’re using, there’s always an element of customer responsibility, and this should be included in your workload threat model.

Specifically, for security control mitigations for the AWS services themselves, you’d want to consider security controls across domains, including these domains: Identity and Access Management (Authentication/Authorization), Data Protection (At-Rest, In-Transit), Infrastructure Security, and Logging and Monitoring. AWS services each have a dedicated security chapter in the documentation, which provides guidance on the security controls to consider as mitigations. When capturing these security controls and mitigations in your threat model, you should aim to include references to the actual code, IAM policies, and AWS CloudFormation templates located in the workload’s infrastructure-as-code repository, and so on. This helps the reviewer or approver of your threat model to get an unambiguous view of the intended mitigation.

As in the case for threat identification, there’s no canonical list enumerating all the possible security controls. Through the process described here, you should consciously develop baseline security controls that align to your organization’s control objectives, and where possible, implement these baseline security controls as platform-level controls, including AWS service-level configurations (for example, encryption at rest) or guardrails (for example, through service control policies). By doing this, you can drive consistency and scale, so that these implemented baseline security controls are automatically inherited and enforced for other workload features that you design and deploy.

When you come up with the baseline security controls, it’s important to note that the context of a given workload feature isn’t known. Therefore, it’s advisable to consider these controls as a negotiable baseline that you can deviate from, provided that when you perform the workload threat modeling exercise, you find that the threat that the baseline control was designed to mitigate isn’t applicable, or there are other mitigations or compensating controls that adequately mitigate the threat. Compensating controls and mitigating factors could include: reduced data asset classification, non-human access, or ephemeral data/workload.

To learn more about how to start thinking about baseline security controls as part of your overall cloud security governance, have a look at the How to think about cloud security governance blog post.

10. Decide when enough is enough

There’s no perfect answer to this question. However, it’s important to have a risk-based perspective on the threat modeling process to create a balanced approach, so that the likelihood and impact of a risk are appropriately considered. Over-emphasis on “let’s build and ship it” could lead to significant costs and delays later. Conversely, over-emphasis on “let’s mitigate every conceivable threat” could lead to the workload feature shipping late (or never), and your customers might move on. In the recommendation I made earlier in the “Assemble the right team” section, the selection of personas is deliberate to make sure that there’s a natural tension between shipping the feature, and mitigating threats. Embrace this healthy tension.

11. Don’t let paralysis stop you before you start

Earlier in the “Break the workload down into smaller parts” section, I gave the recommendation that you should scope your threat models down to a workload feature. You may be thinking to yourself, “We’ve already shipped <X number> of features, how do we threat model those?” This is a completely reasonable question.

My view is that rather than go back to threat model features that are already live, aim to threat model any new features that you are working on now and improve the security properties of the code you ship next, and for each feature you ship after that. During this process you, your team, and your organization will learn—not just about threat modeling—but how to communicate effectively with one another. Make adjustments, iterate, improve. Sometime in the future, when you’re routinely providing high quality, consistent and reusable threat models for your new features, you can start putting activities to perform threat modeling for existing features into your backlog.

Conclusion

Threat modeling is an investment—in my view, it’s a good one, because finding and mitigating threats in the design phase of your workload feature can reduce the relative cost of mitigation, compared to finding the threats later. Consistently implementing threat modeling will likely also improve your security posture over time.

I’ve shared my observations and tips for practical ways to incorporate threat modeling into your organization, which center around communication, collaboration, and human-led expertise to find and address threats that your end customer expects. Armed with these tips, I encourage you to look across the workload features you’re working on now (or have in your backlog) and decide which ones will be the first you’ll threat model.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping customers make good security choices and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale.

Masking field values with Amazon Elasticsearch Service

Post Syndicated from Prashant Agrawal original https://aws.amazon.com/blogs/security/masking-field-values-with-amazon-elasticsearch-service/

Amazon Elasticsearch Service (Amazon ES) is a fully managed service that you can use to deploy, secure, and run Elasticsearch cost-effectively at scale. The service provides support for open-source Elasticsearch APIs, managed Kibana, and integration with Logstash and other AWS services. Amazon ES provides a deep security model that spans many layers of interaction and supports fine-grained access control at the cluster, index, document, and field level, on a per-user basis. The service’s security plugin integrates with federated identity providers for Kibana login.

A common use case for Amazon ES is log analytics. Customers configure their applications to store log data to the Elasticsearch cluster, where the data can be queried for insights into the functionality and use of the applications over time. In many cases, users reviewing those insights should not have access to all the details from the log data. The log data for a web application, for example, might include the source IP addresses of incoming requests. Privacy rules in many countries require that those details be masked, wholly or in part. This post explains how to set up field masking within your Amazon ES domain.

Field masking is an alternative to field-level security that lets you anonymize the data in a field rather than remove it altogether. When creating a role, you add a list of fields to mask. Field masking affects whether you can see the contents of a field when you search. You can use field masking to hide sensitive information from users who shouldn't have access to it, by applying either a random hash or a pattern-based substitution.

When you use field masking, Amazon ES creates a hash of the actual field values before returning the search results. You can apply field masking on a per-role basis, supporting different levels of visibility depending on the identity of the user making the query. Currently, field masking is only available for string-based fields. A search result with a masked field (clientIP) looks like this:

{
  "_index": "web_logs",
  "_type": "_doc",
  "_id": "1",
  "_score": 1,
  "_source": {
    "agent": "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1",
    "bytes": 0,
    "clientIP": "7e4df8d4df7086ee9c05efe1e21cce8ff017a711ee9addf1155608ca45d38219",
    "host": "www.example.com",
    "extension": "txt",
    "geo": {
      "src": "EG",
      "dest": "CN",
      "coordinates": {
        "lat": 35.98531194,
        "lon": -85.80931806
      }
    },
    "machine": {
      "ram": 17179869184,
      "os": "win 7"
    }
  }
}

To follow along in this post, make sure you have an Amazon ES domain with Elasticsearch version 6.7 or higher, sample data loaded (this example uses the web logs data supplied by Kibana), and access to Kibana through a role with administrator privileges for the domain.

Configure field masking

Field masking is managed by defining specific access controls within the Kibana visualization system. You’ll need to create a new Kibana role, define the fine-grained access-control privileges for that role, specify which fields to mask, and apply that role to specific users.

You can use either the Kibana console or direct-to-API calls to set up field masking. In our first example, we’ll use the Kibana console.

To configure field masking in the Kibana console

  1. Log in to Kibana, choose the Security pane, and then choose Roles, as shown in Figure 1.

    Figure 1: Choose security roles

  2. Choose the plus sign (+) to create a new role, as shown in Figure 2.

    Figure 2: Create role

  3. Choose the Index Permissions tab, and then choose Add index permissions, as shown in Figure 3.

    Figure 3: Set index permissions

  4. Add index patterns and appropriate permissions for data access. See the Amazon ES documentation for details on configuring fine-grained access control.
  5. Once you’ve set Index Patterns, Permissions: Action Groups, Document Level Security Query, and Include or exclude fields, you can use the Anonymize fields entry to mask the clientIP, as shown in Figure 4.

    Figure 4: Anonymize field

  6. Choose Save Role Definition.
  7. Next, you need to create one or more users and apply the role to the new users. Go back to the Security page and choose Internal User Database, as shown in Figure 5.

    Figure 5: Select Internal User Database

  8. Choose the plus sign (+) to create a new user, as shown in Figure 6.

    Figure 6: Create user

  9. Add a username and password, and under Open Distro Security Roles, select the role es-mask-role, as shown in Figure 7.

    Figure 7: Select the username, password, and roles

  10. Choose Submit.

If you prefer, you can perform the same task by calling the Amazon ES REST API from the Kibana dev tools.

Use the following API call to create a role, as shown in the following snippet and in Figure 8.

PUT _opendistro/_security/api/roles/es-mask-role
{
  "cluster_permissions": [],
  "index_permissions": [
    {
      "index_patterns": [
        "web_logs"
      ],
      "dls": "",
      "fls": [],
      "masked_fields": [
        "clientIP"
      ],
      "allowed_actions": [
        "data_access"
      ]
    }
  ]
}

Sample response:

{
  "status": "CREATED",
  "message": "'es-mask-role' created."
}
Figure 8: API to create Role

Use the following API call to create a user with that role, as shown in the following snippet and in Figure 9.

PUT _opendistro/_security/api/internalusers/es-mask-user
{
  "password": "xxxxxxxxxxx",
  "opendistro_security_roles": [
    "es-mask-role"
  ]
}

Sample response:

{
  "status": "CREATED",
  "message": "'es-mask-user' created."
}
Figure 9: API to create User

Verify field masking

You can verify field masking by running a simple search query using Kibana dev tools (GET web_logs/_search) and retrieving the data first by using the kibana_user (with no field masking), and then by using the es-mask-user (with field masking) you just created.

Query responses run by the kibana_user (all access) have the original values in all fields, as shown in Figure 10.

Figure 10: Retrieval of the full clientIP data with kibana_user

Figure 11, following, shows an example of what you would see if you logged in as the es-mask-user. In this case, the clientIP field is hidden due to the es-mask-role you created.

Figure 11: Retrieval of the masked clientIP data with es-mask-user
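
If you'd rather script this comparison than switch Kibana logins, the following minimal Python sketch runs the same search as both users and prints the clientIP value each one sees. The domain endpoint and the two passwords are placeholders (they're not part of the original walkthrough), and the sketch assumes your domain uses the internal user database with HTTP basic authentication.

import requests

# Placeholders: replace with your Amazon ES domain endpoint and user credentials.
DOMAIN = "https://search-my-domain.us-east-1.es.amazonaws.com"

def first_client_ip(auth):
    # Retrieve a single document from the web_logs index as the given user.
    resp = requests.get(f"{DOMAIN}/web_logs/_search?size=1", auth=auth)
    resp.raise_for_status()
    return resp.json()["hits"]["hits"][0]["_source"]["clientIP"]

# kibana_user sees the original value; es-mask-user sees the hashed value.
print("unmasked:", first_client_ip(("kibana_user", "<kibana-user-password>")))
print("masked:  ", first_client_ip(("es-mask-user", "<es-mask-user-password>")))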

Use pattern-based field masking

Rather than creating a hash, you can use one or more regular expressions and replacement strings to mask a field. The syntax is <field>::/<regular-expression>/::<replacement-string>.

You can use either the Kibana console or direct-to-API calls to set up pattern-based field masking. In the following example, clientIP is masked so that the last three parts of the IP address are replaced with xxx, using the pattern clientIP::/[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}$/::xxx.xxx.xxx. You see only the first part of the IP address, as shown in Figure 12.

Figure 12: Anonymize the field with a pattern
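
The Anonymize fields entry shown in Figure 12 corresponds to the masked_fields property of the role definition. If you prefer the API route described earlier, a minimal Python sketch of the same change might look like the following; the domain endpoint and admin credentials are placeholders, and the regular expression is the one from the example above.

import json
import requests

# Placeholders: replace with your Amazon ES domain endpoint and admin credentials.
DOMAIN = "https://search-my-domain.us-east-1.es.amazonaws.com"
ADMIN_AUTH = ("<admin-user>", "<admin-password>")

role = {
    "cluster_permissions": [],
    "index_permissions": [{
        "index_patterns": ["web_logs"],
        "dls": "",
        "fls": [],
        # Pattern-based masking: keep the first part of the IP, replace the rest.
        "masked_fields": [
            "clientIP::/[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}$/::xxx.xxx.xxx"
        ],
        "allowed_actions": ["data_access"],
    }],
}

resp = requests.put(
    f"{DOMAIN}/_opendistro/_security/api/roles/es-mask-role",
    auth=ADMIN_AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(role),
)
print(resp.status_code, resp.text)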

Run the search query to verify that the last three parts of clientIP are masked by custom characters and only the first part is shown to the requester, as shown in Figure 13.

Figure 13: Retrieval of the masked clientIP (according to the defined pattern) with es-mask-user

Conclusion

Field-level security should be the primary approach for controlling access to data; however, if you have specific business requirements that can't be met with that approach, field masking can offer a viable alternative. By using field masking, you can selectively prevent your users from seeing private information such as personally identifiable information (PII) or personal health information (PHI). For more information about fine-grained access control, see the Amazon Elasticsearch Service Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Elasticsearch Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Prashant Agrawal

Prashant is a Search Specialist Solutions Architect with Amazon Elasticsearch Service. He works closely with team members to help customers migrate their workloads to the cloud. Before joining AWS, he helped various customers use Elasticsearch for their search and analytics use cases.

Use AWS Secrets Manager to simplify the management of private certificates

Post Syndicated from Maitreya Ranganath original https://aws.amazon.com/blogs/security/use-aws-secrets-manager-to-simplify-the-management-of-private-certificates/

AWS Certificate Manager (ACM) lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with Amazon Web Services (AWS) services and your internal connected resources. For private certificates, AWS Certificate Manager Private Certificate Authority (ACM PCA) can be used to create private CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA. With these CAs, you can issue custom end-entity certificates or use the ACM defaults.

When you manage the lifecycle of certificates, it’s important to follow best practices. You can think of a certificate as an identity of a service you’re connecting to. You have to ensure that these identities are secure and up to date, ideally with the least amount of manual intervention. AWS Secrets Manager provides a mechanism for managing certificates, and other secrets, at scale. Specifically, you can configure secrets to automatically rotate on a scheduled basis by using pre-built or custom AWS Lambda functions, encrypt them by using AWS Key Management Service (AWS KMS) keys, and automatically retrieve or distribute them for use in applications and services across an AWS environment. This reduces the overhead of manually managing the deployment, creation, and secure storage of these certificates.
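
As an illustration of that retrieval step, the following minimal boto3 sketch pulls a certificate secret and writes out the PEM files for a web server to use. The secret name and file paths are placeholders, and the JSON keys match the ones that the rotation function stores later in this walkthrough (CERTIFICATE_PEM and PRIVATE_KEY_PEM).

import json
import boto3

# Placeholder secret name; the Region should match where the secret lives.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
secret = json.loads(
    secrets.get_secret_value(SecretId="acm-issued-certificate")["SecretString"]
)

# Write the PEM-encoded certificate and private key where the server expects them.
with open("/etc/pki/tls/certs/server.crt", "w") as f:
    f.write(secret["CERTIFICATE_PEM"])
with open("/etc/pki/tls/private/server.key", "w") as f:
    f.write(secret["PRIVATE_KEY_PEM"])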

In this post, you’ll learn how to use Secrets Manager to manage and distribute certificates created by ACM PCA across AWS Regions and accounts.

We present two use cases in this blog post to demonstrate the difference between issuing private certificates with ACM and with ACM PCA. For the first use case, you will create a certificate by using the ACM defaults for private certificates. You will then deploy the ACM default certificate to an Amazon Elastic Compute Cloud (Amazon EC2) instance that is launched in the same account as the secret and private CA. In the second scenario, you will create a custom certificate by using ACM PCA templates and parameters. This custom certificate will be deployed to an EC2 instance in a different account to demonstrate cross-account sharing of secrets.

Solution overview

Figure 1 shows the architecture of our solution.

Figure 1: Solution architecture

This architecture includes resources that you will create during the blog walkthrough and by using AWS CloudFormation templates. It also outlines how these services can be used in a multi-account environment. As shown in the diagram:

  1. You create a certificate authority (CA) in ACM PCA to generate end-entity certificates.
  2. In the account where the issuing CA was created, you create secrets in Secrets Manager.
    1. There are several required parameters that you must provide when creating secrets, based on whether you want to create an ACM or ACM PCA issued certificate. These parameters will be passed to our Lambda function to make sure that the certificate is generated and stored properly.
    2. The Lambda rotation function created by the CloudFormation template is attached when configuring secrets rotation. Initially, the function generates two Privacy-Enhanced Mail (PEM) encoded files containing the certificate and private key, based on the provided parameters, and stores those in the secret. Subsequent calls to the function are made when the secret needs to be rotated; the function then stores the resulting certificate PEM and private key PEM in the desired secret. The function is written using Python, the AWS SDK for Python (Boto3), and OpenSSL. The flow of the function follows the requirements for rotating secrets in Secrets Manager. (A trimmed sketch of this standard rotation flow appears right after this list.)
  3. The first CloudFormation template creates a Systems Manager Run Command document that can be invoked to install the certificate and private key from the secret on an Apache Server running on EC2 in Account A.
  4. The second CloudFormation template deploys the same Run Command document and EC2 environment in Account B.
    1. To make sure that the account has the ability to pull down the certificate and private key from Secrets Manager, you need to update the key policy in Account A to give Account B access to decrypt the secret.
    2. You also need to add a resource-based policy to the secret that gives Account B access to retrieve the secret from Account A.
    3. Once the proper access is set up in Account A, you can use the Run Command document to install the certificate and private key on the Apache Server.
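
For orientation, the following is a heavily trimmed Python sketch of the rotation flow that the Lambda function described above follows. It is not the function from the CloudFormation templates; it only shows the standard four-step shape of a Secrets Manager rotation handler, with the certificate issuance left as a placeholder.

import boto3

secrets = boto3.client("secretsmanager")

def lambda_handler(event, context):
    # Secrets Manager invokes the function once per rotation step.
    arn = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]

    if step == "createSecret":
        # Issue (or renew) the certificate, then stage the new secret version.
        # pem_bundle = issue_certificate(...)  # placeholder for the ACM/ACM PCA logic
        # secrets.put_secret_value(SecretId=arn, ClientRequestToken=token,
        #                          SecretString=pem_bundle, VersionStages=["AWSPENDING"])
        pass
    elif step == "setSecret":
        pass  # nothing to push to a downstream service in this design
    elif step == "testSecret":
        pass  # optionally validate that the pending certificate and key match
    elif step == "finishSecret":
        # Promote the pending version to AWSCURRENT to complete the rotation.
        metadata = secrets.describe_secret(SecretId=arn)
        current = next(
            version for version, stages in metadata["VersionIdsToStages"].items()
            if "AWSCURRENT" in stages
        )
        secrets.update_secret_version_stage(
            SecretId=arn,
            VersionStage="AWSCURRENT",
            MoveToVersionId=token,
            RemoveFromVersionId=current,
        )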

In a multi-account scenario, it’s common to have a central or shared AWS account that owns the ACM PCA resource, while workloads that are deployed in other AWS accounts use certificates issued by the ACM PCA. This can be achieved in two ways:

  1. Secrets in Secrets Manager can be shared with other AWS accounts by using resource-based policies. Once shared, the secrets can be deployed to resources, such as EC2 instances.
  2. You can share the central ACM PCA with other AWS accounts by using AWS Resource Access Manager or ACM PCA resource-based policies. These two options allow the receiving AWS account or accounts to issue private certificates by using the shared ACM PCA. These issued certificates can then use Secrets Manager to manage the secret in the child account and leverage features like rotation.

We will focus on the first case, sharing secrets across accounts by using resource-based policies.

Solution cost

The cost for running this solution comes from the following services:

  • AWS Certificate Manager Private Certificate Authority (ACM PCA)
    Referring to the pricing page for ACM PCA, this solution incurs a prorated monthly charge of $400 for each CA that is created. A CA can be deleted the same day it’s created, leading to a charge of around $13/day (400 * 12 / 365.25). In addition, there is a cost for issuing certificates using ACM PCA. For the first 1000 certificates, this cost is $0.75. For this demonstration, you only need two certificates, resulting in a total charge of $1.50 for issuing certificates using ACM PCA. In all, the use of ACM PCA in this solution results in a charge of $14.50.
  • Amazon EC2
    The CloudFormation templates create t2.micro instances that cost $0.0116/hour, if they’re not eligible for Free Tier.
  • Secrets Manager
    There is a 30-day free trial for Secrets Manager, which is initiated when the first secret is created. After the free trial has completed, it costs $0.40 per secret stored per month. You will use two secrets for this solution and can schedule these for deletion after seven days, resulting in a prorated charge of $0.20.
  • Lambda
    Lambda has a free usage tier that allows for 1 million free requests per month and 400,000 GB-seconds of compute time per month. This fits within the usage for this blog, making the cost $0.
  • AWS KMS
    A single key created by one of the CloudFormation templates costs $1/month. The first 20,000 requests to AWS KMS are free, which fits within the usage of the test environment. In a production scenario, AWS KMS would charge $0.03 per 10,000 requests involving this key.

There are no charges for Systems Manager Run Command.

See the “Clean up resources” section of this blog post to get information on how to delete the resources that you create for this environment.

Deploy the solution

Now we’ll walk through the steps to deploy the solution. The CloudFormation templates and Lambda function code can be found in the AWS GitHub repository.

Create a CA to issue certificates

First, you’ll create an ACM PCA to issue private certificates. A common practice we see with customers is using a subordinate CA in AWS that is used to issue end-entity certificates for applications and workloads in the cloud. This subordinate can either point to a root CA in ACM PCA that is maintained by a central team, or to an existing on-premises public key infrastructure (PKI). There are some considerations when creating a CA hierarchy in ACM.

For demonstration purposes, you need to create a CA that can issue end-entity certificates. If you have an existing PKI that you want to use, you can create a subordinate CA that is signed by an external CA that can issue certificates. Otherwise, you can create a root CA and begin building a PKI on AWS. During creation of the CA, make sure that ACM has permissions to automatically renew certificates, because this feature will be used in later steps.

You should have one or more private CAs in the ACM console, as shown in Figure 2.

Figure 2: A private CA in the ACM PCA console

You will use two CloudFormation templates for this architecture. The first is launched in the same account where your private CA lives, and the second is launched in a different account. The first template generates the following: a Lambda function used for Secrets Manager rotation, an AWS KMS key to encrypt secrets, and a Systems Manager Run Command document to install the certificate on an Apache Server running on EC2 in Amazon Virtual Private Cloud (Amazon VPC). The second template launches the same Systems Manager Run Command document and EC2 environment.

To deploy the resources for the first template, select the following Launch Stack button. Make sure you’re in the N. Virginia (us-east-1) Region.

Select the Launch Stack button to launch the template

The template takes a few minutes to launch.

Use case #1: Create and deploy an ACM certificate

For the first use case, you’ll create a certificate by using the ACM defaults for private certificates, and then deploy it.

Create a Secrets Manager secret

To begin, create your first secret in Secrets Manager. You will create these secrets in the console to see how the service can be set up and used, but all these actions can be done through the AWS Command Line Interface (AWS CLI) or AWS SDKs.
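
For example, a minimal boto3 sketch of the same setup might look like the following. The secret name, CA ARN, KMS key identifier, and rotation function ARN are placeholders; the key/value pairs mirror the parameters you enter in the console procedure that follows.

import json
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Placeholders: secret name, CA ARN, and KMS key identifier are specific to your account.
secrets.create_secret(
    Name="acm-issued-certificate",
    KmsKeyId="<KMS key ID or ARN of CAKey>",
    SecretString=json.dumps({
        "CERTIFICATE_TYPE": "ACM_ISSUED",
        "CA_ARN": "<ARN of your certificate-issuing CA in ACM PCA>",
        "COMMON_NAME": "server1.example",
        "ENVIRONMENT": "TEST",
    }),
)

# Enable rotation with the Lambda function created by the CloudFormation stack.
# Any interval above 305 days works for the 365-day ACM certificate (see step 7 below).
secrets.rotate_secret(
    SecretId="acm-issued-certificate",
    RotationLambdaARN="<ARN of the function that starts with <Stack Name>-SecretsRotateFunction>",
    RotationRules={"AutomaticallyAfterDays": 330},
)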

To create a secret

  1. Navigate to the Secrets Manager console.
  2. Choose Store a new secret.
  3. For the secret type, select Other type of secrets.
  4. The Lambda rotation function has a set of required parameters in the secret type, depending on what kind of certificate needs to be generated. For this first secret, you're going to create an ACM_ISSUED certificate. Provide the following parameters.

    Key Value
    CERTIFICATE_TYPE ACM_ISSUED
    CA_ARN The Amazon Resource Name (ARN) of your certificate-issuing CA in ACM PCA
    COMMON_NAME The end-entity name for your certificate (for example, server1.example)
    ENVIRONMENT TEST (You need this later on to test the renewal of certificates. If using this outside of the blog walkthrough, set it to something like DEV or PROD.)
  5. For Encryption key, select CAKey, and then choose Next.
  6. Give the secret a name and optionally add tags or a description. Choose Next.
  7. Select Enable automatic rotation and choose the Lambda function that starts with <CloudFormation Stack Name>-SecretsRotateFunction. Because you're creating an ACM-issued certificate, ACM handles the renewal 60 days before the certificate expires. The validity is set to 365 days, so any rotation interval higher than 305 days would work. Choose Next.
  8. Review the configuration, and then choose Store.
  9. This will take you back to a list of your secrets, and you will see your new secret, as shown in Figure 3. Select the new secret.

    Figure 3: The new secret in the Secrets Manager console

  10. Choose Retrieve secret value to confirm that CERTIFICATE_PEM, PRIVATE_KEY_PEM, CERTIFICATE_CHAIN_PEM, and CERTIFICATE_ARN are set in the secret value.

You now have an ACM-issued certificate that can be deployed to an end entity.

Deploy to an end entity

For testing purposes, you will now deploy the certificate that you just created to an Apache Server.

To deploy the certificate to the Apache Server

  1. In a new tab, navigate to the Systems Manager console.
  2. Choose Documents at the bottom left, and then choose the Owned by me tab.
  3. Choose RunUpdateTLS.
  4. Choose Run command at the top right.
  5. Copy and paste the secret ARN from Secrets Manager and make sure there are no leading or trailing spaces.
  6. Select Choose instances manually, and then choose ApacheServer.
  7. Select CloudWatch output to track progress.
  8. Choose Run.

The certificate and private key are now installed on the server, and it has been restarted.

To verify that the certificate was installed

  1. Navigate to the EC2 console.
  2. In the dashboard, choose Running Instances.
  3. Select ApacheServer, and choose Connect.
  4. Select Session Manager, and choose Connect.
  5. When you’re logged in to the instance, enter the following command.
    openssl s_client -connect localhost:443 | openssl x509 -text -noout
    

    This will display the certificate that the server is using, along with other metadata like the certificate chain and validity period. For the validity period, note the Not Before and Not After dates and times, as shown in figure 4.

    Figure 4: Server certificate

Now, test the rotation of the certificate manually. In a production scenario, this process would be automated by using maintenance windows. Maintenance windows allow for the least amount of disruption to the applications that are using certificates, because you can determine when the server will update its certificate.
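
The console's Rotate secret immediately action (step 2 in the following procedure) also has a direct SDK equivalent if you'd rather script the test; the secret name below is a placeholder.

import boto3

# Triggers an immediate rotation, equivalent to "Rotate secret immediately" in the console.
boto3.client("secretsmanager", region_name="us-east-1").rotate_secret(
    SecretId="acm-issued-certificate"
)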

To test the rotation of the certificate

  1. Navigate back to your secret in Secrets Manager.
  2. Choose Rotate secret immediately. Because you set the ENVIRONMENT key to TEST in the secret, this rotation will renew the certificate. When the key isn’t set to TEST, the rotation function pulls down the renewed certificate based on its rotation schedule, because ACM is managing the renewal for you. In a couple of minutes, you’ll receive an email from ACM stating that your certificate was rotated.
  3. Pull the renewed certificate down to the server, following the same steps that you used to deploy the certificate to the Apache Server.
  4. Follow the steps that you used to verify that the certificate was installed to make sure that the validity date and time has changed.

Use case #2: Create and deploy an ACM PCA certificate by using custom templates

Next, use the second CloudFormation template to create a certificate, issued by ACM PCA, which will be deployed to an Apache Server in a different account. Sign in to your other account and select the following Launch Stack button to launch the CloudFormation template.

Select the Launch Stack button to launch the template

This creates the same Run Command document you used previously, as well as the EC2 and Amazon VPC environment running an Apache Server. This template takes in a parameter for the KMS key ARN; this can be found in the first template's output section, shown in Figure 5.

Figure 5: CloudFormation outputs

While that’s completing, sign in to your original account so that you can create the new secret.

To create the new secret

  1. Follow the same steps you used to create a secret, but change the secret values passed in to the following.

    Key Value
    CA_ARN The ARN of your certificate-issuing CA in ACM PCA
    COMMON_NAME You can use any name you want, such as server2.example
    TEMPLATE_ARN For testing purposes, use arn:aws:acm-pca:::template/EndEntityCertificate/V1. This template ARN determines what type of certificate is being created and your desired path length. For more information, see Understanding Certificate Templates.
    KEY_ALGORITHM TYPE_RSA (You can also use TYPE_DSA)
    KEY_SIZE 2048 (You can also use 1024 or 4096)
    SIGNING_HASH sha256 (You can also use sha384 or sha512)
    SIGNING_ALGORITHM RSA (You can also use ECDSA if the key type for your issuing CA is set to ECDSA P256 or ECDSA P384)
    CERTIFICATE_TYPE ACM_PCA_ISSUED
  2. Add the following resource policy during the name and description step. This gives your other account access to pull this secret down to install the certificate on its Apache Server. (If you prefer to attach this policy programmatically, see the boto3 sketch that follows this procedure.)
    {
      "Version" : "2012-10-17",
      "Statement" : [ {
        "Effect" : "Allow",
        "Principal" : {
          "AWS" : "<ARN in output of second CloudFormation Template>"
        },
        "Action" : "secretsmanager:GetSecretValue",
        "Resource" : "*"
      } ]
    }
    

  3. Finish creating the secret.
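
If you'd rather attach the resource policy from step 2 programmatically instead of in the console, a minimal boto3 sketch looks like the following; the secret name is a placeholder, and the principal is the same CloudFormation output referenced above.

import json
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Placeholder secret name; the principal is the role ARN from the second template's output.
secrets.put_resource_policy(
    SecretId="acm-pca-issued-certificate",
    ResourcePolicy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "<ARN in output of second CloudFormation Template>"},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
        }],
    }),
)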

After the secret has been created, the last thing you need to do is add permissions to the KMS key policy so that your other account can decrypt the secret when installing the certificate on your server.

To add AWS KMS permissions

  1. Navigate to the AWS KMS console, and choose CAKey.
  2. Next to the key policy name, choose Edit.
  3. For the Statement ID (SID) Allow use of the key, add the ARN of the EC2 instance role in the other account. This can be found in the CloudFormation templates as an output called ApacheServerInstanceRole, as shown in Figure 5. The Statement should look something like this:
    {
                "Sid": "Allow use of the key",
                "Effect": "Allow",
                "Principal": {
                    "AWS": [
                        "arn:aws:iam::<AccountID with CA>:role/<Apache Server Instance Role>",
                        "arn:aws:iam:<Second AccountID>:role/<Apache Server Instance Role>"
                    ]
                },
                "Action": [
                    "kms:Encrypt",
                    "kms:Decrypt",
                    "kms:ReEncrypt*",
                    "kms:GenerateDataKey*",
                    "kms:DescribeKey"
                ],
                "Resource": "*"
    }
    

Your second account now has permissions to pull down the secret and certificate to the Apache Server. Follow the same steps described in the earlier section, “Deploy to an end entity.” Test rotating the secret the same way, and make sure the validity period has changed. You may notice that you didn’t get an email notifying you of renewal. This is because the certificate isn’t issued by ACM.

In this demonstration, you may have noticed you didn’t create resources that pull down the secret in different Regions, just in different accounts. If you want to deploy certificates in different Regions from the one where you create the secret, the process is exactly the same as what we described here. You don’t need to do anything else to accomplish provisioning and deploying in different Regions.

Clean up resources

Finally, delete the resources you created in the earlier steps, in order to avoid additional charges described in the section, “Solution cost.”

To delete all the resources created:

  1. Navigate to the CloudFormation console in both accounts, and select the stack that you created.
  2. Choose Actions, and then choose Delete Stack. This will take a few minutes to complete.
  3. Navigate to the Secrets Manager console in the CA account, and select the secrets you created.
  4. Choose Actions, and then choose Delete secret. This won’t automatically delete the secret, because you need to set a waiting period that allows for the secret to be restored, if needed. The minimum time is 7 days.
  5. Navigate to the Certificate Manager console in the CA account.
  6. Select the certificates that were created as part of this blog walkthrough, choose Actions, and then choose Delete.
  7. Choose Private CAs.
  8. Select the subordinate CA you created at the beginning of this process, choose Actions, and then choose Disable.
  9. After the CA is disabled, choose Actions, and then Delete. Similar to the secrets, this doesn’t automatically delete the CA but marks it for deletion, and the CA can be recovered during the specified period. The minimum waiting period is also 7 days.

Conclusion

In this blog post, we demonstrated how you could use Secrets Manager to rotate, store, and distribute private certificates issued by ACM and ACM PCA to end entities. Secrets Manager uses AWS KMS to secure these secrets during storage and delivery. You can introduce additional automation for deploying the certificates by using Systems Manager Maintenance Windows. This allows you to define a schedule for when to deploy potentially disruptive changes to EC2 instances.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Secrets Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Maitreya Ranganath

Maitreya is an AWS Security Solutions Architect. He enjoys helping customers solve security and compliance challenges and architect scalable and cost-effective solutions on AWS.

Author

Blake Franzen

Blake is a Security Solutions Architect with AWS in Seattle. His passion is driving customers to a more secure AWS environment while ensuring they can innovate and move fast. Outside of work, he is an avid movie buff and enjoys recreational sports.

Signing executables with HSM-backed certificates using multiple Windows instances

Post Syndicated from Karim Hamdy Abdelmonsif Ibrahim original https://aws.amazon.com/blogs/security/signing-executables-with-hsm-backed-certificates-using-multiple-windows-instances/

Customers use code signing certificates to digitally sign software, documents, and other certificates. Signing is a cryptographic tool that lets users verify that the code hasn’t been altered and that the software, documents or other certificates can be trusted.

This blog post shows you how to configure your applications so you can use a key pair already on your hardware security module (HSM) to generate signatures using any Windows instance. Many customers use multiple Amazon Elastic Compute Cloud (Amazon EC2) instances to sign workloads using the same key pair. You must configure these instances to use a pre-existing key pair from the HSM. In this blog post, I show you how to create a key container on a new Windows instance from an existing key pair in AWS CloudHSM, and then update the certificate store to associate the newly imported certificate with the new container. I also show you how to use a common application to sign executables with this key pair.

Every certificate is associated with a key pair, which includes a private key and a public key. You can only trust a signature if you can be sure that the private key has remained confidential and can be used only by the owner of the certificate. You achieve this goal by generating the key pair on an HSM and securely storing the private key on the HSM. Enterprise certificate authority (CA) or public key infrastructure (PKI) applications are configured to use this private key in the HSM whenever they need to use the corresponding certificate to sign. This configuration is generally handled transparently between the application and the HSM on the Windows instance your application is running on. The process gets tricky when you want to use multiple Windows instances to sign using the same key pair. This is especially true if your current EC2 instance that acts as a Windows Server CA, which you used to issue the HSM-backed certificate, is deleted and you have a backup of the HSM-backed certificate.

Before we get into the details, you need to know about a library called the key storage provider (KSP). Windows systems use KSP libraries to connect applications to an HSM. For each HSM brand, such as CloudHSM, you need a corresponding KSP to run operations that involve cryptographic keys stored on that HSM. From your application, select the KSP that corresponds with the HSM you want to use to store (or use) your keys. All KSPs associate keys on their HSM with metadata in the Microsoft ecosystem using key containers. Key containers map the metadata in certificates with metadata on the HSM, which allows the application to properly address keys. The list of certificates available for Microsoft utilities to sign with is contained in a trust store. To use the same key pair across multiple Windows instances, you must copy the key containers to each instance—or create a new key container from an existing key pair in each instance—and import the corresponding certificate into the trust store for each instance.

Prerequisites

The solution in this post assumes that you’ve completed the steps in Signing executables with Microsoft SignTool.exe using AWS CloudHSM-backed certificates. You should already have your HSM-backed certificate on one Windows instance.

Before you implement the solution, you must:

  1. Install the AWS CloudHSM client on the new instance and make sure that you can interact with HSM in your CloudHSM cluster.
  2. Verify the CloudHSM KSP and CNG providers installation on your new instance.
  3. Set the login credentials for the HSM on your system by using Windows Credentials Manager. I recommend that you reboot your instance after setting up the credentials.

Note: The login credentials identify a crypto user (CU) in the HSM that has access to the key pair in CloudHSM.

Architectural overview

 

Figure 1: Architectural overview

This diagram shows a virtual private cloud (VPC) that contains an EC2 instance running Windows Server 2016 that resides on private subnet 1. This instance will run the CloudHSM client software and will use your HSM-backed certificate with a key pair already on your HSM to sign executable files. The instance can be accessed through a VPN connection. It will also have security groups that enable RDP access for your on-premises network. Private subnet 2 hosts the elastic network interface for the CloudHSM cluster, which has a single HSM.

Out of scope

The focus of this blog post is how to use an HSM-backed certificate with a key pair already on your HSM to sign executable files from any Windows instance using Microsoft SignTool.exe. This post isn’t intended to represent any best practices for implementing code signing or Amazon EC2. For more information, see the NIST cybersecurity whitepaper Security Considerations for Code Signing and Best practices for Amazon EC2, respectively.

Deploy the solution

To deploy the solution, you use certutil, import_key, and SignTool. Certutil is a Microsoft tool that helps you examine your system for available certificates and key containers. Import_key, a tool provided by CloudHSM, generates a local key container for a key pair that's on your HSM. SignTool is a Microsoft tool that you use to digitally sign files, verify signatures, and timestamp files.

You will need the following:

Certificates or key material Purpose
<my root certificate>.cer Root certificate
<my signed certificate>.cer HSM-backed signing certificate
<signed certificate in base64>.cer HSM-backed signing certificate in base64 format
<public key handle> Public key handle of the signing certificate
<private key handle> Private key handle of the signing certificate

Import the HSM-backed certificate and its RootCA chain certificate into the new instance

Before you can use third-party tools such as SignTool to generate signatures using the HSM-backed certificate, you must move the signing certificate file to the Personal certificate store in the new Windows instance.

To do that, you copy the HSM-backed certificate that your application uses for signing operations and its root certificate chain from the original instance to the new Windows instance.

If you issued your signing certificate through a private CA (like in my example), you must deploy a copy of the root CA certificate and any intermediate certificates from the private CA to any systems you want to use to verify the integrity of your signed file.

To import the HSM-backed certificate and root certificate

  1. Sign in to the Windows Server that has the private CA that you used to issue your signing certificate. Then, run the following certutil command to export the root CA to a new file. Replace <my root certificate> with a name that you can remember easily.
    C:\Users\Administrator\Desktop>certutil -ca.cert <my root certificate>.cer
    
    CA cert[0]: 3 -- Valid
    CA cert[0]:
    
    -----BEGIN CERTIFICATE-----
    MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w0BAQUFADCBiDELMAkGA1UEBhMC
    VVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6
    b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAd
    BgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wHhcNMTEwNDI1MjA0NTIxWhcN
    MTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYD
    VQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25z
    b2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFt
    YXpvbi5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ
    21uUSfwfEvySWtC2XADZ4nB+BLYgVIk60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9T
    rDHudUZg3qX4waLG5M43q7Wgc/MbQITxOUSQv7c7ugFFDzQGBzZswY6786m86gpE
    Ibb3OhjZnzcvQAaRHhdlQWIMm2nrAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4
    nUhVVxYUntneD9+h8Mg9q6q+auNKyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0Fkb
    FFBjvSfpJIlJ00zbhNYS5f6GuoEDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTb
    NYiytVbZPQUQ5Yaxu2jXnimvw3rrszlaEXAMPLE=
    -----END CERTIFICATE-----
            
    CertUtil: -ca.cert command completed successfully.
    
    C:\Users\Administrator\Desktop>
    

  2. Copy the <my root certificate>.cer file to your new Windows instance and run the following certutil command. This moves the root certificate from the file into the Trusted Root Certification Authorities store in Windows. You can verify that it exists by running certlm.msc and viewing the Trusted Root Certification Authorities certificates.
    C:\Users\Administrator\Desktop>certutil -addstore "Root" <my root certificate>.cer
    
    Root "Trusted Root Certification Authorities"
    Signature matches Public Key
    Certificate "MYRootCA" added to store.
    CertUtil: -addstore command completed successfully.
    

  3. Copy the HSM-backed signing certificate from the original instance to the new one, and run the following certutil command. This moves the certificate from the file into the Personal certificate store in Windows.
    C:\Users\Administrator\Desktop>certutil -addstore "My" <my signed certificate>.cer
    
    My "Personal"
    Certificate "www.mydomain.com" added to store.
    CertUtil: -addstore command completed successfully.
    

  4. Verify that the certificate exists in your Personal certificate store by running the following certutil command. The following sample output from certutil shows the serial number. Take note of the certificate serial number to use later.
    C:\Users\Administrator\Desktop>certutil -store my
    
    my "Personal"
    ================ Certificate 0 ================
    Serial Number: <certificate serial number>
    Issuer: CN=MYRootCA
     NotBefore: 2/5/2020 1:38 PM
     NotAfter: 2/5/2021 1:48 PM
    Subject: CN=www.mydomain.com, OU=Certificate Management, O=Information Technology, L=Houston, S=Texas, C=US
    Non-root Certificate
    Cert Hash(sha1): 5aaef93e7e972b1187363d880cfa3f71507c2e24
    No key provider information
    Cannot find the certificate and private key for decryption.
    CertUtil: -store command completed successfully.
    

Retrieve the key handles of the RSA key pair on the HSM

In this step, you retrieve the key handles of the existing public and private key pair on your CloudHSM in order to use that key pair to create a key container on the new Windows instance.

One way to get the key handles of an existing key pair on the CloudHSM is to use the modulus value. Because the certificate and its public and private keys must all share the same modulus value, and you already have the signing certificate, you can view the certificate's modulus value by using the OpenSSL tool. Then, you use the findKey command in key_mgmt_util to search for the public and private key handles on the HSM by using the value of the certificate modulus.
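
If you'd rather script this step, the following minimal Python sketch (using the third-party cryptography package, which is not part of the original walkthrough) prints the certificate modulus in the same uppercase hexadecimal form that openssl produces; the certificate file name is a placeholder.

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

# Placeholder file name; works with either a base64 (PEM) or binary (DER) .cer file.
with open("my_signed_certificate.cer", "rb") as f:
    data = f.read()

try:
    cert = x509.load_pem_x509_certificate(data)
except ValueError:
    cert = x509.load_der_x509_certificate(data)

public_key = cert.public_key()
assert isinstance(public_key, rsa.RSAPublicKey), "expected an RSA certificate"

# Print the modulus as uppercase hex, matching `openssl x509 -noout -modulus` output.
print("Modulus=" + format(public_key.public_numbers().n, "X"))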

To retrieve the key handles

  1. Download the OpenSSL for Windows installation package.

    Note: In my example, I downloaded Win64OpenSSL-1_1_1d.exe.

  2. Right-click on the downloaded file and choose Run as administrator.
  3. Follow the installation instructions, accepting all default settings. Then choose Install.
    1. If the error message “The Win64 Open SSL Installation Project setup has detected that the following critical component is missing…”—shown in Figure 2—appears, you need to install Microsoft Visual C++ Redistributables to complete this procedure.

      Figure 2: OpenSSL installation error message

    2. Choose Yes to download and install the required Microsoft Visual C++ package on your system.
    3. Run the OpenSSL installer again and follow the installation instructions, accepting all default settings. Then choose Install.
  4. Choose Finish when the installation is complete.

    With the installation complete, OpenSSL for Windows can be found as OpenSSL.exe in C:\Program Files\OpenSSL-Win64\bin. Always open the program as the administrator.

  5. On the new CloudHSM client instance, copy your certificate to C:\Program Files\OpenSSL-Win64\bin and run the command certutil -encode <my signed certificate>.cer <signed certificate in base64>.cer to export the certificate using base64 .cer format. This exports the certificate to a file with the name you enter in place of <signed certificate in base64>.
    C:\Program Files\OpenSSL-Win64\bin>certutil -encode <my signed certificate>.cer <signed certificate in base64>.cer
    
    Input Length = 1066
    Output Length = 1526
    CertUtil: -encode command completed successfully.
    

  6. Run the command openssl x509 -noout -modulus -in <signed certificate in base64>.cer to view the certificate modulus.
    C:\Program Files\OpenSSL-Win64\bin>openssl x509 -noout -modulus -in <signed certificate in base64>.cer
    
    Modulus=9D1D625C041F7FAF076780E486CA2DB2FB846982E88804030F9C84F6CF553925C287934C18B92606EE9A4438F80E47961D7B2CD28213EADE2078BE1A921E6D164CC07F99DA42CF6DD1767A6392FC4BC2B19592474782E1B8574F4A46A93626CD2A8D56405EA7DFCED8DA7042F6FC6D3716CC1649174E93C66F0A9EC7EEFEC9661D43FD2BC8E2E261C06A619E4AF3B5E13190215F72EE5BDE2090818031F8AAD0AA7E934894DC54DF5F1E7577645137637F400E10B9ECDC0870C78C99E8027A86807CD719AA05931D1A4326A5ED1C3687C8EA8E54DF62BFD1851A92473348C98973DEF850B8A88A443A56E93B997F3286A1DC274E6A8DD187D8C59BAB32A6919F
    

  7. Save the certificate modulus in a text file named modulus.txt.
  8. Run the key_mgmt_util command line tool, and log in as the CU, as described in Getting Started with key_mgmt_util. Replace <cu username> and <cu password> with the username and password of the CU.
    Command: loginHSM -u CU -s <CU username> -p <CU password>
    
         	Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS
    
            Cluster Error Status
            Node id 13 and err state 0x00000000 : HSM Return: SUCCESS
            Node id 14 and err state 0x00000000 : HSM Return: SUCCESS
    

  9. Run the following findKey command to find the public key handle that has the same RSA modulus that you generated previously. Enter the path to the modulus.txt file that you created in step 7. Take note of the public key handle that’s returned so that you can use it in the following steps.
    Command: findKey -c 2 -m C:\\Users\\Administrator\\Desktop\\modulus.txt
    
            Total number of keys present: 1
    
            Number of matching keys from start index 0::0
    
            Handles of matching keys:
            <public key handle>
    
            Cluster Error Status
            Node id 13 and err state 0x00000000 : HSM Return: SUCCESS
            Node id 14 and err state 0x00000000 : HSM Return: SUCCESS
    
            Cfm3FindKey returned: 0x00 : HSM Return: SUCCESS
    

  10. Run the following findKey command to find the private key handle that has the same RSA modulus that you generated previously. Enter the path to the modulus.txt file that you created in step 7. Take note of the private key handle that’s returned so that you can use it in the following steps.
    Command: findKey -c 3 -m C:\\Users\\Administrator\\Desktop\\modulus.txt
    
            Total number of keys present: 1
    
            Number of matching keys from start index 0::0
    
            Handles of matching keys:
            <private key handle>
    
            Cluster Error Status
            Node id 13 and err state 0x00000000 : HSM Return: SUCCESS
            Node id 14 and err state 0x00000000 : HSM Return: SUCCESS
    
            Cfm3FindKey returned: 0x00 : HSM Return: SUCCESS
    

Create a new key container for the existing public and private key pair in the CloudHSM

To use the same key pair across new Windows instances, you must copy over the key containers to each instance, or create a new key container from an existing key pair in the key storage provider of each instance. In this step, you create a new key container to hold the public key of the certificate and its corresponding private key metadata. To create a new key container from an existing public and private key pair in the HSM, first make sure to start the CloudHSM client daemon. Then, use the import_key.exe utility, which is included in CloudHSM version 3.0 and later.

To create a new key container

  1. Run the following import_key.exe command, replacing <private key handle> and <public key handle> with the private and public key handles that you retrieved in the previous procedure. This creates the HSM key pair in a new key container in the key storage provider.
    C:\Program Files\Amazon\CloudHSM>import_key.exe -from HSM -privateKeyHandle <private key handle> -publicKeyHandle <public key handle>
    
    Represented 1 keypairs in Cavium Key Storage Provider.
    

    Note: If you get the error message n3fips_password is not set, make sure that you set the login credentials for the HSM on your system.

  2. You can verify the new key container by running the following certutil command to list the key containers in your key storage provider (KSP). Take note of the key container name to use in the following steps.
    C:\Program Files\Amazon\CloudHSM>certutil -key -csp "Cavium Key Storage provider"
    
    Cavium Key Storage provider:
      <key container name>
      RSA
    
    
    CertUtil: -key command completed successfully.
    

Update the certificate store

Now you have everything in place: the imported certificate in the Personal certificate store of the new Windows instance and the key container that represents the key pair in CloudHSM. In this step, you associate the certificate to the key container that you made a note of earlier.

To update the certificate store

  1. Create a file named repair.txt as shown following.

    Note: You must use the key container name of your certificate that you got in the previous step as the input for the repair.txt file.

    [Properties]
    11 = "" ; Add friendly name property
    2 = "{text}" ; Add Key Provider Information property
    _continue_="Container=<key container name>&"
    _continue_="Provider=Cavium Key Storage Provider&"
    _continue_="Flags=0&"
    _continue_="KeySpec=2"
    

  2. Make sure that the CloudHSM client daemon is still running. Then, use the certutil -repairstore verb with the certificate serial number that you took note of earlier to associate the certificate with the new key container, as shown in the following sample command and output. See the Microsoft documentation for information about the -repairstore verb.
    certutil -repairstore my <certificate serial number> repair.txt
    
    C:\Users\Administrator\Desktop>certutil -repairstore my <certificate serial number> repair.txt
    
    my "Personal"
    ================ Certificate 0 ================
    Serial Number: <certificate serial number>
    Issuer: CN=MYRootCA
     NotBefore: 2/5/2020 1:38 PM
     NotAfter: 2/5/2021 1:48 PM
    Subject: CN=www.mydomain.com, OU=Certificate Management, O=Information Technology, L=Houston, S=Texas, C=US
    Non-root Certificate
    Cert Hash(sha1): 5aaef93e7e972b1187363d880cfa3f71507c2e24
    CertUtil: -repairstore command completed successfully.
    

  3. Run the following certutil command to verify that your certificate has been associated with the new key container successfully.
    C:\Users\Administrator\Desktop>certutil -store my
    
    my "Personal"
    ================ Certificate 0 ================
    Serial Number: <certificate serial number>
    Issuer: CN=MYRootCA
     NotBefore: 2/5/2020 1:38 PM
     NotAfter: 2/5/2021 1:48 PM
    Subject: CN=www.mydomain.com, OU=Certificate Management, O=Information Technology, L=Houston, S=Texas, C=US
    Non-root Certificate
    Cert Hash(sha1): 5aaef93e7e972b1187363d880cfa3f71507c2e24
      Key Container = CNGRSAPriv-3145768-3407903-26dd1d
      Provider = Cavium Key Storage Provider
    Private key is NOT exportable
    Encryption test passed
    CertUtil: -store command completed successfully.
    

Now you can use this certificate and its corresponding private key with any third-party signing tool on Windows.

Use the certificate with Microsoft SignTool

Now that you have everything in place, you can use the certificate to sign a file using the Microsoft SignTool.

To use the certificate

  1. Get the thumbprint of your certificate. To do this, right-click PowerShell and choose Run as administrator. Enter the following command:
    PS C:\>Get-ChildItem -path cert:\LocalMachine\My
    

    If successful, you should see output similar to the following.

    PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
    
    Thumbprint                                Subject
    ----------                                -------
    <thumbprint>   CN=www.mydomain.com, OU=Certificate Management, O=Information Technology, L=Ho...
    

  2. Copy the thumbprint. You need it to perform the actual signing operation on a file.
  3. Download and install one of the following versions of the Microsoft Windows SDK on your Windows EC2 instance:
    Microsoft Windows 10 SDK
    Microsoft Windows 8.1 SDK
    Microsoft Windows 7 SDK

    Install the latest applicable Windows SDK package for your operating system. For example, for Microsoft Windows 2012 R2 or later versions, you should install the Microsoft Windows 10 SDK.

  4. To open the SignTool application, navigate to the application directory within PowerShell. This is usually:
    C:\Program Files (x86)\Windows Kits\<SDK version>\bin\<version number>\<CPU architecture>\signtool.exe
    

  5. When you’ve located the directory, sign your file by running the following command. Remember to replace <thumbprint> and <test.exe> with your own values. <test.exe> can be any executable file in your directory.
    PS C:\>.\signtool.exe sign /v /fd sha256 /sha1 <thumbprint> /sm /as C:\Users\Administrator\Desktop\<test.exe>
    

    You should see a message like the following:

    Done Adding Additional Store
    Successfully signed: C:\Users\Administrator\Desktop\<test.exe>
    
    Number of files successfully Signed: 1
    Number of warnings: 0
    Number of errors: 0
    

  6. (Optional) To verify the signature on the file, you can use SignTool.exe with the verify option by using the following command.
    PS C:\>.\signtool.exe verify /v /pa C:\Users\Administrators\Desktop\<test.exe>
    

    If successful, you should see output similar to the following.

    Number of files successfully Verified: 1
    

Conclusion

In this post, I walked you through the process of using an HSM-backed certificate on a new Windows instance for signing operations. You used the import_key.exe utility to create a new key container from an existing private/public key pair in CloudHSM. Then, you updated the certificate store to associate your certificate with the key container. Finally, you saw how to use the HSM-backed certificate with the new key container to sign executable files. As you continue to use this solution, it’s important to keep Microsoft Windows SDK, CloudHSM client software, and any other installed software up-to-date.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS CloudHSM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Karim Hamdy Abdelmonsif Ibrahim

Karim is a Cloud Support Engineer II at AWS and a subject matter expert for AWS Shield. He’s an avid pentester and security enthusiast who’s worked in IT for almost 11 years. He obtained OSCP, OSWP, CISSP, CEH, ECSA, CISM, and AWS Certified Security Specialist certifications. Outside of work, he enjoys jet skiing, hanging out with friends, and watching space documentaries.