
Implement step-up authentication with Amazon Cognito, Part 1: Solution overview

Post Syndicated from Salman Moghal original https://aws.amazon.com/blogs/security/implement-step-up-authentication-with-amazon-cognito-part-1-solution-overview/

In this blog post, you’ll learn how to protect privileged business transactions that are exposed as APIs by using multi-factor authentication (MFA) or security challenges. These challenges have two components: what you know (such as passwords), and what you have (such as a one-time password token). By using these multi-factor security controls, you can implement step-up authentication to obtain a higher level of security when you perform critical transactions. In this post, we show you how you can use AWS services such as Amazon API Gateway, Amazon Cognito, Amazon DynamoDB, and AWS Lambda functions to implement step-up authentication by using a simple rule-based security model for your API resources.

Previously, identity and access management solutions have attempted to deliver step-up authentication by retrofitting their runtimes with stateful server-side management, which doesn’t scale in modern, stateless, cloud-centered application architectures. We’ll show you how to use a pluggable, stateless authentication implementation that integrates into your existing infrastructure without compromising security or performance. The Amazon API Gateway Lambda authorizer is a pluggable serverless function that acts as an intermediary step before an API action is invoked. This Lambda authorizer, coupled with a small SDK library that runs in the authorizer, provides step-up authentication.

This solution consists of two blog posts. This is Part 1, where you’ll learn about the step-up authentication solution architecture and design. In the next post, Implement step-up authentication with Amazon Cognito, Part 2: Deploy and test the solution, you’ll learn how to use a reference implementation to test the step-up authentication solution.

Prerequisites

The reference architecture in this post uses a purpose-built step-up authorization workflow engine, which uses a custom SDK. The custom SDK uses the DynamoDB service as a persistent layer. This workflow engine is generic and can be used across any API serving layers, such as API Gateway or Elastic Load Balancing (ELB) Application Load Balancer, as long as the API serving layers can intercept API requests to perform additional actions. The step-up workflow engine also relies on an identity provider that is capable of issuing an OAuth 2.0 access token.

There are three parts to the step-up authentication solution:

  1. An API serving layer with the capability to apply custom logic before applying business logic.
  2. An OAuth 2.0–capable identity provider system.
  3. A purpose-built step-up workflow engine.

The solution in this post uses Amazon Cognito as the identity provider, with an API Gateway Lambda authorizer to invoke the step-up workflow engine, and DynamoDB as a persistent layer used by the step-up workflow engine. You can see a reference implementation of the API Gateway Lambda authorizer in the step-up-auth GitHub repository. Additionally, the purpose-built step-up workflow engine provides two API endpoints (or API actions), /initiate-auth and /respond-to-challenge, realized alongside the API Gateway Lambda authorizer, to drive the step-up state of API invocations.

Note: If you decide to use an API serving layer other than API Gateway, or use an OAuth 2.0 identity provider besides Amazon Cognito, you will have to make changes to the accompanying sample code in the step-up-auth GitHub repository.

Solution architecture

Figure 1 shows the high-level reference architecture.

Figure 1: Step-up authentication high-level reference architecture

First, let’s talk about the core components in the step-up authentication reference architecture in Figure 1.

Identity provider

In order for a client application or user to invoke a protected backend API action, they must first obtain a valid OAuth token or JSON web token (JWT) from an identity provider. The step-up authentication solution uses Amazon Cognito as the identity provider. The step-up authentication solution and the accompanying step-up API operations use the access token to make the step-up authorization decision.

Protected backend

The step-up authentication solution uses API Gateway to protect backend resources. API Gateway supports several API integration types, and you can use any of them with this solution. In the accompanying sample code in the step-up-auth GitHub repository, Lambda proxy integration is used to simulate a protected backend resource.

Data design

The step-up authentication solution relies on two DynamoDB tables, a session table and a setting table. The session table contains the user’s step-up session information, and the setting table contains an API step-up configuration. The API Gateway Lambda authorizer (described in the next section) checks the setting table to determine whether the API request requires a step-up session. For more information about table structure and sample values, see the Step-up authentication data design section in the accompanying GitHub repository.

The session table has the DynamoDB Time to Live (TTL) feature enabled. An item stays in the session table until the TTL time expires, when DynamoDB automatically deletes the item. The TTL value can be controlled by using the environment variable SESSION_TABLE_ITEM_TTL. Later in this post, we’ll cover where to define this environment variable (in the Step-up solution design details section) and how to choose an optimal value for it (in the Additional considerations section).
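
As a rough illustration of the session-table write, here’s how a session item with a TTL might be created with boto3. This is a minimal sketch: the table and attribute names are illustrative, and the actual schema is described in the data design documentation in the repository.

import os
import time

import boto3

dynamodb = boto3.resource("dynamodb")
session_table = dynamodb.Table("step-up-auth-session")  # illustrative table name

def put_session(jti: str, state: str, token_exp: int) -> None:
    """Create a step-up session keyed by the access token's JTI claim.

    The item expires at the lesser of the token's expiry time and
    now + SESSION_TABLE_ITEM_TTL (in seconds).
    """
    default_ttl = int(os.environ.get("SESSION_TABLE_ITEM_TTL", "900"))  # 15 minutes
    expires_at = min(token_exp, int(time.time()) + default_ttl)
    session_table.put_item(
        Item={
            "id": jti,          # partition key: the access token's JTI claim
            "state": state,     # e.g., STEP_UP_REQUIRED or STEP_UP_COMPLETED
            "ttl": expires_at,  # attribute configured as the table's TTL attribute
        }
    )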

Authorizer

The step-up authentication solution uses a purpose-built request parameter-based Lambda authorizer (also called a REQUEST authorizer). This REQUEST authorizer helps protect privileged API operations that require a step-up session.

The authorizer verifies that the API request contains a valid access token in the HTTP Authorization header. Using the access token’s JWT ID (JTI) claim as a key, the authorizer then attempts to retrieve a step-up session from the session table. If a session exists and its state is set to either STEP_UP_COMPLETED or STEP_UP_NOT_REQUIRED, then the authorizer lets the API call through by generating an allow API Gateway Lambda authorizer policy. If the step-up state is set to STEP_UP_REQUIRED, then the authorizer returns a 401 Unauthorized response status code to the caller.

If a step-up session does not exist in the session table for the incoming API request, then the authorizer attempts to create a session. It first looks in the setting table for the API configuration. If an API configuration is found and the configuration status is set to STEP_UP_REQUIRED, it indicates that the user must provide additional authentication in order to call this API action. In this case, the authorizer will create a new session in the session table by using the access token’s JTI claim as the session key, and it will return a 401 Unauthorized response status code to the caller. If the API configuration in the setting table is set to STEP_UP_DENY, then the authorizer will return a deny API Gateway Lambda authorizer policy, thereby blocking the API invocation. The caller will receive a 403 Forbidden response status code.
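
The overall decision flow can be summarized in the following Python sketch. The reference implementation in the step-up-auth repository is written in TypeScript; here the table and attribute names are illustrative, and the returned dictionaries merely stand in for the allow policy, deny policy, and 401 Unauthorized outcomes that a real REQUEST authorizer produces.

import boto3

dynamodb = boto3.resource("dynamodb")
session_table = dynamodb.Table("step-up-auth-session")  # illustrative names
setting_table = dynamodb.Table("step-up-auth-setting")

def authorize(jti: str, api_path: str) -> dict:
    """Sketch of the REQUEST authorizer's step-up decision flow."""
    session = session_table.get_item(Key={"id": jti}).get("Item")
    if session:
        if session["state"] in ("STEP_UP_COMPLETED", "STEP_UP_NOT_REQUIRED"):
            return {"effect": "Allow"}          # allow policy: let the call through
        return {"statusCode": 401}              # STEP_UP_REQUIRED: challenge needed

    setting = setting_table.get_item(Key={"id": api_path}).get("Item")
    if setting and setting["status"] == "STEP_UP_REQUIRED":
        session_table.put_item(Item={"id": jti, "state": "STEP_UP_REQUIRED"})
        return {"statusCode": 401}              # client must call /initiate-auth
    if setting and setting["status"] == "STEP_UP_DENY":
        return {"effect": "Deny"}               # API Gateway returns 403 Forbidden
    return {"effect": "Allow"}                  # no step-up configured for this API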

The authorizer uses the purpose-built auth-sdk library to interface with both the session and setting DynamoDB tables. The auth-sdk library provides convenient methods to create, update, or delete items in tables. Internally, auth-sdk uses the DynamoDB v3 Client SDK.

Initiate auth endpoint

When you deploy the step-up authentication solution, you will get the following two API endpoints:

  1. The initiate step-up authentication endpoint (described in this section).
  2. The respond to step-up authentication challenge endpoint (described in the next section).

When a client receives a 401 Unauthorized response status code from API Gateway after invoking a privileged API operation, the client can start the step-up authentication flow by invoking the initiate step-up authentication endpoint (/initiate-auth).

The /initiate-auth endpoint does not require any extra parameters; it only requires the Amazon Cognito access_token to be passed in the Authorization header of the request. The /initiate-auth endpoint uses the access token to call the Amazon Cognito API actions GetUser and GetUserAttributeVerificationCode on behalf of the user.

After the /initiate-auth endpoint has determined the proper multi-factor authentication (MFA) method to use, it returns the MFA method to the client. There are three possible values for the MFA method:

  • MAYBE_SOFTWARE_TOKEN_STEP_UP, which is used when the MFA method cannot be determined.
  • SOFTWARE_TOKEN_STEP_UP, which is used when the user prefers software token MFA.
  • SMS_STEP_UP, which is used when the user prefers short message service (SMS) MFA.

Let’s take a closer look at how the /initiate-auth endpoint determines which MFA method to return to the client. The endpoint calls the Amazon Cognito GetUser API action to check the user’s preferences, and it takes the following actions (a condensed boto3 sketch follows the list):

  1. Determines what method of MFA the user prefers, either software token or SMS.
  2. If the user’s preferred method is set to software token, the endpoint returns SOFTWARE_TOKEN_STEP_UP code to the client.
  3. If the user’s preferred method is set to SMS, the endpoint sends an SMS message with a code to the user’s mobile device. It uses the Amazon Cognito GetUserAttributeVerificationCode API action to send the SMS message. After the Amazon Cognito API action returns success, the endpoint returns SMS_STEP_UP code to the client.
  4. When the user preferences don’t include either a software token or SMS, the endpoint checks whether the response from the Amazon Cognito GetUser API action contains a UserMFASetting response attribute list with either the SOFTWARE_TOKEN_MFA or SMS_MFA keyword. If the UserMFASetting response attribute list contains SOFTWARE_TOKEN_MFA, then the endpoint returns the SOFTWARE_TOKEN_STEP_UP code to the client. If it contains the SMS_MFA keyword, then the endpoint invokes the Amazon Cognito GetUserAttributeVerificationCode API action to send the SMS message (as in step 3). Upon a successful response from the Amazon Cognito API action, the endpoint returns the SMS_STEP_UP code to the client.
  5. If the UserMFASetting response attribute list from the Amazon Cognito GetUser API action contains neither the SOFTWARE_TOKEN_MFA nor the SMS_MFA keyword, then the endpoint looks for the phone_number_verified attribute. If it’s found, the endpoint sends an SMS message with a code to the user’s verified mobile phone number, using the Amazon Cognito GetUserAttributeVerificationCode API action (as in step 3). Otherwise, when no verified phone number is found, the endpoint returns the MAYBE_SOFTWARE_TOKEN_STEP_UP code to the client.
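
The following boto3 sketch condenses that decision logic. It is a simplification of the reference implementation (which is written in TypeScript), and the fallback ordering is abbreviated:

import boto3

cognito_idp = boto3.client("cognito-idp")

def determine_step_up_method(access_token: str) -> str:
    """Return SOFTWARE_TOKEN_STEP_UP, SMS_STEP_UP, or MAYBE_SOFTWARE_TOKEN_STEP_UP."""
    user = cognito_idp.get_user(AccessToken=access_token)
    preferred = user.get("PreferredMfaSetting")
    mfa_settings = user.get("UserMFASettingList", [])
    attributes = {a["Name"]: a["Value"] for a in user["UserAttributes"]}

    if preferred == "SOFTWARE_TOKEN_MFA" or "SOFTWARE_TOKEN_MFA" in mfa_settings:
        return "SOFTWARE_TOKEN_STEP_UP"
    if (preferred == "SMS_MFA" or "SMS_MFA" in mfa_settings
            or attributes.get("phone_number_verified") == "true"):
        # Send a one-time code over SMS to the user's verified phone number.
        cognito_idp.get_user_attribute_verification_code(
            AccessToken=access_token, AttributeName="phone_number"
        )
        return "SMS_STEP_UP"
    return "MAYBE_SOFTWARE_TOKEN_STEP_UP"  # MFA method cannot be determined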

The flowchart shown in Figure 2 illustrates the full decision logic.

Figure 2: MFA decision flow chart

Respond to challenge endpoint

The respond to challenge endpoint (/respond-to-challenge) is called by the client after it receives an appropriate MFA method from the /initiate-auth endpoint. The user must respond to the challenge appropriately by invoking /respond-to-challenge with a code and an MFA method.

The /respond-to-challenge endpoint receives two parameters in the POST body, one indicating the MFA method and the other containing the challenge response. Additionally, this endpoint requires the Amazon Cognito access token to be passed in the Authorization header of the request.

If the MFA method is SMS_STEP_UP, the /respond-to-challenge endpoint invokes the Amazon Cognito API action VerifyUserAttribute to verify the user-provided challenge response, which is the code that was sent by using SMS.

If the MFA method is SOFTWARE_TOKEN_STEP_UP or MAYBE_SOFTWARE_TOKEN_STEP_UP, the /respond-to-challenge endpoint invokes the Amazon Cognito API action VerifySoftwareToken to verify the challenge response that was sent in the endpoint payload.

After the user-provided challenge response is verified, the /respond-to-challenge endpoint updates the session table with the step-up session state STEP_UP_COMPLETED by using the access_token JTI. If the challenge response verification step fails, no changes are made to the session table. As explained earlier in the Data design section, the step-up session stays in the session table until the TTL time expires, when DynamoDB will automatically delete the item.
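
A minimal boto3 sketch of this verification step follows; the session table name is illustrative, and error handling is reduced to the code-mismatch case.

import boto3

cognito_idp = boto3.client("cognito-idp")
session_table = boto3.resource("dynamodb").Table("step-up-auth-session")  # illustrative

def respond_to_challenge(access_token: str, method: str, code: str, jti: str) -> bool:
    """Verify the challenge response, then mark the step-up session complete."""
    try:
        if method == "SMS_STEP_UP":
            # Verify the one-time code that was sent over SMS.
            cognito_idp.verify_user_attribute(
                AccessToken=access_token, AttributeName="phone_number", Code=code
            )
        else:  # SOFTWARE_TOKEN_STEP_UP or MAYBE_SOFTWARE_TOKEN_STEP_UP
            resp = cognito_idp.verify_software_token(
                AccessToken=access_token, UserCode=code
            )
            if resp.get("Status") != "SUCCESS":
                return False
    except cognito_idp.exceptions.CodeMismatchException:
        return False  # leave the session untouched on failure

    session_table.update_item(
        Key={"id": jti},
        UpdateExpression="SET #s = :completed",
        ExpressionAttributeNames={"#s": "state"},
        ExpressionAttributeValues={":completed": "STEP_UP_COMPLETED"},
    )
    return True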

Deploy and test the step-up authentication solution

If you want to test the step-up authentication solution at this point, go to the second part of this blog, Implement step-up authentication with Amazon Cognito, Part 2: Deploy and test the solution. That post provides instructions you can use to deploy the solution by using the AWS Cloud Development Kit (AWS CDK) in your AWS account, and test it by using a sample web application.

Otherwise, you can continue reading the rest of this post to review the details and code behind the step-up authentication solution.

Step-up solution design details

Now let’s dig deeper into the step-up authentication solution. Figure 3 expands on the high-level solution design in the previous section and highlights the sequence of events that must take place to perform step-up authentication. In this section, we’ll break down these sequences into smaller parts and discuss each by going over a detailed sequence diagram.

Figure 3: Step-up authentication detailed reference architecture

Let’s group the step-up authentication flow in Figure 3 into three parts:

  1. Create a step-up session (steps 1-6 in Figure 3)
  2. Initiate step-up authentication (steps 7-8 in Figure 3)
  3. Respond to the step-up challenge (steps 9-12 in Figure 3)

In the next sections, you’ll learn how the user’s API requests are handled by the step-up authentication solution, and how the user state is elevated by going through an additional challenge.

Create a step-up session

After the user successfully logs in, a step-up session is created when they invoke a privileged API action that is protected with the step-up Lambda authorizer. This authorizer determines whether to start a step-up challenge based on the configuration within the DynamoDB setting table, which might result in a step-up session being created in the DynamoDB session table. Let’s go over steps 1–6, shown in the architecture diagram in Figure 3, in more detail:

  • Step 1 – It’s important to note that the user must authenticate with Amazon Cognito initially. As a result, they must have a valid access token generated by the Amazon Cognito user pool.
  • Step 2 – The user then invokes a privileged API action and passes the access token in the Authorization header.
  • Step 3 – The API action is protected by the Lambda authorizer. The authorizer first validates the token by verifying its signature against the Amazon Cognito user pool public key. If the token is invalid, a 401 Unauthorized response status code is sent immediately, prompting the client to present a valid token.
  • Step 4 – The authorizer performs a lookup in the DynamoDB setting table to check whether the current request needs elevated privilege (also known as step-up privilege). In the setting table, you can define which API actions require elevated privilege. You can additionally bundle API operations into a group by defining the group attribute. This allows you to further isolate privileged API operations, especially in a large-scale deployment.
  • Step 5 – If an API action requires elevated privilege, the authorizer will check for an existing step-up session for this specific user in the session table. If a step-up session does not exist, the authorizer will create a new entry in the session table. The key for this table will be the JTI claim of the access_token (which can be obtained after token verification).
  • Step 6 – If a valid session exists, then authorization will be given. Otherwise an unauthorized access response (401 HTTP code) will be sent back from the Lambda authorizer, indicating that the user requires elevated privilege.

Figure 4 highlights these steps in a sequence diagram.

Figure 4: Sequence diagram for creating a step-up session

Initiate step-up authentication

After the user receives a 401 Unauthorized response status code from invoking the privileged API action in the previous step, the user must call the /initiate-auth endpoint to start step-up authentication. The endpoint returns a response indicating the MFA method, so that the user or the client application knows which temporary code to supply. Let’s go over steps 7 and 8, shown in the architecture diagram in Figure 3, in more detail:

  • Step 7 – The client application initiates a step-up action by calling the /initiate-auth endpoint. This action is protected by the API Gateway built-in Amazon Cognito authorizer, and the client needs to pass a valid access_token in the Authorization header.
  • Step 8 – The call is forwarded to a Lambda function that will initiate the step-up action with the end user. The function first calls the Amazon Cognito API action GetUser to find out the user’s MFA settings. Depending on which MFA type is enabled for the user, the function uses different Amazon Cognito API operations to start the MFA challenge. For more details, see the Initiate auth endpoint section earlier in this post.

Figure 5 shows these steps in a sequence diagram.

Figure 5: Sequence diagram for invoking /initiate-auth to start step-up authentication

Respond to the step-up challenge

In the previous step, the user receives a challenge code from the /initiate-auth endpoint. Depending on the type of challenge code, the user must respond by sending a one-time password (OTP) to the /respond-to-challenge endpoint. The /respond-to-challenge endpoint invokes an Amazon Cognito API action to verify the OTP. Upon successful verification, the /respond-to-challenge endpoint sets the step-up session state in the session table to STEP_UP_COMPLETED, indicating that the user now has elevated privilege. At this point, the user can invoke the privileged API action again to perform the elevated business operation. Let’s go over steps 9–12, shown in the architecture diagram in Figure 3, in more detail:

  • Step 9 – The client application presents an appropriate screen to the user to collect a response to the step-up challenge. The client application calls the /respond-to-challenge endpoint that contains the following:
    1. An access_token in the Authorization header.
    2. A step-up challenge type.
    3. A response provided by the user to the step-up challenge.

    This endpoint is protected by the API Gateway built-in Amazon Cognito authorizer.

  • Step 10 – The call is forwarded to the Lambda function, which verifies the response by calling the Amazon Cognito API action VerifyUserAttribute (in the case of SMS_STEP_UP) or VerifySoftwareToken (in the case of SOFTWARE_TOKEN_STEP_UP), depending on the type of step-up action that was returned from the /initiate-auth API action. The Amazon Cognito response will indicate whether verification was successful.
  • Step 11 – If the Amazon Cognito response in the previous step was successful, the Lambda function associated with the /respond-to-challenge endpoint inserts a record in the session table by using the access_token JTI as key. This record indicates that the user has completed step-up authentication. The record is inserted with a time to live (TTL) equal to the lesser of these values: the remaining period in the access_token timeout, or the default TTL value that is set in the Lambda function as a configurable environment variable, SESSION_TABLE_ITEM_TTL. The /respond-to-challenge endpoint returns a 200 status code after successfully updating the session table. It returns a 401 Unauthorized response status code if the operation failed or if the Amazon Cognito API calls in the previous step failed. For more information about the optimal value for the SESSION_TABLE_ITEM_TTL variable, see the Additional considerations section later in this post.
  • Step 12 – The client application can re-try the original call (using the same access token) to the privileged API operations, and this call should now succeed because an active step-up session exists for the user. Calls to other privileged API operations that require step-up should also succeed, as long as the step-up session hasn’t expired.

Figure 6 shows these steps in a sequence diagram.

Figure 6: Invoke the /respond-to-challenge endpoint to complete step-up authentication

Additional considerations

This solution uses several Amazon Cognito API operations to provide step-up authentication functionality. Amazon Cognito applies rate limits to all categories of API operations, and rapid calls that exceed the assigned quota will be throttled.

The step-up flow for a single user can include multiple Amazon Cognito API operations such as GetUser, GetUserAttributeVerificationCode, VerifyUserAttribute, and VerifySoftwareToken. These Amazon Cognito API operations have different rate limits. The effective rate, in requests per second (RPS), that your privileged and protected API action can achieve will be equivalent to the lowest category rate limit among these API operations. When you use the default quota, your application can achieve up to 25 SMS_STEP_UP RPS or up to 50 SOFTWARE_TOKEN_STEP_UP RPS.

Certain Amazon Cognito API operations have additional security rate limits per user per hour. For example, the GetUserAttributeVerificationCode API action has a limit of five calls per user per hour. For that reason, we recommend 15 minutes as the minimum value for SESSION_TABLE_ITEM_TTL, as this will allow a single user to have up to four step-up sessions per hour if needed.

Conclusion

In this blog post, you learned about the architecture of our step-up authentication solution and how to implement this architecture to protect privileged API operations by using AWS services. You learned how to use Amazon Cognito as the identity provider to authenticate users with multi-factor security and API Gateway with an authorizer Lambda function to enforce access to API actions by using a step-up authentication workflow engine. This solution uses DynamoDB as a persistent layer to manage the security rules for the step-up authentication workflow engine, which helps you to efficiently manage your rules.

In the next part of this post, Implement step-up authentication with Amazon Cognito, Part 2: Deploy and test the solution, you’ll deploy a reference implementation of the step-up authentication solution in your AWS account. You’ll use a sample web application to test the step-up authentication solution you learned about in this post.

 
If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the Amazon Cognito forum.

Want more AWS Security news? Follow us on Twitter.

Salman Moghal

Salman is a Principal Consultant in AWS Professional Services, based in Toronto, Canada. He helps customers in architecting, developing, and reengineering data-driven applications at scale, with a sharp focus on security.

Thomas Ross

Thomas is a Software Engineering student at Carleton University. He worked at AWS as a Professional Services Intern and a Software Development Engineer Intern in Amazon Aurora. He has an interest in almost anything related to technology, especially systems at high scale, security, distributed systems, and databases.

Ozair Sheikh

Ozair is a senior product leader for Sponsored Display in Amazon ads, based in Toronto, Canada. He helps advertisers and Ad Tech API Partners build campaign management solutions to reach customers across the purchase journey. He has over 10 years of experience in API management and security, with an obsession for delivering highly secure API products.

Mahmoud Matouk

Mahmoud is a Principal Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

How to automate updates for your domain list in Route 53 Resolver DNS Firewall

Post Syndicated from Guillaume Neau original https://aws.amazon.com/blogs/security/how-to-automate-updates-for-your-domain-list-in-route-53-resolver-dns-firewall/

Note: This post includes links to third-party websites. AWS is not responsible for the content on those websites.


Following the release of Amazon Route 53 Resolver DNS Firewall, Amazon Web Services (AWS) published several blog posts to help you protect your Amazon Virtual Private Cloud (Amazon VPC) DNS resolution, including How to Get Started with Amazon Route 53 Resolver DNS Firewall for Amazon VPC and Secure your Amazon VPC DNS resolution with Amazon Route 53 Resolver DNS Firewall. Route 53 Resolver DNS Firewall provides managed domain lists that are fully maintained and kept up-to-date by AWS and that directly benefit from the threat intelligence that we gather, but you might want to create or import your own list to have full control over the DNS filtering.

In this blog post, you will find a solution to automate the management of your domain list by using AWS Lambda, Amazon EventBridge, and Amazon Simple Storage Service (Amazon S3). The solution in this post uses, as an example, the URLhaus open Response Policy Zone (RPZ) list, which generates a new file every five minutes.

Architecture overview

The solution is made of the following four components, as shown in Figure 1.

  1. An EventBridge scheduled rule to invoke the Lambda function on a schedule.
  2. A Lambda function that uses the AWS SDK to perform the automation logic.
  3. An S3 bucket to temporarily store the list of domains retrieved.
  4. Amazon Route 53 Resolver DNS Firewall.
    Figure 1: Architecture overview

After the solution is deployed, it works as follows:

  1. The scheduled rule invokes the Lambda function every 5 minutes to fetch the latest domain list available.
  2. The Lambda function fetches the list from URLhaus, parses the retrieved data, formats it, uploads the list of domains to the S3 bucket, and invokes the Route 53 Resolver DNS Firewall ImportFirewallDomains API action (a Python sketch of this logic follows the list).
  3. The domain list is then updated.
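
The AWS-provided example implements this logic in Node.js. The following Python sketch shows equivalent logic with boto3; the RPZ parsing is deliberately simplified, and the environment variable names follow the ones defined later in this post.

import os
import urllib.request

import boto3

s3 = boto3.client("s3")
resolver = boto3.client("route53resolver")

RPZ_URL = "https://urlhaus.abuse.ch/downloads/rpz/"
BUCKET = os.environ["s3Prefix"]                      # <DNSFW-BUCKET-NAME>
DOMAIN_LIST_ID = os.environ["FirewallDomainListId"]  # <domain-list-id>

def handler(event, context):
    # Fetch the latest RPZ file and keep only the domain names
    # (simplified parsing: skip comment lines and take the first token).
    raw = urllib.request.urlopen(RPZ_URL).read().decode()
    domains = [
        line.split()[0].rstrip(".")
        for line in raw.splitlines()
        if line and not line.startswith(";") and "CNAME" in line
    ]
    key = "domains.txt"
    s3.put_object(Bucket=BUCKET, Key=key, Body="\n".join(domains))

    # Replace the firewall domain list contents with the uploaded file.
    resolver.import_firewall_domains(
        FirewallDomainListId=DOMAIN_LIST_ID,
        Operation="REPLACE",
        DomainFileUrl=f"s3://{BUCKET}/{key}",
    )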

Implementation steps

As a first step, create your own domain list on the Route 53 Resolver DNS Firewall. Having your own domain list allows you to have full control of the list of domains to which you want to apply actions, as defined within rule groups.

To create your own domain list

  1. In the Route 53 console, in the left menu, choose Domain lists in the DNS firewall section.
  2. Choose the Add domain list button, enter a name for your owned domain list, and then enter a placeholder domain to initialize the domain list.
  3. Choose Add domain list to finalize the creation of the domain list.
    Figure 2: Expected view of the console

The list from URLhaus contains more than a thousand records. You will use the ImportFirewallDomains endpoint to upload this list to DNS Firewall. Using this endpoint requires that you first upload the list of domains to an S3 bucket that is located in the same AWS Region as the owned domain list that you just created.

To create the S3 bucket

  1. In the S3 console, choose Create bucket.
  2. Under General configuration, configure the AWS Region option to be the same as the Region in which you created your domain list.
  3. Finalize the configuration of your S3 bucket, and then choose Create bucket.

Because a new file is created every five minutes, we recommend setting a lifecycle rule to automatically expire and delete files after 24 hours to optimize for cost and only save the most recent lists.
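
As an illustration, such a rule could be applied with boto3. One day is the smallest expiration granularity that S3 lifecycle rules support, and the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="<DNSFW-BUCKET-NAME>",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-domain-lists",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to every object in the bucket
                "Expiration": {"Days": 1},  # delete objects after roughly 24 hours
            }
        ]
    },
)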

To create the Lambda function

  1. Follow the steps in the topic Creating an execution role in the IAM console to create an execution role. After step 4, when you configure permissions, choose Create Policy, and then create and add an IAM policy similar to the following example. This policy needs to:
    • Allow the Lambda function to put logs in Amazon CloudWatch.
    • Allow the Lambda function to have read and write access to objects placed in the created S3 bucket.
    • Allow the Lambda function to update the firewall domain list.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": [
                      "logs:CreateLogGroup",
                      "logs:CreateLogStream",
                      "logs:PutLogEvents"
                  ],
                  "Resource": "arn:aws:logs:<region>:<accountId>:*",
                  "Effect": "Allow"
              },
              {
                  "Action": [
                      "s3:PutObject",
                      "s3:GetObject"
                  ],
                  "Resource": "arn:aws:s3:::<DNSFW-BUCKET-NAME>/*",
                  "Effect": "Allow"
              },
              {
                  "Action": [
                      "route53resolver:ImportFirewallDomains"
                  ],
                  "Resource": "arn:aws:route53resolver:<region>:<accountId>:firewall-domain-list/<domain-list-id>",
                  "Effect": "Allow"
              }
          ]
      }

  2. (Optional) If you decide to use the example provided by AWS:
    • After cloning the repository, build the layer by following the instructions in the readme.md file and using the provided script.
    • Zip the Lambda function code.
    • In the left menu, choose Layers, and then choose Create layer. Enter a name for the layer, select Upload a .zip file, and upload the layer (node-axios-layer.zip).
    • For the compatible runtime, select Node.js 16.x.
    • Choose Create.
  3. In the Lambda console, in the same Region as your domain list, choose Create function, and then do the following:
    • Choose your desired runtime and architecture.
    • (Optional) To use the code provided by AWS: Select Node.js 16.x as the runtime.
    • Choose Change the default execution role.
    • Choose Use an existing role, and then pick the role that you just created.
  4. After the Lambda function is created, in the left menu of the Lambda console, choose Functions, and then select the function you created.
    • For Code source, you can either enter the code of the Lambda function or choose the Upload from button and then choose the source for the code. AWS provides an example of functioning code on GitHub under an MIT-0 license.

    (optional) To use the code provided by AWS:

    • Choose the Upload from button and upload the zipped code example.
    • After the code is uploaded, edit the default Runtime settings: Choose the Edit button and set the handler to LambdaRpz.handler.
    • Edit the default Layers configuration: Choose the Add a layer button, select Specify an ARN, and enter the ARN of the layer created during the optional step 2.
    • Edit the environment variables of the function: Choose the Edit button and define the following three variables:
      1. Key : FirewallDomainListId | Value : <domain-list-id>
      2. Key : region | Value : <region>
      3. Key : s3Prefix | Value : <DNSFW-BUCKET-NAME>

The code that you place in the function will be able to fetch the list from URLhaus, upload the list as a file to S3, and start the import of domains.

For the Lambda function to be invoked every 5 minutes, next you will create a scheduled rule with Amazon EventBridge (if you prefer to script this step, a boto3 sketch follows the console procedure).

To automate the invoking of the Lambda function

  1. In the EventBridge console, in the same AWS Region as your domain list, choose Create rule.
  2. For Rule type, choose Schedule.
  3. For Schedule pattern, select the option A schedule that runs at a regular rate, such as every 10 minutes, and under Rate expression set a rate of 5 minutes.
    Figure 3: Console view when configuring a schedule

  4. To select the target, choose AWS service, choose Lambda function, and then select the function that you previously created.
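
The following boto3 sketch sets up an equivalent schedule outside the console. The rule name, target ID, and function ARN are placeholders, and the add_permission call grants EventBridge permission to invoke the function.

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:<region>:<accountId>:function:<function-name>"

# Scheduled rule that fires every 5 minutes.
rule_arn = events.put_rule(
    Name="dnsfw-refresh-every-5-minutes",
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)["RuleArn"]

# Point the rule at the Lambda function.
events.put_targets(
    Rule="dnsfw-refresh-every-5-minutes",
    Targets=[{"Id": "dnsfw-lambda", "Arn": FUNCTION_ARN}],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-eventbridge-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)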

After the solution is deployed, your domain list will be updated every 5 minutes and look like the view in Figure 4.

Figure 4: Console view of the created domain list after it has been updated by the Lambda function

Code samples

You can use the samples in the amazon-route-53-resolver-firewall-automation-examples-2 GitHub repository to ease the automation of your domain list, and the associated updates. The repository contains script files to help you with the deployment process of the AWS CloudFormation template. Note that you need to have the AWS Command Line Interface (AWS CLI) installed and properly configured in order to use the files.

To deploy the CloudFormation stack

  1. If you haven’t done so already, create an S3 bucket to store the artifacts in the Region where you wish to deploy. The name of this bucket will then be referenced as ParamS3ArtifactBucket, with a value of <DOC-EXAMPLE-BUCKET-ARTIFACT>.
  2. Clone the repository locally.
    git clone https://github.com/aws-samples/amazon-route-53-resolver-firewall-automation-examples-2
  3. Build the Lambda function layer. From the /layer folder, use the provided script.
    . ./build-layer.sh
  4. Zip and upload the artifact to the bucket created in step 1. From the root folder, use the provided script.
    . ./zipupload.sh <ParamS3ArtifactBucket>
  5. Deploy the AWS CloudFormation stack by using either the AWS CLI or the CloudFormation console.
    • To deploy by using the AWS CLI, from the root folder, type the following command, making sure to replace <region>, <DOC-EXAMPLE-BUCKET-ARTIFACT>, <DNSFW-BUCKET-NAME>, and <DomainListName> with your own values.
      aws --region <region> cloudformation create-stack --stack-name DNSFWStack --capabilities CAPABILITY_NAMED_IAM --template-body file://./DNSFWStack.cfn.yaml --parameters ParameterKey=ParamS3ArtifactBucket,ParameterValue=<DOC-EXAMPLE-BUCKET-ARTIFACT> ParameterKey=ParamS3RpzBucket,ParameterValue=<DNSFW-BUCKET-NAME> ParameterKey=ParamFirewallDomainListName,ParameterValue=<DomainListName>

    • To deploy by using the console, do the following:
      1. In the CloudFormation console, choose Create stack, and then choose With new resources (standard).
      2. On the creation screen, choose Template is ready, and upload the provided DNSFWStack.cfn.yaml file.
      3. Enter a stack name and configure the requested parameters with your desired configuration and outcomes. These parameters include the following:
        • The name of your firewall domain list.
        • The name of the S3 bucket that contains Lambda artifacts.
        • The name of the S3 bucket that will be created to contain the files with the domain information from URLhaus.
      4. Acknowledge that the template requires IAM permission because it will create the role for the Lambda function and manage its IAM policy, and then choose Create stack.

After a few minutes, all the resources should be created and the CloudFormation stack is now deployed. After 5 minutes, your domain list should be updated, as shown in Figure 5.

Figure 5: Console view of CloudFormation after the stack has been deployed

Conclusions and cost

In this blog post, you learned about creating and automating the update of a domain list that you fully control. To go further, you can extend and replicate the architecture pattern to fetch domain names from other sources by editing the source code of the Lambda function.

After the solution is in place, in order for the filtering to be effective, you need to create a rule group referencing the domain list and associate the rule group with some of your VPCs.

For cost information, see the AWS Pricing Calculator. This solution will be invoked 60 (minutes) * 24 (hours) * 30 (days) / 5 (minutes) = 8,640 times per month, invoking the Lambda function for a total of roughly 400 minutes of run time per month, storing an average of 0.5 GB in Amazon S3, and maintaining a domain list that averages 1,500 domains. According to our public pricing, and without factoring in the AWS Free Tier, this will incur an estimated total cost of $1.43 per month for the filtering of 1 million DNS requests.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Guillaume Neau

Guillaume is a solutions architect from France with expertise in information security, focused on building solutions that improve the lives of citizens.

Announcing new AWS IAM Identity Center APIs to manage users and groups at scale

Post Syndicated from Sharanya Ramakrishnan original https://aws.amazon.com/blogs/security/announcing-new-aws-iam-identity-center-apis-to-manage-users-and-groups-at-scale/

If you use AWS IAM Identity Center (successor to AWS Single Sign-On) as your identity source, you create and manage your users and groups manually in the IAM Identity Center console. However, you may prefer to automate this process to save time, reduce administrative effort, and scale effectively as your organization grows. If you use IAM Identity Center with a supported identity provider (IdP) or Microsoft Active Directory (AD), you may want to check whether the right users and groups have synced into IAM Identity Center. You can do this manually, but now you can use the new APIs to make the process easier by setting up automated checks that query this information from IAM Identity Center and notify you only when you need to intervene.

This post explains how you can use IAM Identity Center APIs to automate managing users and groups in a scalable manner and gain better visibility into users and groups in the Identity Center directory. These automations can help you save time and reduce administrative effort. We will provide some background on how IAM Identity Center works and how you can use the new APIs to help simplify your workflows.

Background

IAM Identity Center allows you to centrally manage access to all of your AWS accounts within AWS Organizations, as well as your applications. Using IAM Identity Center is the AWS recommendation for managing the workforce identities of the human users in your organization who access AWS resources. It provides you with the flexibility to create and manage users and groups in the Identity Center directory, or to bring in your users and groups from a different identity source such as Active Directory or an external identity provider (IdP). After IAM Identity Center is configured, you can look up users or groups to grant them single sign-on access to AWS accounts, applications, or both. By signing in once through the IAM Identity Center portal, your users can access their assigned AWS accounts, Identity Center enabled applications such as Amazon SageMaker Studio and Amazon EMR Studio, and cloud applications such as Jira, Salesforce, and Tableau.

While IAM Identity Center simplifies user access, you may prefer to manage these users and groups at scale, and audit their access regularly to meet your security requirements. You may also want to automate the process of giving users access to AWS and the resources they need to do their job. Previously, you could only manage identities in the Identity Center directory manually by using the IAM Identity Center console. Now, you can use the new Identity Center APIs to build automation that manages the Identity Center directory users and groups for you.

With these Identity Center APIs, you can build automated workflows to do the following tasks:

  • Provision and de-provision users and groups.
  • Add new members to a group or remove them from a group.
  • Query information about users and groups in the Identity Center directory.
  • Update information about users and groups.
  • Find out which users are members of which groups.

You can create automated workflows using the APIs to define who has access to AWS accounts or applications through IAM Identity Center, and provide them with the right resources to do their job. Automating workflows can save you time and can reduce your administrative effort. With the new APIs, you can auto-generate reports about users and their IAM Identity Center access configurations. These automated reports can provide you with greater visibility to evaluate your security posture.

Provision, manage, and de-provision users and groups in IAM Identity Center

As your enterprise grows, you may want to automate your administrative tasks to reduce manual effort, save time, and scale efficiently. If you’re a cloud administrator or IT administrator who manages which employees in your organization need access to AWS as part of their job role, or what AWS resources they need so they can develop applications, now you can set up automated workflows that manage this for you.

Consider the following scenario. Your organization uses the Identity Center directory as the source for user information, and a new data scientist joins your company. You want them to be granted access to log in to AWS automatically based on their job role. After they log in, you want them to have access to the AWS resources and applications that you have approved for their job role, including Amazon SageMaker, AWS Managed Grafana, and multiple S3 buckets. Previously, you had to use the AWS Management Console to manually create a new user object and then add the new data scientist to the AWS_Data_Science group. With the new APIs, you can set up an automated workflow that creates a new user and adds the user to relevant groups in the Identity Center directory, as soon as the new data scientist is added to your human resources (HR) system.

A sample AWS Identity Store operations Python script called identitystore_operations.py is available in the iam-identitycenter-identitystoreapi-operations GitHub repository. This sample program shows how you can automate Identity Store operations to create a new user, add the user to a group, list group memberships, and update a user’s group memberships. This sample program requires the AWS SDK for Python (Boto3). For instructions to install the AWS SDK for Python, see the Boto3 Quickstart.
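
Under the hood, the sample script calls the Identity Store APIs through boto3. The following condensed sketch of the create-user-and-add-to-group flow is illustrative rather than the script’s actual structure, and error handling is omitted.

import boto3

identitystore = boto3.client("identitystore")

def create_user_in_group(store_id, username, given, family, group_name):
    """Create a user and add them to an existing group."""
    user_id = identitystore.create_user(
        IdentityStoreId=store_id,
        UserName=username,
        DisplayName=f"{given} {family}",
        Name={"GivenName": given, "FamilyName": family},
    )["UserId"]

    # Look up the group by its display name.
    group_id = identitystore.get_group_id(
        IdentityStoreId=store_id,
        AlternateIdentifier={
            "UniqueAttribute": {
                "AttributePath": "displayName",
                "AttributeValue": group_name,
            }
        },
    )["GroupId"]

    identitystore.create_group_membership(
        IdentityStoreId=store_id,
        GroupId=group_id,
        MemberId={"UserId": user_id},
    )
    return user_id

The script’s create_user operation, invoked below, wraps this same sequence of calls.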

The following example shows all of the supported operations available in the sample script.

python identitystore_operations.py -h

The following is example output:

usage: identitystore_operations.py [-h]
                                   {create_user,create_group,adduser_to_group,delete_group,list_members,list_membership}
                                   ...

positional arguments:
  {create_user,create_group,adduser_to_group,delete_group,list_members,list_membership}

options:
  -h, --help  show this help message and exit

Next is an example of how you can create the new user John Doe in the Identity Center directory and add the user to an existing AWS_Data_Science group.

python identitystore_operations.py create_user --identitystoreid d-123456a7890 --username johndoe --givenname John --familyname Doe --groupname AWS_Data_Science

The following is example output:

User:johndoe with UserId:12345678-9012-3456-789a-bcdef021345a created successfully
User:johndoe added to Group:AWS_Data_Science successfully

To continue with this example, consider a scenario where the data scientist transitions to a role as an applied scientist, and needs access to additional AWS applications and resources. Rather than using the IAM Identity Center console to manually update the user’s information and add them to the AWS_Applied_Scientists group, you can now use automation to update the user and provide them with the access they need.

The following is an example of how the previously-created user johndoe can be added to the AWS_Applied_Scientists group.

python identitystore_operations.py adduser_to_group --identitystoreid d-123456a7890 --groupname AWS_Applied_Scientists --username johndoe

The following is example output:

User:johndoe added to Group:AWS_Applied_Scientists successfully

Finally for this scenario, consider that this employee leaves your company. Rather than using the IAM Identity Center console to manually delete their user object, now your automation can delete the user as soon as they are removed from your HR system.

Evaluate your security posture in IAM Identity Center

To maintain visibility across AWS, it is a best practice to regularly audit and evaluate the security controls for any service that you use. It’s also a best practice to identify the AWS accounts or applications that an employee can access. Having access to this information helps you maintain and improve your company’s security posture. As a cloud administrator, you may need to submit periodic reports to auditors enumerating the employees who have access to AWS. If you or your IT team manage users and groups in a different source system, you also need to track whether the right users and groups are synced into AWS.

The following is an example of how you can find the members of the AWS_Applied_Scientists group.

python identitystore_operations.py list_members --identitystoreid d-123456a7890 --groupname AWS_Applied_Scientists

The following is example output:

UserName:johndoe,Display Name: John Doe

For example, consider a scenario in which you use Active Directory as your identity source. You want to confirm that the right set of users and groups has been synced into the Identity Center directory. After the users and groups are confirmed, you are required to submit this list of users and groups with access to AWS to auditors every quarter. Rather than manually verifying which employees have access to AWS and manually creating a list for the auditors, you can now use the new APIs to create a workflow that automatically queries the users and groups in the Identity Center directory, compares them to your list of intended Active Directory users and groups who should have AWS access, and tells you whether any users or groups have access that was not intended. Additionally, you can set up a script to generate reports every quarter for the auditors.
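
As a rough sketch, such an automated check could page through the Identity Store ListUsers API with boto3 and diff the results against your intended roster; the expected set is a placeholder that you would populate from Active Directory.

import boto3

identitystore = boto3.client("identitystore")

def audit_users(store_id: str, expected_usernames: set) -> None:
    """Report users present in the Identity Center directory but not expected."""
    actual = set()
    paginator = identitystore.get_paginator("list_users")
    for page in paginator.paginate(IdentityStoreId=store_id):
        actual.update(user["UserName"] for user in page["Users"])

    unexpected = actual - expected_usernames
    missing = expected_usernames - actual
    if unexpected:
        print(f"Users with unintended access: {sorted(unexpected)}")
    if missing:
        print(f"Expected users not yet synced: {sorted(missing)}")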

The following is an example of how you can find the group memberships of the specific user johndoe.

python identitystore_operations.py list_membership --identitystoreid d-123456a7890 --username johndoe

The following is example output:

User :johndoe is a member of the following groups
AWS_Data_Science
AWS_Applied_Scientists

Conclusion

In this post, you learned how to use IAM Identity Center APIs to automate managing users and groups in a scalable manner and gain better visibility into users and groups in the Identity Center directory.

The IAM Identity Center APIs for user and group management expand the capabilities of existing Identity Store APIs, helping you build scalable workflows. They help your IT and cloud teams save time and reduce administrative effort through automation.

To learn more about using IAM Identity Center or the user and group management APIs, see the AWS IAM Identity Center User Guide or the Identity Store API Reference Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on AWS IAM Identity Center re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Sharanya Ramakrishnan

Sharanya is a Senior Technical Product Manager in the AWS Identity team. She enjoys solving customer problems through meaningful products, particularly in the dynamic security and identity space. Outside of work, Sharanya likes to travel and enjoys hiking and reading.

Siva Rajamani

Siva is a Boston-based Enterprise Solutions Architect. He enjoys working closely with customers and supporting their digital transformation and AWS adoption journey. His core areas of focus are Serverless, Application Integration, and Security.

Bala KP

Bala KP is a Sr Partner Solutions Architect at Amazon Web Services. He helps global system integrator partners and customers in the financial services and insurance domain to move their most sensitive workloads to AWS.

Learn more about the new allow list feature in Macie

Post Syndicated from Jonathan Nguyen original https://aws.amazon.com/blogs/security/learn-more-about-the-new-allow-list-feature-in-macie/

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and help you protect your sensitive data in Amazon Web Services (AWS). The data that is available within your AWS account can grow rapidly, which increases your need to verify that all sensitive data is identified and protected. Macie provides you with the ability to use both managed data identifiers and custom data identifiers, but enabling these identifiers for every job could result in a large number of security findings that might not take into account how data is used within your AWS account. So that you can tailor the detection and creation of findings within Macie, Macie now has an allow list feature available for use with your scanning jobs.

In this blog post, we show you how to set up an allow list in Macie and run a Macie scan that uses the allow list to ignore the specified values when creating sensitive data findings. The allow list feature can help your sensitive data management team by reducing false positives due to text or data formats in your environment that do not require action. This makes it easier for your team to focus on Macie findings that need to be reviewed and remediated. By increasing the overall confidence in findings presented by Macie, you can improve the performance of automated workflows and solutions.

Prerequisites

To get started, you’ll need the following prerequisites:

  1. An active AWS account
  2. Amazon Macie enabled within your AWS account
  3. (Optional) Member AWS accounts are enabled using AWS Organizations and a delegated Macie administrator account

Create an allow list in Macie

You can configure allow lists with either regular expressions (regex) or predefined text. Use a predefined text allow list if you have a list of specific values you want to exclude, like a list of example fake names or addresses that are used in test data sets. Alternatively, if you don’t have the exact values but know the pattern to exclude, you can use a regex allow list. Some use cases for a regex allow list could be to exclude tracking IDs or public reference numbers that could resemble a Macie managed data identifier or custom data identifier.

It is important to note that allow lists, and S3 objects if using predefined text, must be created in the same AWS account where the Macie job is created.

  1. If Macie jobs are created from the Macie delegated administrator AWS account to scan member AWS accounts, then the allow lists must be centrally configured in the Macie delegated administrator account.
  2. If Macie jobs are created from the member AWS account to scan buckets within the same AWS account, then the allow lists must be configured in the same AWS account where the Macie job is created.

To create an allow list by using the Amazon Macie Console

  1. In the AWS Management Console, navigate to Macie.
  2. Under Settings, choose Allow lists.
  3. Choose Create.
  4. Choose a list type.
    1. If you’re creating a regex allow list, choose Regular expression. For List settings, enter the following settings for the allow list.
      1. For Name, enter the name of the list.
      2. For Description, enter a description (optional).
      3. For Regular expression, enter the regular expression. Macie will not create findings for any matches on the allow list regex.
      4. Evaluate your regex with sample data if needed. Macie provides an Evaluate option so you can test your regex against sample data sets to make sure it’s working as expected.
    2. If you’re creating a predefined text allow list, choose Predefined text. For this option, you will need to create a plaintext file and upload the file to an Amazon Simple Storage Service (Amazon S3) bucket. Once you upload the file, you can then reference the Amazon S3 object in the allow list.
      1. Enter the name of the list.
      2. Enter a description for the list (optional).
      3. Enter the S3 bucket name.
      4. Enter the S3 object name of the plaintext file.

    Note: The Macie service-linked role must have the ability to read the S3 object for the predefined text. When you run Macie jobs that use allow lists with predefined text, the Macie service-linked role will read the S3 object. If there is any error reading the S3 object, the Macie job will continue to run without using the predefined text allow list. You will need to periodically check your allow lists to make sure they are in an OK status. You can check the status of each allow list in the Amazon Macie console or via the AWS CLI using the get-allow-list API.

    More information about the status of an allow list can be found in the Amazon Macie User Guide.

  5. Choose Create to create the allow list.

    Note: An allow list must be stored in an S3 bucket in the same AWS account and AWS Region as your Macie account. Macie cannot access an allow list if it is stored in a different Region or account.

You can also create and manage allow lists by using the Amazon Macie console, the AWS Command Line Interface (AWS CLI), or AWS CloudFormation.

To create or manage an allow list by using AWS CloudFormation

Below is an example of enabling Amazon Macie for an account. The session resource configures Macie to publish updated policy findings for the account.

AWSTemplateFormatVersion: 2010-09-09
Description: <insert-template-description>
Resources:
  EnableMacieSession:
    Type: AWS::Macie::Session
    Properties:
      FindingPublishingFrequency: <insert-finding-publishing-frequency>
      Status: ENABLED

Below is an example of creating an allow list that uses a regular expression to specify a text pattern to ignore. As with other Macie resources, the DependsOn attribute, which references the Macie session resource, is required when creating a Macie allow list.

AWSTemplateFormatVersion: 2010-09-09
Description: <insert-template-description>
Resources:
  RegularExpressionAllowList:
    Type: AWS::Macie::AllowList
    DependsOn: Session
    Properties:
      Criteria:
        Regex: "<insert-regex-expression>"
      Description: <insert-allow-list-description>
      Name: <insert-allow-list-name>
      Tags:
        - Key: <insert-tag-key-name>
          Value: <insert-tag-key-value>

Below is an example of creating an allow list that specifies a list of predefined text to ignore.

AWSTemplateFormatVersion: 2010-09-09
Description: <insert-template-description>
Resources:
  PredefinedAllowList:
    Type: AWS::Macie::AllowList
    DependsOn: Session
    Properties:
      Criteria:
        S3WordsList:
          BucketName: <DOC-EXAMPLE-BUCKET>
          ObjectKey: <OBJECT-EXAMPLE-KEY>
      Description: <insert-allow-list-description>
      Name: <insert-allow-list-name>
      Tags:
        - Key: <insert-tag-key-name>
          Value: <insert-tag-key-value>

To create or manage an allow list by using the AWS CLI

  1. In the AWS CLI, run the following command to create an allow list with a regular expression.
    aws macie2 create-allow-list \
    --criteria '{"regex":"<insert-regex-expression>"}' \
    --name "<insert-allow-list-name>" \
    --description "<insert-allow-list-description>"
  2. In the AWS CLI, run the following command to create an allow list with predefined text.
    aws macie2 create-allow-list \
    --criteria '{"s3WordsList":{"bucketName":"<DOC-EXAMPLE-BUCKET>","objectKey":"<OBJECT-EXAMPLE-KEY>"}}' \
    --name "<insert-allow-list-name>" \
    --description "<insert-allow-list-description>"
  3. In the AWS CLI, run the following command to update an existing allow list.
    aws macie2 update-allow-list --id <GUID-for-Macie-allow-list> --description "<insert-new-description>"
  4. In the AWS CLI, run the following command to delete an existing allow list.
    aws macie2 delete-allow-list --id <GUID-for-Macie-allow-list> --no-ignore-job-checks
  5. In the AWS CLI, run the following command to retrieve an existing allow list.
    aws macie2 get-allow-list --id <GUID-for-Macie-allow-list>

For a detailed list of available AWS CLI commands, refer to the AWS CLI documentation for Amazon Macie.

Use the allow list in a Macie scan

After you create allow lists, you can create and run sensitive data discovery jobs in Macie. This will enable you to review, analyze, and compare findings about the affected resources in Amazon S3 buckets with or without allow lists.

Option 1: Create a Macie job with the allow list by using the console

  1. Open the Amazon Macie console.
  2. In the navigation pane, choose Jobs, and then choose Create job.
  3. On the Choose Amazon S3 buckets page, choose Select specific buckets.

    Note: Macie displays a list of all the buckets managed by your AWS account in the current Region, including buckets owned by member accounts if you have configured them.

    • Under Select Amazon S3 buckets, optionally choose Refresh to retrieve the latest bucket metadata from Amazon S3.
  4. In the table, select each bucket you want the job to analyze, and then choose Next.
  5. Review and optionally adjust the list of S3 buckets that you selected for the job, and then choose Next.
  6. Refine the scope of the job, if needed. Use these settings to specify how often you want the job to run and the depth and scope of the job’s analysis, and then choose Next.
  7. Select any managed data identifiers you want to use, and then choose Next.
  8. Select any custom data identifiers that you want to use, and then choose Next.
  9. Select the allow lists that you created to ignore either predefined text or regular expression patterns for any objects in the job, and then choose Next.

    Figure 1: Selecting allow lists for a Macie job

  10. In General settings, enter a name for the job. You can also enter a description and assign tags to the job. Choose Next.
  11. Review and create the job, and then choose Submit.

Option 2: Create a Macie job with the allow list by using the AWS CLI

  1. In the AWS CLI, run the following command.
    aws macie2 create-classification-job \
    --generate-cli-skeleton > <insert-macie-job-input-json>
  2. Input the GUID for the Macie allow list as part of the Macie job input in the JSON file (see the sketch after these steps).
  3. Run the following command.
    aws macie2 create-classification-job \
    --cli-input-json file://<insert-macie-job-input-json>
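For reference, the following is a trimmed-down sketch of the job input JSON from step 2. The allowListIds field is where the allow list GUID goes; the job name, account ID, and bucket name are placeholders.

    {
      "name": "<insert-macie-job-name>",
      "jobType": "ONE_TIME",
      "allowListIds": ["<GUID-for-Macie-allow-list>"],
      "s3JobDefinition": {
        "bucketDefinitions": [
          {
            "accountId": "<AWS-account-ID>",
            "buckets": ["<DOC-EXAMPLE-BUCKET>"]
          }
        ]
      }
    }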

Review Macie findings before and after allow lists

Note that any Macie jobs you configured in your AWS account or organization before the allow list feature was released will not use allow lists. If you want an existing job to use an allow list, you will need to recreate the job and reference the allow lists that you want it to use.

Before you run a Macie job that uses predefined text allow lists, verify that the AWS Key Management Service (AWS KMS) keys that are used to encrypt buckets and the S3 bucket policies grant the Macie service-linked role the necessary permissions to decrypt the S3 objects.

Figure 2 shows example predefined text allow lists for sensitive data discovery jobs that include credit card numbers, Social Security numbers (SSNs), and first and last names. Values that appear in these S3 object allow lists will not generate Macie findings when the sensitive data discovery job inspects S3 objects.

Figure 2: Example list of existing allow lists

Figure 3 shows a sensitive data discovery job that does not include the predefined text allow lists.

Figure 3: Macie job example without allow list configured

Since there are no allow lists configured, Macie creates findings for credit card numbers, United States SSNs, and names, as shown in Figure 4.

Figure 4: Macie job scan without allow list results

Figure 5 shows a sensitive data discovery job that includes the use of the predefined text allow lists.

Figure 5: Macie job example with allow list configured

Because we have configured allow lists for this job, Macie creates no findings for credit card numbers, United States SSNs, or names, as shown in Figure 6.

Figure 6: Macie job results with allow list configured

Conclusion

In this post, we walked through how to create, manage, and use Macie allow lists with your Macie jobs. Reducing Macie false-positive findings can help your security team to efficiently identify and protect sensitive data within your AWS environment.

Now that we’ve showed you how to create an allow list in Macie, you can use this feature to tailor Macie in your AWS environment, based on your use cases and workloads. After you’ve reduced the false positives in your environment, you can start looking at how to add in automation to respond to Macie findings with allow lists configured.

Try implementing the solution in this blog post for auto-remediation behavior based on finding type and finding severity. Alternatively, because Macie is automatically integrated with AWS Security Hub, you could implement this automated solution to respond to Macie findings by using Security Hub custom actions.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on threat detection and incident response. Today, he helps enterprise customers develop a comprehensive security strategy and deploy security solutions at scale, and he trains customers on AWS Security best practices.

Ajay Rawat

Ajay is a Security Consultant in a shared delivery team at AWS. He is a technology enthusiast who enjoys working with customers to solve their technical challenges and to improve their security posture in the cloud.

AWS achieves FedRAMP P-ATO for 20 services in the AWS US East/West Regions and AWS GovCloud (US) Regions

Post Syndicated from Steve Earley original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-20-services-in-the-aws-us-east-west-regions-and-aws-govcloud-us-regions/

Amazon Web Services (AWS) is pleased to announce that 20 additional AWS services have achieved Provisional Authority to Operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB). The following are the 20 AWS services with FedRAMP authorization for the U.S. federal government and organizations with regulated workloads:

  • AWS App Mesh provides application-level networking to help your services communicate with each other across multiple types of compute infrastructure.
  • AWS Audit Manager helps you to continuously audit your AWS usage to simplify how risk and compliance are assessed with regulations and industry standards.
  • AWS Chatbot is an interactive agent that helps you monitor, operate, and troubleshoot AWS workloads in your chat channels.
  • Amazon Chime SDK is a collection of client software development kits that use resources in your AWS account to add collaborative audio calling, video calling, and screen share features to your web or mobile applications.
  • AWS Cloud9 is a cloud-based integrated development environment (IDE) that helps you write, run, and debug your code with just a browser.
  • Amazon Detective helps you analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.
  • EC2 Image Builder simplifies the building, testing, and deployment of virtual machine and container images for use on AWS or on-premises.
  • Amazon FinSpace is a data management and analytics service that is purpose built for the financial services industry (FSI).
  • AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations.
  • Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts.
  • Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service.
  • Amazon Kinesis Data Analytics is a fully managed service that you can use to process and analyze streaming data using Java, SQL, or Scala.
  • Amazon Lex is an AWS service for building conversational interfaces into applications using voice and text.
  • Amazon Managed Streaming for Apache Kafka (Amazon MSK) is an AWS streaming data service that manages Apache Kafka infrastructure and operations.
  • Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ to help you set up and operate message brokers on AWS.
  • Amazon Neptune is a fast, reliable, fully managed graph database service that helps you build and run applications that work with highly connected datasets.
  • AWS Network Firewall is a managed service that helps you to deploy essential network protections for your Amazon Virtual Private Cloud (Amazon VPC).
  • Amazon Quantum Ledger Database (Amazon QLDB) is a purpose-built ledger database that provides a complete and cryptographically verifiable history of changes made to your application data.
  • AWS Resource Access Manager (AWS RAM) is designed to help you securely share resources across AWS accounts, within your organization or organizational units (OUs) in AWS Organizations, and with AWS Identity and Access Management (IAM) roles and users for supported resource types.
  • Amazon Timestream is a fast, scalable, and serverless time series database service for AWS IoT Core and operational applications that can help you to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases.

These 20 services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

Service authorizations by AWS Region

The following table shows our most recent FedRAMP service authorizations by Region and authorization level:

[Table: the authorization achieved for each of the 20 services listed above: FedRAMP Moderate in the AWS US East/West Regions, FedRAMP High in the AWS GovCloud (US) Regions, or both.]

AWS is continually expanding the scope of our compliance programs to help customers use authorized services for sensitive and regulated workloads. AWS now offers 123 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 105 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. Let us know how this post will help your mission by reaching out to your AWS Account Team. Lastly, if you have feedback about this blog post, let us know in the Comments section.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Steve Earley

Steve leads the Government Audits Team and the commercial Customer Audit Program for AWS. For over 20 years, he has led security organizations and assessed control environments in both public and private sectors as a security executive with multiple organizations. At AWS, he provides direction for AWS services and features seeking adherence to federal compliance requirements while championing for customer-centric innovation.

Whitney Peters

Whitney is a part of the U.S. Government Audits Team for AWS. For the past six years, she has guided services internally and externally through various federal compliance frameworks to achieve their Authority to Operate (ATO).

James Mueller

James is a Security Assurance Manager for AWS. For over 20 years, he has served customers in the private, public, and non-profit sectors delivering innovative information technology solutions. He currently leads security compliance efforts to drive adoption of AWS services.

How to subscribe to the new Security Hub Announcements topic for Amazon SNS

Post Syndicated from Mike Saintcross original https://aws.amazon.com/blogs/security/how-to-subscribe-to-the-new-security-hub-announcements-topic-for-amazon-sns/

With AWS Security Hub you are able to manage your security posture in AWS, perform security best practice checks, aggregate alerts, and automate remediation. Now you are able to use Amazon Simple Notification Service (Amazon SNS) to subscribe to the new Security Hub Announcements topic to receive updates about new Security Hub services and features, newly supported standards and controls, and other Security Hub changes.

Introducing the Security Hub Announcements topic

Amazon SNS follows the publish/subscribe (pub/sub) messaging model, in which notifications are delivered to you by using a push mechanism that eliminates the need for you to periodically check or poll for new information and updates. You can now use this push mechanism to receive notifications about Security Hub by subscribing to the dedicated Security Hub Announcements topic.

The Security Hub Announcements topic publishes the following types of notifications:

  • General notifications
  • Upcoming standards and controls
  • New AWS Regions supported
  • New standards and controls
  • Updated standards and controls
  • Retired standards and controls
  • Updates to the AWS Security Finding Format (ASFF)
  • New integrations
  • New features
  • Changes to existing features

How to use the Security Hub Announcements topic

You can subscribe to the SNS topic for Security Hub Announcements to receive notification messages about newly released finding types, updates to the existing finding types, and other functionality changes. By subscribing to the SNS topic, you will receive Security Hub Announcements messages as soon as they are published. The notifications are available in all protocols that Amazon SNS supports, such as email and SMS. For more information about supported protocols in Amazon SNS, see Subscribing to an Amazon SNS topic.

The Security Hub Announcements topic is available in all AWS Regions in the aws and aws-cn partitions, but is not yet available in the AWS GovCloud (US) Regions (the aws-us-gov partition). Later in this post, we’ll show you how to subscribe to the Security Hub Announcements topic in a specific AWS Region by using the topic Amazon Resource Name (ARN) for that Region. The SNS topic messages are the same across Regions in a partition, so you can choose to subscribe to only one Region in a partition to avoid receiving duplicate information.

However, if you want to invoke an AWS Lambda function in reaction to a Security Hub Announcements message, you must subscribe to the topic ARN that is in the same Region as the Lambda function. The Lambda function can receive the SNS topic message payload as an input parameter and manipulate the information in the message, publish the message to other SNS topics, or send the message to other AWS services. For more information, see Subscribing a function to a topic in the Amazon SNS Developer Guide.
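For example, the following AWS CLI commands sketch how you might subscribe a Lambda function in us-west-2 and grant the topic permission to invoke it; the function name and account ID are placeholders.

    # Subscribe the Lambda function to the announcements topic
    aws sns subscribe \
    --region us-west-2 \
    --topic-arn arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements \
    --protocol lambda \
    --notification-endpoint arn:aws:lambda:us-west-2:<account-id>:function:<your-function>
    # Allow the topic to invoke the function
    aws lambda add-permission \
    --region us-west-2 \
    --function-name <your-function> \
    --statement-id AllowSecurityHubAnnouncements \
    --action lambda:InvokeFunction \
    --principal sns.amazonaws.com \
    --source-arn arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements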

The same is true if you want to subscribe an Amazon Simple Queue Service (Amazon SQS) queue to the Security Hub Announcements topic: you must use a topic ARN that is in the same Region as the SQS queue. The SQS queue can be used to persist announcement SNS topic messages in the queue for other applications to process at a later time. For more information, see Subscribing an Amazon SQS queue to an Amazon SNS topic in the Amazon SQS Developer Guide.

IAM permissions

Your user account must have sns:Subscribe AWS Identity and Access Management (IAM) permissions to subscribe to an SNS topic. For more information on IAM permissions for Amazon SNS, see Using identity-based policies with Amazon SNS.
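The following is a minimal identity-based policy sketch that grants this permission, scoped to the Security Hub Announcements topics; adjust the Resource element for your partition as needed.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "sns:Subscribe",
          "Resource": "arn:aws:sns:*:*:SecurityHubAnnouncements"
        }
      ]
    }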

Subscribe to the Security Hub Announcements topic

The following is the list of Security Hub Announcements topic ARNs for each currently supported Region. The examples in this post use the US West (Oregon) Region (us-west-2), but you can update the procedures with one of the following ARNs to use a different supported Region.

Security Hub Announcements topic ARNs by Region

arn:aws:sns:us-east-1:088139225913:SecurityHubAnnouncements
arn:aws:sns:us-east-2:291342846459:SecurityHubAnnouncements
arn:aws:sns:us-west-1:137690824926:SecurityHubAnnouncements
arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements
arn:aws:sns:eu-central-1:871975303681:SecurityHubAnnouncements
arn:aws:sns:eu-north-1:191971010772:SecurityHubAnnouncements
arn:aws:sns:eu-south-1:151363035580:SecurityHubAnnouncements
arn:aws:sns:eu-west-1:705756202095:SecurityHubAnnouncements
arn:aws:sns:eu-west-2:883600840440:SecurityHubAnnouncements
arn:aws:sns:eu-west-3:313420042571:SecurityHubAnnouncements
arn:aws:sns:ca-central-1:137749997395:SecurityHubAnnouncements
arn:aws:sns:sa-east-1:359811883282:SecurityHubAnnouncements
arn:aws:sns:me-south-1:585146626860:SecurityHubAnnouncements
arn:aws:sns:af-south-1:463142546776:SecurityHubAnnouncements
arn:aws:sns:ap-northeast-1:592469075483:SecurityHubAnnouncements
arn:aws:sns:ap-northeast-2:374299265323:SecurityHubAnnouncements
arn:aws:sns:ap-northeast-3:633550238216:SecurityHubAnnouncements
arn:aws:sns:ap-southeast-1:512267288502:SecurityHubAnnouncements
arn:aws:sns:ap-southeast-2:475730049140:SecurityHubAnnouncements
arn:aws:sns:ap-southeast-3:627843640627:SecurityHubAnnouncements
arn:aws:sns:ap-east-1:464812404305:SecurityHubAnnouncements
arn:aws:sns:ap-south-1:707356269775:SecurityHubAnnouncements
arn:aws-cn:sns:cn-north-1:672341567257:SecurityHubAnnouncements
arn:aws-cn:sns:cn-northwest-1:672534482217:SecurityHubAnnouncements

The two procedures that follow show you how to subscribe an email address to the Security Hub Announcements topic by using the AWS Management Console and the AWS CLI.

To subscribe an email address to the Security Hub Announcements topic (console)

  1. Sign in to the Amazon SNS console.
  2. In the Region list, choose the same Region as the topic ARN to which you want to subscribe. This example uses the us-west-2 Region.
  3. In the left navigation pane, choose Subscriptions, then choose Create subscription.
  4. In the Create subscription dialog box, do the following:
    • For Topic ARN, paste the following topic ARN for the us-west-2 Region, or use one of the ARNs listed above for a different supported Region:

      arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements

    • For Protocol, choose Email.
    • For Endpoint, enter an email address that you can use to receive the notification.
  5. Choose Create subscription.
  6. In your email application, open the message from AWS Notifications and open the link to confirm your subscription. Your web browser displays a confirmation response from Amazon SNS, similar to that shown in Figure 1.

    Figure 1: SNS notification subscription confirmation

The following steps show you how to subscribe an email address to the Security Hub Announcements topic by using the AWS Command Line Interface (AWS CLI).

To subscribe an email address to the Security Hub Announcements topic (AWS CLI)

  1. Run the following command in the AWS CLI, replacing <[email protected]> with your email address, and optionally replacing the ARN and reference to us-west-2 if you want to use a different Region:
    aws sns --region us-west-2 subscribe --topic-arn arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements --protocol email --notification-endpoint <[email protected]>
  2. In your email application, open the message from AWS Notifications and open the link to confirm your subscription.
  3. Your web browser displays a confirmation response from Amazon SNS, similar to that shown in Figure 1.

Example subscription responses

The following sections contain examples of a message announcing new standard controls supported by Security Hub, for the email and sqs protocol types.

Example message from an email subscription (protocol type: email)

{"AnnouncementType":"NEW_STANDARDS_CONTROLS", “Title”:”[New Controls] 36 new Security Hub controls added to the AWS Foundational Security Best Practices standard”, "Description":"We have added 36 new controls to the AWS Foundational Security Best Practices standard. These include controls for Amazon Auto Scaling (AutoScaling.3, AutoScaling.4, AutoScaling.6), AWS CloudFormation (CloudFormation.1), Amazon CloudFront (CloudFront.10), Amazon Elastic Compute Cloud (Amazon EC2) (EC2.23, EC2.24, EC2.27), Amazon Elastic Container Registry (Amazon ECR) (ECR.1, ECR.2), Amazon Elastic Container Service (Amazon ECS) (ECS.3, ECS.4, ECS.5, ECS.8, ECS.10, ECS.12), Amazon Elastic File System (Amazon EFS) (EFS.3, EFS.4), Amazon Elastic Kubernetes Service (Amazon EKS) (EKS.2), Elastic Load Balancing (ELB.12, ELB.13, ELB.14), Amazon Kinesis (Kinesis.1), AWS Network Firewall (NetworkFirewall.3, NetworkFirewall.4, NetworkFirewall.5), Amazon OpenSearch Service (Opensearch.7), Amazon Redshift (Redshift.9), Amazon Simple Storage Service (Amazon S3) (S3.13), Amazon Simple Notification Service (SNS.2), AWF WAF (WAF.2, WAF.3, WAF.4, WAF.6, WAF.7, WAF.8). If you enabled the AWS Foundational Security Best Practices standard in an account and configured Security Hub to automatically enable new controls, these controls are enabled by default. Availability of controls can vary by Region."}

Example message from an SQS queue subscription (protocol type: sqs)

The following message shows the additional metadata included with an SQS subscription to the Security Hub Announcements topic. For more information about the metadata included in an SNS topic message delivered to an SQS queue, see Fanout to Amazon SQS Queues.

{
  "Type" : "Notification",
  "MessageId" : "c9c03e46-69df-5c3c-84e9-6520708ac394",
  "TopicArn" : "arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements",
  "Message" : "{\"AnnouncementType\":\"NEW_STANDARDS_CONTROLS\",\"Title\":\"[New Controls] 36 new Security Hub controls added to the AWS Foundational Security Best Practices standard\",\"Description\":\"We have added 36 new controls to the AWS Foundational Security Best Practices standard. These include controls for Amazon Auto Scaling (AutoScaling.3, AutoScaling.4, AutoScaling.6), AWS CloudFormation (CloudFormation.1), Amazon CloudFront (CloudFront.10), Amazon Elastic Compute Cloud (Amazon EC2) (EC2.23, EC2.24, EC2.27), Amazon Elastic Container Registry (Amazon ECR) (ECR.1, ECR.2), Amazon Elastic Container Service (Amazon ECS) (ECS.3, ECS.4, ECS.5, ECS.8, ECS.10, ECS.12), Amazon Elastic File System (Amazon EFS) (EFS.3, EFS.4), Amazon Elastic Kubernetes Service (Amazon EKS) (EKS.2), Elastic Load Balancing (ELB.12, ELB.13, ELB.14), Amazon Kinesis (Kinesis.1), AWS Network Firewall (NetworkFirewall.3, NetworkFirewall.4, NetworkFirewall.5), Amazon OpenSearch Service (Opensearch.7), Amazon Redshift (Redshift.9), Amazon Simple Storage Service (Amazon S3) (S3.13), Amazon Simple Notification Service (SNS.2), AWF WAF (WAF.2, WAF.3, WAF.4, WAF.6, WAF.7, WAF.8). If you enabled the AWS Foundational Security Best Practices standard in an account and configured Security Hub to automatically enable new controls, these controls are enabled by default. Availability of controls can vary by Region. \"}",
  "Timestamp" : "2022-08-04T18:59:33.319Z",
  "SignatureVersion" : "1",
  "Signature" : "GdKokPEUexpKZn5da5u/p5eZF1cE3JUyL0uPVKmPnDzd3orkk5jJ211VsOflUFi6V9lSXF/V6RBpQN/9f3+JBFBprng7BRQwT9I4jSa1xOn1L3xKXEVGvWI6nl1oDqBl21Pj3owV+NZ+Exd2W0dpgg8B1LG4bYq5T73MjHjWGtelcBa15TpIz/+rynqanXCKCvc/50V/XZLjA5M7gU6Dzs9CULIjkdEpCsw5FvSxbtkEd6Ktx4LH7Zq6FlPKNli3EaEHRKh9uYPo6sR/yvF4RWg3E9O4dVsK7A8uTdR+pwVCU1M601KMRxO1OWF8VIdvyPINJND8Nu/70GRA2L+MRA==",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-56e67fcb41f6fec09b0196692625d385.pem",
  "UnsubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements:1eb29a83-8726-4366-891c-293ad5e35a53"
}

Note: You need to set up the SQS access policy in order for SNS to push messages to the SQS queue. For more information, see Basic examples of Amazon SQS policies.
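The following is a sketch of such a queue access policy for the us-west-2 topic; the queue ARN is a placeholder.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "sns.amazonaws.com" },
          "Action": "sqs:SendMessage",
          "Resource": "<your-queue-ARN>",
          "Condition": {
            "ArnEquals": {
              "aws:SourceArn": "arn:aws:sns:us-west-2:393883065485:SecurityHubAnnouncements"
            }
          }
        }
      ]
    }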

Available now

The SNS topic for Security Hub Announcements is available today in the Regions described in this post. Subscribe now to stay informed of Security Hub updates. With Amazon SNS, there is no minimum fee, and you pay only for what you use. For more information, see the Amazon SNS pricing page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support. You can also start a new thread on AWS Security Hub re:Post to get answers from the community.

Want more AWS Security news? Follow us on Twitter.

Mike Saintcross

Mike Saintcross is a Security Consultant at AWS helping enterprise customers achieve their cloud security goals in an ever-changing threat landscape. His background is in Security Engineering with a focus on deep packet inspection, incident response, security orchestration, and automation.

Neha Joshi

Neha Joshi is a Senior Solutions Architect at AWS. She loves to brainstorm and develop solutions to help customers be successful on AWS. Outside of work, she enjoys hiking and audio books.

AWS announces migration plans for NIST 800-53 Revision 5

Post Syndicated from James Mueller original https://aws.amazon.com/blogs/security/aws-announces-migration-plans-for-nist-800-53-revision-5/

Amazon Web Services (AWS) is excited to begin migration plans for National Institute of Standards and Technology (NIST) 800-53 Revision 5.

The NIST 800-53 framework is a regulatory standard that defines the minimum baseline of security controls for U.S. federal information systems. In 2020, NIST released Revision 5 of the framework to improve security standards for industry partners and government agencies. The set of NIST 800-53 controls provides a foundation for additional laws and regulations within the U.S. government.

The Federal Information Security Modernization Act (FISMA) of 2014 is a law that requires federal agencies and contractors to meet information security standards. The Federal Risk and Authorization Management Program (FedRAMP) is a federal government program that provides a standardized approach to security assessment, authorization, and continuous monitoring of cloud services. Both FISMA and FedRAMP rely on the NIST 800-53 framework.

NIST 800-53 Revision 5

AWS meets the NIST 800-53 Revision 4 regulatory standards mandated by government authorities. NIST added numerous security enhancements, such as privacy and supply chain management, to Revision 5 to keep abreast of emerging threats to federal information systems.

In preparation for federal regulators to accept NIST 800-53 Revision 5 as the new requirement standard, AWS has begun efforts to adapt to the new security controls, processes, and procedures. AWS security compliance teams have analyzed the new requirements and launched a project to implement the updates. Although AWS is not required to migrate to the new Revision 5 standard until NIST announces the official regulatory compliance deadline, we are already taking steps to meet the deadline.

To learn more about AWS compliance programs, see the AWS Compliance Programs page.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

James Mueller

James is a Security Assurance Manager for AWS. For over 20 years, he has served customers in the private, public, and non-profit sectors delivering innovative information technology solutions. He currently leads security compliance efforts to drive adoption of AWS services.

How to deploy AWS Network Firewall by using AWS Firewall Manager

Post Syndicated from Harith Gaddamanugu original https://aws.amazon.com/blogs/security/how-to-deploy-aws-network-firewall-by-using-aws-firewall-manager/

AWS Network Firewall makes it easier for you to secure virtual networks at scale inside Amazon Web Services (AWS). Without having to worry about availability, scalability, or network performance, you can now deploy Network Firewall with the AWS Firewall Manager service. Firewall Manager allows administrators in your organization to apply network firewalls across accounts. This post takes you through the different deployment models and provides step-by-step instructions for each.

Here’s a quick overview of the services used in this blog post:

  • Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated virtual network. It has inbuilt network security controls and routing between VPC subnets by design. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
  • AWS Transit Gateway is a service that connects your VPCs to each other, to on-premises networks, to virtual private networks (VPNs), and to the internet through a central hub.
  • AWS Network Firewall is a service that secures network traffic at the organization and account levels. AWS Network Firewall policies govern the monitoring and protection behavior of these firewalls. The specifics of these policies are defined in rule groups. A rule group consists of rules that define reusable criteria for inspecting and processing network traffic. Network Firewall can support thousands of rules that can be based on a domain, port, protocol, IP address, or pattern matching (see the example rule group after this list).
  • AWS Firewall Manager is a security management service that acts as a central place for you to configure and deploy firewall rules across AWS Regions, accounts, and resources in AWS Organizations. Firewall Manager helps you to ensure that all firewall rules are consistently enforced, even as new accounts and resources are created. Firewall Manager integrates with AWS Network Firewall, Amazon Route 53 Resolver DNS Firewall, AWS WAF, AWS Shield Advanced, and Amazon VPC security groups.
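As referenced in the AWS Network Firewall bullet above, the following AWS CLI command sketches one possible rule group: a stateful domain list that denies outbound traffic to a single domain. The rule group name and domain are placeholders, and the capacity value is illustrative.

    aws network-firewall create-rule-group \
    --rule-group-name <example-domain-denylist> \
    --type STATEFUL \
    --capacity 100 \
    --rule-group '{"RulesSource":{"RulesSourceList":{"Targets":[".example.com"],"TargetTypes":["TLS_SNI","HTTP_HOST"],"GeneratedRulesType":"DENYLIST"}}}'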

Deployment models overview

When it comes to securing multiple AWS accounts, security teams categorize firewall deployment into centralized or distributed deployment models. Firewall Manager supports Network Firewall deployment in both modes. There are multiple additional deployment models available with Network Firewall. For more information about these models, see the blog post Deployment models for AWS Network Firewall.

Centralized deployment model

Network Firewall can be centrally deployed as an Amazon VPC attachment to a transit gateway that you set up with AWS Transit Gateway. Transit Gateway acts as a network hub and simplifies the connectivity between VPCs as well as on-premises networks. Transit Gateway also provides inter-Region peering capabilities to other transit gateways to establish a global network by using the AWS backbone. In a centralized transit gateway model, Firewall Manager can create one or more firewall endpoints for each Availability Zone within an inspection VPC. Network Firewall deployed in a centralized model covers the following use cases:

  • Filtering and inspecting traffic within a VPC or in transit between VPCs, also known as east-west traffic.
  • Filtering and inspecting ingress and egress traffic to and from the internet or on-premises networks, also known as north-south traffic.

Distributed deployment model

With the distributed deployment model, Firewall Manager creates endpoints into each VPC that requires protection. Each VPC is protected individually and VPC traffic isolation is retained. You can either customize the endpoint location by specifying which Availability Zones to create firewall endpoints in, or Firewall Manager can automatically create endpoints in those Availability Zones that have public subnets. Each VPC does not require connectivity to any other VPC or transit gateway. Network Firewall configured in a distributed model addresses the following use cases:

  • Protect traffic between a workload in a public subnet (for example, an EC2 instance) and the internet. Note that the only recommended workloads that should have a network interface in a public subnet are third-party firewalls, load balancers, and so on.
  • Protect and filter traffic between an AWS resource (for example Application Load Balancers or Network Load Balancers) in a public subnet and the internet.

Deploying Network Firewall in a centralized model with Firewall Manager

The following steps provide a high-level overview of how to configure Network Firewall with Firewall Manager in a centralized model, as shown in Figure 1.

Overview of how to configure a centralized model

  1. Complete the steps described in the AWS Firewall Manager prerequisites.
  2. Create an Inspection VPC in each Firewall Manager member account. Firewall Manager will use these VPCs to create firewalls. Follow the steps to create a VPC.
  3. Create the stateless and stateful rule groups that you want to centrally deploy as an administrator. For more information, see Rule groups in AWS Network Firewall.
  4. Build and deploy Firewall Manager policies for Network Firewall, based on the rule groups you defined previously. Firewall Manager will now create firewalls across these accounts.
  5. Finish deployment by updating the related VPC route tables in the member account, so that traffic gets routed through the firewall for inspection.
    Figure 1: Network Firewall centralized deployment model

The following steps provide a detailed description of how to configure Network Firewall with Firewall Manager in a centralized model.

To deploy network firewall policy centrally with Firewall Manager (console)

  1. Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  3. On the Filter menu, select the AWS Region where your application is hosted, and choose Create policy. In this example, we choose US East (N. Virginia).
  4. As shown in Figure 2, under Policy details, choose the following:
    1. For AWS services, choose AWS Network Firewall.
    2. For Deployment model, choose Centralized.
      Figure 2: Network Firewall Manager policy type and Region for centralized deployment

  5. Choose Next.
  6. Enter a policy name.
  7. In the AWS Network Firewall policy configuration pane, you can choose to configure both stateless and stateful rule groups along with their logging configurations. In this example, we are not creating any rule groups and keep the default configurations, as shown in Figure 3. If you would like to add a rule group, you can create rule groups here and add them to the policy.
    Figure 3: AWS Network Firewall policy configuration

  8. Choose Next.
  9. For Inspection VPC configuration, select the account and add the VPC ID of the inspection VPC in each of the member accounts that you previously created, as shown in Figure 4. In the centralized model, you can only select one VPC under a specific account as the inspection VPC.
    Figure 4: Inspection VPC configuration

  10. For Availability Zones, select the Availability Zones in which you want to create the Network Firewall endpoint(s), as shown in Figure 5. You can select by Availability Zone name or Availability Zone ID. Optionally, if you want to specify the CIDR for each Availability Zone, or specify the subnets for firewall subnets, then you can add the CIDR blocks. If you don’t provide CIDR blocks, Firewall Manager queries your VPCs for available IP addresses to use. If you provide a list of CIDR blocks, Firewall Manager searches for new subnets only in the CIDR blocks that you provide.
    Figure 5: Network Firewall endpoint Availability Zones configuration

  11. Choose Next.
  12. For Policy scope, choose VPC, as shown in Figure 6.
    Figure 6: Firewall Manager policy scope configuration

  13. For Resource cleanup, choose Automatically remove protections from resources that leave the policy scope. When you select this option, Firewall Manager will automatically remove Firewall Manager managed protections from your resources when a member account or a resource leaves the policy scope. Choose Next.
  14. For Policy tags, you don’t need to add any tags. Choose Next.
  15. Review the security policy, and then choose Create policy.
  16. To route traffic for inspection, you manually update the route configuration in the member accounts. Exactly how you do this depends on your architecture and the traffic that you want to filter. For more information, see Route table configurations for AWS Network Firewall, and see the example route command after the following note.

Note: In current versions of Firewall Manager, a centralized policy supports only one inspection VPC per account. If you want multiple inspection VPCs in an account, you cannot deploy all of them through a Firewall Manager centralized policy; you have to deploy the network firewalls in the additional inspection VPCs manually.
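For example, to send all outbound traffic from a spoke subnet through a firewall endpoint, a route entry might look like the following; the route table ID and firewall VPC endpoint ID are placeholders, and the exact routes you need depend on your architecture.

    aws ec2 create-route \
    --route-table-id <rtb-id> \
    --destination-cidr-block 0.0.0.0/0 \
    --vpc-endpoint-id <firewall-vpce-id>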

Deploying Network Firewall in a distributed model with Firewall Manager

The following steps provide a high-level overview of how to configure Network Firewall with Firewall Manager in a distributed model, as shown in Figure 7.

Overview of how to configure a distributed model

  1. Complete the steps described in the AWS Firewall Manager prerequisites.
  2. Create a new VPC with a desired tag in each Firewall Manager member account. Firewall Manager uses these VPC tags to create network firewalls in tagged VPCs. Follow these steps to create a VPC.
  3. Create the stateless and stateful rule groups that you want to centrally deploy as an administrator. For more information, see Rule groups in AWS Network Firewall.
  4. Build and deploy Firewall Manager policy for network firewalls into tagged VPCs based on the rule groups that you defined in the previous step.
  5. Finish deployment by updating the related VPC route tables in the member accounts to begin routing traffic through the firewall for inspection.
    Figure 7: Network Firewall distributed deployment model

The following steps provide a detailed description of how to configure Network Firewall with Firewall Manager in a distributed model.

To deploy Network Firewall policy distributed with Firewall Manager (console)

  1. Create new VPCs in member accounts and tag them. In this example, you launch VPCs in the US East (N. Virginia) Region. Create a new VPC in a member account by using the VPC wizard, as follows.
    1. Choose VPC with a Single Public Subnet. For this example, select a subnet in the us-east-1a Availability Zone.
    2. Add a desired tag to this VPC. For this example, use the key Network Firewall and the value yes. Make note of this tag key and value, because you will need this tag to configure the policy in the Policy scope step.
  2. Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
  3. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  4. On the Filter menu, select the AWS Region where you created VPCs previously and choose Create policy. In this example, you choose US East (N. Virginia).
    1. For AWS services, choose AWS Network Firewall.
    2. For Deployment model, choose Distributed, and then choose Next.
      Figure 8: Network Firewall Manager policy type and Region for distributed deployment

  5. Enter a policy name.
  6. On the AWS Network Firewall policy configuration page, you can configure both stateless and stateful rule groups, along with their logging configurations. In this example you are not creating any rule groups, so you choose the default configurations, as shown in Figure 9. If you would like to add a rule group, you can create rule groups here and add them to the policy.
    Figure 9: Network Firewall policy configuration

  7. Choose Next.
  8. In the Configure AWS Network Firewall Endpoint section, as shown in Figure 10, you can choose Custom endpoint configuration or Automatic endpoint configuration. In this example, you choose Custom endpoint configuration and select the us-east-1a Availability Zone. Optionally, if you want to specify the CIDR for each Availability Zone or specify the subnets for firewall subnets, then you can add the CIDR blocks. If you don’t provide CIDR blocks, Firewall Manager queries your VPCs for available IP addresses to use. If you provide a list of CIDR blocks, Firewall Manager searches for new subnets only in the CIDR blocks that you provide.
    Figure 10: Network Firewall endpoint Availability Zones configuration

  9. Choose Next.
  10. For AWS Network Firewall route configuration, choose the following options, as shown in Figure 11. These settings configure Firewall Manager to monitor the route configuration from the administrator account, to help ensure that traffic is routed as expected through the network firewalls.
    1. For Route management, choose Monitor.
    2. Under Traffic type, for Internet gateway, choose Add to firewall policy.
    3. Select the checkbox for Allow required cross-AZ traffic, and then choose Next.
      Figure 11: Network Firewall route management configuration

  11. For Policy scope, select the following options to create network firewalls in previously tagged VPCs, as shown in Figure 12.
    1. For AWS accounts this policy applies to, choose All accounts under my AWS organization.
    2. For Resource type, choose VPC.
    3. For Resources, choose Include only resources that have the specified tags.
    4. For Key, enter Network Firewall. For Value, enter yes. The tag you are using here is the same tag that you defined in step 1.
      Figure 12: AWS Firewall Manager policy scope configuration

      Important: Be careful when defining the policy scope. Each policy creates Network Firewall endpoints in all the VPCs and their Availability Zones that are within the policy scope. If you select an inappropriate scope, it could result in the creation of a large number of network firewalls and incur significant charges for AWS Network Firewall.

  12. For Resource cleanup, select the Automatically remove protections from resources that leave the policy scope check box, and then choose Next.
    Figure 13: Firewall Manager Resource cleanup configuration

  13. For Policy tags, you don’t need to add any tags. Choose Next.
  14. Review the security policy, and then choose Create policy.
  15. To route traffic for inspection, you need to manually update the route configuration in the member accounts. Exactly how you do this depends on your architecture and the traffic that you want to filter. For more information, see Route table configurations for AWS Network Firewall.

Clean up

To avoid incurring future charges, delete the resources you created for this solution.

To delete Firewall Manager policy (console)

  1. Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
  2. In the navigation pane, choose Security policies.
  3. Choose the option next to the policy that you want to delete.
  4. Choose Delete all policy resources, and then choose Delete. If you do not select Delete all policy resources, then only the firewall policy on the administrator account will be deleted, not network firewalls deployed in the other accounts in AWS Organizations.

To delete the VPCs you created as prerequisites

Follow the steps in the Amazon VPC documentation to delete the VPCs that you created for this solution.

Conclusion

In this blog post, you learned how you can use either a centralized or a distributed deployment model for Network Firewall, so developers in your organization can build firewall rules, create security policies, and enforce them in a consistent, hierarchical manner across your entire infrastructure. As new applications are created, Firewall Manager makes it easier to bring new applications and resources into a consistent state by enforcing a common set of security rules.

For information about pricing, see the pages for AWS Firewall Manager pricing and AWS Network Firewall pricing. For more information, see the other AWS Network Firewall posts on the AWS Security Blog. Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Harith Gaddamanugu

Harith works at AWS as a Sr. Edge Specialist Solutions Architect. He stays motivated by solving problems for customers across AWS Perimeter Protection and Edge services. When he is not working, he enjoys spending time outdoors with friends and family.

Yang Liu

Yang works as cloud support engineer II with AWS. On a daily basis, he provides solutions for customers’ cloud architecture questions related to networking infrastructure and the security domain. Outside of work, Yang loves traveling with his family and two Corgis, Cookie and Cache.

How to centralize findings and automate deletion for unused IAM roles

Post Syndicated from Hong Pham original https://aws.amazon.com/blogs/security/how-to-centralize-findings-and-automate-deletion-for-unused-iam-roles/

Maintaining AWS Identity and Access Management (IAM) resources is similar to keeping your garden healthy over time. Having visibility into your IAM resources, especially the resources that are no longer used, is important to keep your AWS environment secure. Proactively detecting and responding to unused IAM roles helps you prevent unauthorized entities from gaining access to your AWS resources. In this post, I will show you how to apply resource tags on IAM roles and deploy serverless technologies on AWS to detect unused IAM roles and to require the owner of the IAM role (identified through tags) to take action.

You can use this solution to check for unused IAM roles in a standalone AWS account. As you grow your workloads in the cloud, you can run this solution for multiple AWS accounts by using AWS Organizations. In this solution, you use AWS Control Tower to create an AWS Organizations organization with a Security organizational unit (OU), and a Security account in this OU. In this blog post, you deploy the solution in the Security account belonging to a Security OU of an organization.

For more information and recommended best practices, see the blog post Managing the multi-account environment using AWS Organizations and AWS Control Tower. Following this best practice, you can create a Security OU, in which you provision one or more Security and Audit accounts that are dedicated for security automation and audit activities on behalf of the entire organization.

Solution architecture

The architecture diagram in Figure 1 demonstrates the solution workflow.

Figure 1: Solution workflow for standalone account or member account of an AWS Organization.

The solution is triggered periodically by an Amazon EventBridge scheduled rule and invokes a series of actions. You specify the frequency (in number of days) when you create the EventBridge rule. There are two options to run this solution, based on the needs of your organization.

Option 1: For a standalone account

Choose this option if you would like to check for unused IAM roles in a single AWS account. This AWS account might or might not belong to an organization or OU. In this blog post, I refer to this account as the standalone account.

Prerequisites

  1. You need an AWS account specifically for security automation. For this blog post, I refer to this account as the standalone Security account.
  2. You should deploy the solution to the standalone Security account, which has appropriate admin permission to audit other accounts and manage security automation.
  3. Because this solution uses AWS CloudFormation StackSets, you need to grant self-managed permissions to create stack sets in standalone accounts. Specifically, you need to establish a trust relationship between the standalone Security account and the standalone account by creating the AWSCloudFormationStackSetAdministrationRole IAM role in the standalone Security account, and the AWSCloudFormationStackSetExecutionRole IAM role in the standalone account.
  4. You need to have AWS Security Hub enabled in your standalone Security account, and you need to deploy the solution in the same AWS Region as your Security Hub dashboard.
  5. You need tagging enforcement in place for IAM roles. This solution uses the IAM tag key Owner to identify the email address of the owner. The value of this tag key should be the email address associated with the owner of the IAM role (see the example tag-role command after this list). If the Owner tag isn't available, the notification email is sent to the email address that you provided in the parameter ITSecurityEmail when you provisioned the CloudFormation stack.
  6. This solution uses Amazon Simple Email Service (Amazon SES) to send emails to the owner of the IAM roles. The destination address needs to be verified with Amazon SES. With Amazon SES, you can verify identity at the individual email address or at the domain level.

An EventBridge rule triggers the AWS Lambda function LambdaCheckIAMRole in the standalone Security account. The LambdaCheckIAMRole function assumes a role in the standalone account. This role is named after the CloudFormation stack name that you specify when you provision the solution. Then LambdaCheckIAMRole calls the IAM API action GetAccountAuthorizationDetails to get the list of IAM roles in the standalone account, and parses the data type RoleLastUsed to retrieve the date, time, and the Region in which the roles were last used. If the last used value is not available, the IAM role is skipped. Based on the CloudFormation parameter MaxDaysForLastUsed that you provide, LambdaCheckIAMRole determines whether the time since the role was last used exceeds the MaxDaysForLastUsed value. LambdaCheckIAMRole also extracts the tags associated with the IAM roles, and retrieves the email address of the IAM role owner from the value of the tag key Owner. If there is no Owner tag, then LambdaCheckIAMRole sends an email to a default email address that you provided in the CloudFormation parameter ITSecurityEmail.
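You can inspect the same RoleLastUsed data that LambdaCheckIAMRole evaluates by querying a single role from the AWS CLI; the role name is a placeholder.

    aws iam get-role \
    --role-name <role-name> \
    --query 'Role.RoleLastUsed'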

Option 2: For all member accounts that belong to an organization or an OU

Choose this option if you want to check for unused IAM roles in every member account that belongs to an AWS Organizations organization or OU.

Prerequisites

  1. You need to have an AWS Organizations organization with a dedicated Security account that belongs to a Security OU. For this blog post, I refer to this account as the Security account.
  2. You should deploy the solution to the Security account that has appropriate admin permission to audit other accounts and to manage security automation.
  3. Because this solution uses CloudFormation StackSets to create stack sets in member accounts of the organization or OU that you specify, the Security account in the Security OU needs to be granted CloudFormation delegated admin permission to create AWS resources in this solution.
  4. You need Security Hub enabled in your Security account, and you need to deploy the solution in the same Region as your Security Hub dashboard.
  5. You need tagging enforcement in place for IAM roles. This solution uses the IAM tag key Owner to identify the owner email address. The value of this tag key should be the email address associated with the owner of the IAM role. If the Owner tag isn’t available, the notification email will be sent to the email address that you provided in the parameter ITSecurityEmail when you provisioned the CloudFormation stack.
  6. This solution uses Amazon SES to send emails to the owner of the IAM roles. The destination address needs to be verified with Amazon SES. With Amazon SES, you can verify identity at the individual email address or at the domain level.

An EventBridge rule triggers the Lambda function LambdaGetAccounts in the Security account to collect the account IDs of member accounts that belong to the organization or OU. LambdaGetAccounts sends those account IDs to an SNS topic, which invokes the Lambda function LambdaCheckIAMRole once for each account ID.

Similar to the process for Option 1, LambdaCheckIAMRole in the Security account assumes a role in the member account(s) of the organization or OU, and checks the last time that IAM roles in the account were used.

In both options, if an IAM role is not currently used, the function LambdaCheckIAMRole generates a Security Hub finding, and calls BatchImportFindings to import all findings into Security Hub in the Security account. At the same time, the Lambda function starts an AWS Step Functions state machine execution. Each execution is for an unused IAM role and follows this naming convention:
[target-account-id]-[unused IAM role name]-[time the execution created in Unix format]

You should avoid running this solution against special IAM roles, such as a break-glass role or a disaster recovery role. In the CloudFormation parameter RolePatternAllowedlist, you can provide a list of role name patterns to skip the check.

Use a Step Functions state machine to process approval

Figure 2 shows the state machine workflow for owner approval.

Figure 2: Owner approval state machine workflow

After the solution identifies an unused IAM role, it creates a Step Functions state machine execution. Figure 2 demonstrates the workflow of the execution. After the execution starts, the first Lambda task NotifyOwner (powered by the Lambda function NotifyOwnerFunction) sends an email to notify the IAM role owner. This is a callback task that pauses the execution until a taskToken is returned. The maximum pause for a callback task is 1 year. The execution waits until the owner responds with a decision to delete or keep the role, which is captured by a private API endpoint in Amazon API Gateway. You can configure a timeout to avoid waiting for callback task execution.
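When the owner responds, the API integration resumes the paused execution by returning the taskToken. Conceptually, the callback looks like the following CLI call; the decision payload shown here is a hypothetical shape, not the solution's actual schema.

    aws stepfunctions send-task-success \
    --task-token <task-token-from-notification-email> \
    --task-output '{"decision": "delete"}'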

With a private API endpoint, you can build a REST API that is only accessible within your Amazon Virtual Private Cloud (Amazon VPC), or within your internal network connected to your VPC. Using a private API endpoint will prevent anyone from outside of your internal network from selecting this link and deleting the role. You can implement authentication and authorization with API Gateway to make sure that only the appropriate owner can delete a role.

If the owner denies role deletion, the role remains intact until the next automation cycle runs, and the state machine execution stops immediately with a Fail status. If the owner approves role deletion, the next Lambda task Approve (powered by the function ApproveFunction) checks again that the role is still unused. If the role isn't in use, the Lambda task Approve attaches the IAM policy DenyAllCheckUnusedIAMRoleSolution to the role to deny all actions, and waits for 30 days. During this wait time, you can restore the IAM role by removing the IAM policy DenyAllCheckUnusedIAMRoleSolution from the role. The Step Functions state machine execution for this role remains in progress until the wait time expires.

After the wait time expires, the state machine execution invokes the Validate task. The Lambda function ValidateFunction checks that the role has remained unused for the period calculated by adding MaxDaysForLastUsed and the preceding wait time, and that the IAM policy DenyAllCheckUnusedIAMRoleSolution is still attached to the role. If both conditions are true, the Lambda function detaches the IAM policies and deletes the role permanently. The role can't be recovered after deletion.

Note: To restore a role that has been marked for deletion, detach the DenyAll IAM policy from the role.
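In practice, the restore is a single detach call. For example (the account ID and role name below are placeholders):

import boto3

iam_client = boto3.client('iam')

# Detach the deny policy to restore a role during the wait period
iam_client.detach_role_policy(
    RoleName='example-unused-role',
    PolicyArn='arn:aws:iam::111122223333:policy/DenyAllCheckUnusedIAMRoleSolution',
)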

To deploy the solution using the AWS CLI

  1. Clone the git repository from AWS Samples to get the source code and CloudFormation templates.
    git clone https://github.com/aws-samples/aws-blog-automate-iam-role-deletion
    cd aws-blog-automate-iam-role-deletion

  2. Run the following AWS CLI commands to upload the CloudFormation templates and Lambda code to an S3 bucket in the Security account. The S3 bucket needs to be in the same Region where you will deploy the solution.
    • To deploy the solution for a single account, use the following commands. Be sure to replace <YOUR_BUCKET_NAME> and <PATH_TO_UPLOAD_CODE> with your own values.
      #Package solution for a single target AWS Account
      aws cloudformation package \
      --template-file solution_scope_account.yml \
      --s3-bucket <YOUR_BUCKET_NAME> \
      --s3-prefix <PATH_TO_UPLOAD_CODE> \
      --output-template-file solution_scope_account.template

    • To deploy the solution for an organization or OU, use the following commands. Be sure to replace <YOUR_BUCKET_NAME> and <PATH_TO_UPLOAD_CODE> with your own values.
      #Package solution for an Organization/OU
      aws cloudformation package \
      --template-file solution_scope_organization.yml \
      --s3-bucket <YOUR_BUCKET_NAME> \
      --s3-prefix <PATH_TO_UPLOAD_CODE> \
      --output-template-file solution_scope_organization.template

  3. Validate the templates generated by the CloudFormation package command.
    • To validate the solution for a single account, use the following command.
      #Validate template for a single target AWS Account
      aws cloudformation validate-template --template-body file://solution_scope_account.template

    • To validate the solution for an organization or OU, use the following command.
      #Validate template for an Organization/OU
      aws cloudformation validate-template --template-body file://solution_scope_organization.template

  4. Deploy the solution in the same Region that you use for Security Hub. The stack takes approximately 30 minutes to deploy.
    • To deploy the solution for a single account, use the following commands. Be sure to replace all of the placeholders with your own values.
      #Deploy solution for a single target AWS Account
      aws cloudformation deploy \
      --template-file solution_scope_account.template \
      --stack-name <UNIQUE_STACK_NAME> \
      --region <REGION> \
      --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
      --parameter-overrides AccountId='<STANDALONE ACCOUNT ID>' \
      Frequency=<DAYS> MaxDaysForLastUsed=<DAYS> \
      ITSecurityEmail='<YOUR IT TEAM EMAIL>' \
      RolePatternAllowedlist='<ALLOWED PATTERN>'

    • To deploy the solution for an organization, run the following command to create a CloudFormation stack in the Security account of the organization.
      #Deploy solution for an Organization
      aws cloudformation deploy \
      --template-file solution_scope_organization.template \
      --stack-name <UNIQUE_STACK_NAME> \
      --region <REGION> \
      --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
      --parameter-overrides Scope=Organization \
      OrganizationId='<o-12345abcde>' \
      OrgRootId='<r-1234>'  \
      Frequency=<DAYS> MaxDaysForLastUsed=<DAYS> \
      ITSecurityEmail='<[email protected]>' \
      RolePatternAllowedlist='<ALLOWED PATTERN>'

    • To deploy the solution for an OU, run the following command to create a CloudFormation stack in the Security account of the organization.
      #Deploy solution for an OU
      aws cloudformation deploy \
      --template-file solution_scope_organization.template \
      --stack-name <UNIQUE_STACK_NAME> \
      --region <REGION> \
      --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
      --parameter-overrides Scope=OrganizationalUnit \
      OrganizationId='<o-12345abcde>' \
      OrganizationalUnitId='<ou-1234-1234abcd>'  \
      Frequency=<DAYS> MaxDaysForLastUsed=<DAYS> \
      ITSecurityEmail='<[email protected]>' \
      RolePatternAllowedlist='<ALLOWED PATTERN>'

Test the solution

The solution is triggered by an EventBridge scheduled rule, so it doesn’t perform the checks immediately. To test the solution right away after the CloudFormation stacks are successfully created, follow these steps.

To manually trigger the automation for a single account

  1. Navigate to the AWS Lambda console and choose the function
    [CloudFormation stackname]-LambdaCheckIAMRole.
  2. Choose Test.
  3. Choose New event.
  4. For Name, enter a name for the event. For Event JSON, provide the current time in UTC Date Time format YYYY-MM-DDTHH:MM:SSZ. For example, {"time": "2022-01-22T04:36:52Z"}. The Lambda function uses this value to calculate how much time has passed since the last time that a role was used. Figure 5 shows an example of configuring a test event.
    Figure 5: Configure test event for standalone account

  5. Choose Test.

To manually trigger the automation for an organization or OU

  1. Choose the function
    [CloudFormation stackname]-LambdaGetAccounts.
  2. Choose Test.
  3. Choose New event.
  4. For Name, enter a name for the event. Leave the default values for the remaining fields.
  5. Choose Test.
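Alternatively, you can trigger the single-account check from a script instead of the console. The following sketch invokes the function with the current UTC time, mirroring the console test event described above; replace the function name placeholder with the name from your stack.

import json
from datetime import datetime, timezone

import boto3

lambda_client = boto3.client('lambda')

# Invoke the check with the current UTC time, like the console test event
response = lambda_client.invoke(
    FunctionName='<CloudFormation stackname>-LambdaCheckIAMRole',
    Payload=json.dumps(
        {'time': datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}
    ),
)
print(response['StatusCode'])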

Respond to unused IAM roles

After you’ve triggered the Lambda function, the automation runs the necessary checks. For each unused IAM role, it creates a Step Functions state machine execution.

To see the list of Step Functions state machine executions

  1. Navigate to the AWS Step Functions console.
  2. Choose the state machine [CloudFormation stackname]OwnerApprovalStateMachine.
  3. On the Executions tab, you will see the list of executions in the Running state, following this naming convention: [target-account-id]-[unused IAM role name]-[time the execution was created, in Unix format]. Figure 6 shows an example list of executions.
    Figure 6: Each unused IAM role generates an execution in the Step Functions state machine

Each execution sends out an email notification to the IAM role owner (if available through the Owner tag) or to the IT security email address that you provided in the CloudFormation stack parameter ITSecurityEmail. The email content is:

Subject: Please take action on this unused IAM Role
 
Hello!
 
This IAM Role arn:aws:iam::<AWS account>:role/<role name> is not in use for
more than 60 days.
 
Can you please delete the role by following this link: Approve link
 
Or keep this role by following this link: Deny Link

In the email, the Approve link and Deny link are hyperlinks to a private API endpoint that takes a taskToken parameter. If you try to access these links publicly, they won't work. When you access the link, the taskToken is provided to the private API endpoint, which updates the Step Functions state machine.

To test the approval action using an API Gateway test

  1. Navigate to the AWS Step Functions console. Under State machines, choose the state machine that has the name [CloudFormation stackname]OwnerApprovalStateMachine.
  2. On the Executions tab, there is a list of executions. Each execution represents a workflow for one IAM role, as shown in Figure 6. Choose the execution name that includes the IAM role name in the email that you received earlier.
  3. Scroll down to Execution event history.
  4. Expand the Notify Owner step, locate the TaskScheduled event, find the taskToken item, and copy its value to a notepad, as shown in Figure 7.
    Figure 7: Retrieve taskToken from execution

  5. Navigate to the API Gateway console.
  6. Choose the API that has a name similar to [CloudFormation stackname]-PrivateAPIGW-[unique string]-ApprovalEndpoint.
  7. Choose which action to test: Deny or Approve.
    • To test the Deny action, under /deny resource, choose the GET method.
    • To test the Approve action, under /approve resource, choose the GET method.
  8. Choose Test.
  9. Under Query Strings, enter taskToken= followed by the taskToken value that you copied earlier from the state machine execution. Figure 8 shows how to pass the taskToken to API Gateway.
    Figure 8: Provide taskToken to API Gateway Method

  10. Choose Test. After you test, the state machine resumes the workflow and finishes the automation. You won’t be able to change the action.
  11. Navigate to the AWS Step Functions console. Choose the state machine and go to the state machine execution.
    1. If you choose to deny the role deletion, the execution immediately stops as Fail.
    2. If you choose to approve the role deletion, the execution moves to the Wait task. This task removes the IAM policies associated with the role and waits for a period of time before moving to the next task. By default, the wait time is 30 days. To change this number, go to the Lambda function [CloudFormation stackname]ApproveFunction, and update the variable wait_time_stamp.
    3. After the waiting period expires, the state machine triggers the Validate task to do a final validation on the role before deleting it. If the Validate task decides that the role is being used, it leaves the role intact. Otherwise, it deletes the role permanently.

Conclusion

In this blog post, you learned how serverless services such as Lambda, Step Functions, and API Gateway can work together to build security automation. We recommend testing this solution as a starting point. Then, you can build more features on top of the sample code and templates to customize it to perform checks, following guidance from your IT security team.

Here are a few suggestions for extending this solution.

  • This solution uses a private API Gateway to handle the approval response from the IAM role owner. You need to establish private connectivity between your internal network and AWS to invoke a private API Gateway. For instructions, see How to invoke a private API.
  • Add a mechanism to control access to API Gateway by using endpoint policies for interface VPC endpoints.
  • Archive the Security Hub finding after the IAM role is deleted, by using the AWS CLI or the AWS Management Console.
  • Use a Step Functions state machine for other automation that needs human approval.
  • Add the capability to report on IAM roles that were skipped due to the absence of RoleLastUsed information.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Hong Pham

Hong is a Senior Solutions Architect at AWS. For more than five years, she has helped many customers from start-ups to enterprises in different industries to adopt Cloud Computing. She was born in Vietnam and currently lives in Seattle, Washington.

How to set up and track SLAs for resolving Security Hub findings

Post Syndicated from Maisie Fernandes original https://aws.amazon.com/blogs/security/how-to-set-up-and-track-slas-for-resolving-security-hub-findings/

Your organization can use AWS Security Hub to gain a comprehensive view of your security and compliance posture across your Amazon Web Services (AWS) environment. Security Hub receives security findings from AWS security services and supported third-party products and centralizes them, providing a single view for identifying and analyzing security issues. Security Hub correlates findings and breaks them down into five severity categories: INFORMATIONAL, LOW, MEDIUM, HIGH, and CRITICAL. In this blog post, we provide step-by-step instructions for tracking Security Hub findings in each severity category against service-level agreements (SLAs) through visual dashboards.

SLAs are defined collaboratively by the Business, IT, and Security and Compliance teams within an organization. You can track Security Hub findings against your specific SLAs, and any findings that are in breach of an SLA can be escalated. You can also apply automation to alert the owners of the resources and remediate common security findings to improve your overall security posture.

Prerequisites

Security Hub uses service-linked AWS Config rules to perform security checks behind the scenes. To support these controls, you must enable AWS Config on all accounts, including the administrator and member accounts, in each AWS Region where Security Hub is enabled.

As a best practice, we recommend that you enable AWS Config and Security Hub across all of your accounts and Regions. For more information on how to do this, see Enabling and configuring AWS Config and Setting up Security Hub.

Solution overview

In this solution, you will learn two different ways to track your findings in Security Hub against the pre-defined SLA for each severity category.

Option 1: Use custom insights

Security Hub offers managed insights, which include a collection of related findings that identify a security issue that requires attention and intervention. You can view and take action on the insight findings. In addition to the managed insights, you can create custom insights to track issues and findings related to your resources in your environment.

Create a custom insight for SLA tracking

In this example, you set an SLA of 30 days for HIGH severity findings. This example will provide you with a view of the HIGH severity findings that were generated within the last 30 days and haven’t been resolved.

To create a custom insight to view HIGH severity findings from the last 30 days

  1. In the Security Hub console, in the left navigation pane, choose Insights.
  2. On the Insights page, choose Create insight, as shown in Figure 1.
    Figure 1: Create insight in the Security Hub console

  3. On the Create insight page, in the search box, leave the following default filters: Workflow status is NEW, Workflow status is NOTIFIED, and Record state is ACTIVE, as shown in Figure 2.
  4. To select the required grouping attribute for the insight, choose the search box to display the filter options. In the search box, choose the following filters and settings:
    1. Choose the Group by filter, and select WorkflowStatus.
      Figure 2: Create insights using filters

    2. Choose the Severity label filter and enter HIGH.
    3. Choose the Created at filter and enter 30 to indicate the number of days you want to set as your SLA.
  5. Choose Create insight again.
  6. For Insight name, enter a meaningful name (for this example, we entered UnresolvedHighSevFindings), and then choose Create insight again.

You can repeat the same steps for other finding severities – CRITICAL, MEDIUM, LOW, and INFORMATIONAL; you can change the number of days you specify for the Created at filter to meet your SLA requirements; or specify different workflow status settings. Note that the workflow status can have the following values:

  • NEW – The initial state of a finding before you review it.
  • NOTIFIED – Indicates that the resource owner has been notified about the security issue.
  • SUPPRESSED – Indicates that you have reviewed the finding and no action is required.
  • RESOLVED – Indicates that the finding has been reviewed and remediated.

Your custom insight will show the findings that meet the criteria you defined. For more information about creating custom insights, see Module 2: Custom Insights in the Security Hub Workshop.
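You can also create the same insight programmatically with the CreateInsight API action. The following sketch mirrors the console steps above; adjust the filter values to match your own SLA.

import boto3

securityhub = boto3.client('securityhub')

# Mirror the console steps: HIGH severity, active, unresolved, last 30 days
response = securityhub.create_insight(
    Name='UnresolvedHighSevFindings',
    GroupByAttribute='WorkflowStatus',
    Filters={
        'SeverityLabel': [{'Value': 'HIGH', 'Comparison': 'EQUALS'}],
        'WorkflowStatus': [
            {'Value': 'NEW', 'Comparison': 'EQUALS'},
            {'Value': 'NOTIFIED', 'Comparison': 'EQUALS'},
        ],
        'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}],
        'CreatedAt': [{'DateRange': {'Value': 30, 'Unit': 'DAYS'}}],
    },
)
print(response['InsightArn'])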

Option 2: Build visualizations for Security Hub findings data by using Amazon QuickSight

We hear from our customers that your organizations are looking for a solution where you can quickly visualize the status of your Security Hub findings, to see which findings you need to take action on (NEW and NOTIFIED) and which you do not (SUPPRESSED and RESOLVED). You can achieve this by building a data analytics pipeline that uses Amazon EventBridge, Amazon Kinesis Data Firehose, Amazon Simple Storage Service (Amazon S3), Amazon Athena, and Amazon QuickSight. The data analytics pipeline enables you to detect, analyze, contain, and mitigate issues quickly.

This solution integrates Security Hub with EventBridge to set SLA rules to a specified period of your choice for each severity level. For example, you can set the SLA to 5 days for CRITICAL severity findings, 10 days for HIGH severity findings, 14 days for MEDIUM severity findings, 30 days for LOW severity findings, and 60 days for INFORMATIONAL severity findings.

Architecture overview

Figure 3 shows the architectural overview of the QuickSight solution workflow.

Figure 3: Architecture diagram for option 2, the QuickSight solution

In the QuickSight solution, Security Hub publishes the findings to EventBridge, and then an EventBridge rule (based on the SLA) is configured to deliver the findings to Kinesis Data Firehose. For example, if the SLA is 14 days for all MEDIUM severity findings, then those findings will be filtered by the rule and sent to Kinesis Data Firehose. Security Hub findings follow the AWS Security Finding Format (ASFF).

The following is a sample EventBridge rule that filters Security Hub findings for MEDIUM severity and workflow status NEW before publishing them to Kinesis Data Firehose, and finally to Amazon S3 for storage. To catch all findings that require action, you should include both the NEW and NOTIFIED workflow statuses.

{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Severity": {
        "Label": ["MEDIUM"]
      },
      "Workflow": {
        "Status": ["NEW"]
      }
    }
  }
}
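If you prefer to manage the rule as code, a sketch like the following creates the rule (including the NOTIFIED status, per the note above) and attaches a Kinesis Data Firehose delivery stream as the target. The rule name, delivery stream ARN, and IAM role ARN are placeholders; the role must allow EventBridge to write to the delivery stream.

import json

import boto3

events = boto3.client('events')

pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["MEDIUM"]},
            "Workflow": {"Status": ["NEW", "NOTIFIED"]},
        }
    },
}

# Create the rule, then point it at the Firehose delivery stream
events.put_rule(Name='sla-medium-findings', EventPattern=json.dumps(pattern))
events.put_targets(
    Rule='sla-medium-findings',
    Targets=[{
        'Id': 'firehose-delivery',
        'Arn': 'arn:aws:firehose:us-east-1:111122223333:deliverystream/sechub-findings',
        'RoleArn': 'arn:aws:iam::111122223333:role/EventBridgeToFirehoseRole',
    }],
)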

After the findings are exported and stored in Amazon S3, you can use Athena to run queries on the data, and you can use Amazon QuickSight to display the findings that violate your organization's SLA. With Athena, you can create views of the original table as logical tables, including separate views for CRITICAL, HIGH, MEDIUM, LOW, and INFORMATIONAL severity findings.

For details about how to export findings and build a dashboard, see the blog post How to build a multi-Region AWS Security Hub analytic pipeline and visualize Security Hub data.

Visualize an SLA by using QuickSight

The QuickSight dashboard shown in Figure 4 is an example that shows all the MEDIUM severity findings that should be resolved within a 14-day SLA.

Figure 4: QuickSight table showing medium severity findings over a 14-day SLA

Using QuickSight, you can create different types of data visualizations to represent the exported Security Hub findings, which enables the decision makers in your organization to explore and interpret information in an interactive visual environment. For example, Figure 5 shows findings categorized by service.

Figure 5: QuickSight visual showing MEDIUM severity findings for each service

As another example, Figure 6 shows findings categorized by severity.

Figure 6: QuickSight visual showing findings by severity

For more information about visualizing Security Hub findings by using Amazon OpenSearch Service and Kibana, see the blog post Visualize Security Hub Findings using Analytics and Business Intelligence Tools.

Changing a finding’s severity

Over time, your organization might discover that there are certain findings that should be tracked at a lower or higher severity level than what is auto-generated from Security Hub. You can implement EventBridge rules with AWS Lambda functions to automatically update the severity of the findings as soon as they are generated.

To automate the finding severity change

  1. On the EventBridge console, create an EventBridge rule. For detailed instructions, see Getting started with Amazon EventBridge.
    Figure 7: Create an EventBridge rule in the console

  2. Define the event pattern, including the finding generator ID or any other identifying fields for which you want to redefine the severity. Review the fields in the ASFF, and choose your desired filters. The following is a sample of the event pattern.
    {
      "source": ["aws.securityhub"],
      "detail-type": ["Security Hub Findings - Imported"],
      "detail": {
        "findings": {
          "GeneratorId": [
            "aws-foundational-security-best-practices/v/1.0.0/S3.4"
          ],
          "RecordState": ["ACTIVE"],
          "Workflow": {
            "Status": ["NEW"]
          }
        }
      }
    }

  3. Specify the target as a Lambda function that will host the code to update the finding severity.
    Figure 8: Select a target Lambda function

  4. In the Lambda function, use the BatchUpdateFindings API action to update the severity label as desired.

    The following example Lambda code will update the finding severity to INFORMATIONAL. This function requires Amazon CloudWatch Logs write permissions, and requires permissions to invoke the Security Hub API action BatchUpdateFindings.

    import logging
    import os

    import boto3
    import botocore.exceptions as boto3exceptions

    logger = logging.getLogger()
    logger.setLevel(os.environ.get('LOGLEVEL', 'INFO').upper())

    def lambda_handler(event, context):
        logger.info(event)

        for finding in event['detail']['findings']:
            # Determine and log this finding's ID and product ARN
            finding_id = finding["Id"]
            product_arn = finding["ProductArn"]
            logger.info("Finding ID: " + finding_id)

            # Determine and log this finding's resource type
            resource_type = finding["Resources"][0]["Type"]
            logger.info("Resource Type is: " + resource_type)

            try:
                sec_hub_client = boto3.client('securityhub')
                # Update the severity label for this finding
                response = sec_hub_client.batch_update_findings(
                    FindingIdentifiers=[
                        {
                            'Id': finding_id,
                            'ProductArn': product_arn
                        }
                    ],
                    Severity={"Label": "INFORMATIONAL"}
                )
            except boto3exceptions.ClientError as error:
                logger.exception(f"Client error invoking batch update findings {error}")
            except boto3exceptions.ParamValidationError as error:
                logger.exception(f"The parameters you provided are incorrect: {error}")

        return {"statusCode": 200}

  5. The finding is generated with a new severity level, as updated in the Lambda function. For example, Figure 9 shows a finding that is generated as MEDIUM by default, but the configured EventBridge rule and Lambda function update the severity level to INFORMATIONAL.
    Figure 9: Security Hub findings generated with updated severity level

Conclusion

This blog post walked you through two different solutions for setting up and tracking the SLAs for the findings generated by Security Hub. Reporting Security Hub findings for a given SLA in a dashboard view can help you prioritize findings and track whether they are being remediated on time. This post also provided example code that you can use to modify the Security Hub severity for a specific finding. To further extend the solution, you can set up custom actions to remediate the findings.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Maisie Fernandes

Maisie is a Senior Solutions Architect at AWS based in London. She is focused on helping public sector customers design, build, and secure scalable applications on AWS. Outside of work, Maisie enjoys traveling, running, and gardening.

Krati Singh

Krati is a Senior Solutions Architect at AWS based in San Francisco Bay Area. She collaborates with small and medium business customers on their cloud journey and is passionate about security in the cloud. Outside of work, Krati enjoys reading, and an occasional hike on a nice weather day.

Expanded eligibility for the free MFA security key program

Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/expanded-eligibility-for-the-free-mfa-security-key-program/

Since the broad launch of our multi-factor authentication (MFA) security key program, customers have been enthusiastic about the program and how they will use it to improve their organizations' security posture. Given the level of interest, we're expanding eligibility for the program to allow more US-based AWS account root users and payer accounts to take advantage of the offer. Previously, eligibility required that US-based root users and payer accounts spend a minimum of $100 per month over the past 3 months. Now, we are expanding eligibility to US-based root users and payer accounts that have spent a minimum of $300 over the past 3 months. If you are a US-based customer who meets the expanded eligibility requirements, we encourage you to place an order for your free security key. As a reminder, you can use the following steps to order your free key.

To order your free security key

  1. Confirm your eligibility at the ordering portal. You will be prompted to sign in if you haven’t already. Sign in with your AWS account root user or payer account credentials.
  2. Choose your free security key from the available options.
  3. Provide your email address for order confirmation and your shipping address.
  4. Place your order.

MFA as a core security best practice was one of the key messages emphasized at the recent AWS re:Inforce conference. Using MFA is one of the simplest ways for anyone, personally or professionally, to help improve their security online. For example, if credentials become compromised on GitHub, users have an extra layer of protection if MFA is enabled. Or, if your login details are compromised for your bank account, MFA acts as a second factor to protect your account.

If you’re not eligible for a free security key at this time, but would still like a security key, check out our MFA recommendations. These are available for purchase from many sellers, including Amazon. For more information about the MFA program, see our Free MFA Security Key page.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

CJ Moses

CJ is the Chief Information Security Officer (CISO) at AWS, where he leads product design and security engineering. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Previously, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. He also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

How to export AWS Security Hub findings to CSV format

Post Syndicated from Andy Robinson original https://aws.amazon.com/blogs/security/how-to-export-aws-security-hub-findings-to-csv-format/

AWS Security Hub is a central dashboard for security, risk management, and compliance findings from AWS Audit Manager, AWS Firewall Manager, Amazon GuardDuty, IAM Access Analyzer, Amazon Inspector, and many other AWS and third-party services. You can use the insights from Security Hub to get an understanding of your compliance posture across multiple AWS accounts. It is not unusual for a single AWS account to have more than a thousand Security Hub findings. Multi-account and multi-Region environments may have tens or hundreds of thousands of findings. With so many findings, it is important for you to get a summary of the most important ones. Navigating through duplicate findings, false positives, and benign positives can take time.

In this post, we demonstrate how to export those findings to comma separated values (CSV) formatted files in an Amazon Simple Storage Service (Amazon S3) bucket. You can analyze those files by using a spreadsheet, database applications, or other tools. You can use the CSV formatted files to change a set of status and workflow values to align with your organizational requirements, and update many or all findings at once in Security Hub.

The solution described in this post, called CSV Manager for Security Hub, uses an AWS Lambda function to export findings to a CSV object in an S3 bucket, and another Lambda function to update Security Hub findings by modifying selected values in the downloaded CSV file from an S3 bucket. You use an Amazon EventBridge scheduled rule to perform periodic exports (for example, once a week). CSV Manager for Security Hub also has an update function that allows you to update the workflow, customer-specific notation, and other customer-updatable values for many or all findings at once. If you’ve set up a Region aggregator in Security Hub, you should configure the primary CSV Manager for Security Hub stack to export findings only from the aggregator Region. However, you may configure other CSV Manager for Security Hub stacks that export findings from specific Regions or from all applicable Regions in specific accounts. This allows application and account owners to view their own Security Hub findings without having access to other findings for the organization.

How it works

CSV Manager for Security Hub has two main features:

  • Export Security Hub findings to a CSV object in an S3 bucket
  • Update Security Hub findings from a CSV object in an S3 bucket

Overview of the export function

The overview of the export function CsvExporter is shown in Figure 1.

Figure 1: Architecture diagram of the export function

Figure 1 shows the following numbered steps:

  1. In the AWS Management Console, you invoke the CsvExporter Lambda function with a test event.
  2. The export function calls the Security Hub GetFindings API action and gets a list of findings to export from Security Hub.
  3. The export function converts the most important fields for identifying and sorting findings into a 37-column CSV format (which includes 12 updatable columns) and writes the file to an S3 bucket.
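The retrieval step relies on GetFindings pagination. A minimal sketch of that loop follows; the real CsvExporter in the repository additionally handles filters, multiple Regions, and the CSV layout.

import boto3

securityhub = boto3.client('securityhub')
paginator = securityhub.get_paginator('get_findings')

# Collect active findings page by page (GetFindings returns at most
# 100 findings per call)
findings = []
for page in paginator.paginate(
    Filters={'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}]},
    PaginationConfig={'PageSize': 100},
):
    findings.extend(page['Findings'])

print(f'Retrieved {len(findings)} findings')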

Overview of the update function

To update existing Security Hub findings that you previously exported, you can use the update function CsvUpdater to modify the respective rows and columns of the CSV file you exported, as shown in Figure 2. There are 12 modifiable columns out of 37 (any changes to other columns are ignored), which are described in more detail in Step 4: View or update findings in the CSV file later in this post.

Figure 2: Architecture diagram of the update function

Figure 2 shows the following numbered steps:

  1. You download the CSV file that the CsvExporter function generated from the S3 bucket and update as needed.
  2. You upload the CSV file that contains your updates to the S3 bucket.
  3. In the AWS Management Console, you invoke the CsvUpdater Lambda function with a test event containing the URI of the CSV file.
  4. CsvUpdater reads the updated CSV file from the S3 bucket.
  5. CsvUpdater identifies the minimum set of updates and invokes the Security Hub BatchUpdateFindings API action.

Step 1: Use the CloudFormation template to deploy the solution

You can set up and use CSV Manager for Security Hub by using either AWS CloudFormation or the AWS Cloud Development Kit (AWS CDK).

To deploy the solution (AWS CDK)

You can find the latest code in the aws-security-hub-csv-manager GitHub repository, where you can also contribute to the sample code. The following commands show how to deploy the solution by using the AWS CDK. First, the AWS CDK initializes your environment and uploads the AWS Lambda assets to an S3 bucket. Then, you deploy the solution to your account by using the following commands. Replace <INSERT_AWS_ACCOUNT> with your account number, and replace <INSERT_REGION> with the AWS Region that you want the solution deployed to, for example us-east-1.

cdk bootstrap aws://<INSERT_AWS_ACCOUNT>/<INSERT_REGION>
cdk deploy

To deploy the solution (CloudFormation)

  1. Choose the following Launch Stack button to open the AWS CloudFormation console pre-loaded with the template for this solution:

    Launch Stack

  2. In the Parameters section, as shown in Figure 3, enter your values.
    Figure 3: CloudFormation template variables

    1. For What folder for CSV Manager for Security Hub Lambda code, leave the default Code. For What folder for CSV Manager for Security Hub exports, leave the default Findings.

      These are the folders within the S3 bucket that the CSV Manager for Security Hub CloudFormation template creates to store the Lambda code, as well as where the findings are exported by the Lambda function.

    2. For Frequency, for this solution you can leave the default value cron(0 8 ? * SUN *). This default causes automatic exports to occur every Sunday at 8:00 AM local time using an EventBridge scheduled rule. For more information about how to update this value to meet your needs, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide.
    3. The values you enter for the Regions field depend on whether you have configured an aggregation Region in Security Hub.
      • If you have configured an aggregation Region, enter only that Region code, for example eu-north-1, as shown in Figure 3.
      • If you haven’t configured an aggregation Region, enter a comma-separated list of Regions in which you have enabled Security Hub, for example us-east-1, eu-west-1, eu-west-2.
      • If you would like to export findings from all Regions where Security Hub is enabled, leave the Regions field blank. Regions where Security Hub is not enabled will generate a message and will be skipped.
  3. Choose Next.

The CloudFormation stack deploys the necessary resources, including an EventBridge scheduling rule, AWS Systems Manager Automation documents, an S3 bucket, and Lambda functions for exporting and updating Security Hub findings.

After you deploy the CloudFormation stack

After you create the CSV Manager for Security Hub stack, you can do the following:

  1. Perform the export function to write some or all Security Hub findings to a CSV file by following the instructions in Step 2: Export Security Hub findings to a CSV file later in this post.
  2. Perform a bulk update of Security Hub findings by following the instructions in Step 4: View or update findings in the CSV file later in this post. You can make changes to one or more of the 12 updatable columns of the CSV file, and perform the update function to update some or all Security Hub findings.

Step 2: Export Security Hub findings to a CSV file

You can export Security Hub findings from the AWS Lambda console. To do this, you create a test event and invoke the CsvExporter Lambda function. CsvExporter exports all Security Hub findings from all applicable Regions to a single CSV file in the S3 bucket for CSV Manager for Security Hub.

To export Security Hub findings to a CSV file

  1. In the AWS Lambda console, find the CsvExporter Lambda function and select it.
  2. On the Code tab, choose the down arrow at the right of the Test button, as shown in Figure 4, and select Configure test event.
    Figure 4: The down arrow at the right of the Test button

  3. To create an empty test event, on the Configure test event page, do the following:
    1. Choose Create a new event.
    2. Enter an event name; in this example we used testEvent.
    3. For Template, leave the default hello-world.
    4. For Event JSON, enter the JSON object {} as shown in Figure 5.
    Figure 5: Creating an empty test event

  4. Choose Save to save the empty test event.
  5. To invoke the Lambda function, choose the Test button, as shown in Figure 6.
    Figure 6: Test button to invoke the Lambda function

  6. On the Execution Results tab, note the following details, which you will need for the next step.
    {
      "message": "Export succeeded",
      "bucket": "DOC-EXAMPLE-BUCKET",
      "exportKey": "DOC-EXAMPLE-OBJECT",
      "resultCode": 200
    }

  7. Locate the CSV object that matches the value of “exportKey” (in this example, DOC-EXAMPLE-OBJECT) in the S3 bucket that matches the value of “bucket” (in this example, DOC-EXAMPLE-BUCKET).

Now you can view or update the findings in the CSV file, as described in the next section.

Step 3: (Optional) Using filters to limit CSV results

In your test event, you can specify any filter that is accepted by the GetFindings API action. You do this by adding a filter key to your test event. The filter key can either contain the word HighActive (a predefined filter configured as a default for selecting active findings with HIGH and CRITICAL severity), or a JSON filter object.

Figure 8 depicts an example JSON filter that performs the same filtering as the HighActive predefined filter.

To use filters to limit CSV results

  1. In the AWS Lambda console, find the CsvExporter Lambda function and select it.
  2. On the Code tab, choose the down arrow at the right of the Test button, as shown in Figure 7, and select Configure test event.
    Figure 7: The down arrow at the right of the Test button

  3. To create a test event containing a filter, on the Configure test event page, do the following:
    1. Choose Create a new event.
    2. Enter an event name; in this example we used filterEvent.
    3. For Template, select testEvent.
    4. For Event JSON, enter the following JSON object, as shown in Figure 8.
      {
         "SeverityLabel":[
            {
               "Value":"CRITICAL",
               "Comparison":"EQUALS"
            },
            {
               "Value":"HIGH",
               "Comparison":"EQUALS"
            }
         ],
         "RecordState":[
            {
               "Comparison":"EQUALS",
               "Value":"ACTIVE"
            }
         ]
      }

      Figure 8: Test event containing a JSON filter

    5. Choose Save.
  4. To invoke the Lambda function, choose the Test button as shown in Figure 9.
    Figure 9: Test button to invoke the Lambda function

  5. On the Execution Results tab, note the following details, which you will need for the next step.
    {
      "message": "Export succeeded",
      "bucket": "DOC-EXAMPLE-BUCKET",
      "exportKey": "DOC-EXAMPLE-OBJECT",
      "resultCode": 200
    }

  6. Locate the CSV object that matches the value of “exportKey” (in this example, DOC-EXAMPLE-OBJECT) in the S3 bucket that matches the value of “bucket” (in this example, DOC-EXAMPLE-BUCKET).

The results in this CSV file should be a filtered set of Security Hub findings according to the filter you specified above. You can now proceed to step 4 if you want to view or update findings.

Step 4: View or update findings in the CSV file

You can use any program that allows you to view or edit CSV files, such as Microsoft Excel. The first row in the CSV file contains the column names. These column names correspond to fields in the JSON objects that are returned by the GetFindings API action.

Warning: Do not modify the first two columns, Id (column A) or ProductArn (column B). If you modify these columns, Security Hub will not be able to locate the finding to update, and any other changes to that finding will be discarded.

You can locally modify any of the columns in the CSV file, but only 12 of the 37 columns will actually be updated if you use CsvUpdater to update Security Hub findings. The following are the 12 columns you can update. These correspond to columns C through N in the CSV file.

Column name Spreadsheet column Description
Criticality C An integer value between 0 and 100.
Confidence D An integer value between 0 and 100.
NoteText E Any text you wish
NoteUpdatedBy F Automatically updated with your AWS principal user ID.
CustomerOwner* G Information identifying the owner of this finding (for example, email address).
CustomerIssue* H A Jira issue or another identifier tracking a specific issue.
CustomerTicket* I A ticket number or other trouble/problem tracking identification.
ProductSeverity** J A floating-point number from 0.0 to 99.9.
NormalizedSeverity** K An integer between 0 and 100.
SeverityLabel L One of the following:

  • INFORMATIONAL
  • LOW
  • MEDIUM
  • HIGH
  • CRITICAL
VerificationState M One of the following:

  • UNKNOWN — Finding has not been verified yet.
  • TRUE_POSITIVE — This is a valid finding and should be treated as a risk.
  • FALSE_POSITIVE — This is an incorrect finding and should be ignored or suppressed.
  • BENIGN_POSITIVE — This is a valid finding, but the risk is not applicable or has been accepted, transferred, or mitigated.
Workflow N One of the following:

  • NEW — This is a new finding that has not been reviewed.
  • NOTIFIED — The responsible party or parties have been notified of this finding.
  • RESOLVED — The finding has been resolved.
  • SUPPRESSED — A false or benign finding has been suppressed so that it does not appear as a current finding in Security Hub.

* These columns are stored inside the UserDefinedFields field of the updated findings. The column names imply a certain kind of information, but you can put any information you wish.

** These columns are stored inside the Severity field of the updated findings. These values have a fixed format and will be rejected if they do not meet that format.

Columns with fixed text values (L, M, N) in the previous table can be specified in mixed case and without underscores—they will be converted to all uppercase and underscores added in the CsvUpdater Lambda function. For example, “false positive” will be converted to “FALSE_POSITIVE”.
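For example, the normalization could be as simple as the following sketch; the exact helper inside CsvUpdater may differ.

def normalize_label(value: str) -> str:
    """Convert mixed-case column values to the fixed uppercase format."""
    return value.strip().upper().replace(' ', '_')

print(normalize_label('false positive'))   # FALSE_POSITIVE
print(normalize_label('Benign Positive'))  # BENIGN_POSITIVE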

Step 5: Create a test event and update Security Hub by using the CSV file

If you want to update Security Hub findings, make your changes to columns C through N as described in the previous table. After you make your changes in the CSV file, you can update the findings in Security Hub by using the CSV file and the CsvUpdater Lambda function.
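As a rough illustration of a scripted edit, the following sketch marks every finding in the exported file as RESOLVED while leaving the Id and ProductArn columns untouched. The file names are placeholders; the Workflow column name comes from the table above.

import csv

# Read the export, update only the Workflow column (column N),
# and leave Id and ProductArn untouched
with open('findings-export.csv', newline='') as src:
    rows = list(csv.DictReader(src))

for row in rows:
    row['Workflow'] = 'RESOLVED'

with open('findings-updated.csv', 'w', newline='') as dst:
    writer = csv.DictWriter(dst, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)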

Use the following procedure to create a test event and run the CsvUpdater Lambda function.

To create a test event and run the CsvUpdater Lambda function

  1. In the AWS Lambda console, find the CsvUpdater Lambda function and select it.
  2. On the Code tab, choose the down arrow to the right of the Test button, as shown in Figure 10, and select Configure test event.
    Figure 10: The down arrow to the right of the Test button

  3. To create a test event as shown in Figure 11, on the Configure test event page, do the following:
    1. Choose Create a new event.
    2. Enter an event name; in this example we used testEvent.
    3. For Template, leave the default hello-world.
    4. For Event JSON, enter the following:
      {
      "input": <s3ObjectUri>,
      "primaryRegion": <aggregationRegionName>
      }

      Replace <s3ObjectUri> with the full URI of the S3 object where the updated CSV file is located.

      Replace <aggregationRegionName> with your Security Hub aggregation Region, or the primary Region in which you initially enabled Security Hub.

      Figure 11: Create and save a test event for the CsvUpdater Lambda function

  4. Choose Save.
  5. Choose the Test button, as shown in Figure 12, to invoke the Lambda function.
    Figure 12: Test button to invoke the Lambda function

  6. To verify that the Lambda function ran successfully, on the Execution Results tab, review the results for “message”: “Success”, as shown in the following example. Note that the results may be thousands of lines long.
    {
    "message": "Success",
    "details": {
    "processed": [{"Id": arn:aws:securityhub:us-east-1: 111122223333:subscription/cis-aws-foundations-benchmark/v/1.2.0/1.7/finding/6d543b22-6a3d-405c-ae7f-224469bde7d2, "ProductArn": arn:aws:securityhub:us-east-1::product/aws/securityhub}, … ],
    "unprocessed": [],
    "message": "Updated succeeded",
    "success": true
    },
    "input": s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-OBJECT,
    "resultCode": 200
    }

    The processed array lists every successfully updated finding by Id and ProductArn.

    If any of the findings were not successfully updated, their Id and ProductArn appear in the unprocessed array. In the previous example, no findings were unprocessed.

    The value s3://DOC-EXAMPLE-BUCKET/DOC-EXAMPLE-OBJECT is the URI of the S3 object from which your updates were read.

Cleaning up

To avoid incurring future charges, first delete the CloudFormation stack that you deployed in Step 1: Use the CloudFormation template to deploy the solution. Next, you need to manually delete the S3 bucket deployed with the stack. For instructions, see Deleting a bucket in the Amazon Simple Storage Service User Guide.

Conclusion

In this post, we showed you how you can export Security Hub findings to a CSV file in an S3 bucket and update the exported findings by using CSV Manager for Security Hub. We showed you how you can automate this process by using AWS Lambda, Amazon S3, and AWS Systems Manager. Full documentation for CSV Manager for Security Hub is available in the aws-security-hub-csv-manager GitHub repository. You can also investigate other ways to manage Security Hub findings by checking out our blog posts about Security Hub integration with Amazon OpenSearch Service, Amazon QuickSight, Slack, PagerDuty, Jira, or ServiceNow.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Security Hub re:Post. To learn more or get started, visit AWS Security Hub.

Want more AWS Security news? Follow us on Twitter.

Andy Robinson

Andy wrote CSV Manager for Security Hub in response to requests from several customers. He is an AWS Professional Services Senior Security Consultant with over 30 years of security, software product management, and software design experience. Andy is also a pilot, scuba instructor, martial arts instructor, ham radio enthusiast, and photographer.

Murat Eksi

Murat is a full-stack technologist at AWS Professional Services. He has worked with various industries, including finance, sports, media, gaming, manufacturing, and automotive, to accelerate their business outcomes through application development, security, IoT, analytics, devops and infrastructure. Outside of work, he loves traveling around the world, learning new languages while setting up local events for entrepreneurs and business owners in Stockholm, or taking flight lessons.

Shikhar Mishra

Shikhar is a Senior Solutions Architect at Amazon Web Services. He is a cloud security enthusiast and enjoys helping customers design secure, reliable, and cost-effective solutions on AWS.

Rohan Raizada

Rohan is a Solutions Architect for Amazon Web Services. He works with enterprises of all sizes with their cloud adoption to build scalable and secure solutions using AWS. During his free time, he likes to spend time with family and go cycling outdoors.

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on threat detection and incident response. Today, he helps enterprise customers develop a comprehensive security strategy and deploy security solutions at scale, and he trains customers on AWS Security best practices.

AWS re:Inforce 2022: Key announcements and session highlights

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/aws-reinforce-2022-key-announcements-and-session-highlights/

AWS re:Inforce returned to Boston, MA, in July after 2 years, and we were so glad to be back in person with customers. The conference featured over 250 sessions and hands-on labs, 100 AWS partner sponsors, and over 6,000 attendees over 2 days. If you weren’t able to join us in person, or just want to revisit some of the themes, this blog post is for you. It summarizes all the key announcements and points to where you can watch the event keynote, sessions, and partner lightning talks on demand.

Key announcements

Here are some of the announcements that we made at AWS re:Inforce 2022.

Watch on demand

You can also watch these talks and learning sessions on demand.

Keynotes and leadership sessions

Watch the AWS re:Inforce 2022 keynote where Amazon Chief Security Officer Stephen Schmidt, AWS Chief Information Security Officer CJ Moses, Vice President of AWS Platform Kurt Kufeld, and MongoDB Chief Information Security Officer Lena Smart share the latest innovations in cloud security from AWS and what you can do to foster a culture of security in your business. Additionally, you can review all the leadership sessions to learn best practices for managing security, compliance, identity, and privacy in the cloud.

Breakout sessions and partner lightning talks

  • Data Protection and Privacy track – See how AWS, customers, and partners work together to protect data. Learn about trends in data management, cryptography, data security, data privacy, encryption, and key rotation and storage.
  • Governance, Risk, and Compliance track – Dive into the latest hot topics in governance and compliance for security practitioners, and discover how to automate compliance tools and services for operational use.
  • Identity and Access Management track – Hear from AWS, customers, and partners on how to use AWS Identity Services to manage identities, resources, and permissions securely and at scale. Learn how to configure fine-grained access controls for your employees, applications, and devices and deploy permission guardrails across your organization.
  • Network and Infrastructure Security track – Gain practical expertise on the services, tools, and products that AWS, customers, and partners use to protect the usability and integrity of their networks and data.
  • Threat Detection and Incident Response track – Learn how AWS, customers, and partners get the visibility they need to improve their security posture, reduce the risk profile of their environments, identify issues before they impact business, and implement incident response best practices.
  • You can also catch our Partner Lightning Talks on demand.

Session presentation downloads are also available on our AWS Event Contents page. Consider joining us for more in-person security learning opportunities by registering for AWS re:Invent 2022, which will be held November 28 through December 2 in Las Vegas. We look forward to seeing you there!

If you’d like to discuss how these new announcements can help your organization improve its security posture, AWS is here to help. Contact your AWS account team today.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Marta Taggart

Marta is a Seattle-native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Identifying publicly accessible resources with Amazon VPC Network Access Analyzer

Post Syndicated from Patrick Duffy original https://aws.amazon.com/blogs/security/identifying-publicly-accessible-resources-with-amazon-vpc-network-access-analyzer/

Network and security teams often need to evaluate the internet accessibility of all their resources on AWS and block any non-essential internet access. Validating who has access to what can be complicated—there are several different controls that can prevent or authorize access to resources in your Amazon Virtual Private Cloud (Amazon VPC). The recently launched Amazon VPC Network Access Analyzer helps you understand potential network paths to and from your resources without having to build automation or manually review security groups, network access control lists (network ACLs), route tables, and Elastic Load Balancing (ELB) configurations. You can use this information to add security layers, such as moving instances to a private subnet behind a NAT gateway or moving APIs behind AWS PrivateLink, rather than use public internet connectivity. In this blog post, we show you how to use Network Access Analyzer to identify publicly accessible resources.

What is Network Access Analyzer?

Network Access Analyzer allows you to evaluate your network against your design requirements and network security policy. You can specify your network security policy for resources on AWS through a Network Access Scope. Network Access Analyzer evaluates the configuration of your Amazon VPC resources and controls, such as security groups, elastic network interfaces, Amazon Elastic Compute Cloud (Amazon EC2) instances, load balancers, VPC endpoint services, transit gateways, NAT gateways, internet gateways, VPN gateways, VPC peering connections, and network firewalls.

Network Access Analyzer uses automated reasoning to produce findings of potential network paths that don’t meet your network security policy. Network Access Analyzer reasons about all of your Amazon VPC configurations together rather than in isolation. For example, it produces findings for paths from an EC2 instance to an internet gateway only when the following conditions are met: the security group allows outbound traffic, the network ACL allows outbound traffic, and the instance’s route table has a route to an internet gateway (possibly through a NAT gateway, network firewall, transit gateway, or peering connection). Network Access Analyzer produces actionable findings with more context such as the entire network path from the source to the destination, as compared to the isolated rule-based checks of individual controls, such as security groups or route tables.

Sample environment

Let’s walk through a real-world example of using Network Access Analyzer to detect publicly accessible resources in your environment. Figure 1 shows an environment for this evaluation, which includes the following resources:

  • An EC2 instance in a public subnet allowing inbound public connections on port 80/443 (HTTP/HTTPS).
  • An EC2 instance in a private subnet allowing connections from an Application Load Balancer on port 80/443.
  • An Application Load Balancer in a public subnet with a Target Group connected to the private web server, allowing public connections on port 80/443.
  • An Amazon Aurora database in a public subnet allowing public connections on port 3306 (MySQL).
  • An Aurora database in a private subnet.
  • An EC2 instance in a public subnet allowing public connections on port 9200 (OpenSearch/Elasticsearch).
  • An Amazon EMR cluster allowing public connections on port 8080.
  • A Windows EC2 instance in a public subnet allowing public connections on port 3389 (Remote Desktop Protocol).
Figure 1: Example environment of web servers hosted on EC2 instances, remote desktop servers hosted on EC2, Relational Database Service (RDS) databases, Amazon EMR cluster, and OpenSearch cluster on EC2

Let us assume that your organization’s security policy requires that your databases and analytics clusters not be directly accessible from the internet, whereas certain workloads, such as instances for web services, can have internet access only through an Application Load Balancer over ports 80 and 443. Network Access Analyzer allows you to evaluate network access to resources in your VPCs, including database resources such as Amazon RDS and Amazon Aurora clusters, and analytics resources such as Amazon OpenSearch Service clusters and Amazon EMR clusters. This allows you to govern network access to your resources on AWS by identifying network access that does not meet your security policies and by creating exclusions for paths that do have the appropriate network controls in place.

Configure Network Access Analyzer

In this section, you will learn how to create network scopes, analyze the environment, and review the findings produced. You can create network access scopes by using the AWS Command Line Interface (AWS CLI) or AWS Management Console. When creating network access scopes using the AWS CLI, you can supply the scope by using a JSON document. This blog post provides several network access scopes as JSON documents that you can deploy to your AWS accounts.
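To give a sense of the scope format before you download the files, the following is a minimal sketch of a scope document. It is not one of the actual files in network-scopes.zip; it simply illustrates a match path that starts at an internet gateway and ends at any resource listening on the MySQL port.

    {
        "MatchPaths": [
            {
                "Source": {
                    "ResourceStatement": {
                        "ResourceTypes": ["AWS::EC2::InternetGateway"]
                    }
                },
                "Destination": {
                    "PacketHeaderStatement": {
                        "DestinationPorts": ["3306"]
                    }
                }
            }
        ]
    }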

To create a network scope (AWS CLI)

  1. Verify that you have the AWS CLI installed and configured.
  2. Download the network-scopes.zip file, which contains JSON documents that detect the following publicly accessible resources:
    • OpenSearch/Elasticsearch clusters
    • Databases (MySQL, PostgreSQL, MSSQL)
    • EMR clusters
    • Windows Remote Desktop
    • Web servers that can be accessed without going through a load balancer

    Make note of the folder where you save the JSON scopes because you will need it for the next step.

  3. Open a command shell, such as Bash, Zsh, or cmd.
  4. Navigate to the folder where you saved the preceding JSON scopes.
  5. Run the following commands in the shell window:
    aws ec2 create-network-insights-access-scope \
        --cli-input-json file://detect-public-databases.json \
        --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-databases"},{Key="Description",Value="Detects publicly accessible databases."}]' \
        --region us-east-1

    aws ec2 create-network-insights-access-scope \
        --cli-input-json file://detect-public-elastic.json \
        --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-opensearch"},{Key="Description",Value="Detects publicly accessible OpenSearch/Elasticsearch endpoints."}]' \
        --region us-east-1

    aws ec2 create-network-insights-access-scope \
        --cli-input-json file://detect-public-emr.json \
        --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-emr"},{Key="Description",Value="Detects publicly accessible Amazon EMR endpoints."}]' \
        --region us-east-1

    aws ec2 create-network-insights-access-scope \
        --cli-input-json file://detect-public-remotedesktop.json \
        --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-remotedesktop"},{Key="Description",Value="Detects publicly accessible Microsoft Remote Desktop servers."}]' \
        --region us-east-1

    aws ec2 create-network-insights-access-scope \
        --cli-input-json file://detect-public-webserver-noloadbalancer.json \
        --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-webservers"},{Key="Description",Value="Detects publicly accessible web servers that can be accessed without using a load balancer."}]' \
        --region us-east-1

Now that you’ve created the scopes, you will analyze them to find resources that match your match conditions.

To analyze your scopes (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under Network Analysis, choose Network Access Analyzer.
  3. Under Network Access Scopes, select the checkboxes next to the scopes that you want to analyze, and then choose Analyze, as shown in Figure 2.
    Figure 2: Custom network scopes created for Network Access Analyzer

If Network Access Analyzer detects findings, the console indicates the status Findings detected for each scope, as shown in Figure 3.

Figure 3: Network Access Analyzer scope status
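You can also run analyses from the AWS CLI instead of the console. The following is a sketch; the scope ID and analysis ID are placeholders that you would replace with the values returned by your create-network-insights-access-scope calls and by the start command itself:

    # Start an analysis for one of the scopes you created
    aws ec2 start-network-insights-access-scope-analysis \
        --network-insights-access-scope-id nis-0123456789abcdef0 \
        --region us-east-1

    # After the analysis completes, list any findings it produced
    aws ec2 get-network-insights-access-scope-analysis-findings \
        --network-insights-access-scope-analysis-id nisa-0123456789abcdef0 \
        --region us-east-1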

To review findings for a scope (console)

  1. On the Network Access Scopes page, under Network Access Scope ID, select the link for the scope that has the findings that you want to review. This opens the latest analysis, with the option to review past analyses, as shown in Figure 4.
    Figure 4: Finding summary identifying Amazon Aurora instance with public access to port 3306

  2. To review the path for a specific finding, under Findings, select the radio button to the left of the finding, as shown in Figure 4. Figure 5 shows an example of a path for a finding.
    Figure 5: Finding details showing access to the Amazon Aurora instance from the internet gateway to the elastic network interface, allowed by a network ACL and security group

  3. Choose any resource in the path for detailed information, as shown in Figure 6.
    Figure 6: Resource detail within a finding outlining a specific security group allowing access on port 3306

How to remediate findings

After deploying network scopes and reviewing findings for publicly accessible resources, your next step is to limit or remove that public access. Use cases vary, but the scopes outlined in this post identify resources that you should either share publicly in a more secure manner or remove public access from entirely. The following techniques will help you align with the Protecting Networks portion of the AWS Well-Architected Framework Security Pillar.

If you need to share a database with external entities, consider using AWS PrivateLink, VPC peering, or AWS Site-to-Site VPN to share access. You can remove public access by modifying the security group attached to the RDS instance or the EC2 instance serving the database, but you should also migrate the RDS database to a private subnet.

When creating web servers in EC2, you should not place web servers directly in a public subnet with security groups allowing HTTP and HTTPS ports from all internet addresses. Instead, you should place your EC2 instances in private subnets and use Application Load Balancers in a public subnet. From there, you can attach a security group that allows HTTP/HTTPS access from public internet addresses to your Application Load Balancer, and attach a security group that allows HTTP/HTTPS from your Load Balancer security group to your web server EC2 instances. You can also associate AWS WAF web ACLs to the load balancer to protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources.
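As a concrete sketch of this security group chaining, the following CLI calls use placeholder group IDs standing in for your Application Load Balancer and web server security groups:

    # Allow HTTPS from anywhere to the load balancer's security group only
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0aaaexample \
        --protocol tcp --port 443 --cidr 0.0.0.0/0

    # Allow HTTPS to the web servers only from the load balancer's security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0bbbexample \
        --protocol tcp --port 443 --source-group sg-0aaaexample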

Similarly, if you have OpenSearch/Elasticsearch running on EC2 or Amazon OpenSearch Service, or are using Amazon EMR, you can share these resources using PrivateLink. Use the Amazon EMR block public access configuration to verify that your EMR clusters are not shared publicly.

To connect to Remote Desktop on EC2 instances, you should use AWS Systems Manager to connect by using Fleet Manager. Connecting with Fleet Manager only requires your Windows EC2 instances to be managed nodes. When connecting using Fleet Manager, the security group requires no inbound ports, and the instance can be in a private subnet. For more information, see the Systems Manager prerequisites.
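Fleet Manager connections are made through the console. If you want an equivalent check from the command line, one option is Session Manager port forwarding, which likewise requires no inbound ports. This is a sketch: the instance ID is a placeholder, and it assumes you have installed the Session Manager plugin for the AWS CLI.

    # Forward local port 56789 to RDP (3389) on the managed instance
    aws ssm start-session \
        --target i-0123456789abcdef0 \
        --document-name AWS-StartPortForwardingSession \
        --parameters '{"portNumber":["3389"],"localPortNumber":["56789"]}'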

Conclusion

This blog post demonstrates how you can identify and remediate publicly accessible resources. Amazon VPC Network Access Analyzer helps you identify available network paths by using automated reasoning technology and user-defined access scopes. By using these scopes, you can define non-permitted network paths, identify resources that have those paths, and then take action to increase your security posture. To learn more about building continuous verification of network compliance at scale, see the blog post Continuous verification of network compliance using Amazon VPC Network Access Analyzer and AWS Security Hub. Take action today by deploying the Network Access Analyzer scopes in this post to evaluate your environment and add layers of security to best fit your needs.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Patrick Duffy

Patrick is a Solutions Architect in the Small Medium Business (SMB) segment at AWS. He is passionate about raising awareness and increasing security of AWS workloads. Outside work, he loves to travel and try new cuisines and enjoys a match in Magic Arena or Overwatch.

Peter Ticali

Peter is a Solutions Architect focused on helping Media & Entertainment customers transform and innovate. With over three decades of professional experience, he’s had the opportunity to contribute to architectures that stream live video to millions, including two Super Bowls, PPVs, and even a Royal Wedding. Previously he held Director and CTO roles in the EdTech, advertising, and public relations space. Additionally, he is a published photojournalist.

John Backes

John is a Senior Applied Scientist in AWS Networking. He is passionate about applying Automated Reasoning to network verification and synthesis problems.

Vaibhav Katkade

Vaibhav is a Senior Product Manager in the Amazon VPC team. He is interested in areas of network security and cloud networking operations. Outside of work, he enjoys cooking and the outdoors.

How to detect suspicious activity in your AWS account by using private decoy resources

Post Syndicated from Maitreya Ranganath original https://aws.amazon.com/blogs/security/how-to-detect-suspicious-activity-in-your-aws-account-by-using-private-decoy-resources/

As customers mature their security posture on Amazon Web Services (AWS), they are adopting multiple ways to detect suspicious behavior and notify response teams or workflows to take action. One example is using Amazon GuardDuty to monitor AWS accounts and workloads for malicious activity and deliver detailed security findings for visibility and remediation. Another tactic is to deploy decoys, also called honeypots, as an effective way to detect suspicious behavior.

In this blog post, we’ll show how you can create low-cost private decoy AWS resources in your AWS accounts and configure them to generate alerts when they are accessed. These decoy resources appear legitimate but don’t contain any useful or sensitive data and typically are not accessed in the normal course of business by your users and systems. Any attempt to access them is a clear signal of suspicious activity that should be investigated. You can use data sources like AWS CloudTrail, services like Amazon Detective, and your own security information and event management (SIEM) systems to investigate the activity further. This post is aimed at experienced AWS users and security professionals.

Detecting suspicious activity

Imagine that an unauthorized user has obtained credentials for your account. This could also be an insider, malicious or careless, using their valid credentials inappropriately. The unauthorized user might use these credentials to invoke AWS API calls to list resources in your account. As the next step, they might try to access resources that are commonly used to store sensitive data—such as objects in Amazon Simple Storage Service (Amazon S3) buckets, secrets in AWS Secrets Manager, or items in Amazon DynamoDB. They might also try to elevate their privileges by assuming other Identity and Access Management (IAM) roles in your account. In your role as a security professional, your task is to detect this suspicious behavior and take action in response. One approach is to learn the baseline of activities of the IAM users and roles in your account and flag any deviations from the learned baseline—this is the approach taken by GuardDuty when it generates findings such as Discovery:IAMUser/AnomalousBehavior.

This post focuses on another approach of creating private decoy resources in your account that are intended to look legitimate, but don’t have any useful or sensitive data and are not exposed publicly. These decoys are designed to alert you about suspicious activities that could indicate AWS credentials exposure or account compromise. You can use the decoys in conjunction with other techniques, such as creating deception environments and public and private honeypots to better detect suspicious activity in your accounts and applications.

The Fidelity-Isolation-Cost trilemma

In an ACM Queue article titled Lamboozling Attackers: A New Generation of Deception, Kelly Shortridge and Ryan Petrich introduced the Fidelity-Isolation-Cost (FIC) trilemma that “captures the most important dimensions of designing deception systems: fidelity, isolation, and cost.” Using their definition of the FIC trilemma, we see that decoy AWS resources can be well suited to designing deception systems:

  • Fidelity – Because the decoys are actual AWS resources, they behave like other legitimate resources and have high fidelity. For example, a decoy S3 bucket behaves exactly like any other S3 bucket, with the only exception being that the object data it contains is dummy data with no real value. However, the unauthorized user discovers this fact only after downloading the object data, an action that generates an automated alert to your security team.
  • Isolation – You can simply isolate the decoy AWS resources from other resources in the same account. For example, an S3 bucket is inherently isolated from other S3 buckets in the same account. An unauthorized user that can read the decoy S3 bucket does not, by doing so, get the ability to access or impact the availability of other resources in the account. The credentials obtained by the unauthorized user might have permissions to actions on other services, but the presence of the decoy S3 bucket doesn’t add to those permissions in any way.
  • Cost – You can keep the cost of deception low by choosing AWS resources that have no cost or low cost to deploy, are deployed by means of automation, and require no further operation or maintenance effort. For example, an S3 bucket with several files that are a few MB in size costs a fraction of a US cent per month for storage. The API request cost should be zero, because the bucket is designed never to be accessed in the normal course of business. Choosing similar zero or low-cost resources can make it cost-effective and feasible to create such decoy resources in multiple accounts, including in Production accounts, where it’s especially important to detect suspicious activity.

Examples of private decoy AWS resources

The following list shows examples of private decoy AWS resources that are high-fidelity, high-isolation, and low-cost, and that are suitable to deploy in an account that has sensitive data or applications. For each resource, the list also gives the CloudTrail event source and event names that record accesses to it. You can use these CloudTrail events to create corresponding Amazon EventBridge rules that will generate alerts and notifications.

  • S3 bucket and S3 objects with dummy data
    CloudTrail event source: s3.amazonaws.com
    CloudTrail event names: GetObject, HeadObject
    Considerations: Ensure that the S3 objects do not contain any sensitive data. S3 data events must be enabled in CloudTrail for the decoy S3 bucket.
  • IAM role that should never be assumed
    CloudTrail event source: sts.amazonaws.com
    CloudTrail event name: AssumeRole
    Considerations: Ensure that the IAM policies attached to this role allow access only to decoy resources and no other data or resources. Ensure that the IAM role’s trust policy only trusts principals in the same account to assume the role.
  • Secrets Manager secret (see the Note that follows this list)
    CloudTrail event source: kms.amazonaws.com
    CloudTrail event name: Decrypt
    Considerations: Ensure that the secret value does not contain any sensitive data.
  • AWS Systems Manager Parameter Store parameter (see the Note that follows this list)
    CloudTrail event source: kms.amazonaws.com
    CloudTrail event name: Decrypt
    Considerations: Ensure that the parameter value does not contain any sensitive data.
  • DynamoDB table that contains items with dummy data
    CloudTrail event source: dynamodb.amazonaws.com
    CloudTrail event names: BatchExecuteStatement, BatchGetItem, BatchWriteItem, DeleteItem, ExecuteStatement, ExecuteTransaction, GetItem, PutItem, Query, Scan, TransactGetItems, TransactWriteItems, UpdateItem
    Considerations: Ensure that the items do not contain any sensitive data. DynamoDB data events must be enabled in CloudTrail for the decoy DynamoDB table.

Note: When CloudTrail Management API events are sent to EventBridge, read-only events such as Get*, List*, and Describe* are filtered out and not processed. In order to get findings for secrets and Systems Manager parameters that are being accessed, you need to alert on GetSecretValue and GetParameter API calls. Because these are not processed by EventBridge, you can instead use the fact that secrets and secure string parameters are encrypted by using AWS Key Management Service (AWS KMS), and match on the corresponding AWS KMS Decrypt API calls. This means that successful calls from an unauthorized user to GetSecretValue and GetParameter can still be matched and alerted on.
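As an illustrative sketch, an EventBridge event pattern along these lines would match Decrypt calls for a decoy secret. The encryptionContext key name (SecretARN) and the secret ARN are assumptions for illustration; verify the exact field names against the Decrypt events in your own CloudTrail logs before relying on them.

    {
        "source": ["aws.kms"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["kms.amazonaws.com"],
            "eventName": ["Decrypt"],
            "requestParameters": {
                "encryptionContext": {
                    "SecretARN": ["arn:aws:secretsmanager:us-east-1:111122223333:secret:decoy-secret-AbCdEf"]
                }
            }
        }
    }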

Notifications from matching EventBridge rules can be sent to an AWS Lambda function that generates custom findings in Security Hub. These findings can then be sent to downstream systems that you might have configured in your environment, such as your SIEM system or an automated response workflow in your Security Orchestration, Automation, and Response system. Figure 1 shows this workflow.

Figure 1: Accesses to decoy resources automatically create custom Security Hub findings
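To give a sense of what the Lambda function sends, the following is a minimal, hedged sketch of importing a custom finding with the CLI. All identifiers are placeholders, and the solution’s actual function populates additional fields.

    aws securityhub batch-import-findings --findings '[{
        "SchemaVersion": "2018-10-08",
        "Id": "decoy-s3-object-accessed-example",
        "ProductArn": "arn:aws:securityhub:us-east-1:111122223333:product/111122223333/default",
        "GeneratorId": "decoy-resource-monitor",
        "AwsAccountId": "111122223333",
        "Types": ["Unusual Behaviors"],
        "CreatedAt": "2022-08-01T00:00:00Z",
        "UpdatedAt": "2022-08-01T00:00:00Z",
        "Severity": {"Label": "CRITICAL"},
        "Title": "Decoy S3 object accessed",
        "Description": "A decoy S3 object was read. Investigate the calling identity in CloudTrail.",
        "Resources": [{"Type": "AwsS3Bucket", "Id": "arn:aws:s3:::decoy-bucket-example"}]
    }]'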

Deploy the private decoy resources

We’ve provided an AWS CloudFormation template that you can use to deploy the solution. The template creates the following private decoy AWS resources in your account:

  • DynamoDB table
  • IAM role
  • S3 bucket with a decoy S3 object
  • Systems Manager SecureString parameter
  • Secrets Manager secret

In addition, the CloudFormation template deploys the following resources in your account to detect accesses to the decoys and send custom findings to Security Hub:

  • A CloudTrail data events trail that includes only data events from the decoy S3 bucket and DynamoDB table
  • Six EventBridge rules to match specific CloudTrail API events
  • Two Lambda functions with corresponding IAM roles:
    • The WriteData Lambda function is a CloudFormation custom resource that is used to create the decoy S3 object and the Systems Manager SecureString parameter
    • The Data Lambda function is a target for the EventBridge rules, and it sends custom findings to Security Hub when the decoy resources are accessed

Prerequisites

The prerequisites to deploying the solution are as follows:

  • Security Hub must be enabled in the AWS Regions where the private decoys will be deployed, in order to receive custom findings.
  • You must have created a CloudTrail trail to log management events for the AWS account in the Region where you deploy the private decoys. This trail can be created locally in the account or can be an organization trail. Ensure that you have enabled both read and write events, and enabled all AWS KMS events in the trail (this is the default configuration).

Deploy the solution

After you have the prerequisites set up, you can launch the CloudFormation template to deploy the private decoys.

To launch the template

  1. Choose the following Launch Stack button to launch a CloudFormation stack in your account.

    Launch Stack

    Note: The stack will launch in the N. Virginia (us-east-1) Region. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template, modify it, and deploy it to the selected Region. In order to get maximum coverage for detecting suspicious activity, we recommend that you deploy the solution into your key production accounts and Regions.

  2. On the Specify stack details page, enter the stack name, then choose Next.

    The CloudFormation template will use the stack name as part of the naming of the resources that are created. We recommend that you use your organization’s existing naming conventions for stack names, and not make reference to decoy resources, because this could alert any unauthorized user to the real purpose of the resources they’re attempting to access.

    Figure 2: Specify stack details

  3. Configure any tags or other organization-specific stack options you need, or accept the default settings, and then choose Next.
  4. Review the CloudFormation settings and select the box acknowledging that AWS CloudFormation might create IAM resources with custom names, and then choose Create stack.
  5. After the stack has completed deployment, the CloudFormation stack output will show the Amazon Resource Names (ARNs) of the decoy resources that were created.
    Figure 3: CloudFormation stack outputs

Estimated costs

This solution has been designed to keep costs as low as possible by using services that have no associated costs (such as IAM roles or parameters stored in Systems Manager Parameter Store) and by keeping the use of paid services (such as S3 and DynamoDB) to a minimum.

Deploying the solution as outlined in this blog post should cost less than $1 per month for a single-account deployment. However, refer to the AWS Pricing Calculator, where you can create a pricing estimate for your deployment based on the most up-to-date pricing information.

Test the alerts

In normal circumstances, after you configure the decoys, there will be no attempted access to these resources, and no findings will be sent to Security Hub in your account. To test that the configuration is working as expected, you can issue the following commands from a device that has programmatic access to the account where the private decoy resources have been deployed. To run each command, replace the bracketed, italicized text with your own information. You can find the details for each of the resources in the Outputs section of the CloudFormation stack after it has been deployed successfully.

S3 object access

  • aws s3 cp s3://<bucket_name/object_name> /tmp
  • aws s3 cp s3://<bucket_name/object_name> s3://<any_existing_bucket>

IAM role assumption

  • aws sts assume-role --role-arn <role_arn> --role-session-name BlogTestRole

Secrets Manager access

  • aws secretsmanager get-secret-value --secret-id <secret_name>

Parameter Store access

  • aws ssm get-parameters --names <ssm_parameter> --with-decryption

DynamoDB table scan

  • aws dynamodb scan --table-name <table_name>

An example of what these test-generated findings look like is shown in Figure 4.

Figure 4: Security Hub findings

Considerations

Consider the following as you deploy decoy AWS resources:

  • You should consider decoy AWS resources as enhancements to your foundational security controls. Your foundational controls should include these measures:
    • Help prevent the compromise of AWS credentials and limit the privileges of credentials by implementing strong identity management and permissions management.
    • Identify and investigate alerts generated by decoy resources by implementing detective controls.
    • Implement incident response mechanisms to respond to and mitigate the potential impact of security incidents, such as a decoy AWS resource being accessed.
  • You should ensure that your monitoring services and tools are configured to query the configuration of resources and not the data stored in resources. Otherwise, you might get a large volume of false positives because every time a resource is accessed, a custom finding is created in Security Hub. For example, consider a service like Security Hub Security Standards checks, or a cloud security posture management (CSPM) tool that monitors your S3 buckets by describing the properties of all buckets in your account. Such tools will find the decoy S3 bucket and will interrogate its configuration by making calls such as GetBucketPolicy and GetBucketLogging. However, as long as these tools don’t try to read data in the bucket through calls such as GetObject, the EventBridge rules that are configured as described in this post won’t generate a finding.
  • As a specific example of the previous point, ensure that you don’t run a sensitive data discovery job in Amazon Macie on the decoy S3 bucket, to avoid false alerts. You can configure Amazon Macie to monitor the metadata of your S3 buckets, because those actions won’t generate alerts.
  • The solution generates custom findings in Security Hub only for successful accesses of Secrets Manager secrets and Systems Manager parameters. However, both successful and unsuccessful accesses of S3 objects and DynamoDB items, and IAM role assumption, will generate custom findings in Security Hub.

Conclusion

In this post, we discussed the advantages of using private decoy AWS resources to detect suspicious activities within your account and how these decoys can complement your existing security solutions. You learned how to create private decoys, set up alerting, and ingest (and test) these alerts as custom findings into Security Hub for central visibility across your AWS environment. The solution deployment included a set of common resources as private decoys; however, the necessary code and templates can be found in our GitHub repository, and you can extend and customize these to add other resources that you want to include in your accounts.

If you would also like to learn about using CloudTrail as another method of detecting unexpected behavior within your accounts, see the blog post Using CloudTrail to identify unexpected behaviors in individual workloads for more information.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Maitreya Ranganath

Maitreya is an AWS Security Solutions Architect. He enjoys helping customers solve security and compliance challenges and architect scalable and cost-effective solutions on AWS.

Mark Keating

Mark is an AWS Security Solutions Architect based out of the U.K. who works with Global Healthcare & Life Sciences and Automotive customers to solve their security and compliance challenges and help them reduce risk. He has over 20 years of experience working with technology in operations, solution, and enterprise architecture roles.

AWS CyberVadis report now available for due diligence on third-party suppliers

Post Syndicated from Andreas Terwellen original https://aws.amazon.com/blogs/security/aws-cybervadis-report-now-available-for-due-diligence-on-third-party-suppliers/

At Amazon Web Services (AWS), we’re continuously expanding our compliance programs to provide you with more tools and resources to perform effective due diligence on AWS. We’re excited to announce the availability of the AWS CyberVadis report to help you reduce the burden of performing due diligence on your third-party suppliers.

With the increase in adoption of cloud products and services across multiple sectors and industries, AWS is a critical component of customers’ third-party environments. Regulated customers, such as those in the financial services sector, are held to high standards by regulators and auditors when it comes to exercising effective due diligence on third parties.

Many customers use third-party cyber risk management (TPCRM) services such as CyberVadis to better manage risks from their evolving third-party environments and to drive operational efficiencies. To help with such efforts, AWS has completed the CyberVadis assessment of its security posture. CyberVadis security analysts perform the assessment and validate the results annually.

CyberVadis is a comprehensive third-party risk assessment process that combines the speed and scalability of automation with the certainty of analyst validation. The CyberVadis cybersecurity rating methodology assesses the maturity of a company’s information security management system (ISMS) through its policies, implementation measures, and results.

CyberVadis integrates responses from AWS with analytics and risk models to provide an in-depth view of the AWS security posture. The CyberVadis methodology maps to major international compliance standards.

Customers can download the AWS CyberVadis report at no additional cost. For details on how to access the report, see our AWS CyberVadis report page.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

Want more AWS Security news? Follow us on Twitter.

Andreas Terwellen

Andreas is a senior manager in security audit assurance at AWS, based in Frankfurt, Germany. His team is responsible for third-party and customer audits, attestations, certifications, and assessments across Europe. Previously, he was a CISO in a DAX-listed telecommunications company in Germany. He also worked for different consulting companies managing large teams and programs across multiple industries and sectors.

Manuel Mazarredo

Manuel is a security audit program manager at AWS based in Amsterdam, the Netherlands. Manuel leads security audits, attestations, and certification programs across Europe, and is responsible for the BeNeLux area. For the past 18 years, he has worked in information systems audits, ethical hacking, project management, quality assurance, and vendor management across a variety of industries.

How to use customer managed policies in AWS IAM Identity Center for advanced use cases

Post Syndicated from Ron Cully original https://aws.amazon.com/blogs/security/how-to-use-customer-managed-policies-in-aws-single-sign-on-for-advanced-use-cases/

Are you looking for a simpler way to manage permissions across all your AWS accounts? Perhaps you federate your identity provider (IdP) to each account and divide permissions and authorization between cloud and identity teams, but want a simpler administrative model. Maybe you use AWS IAM Identity Center (successor to AWS Single Sign-On) but are running out of room in your permission set policies, or need a way to keep the role models you have while tailoring the policies in each account to reference that account’s specific resources. Or perhaps you are considering IAM Identity Center as an alternative to per-account federation, but need a way to reuse the customer managed policies that you have already created. Great news! Now you can use customer managed policies (CMPs) and permissions boundaries (PBs) to help with these more advanced situations.

In this blog post, we explain how you can use CMPs and PBs with IAM Identity Center to address these considerations. We describe how IAM Identity Center works, how these types of policies work with it, and how best to use CMPs and PBs. We also show you how to configure and use CMPs in your IAM Identity Center deployment.

IAM Identity Center background

With IAM Identity Center, you can centrally manage access to multiple AWS accounts and business applications, while providing your workforce users a single sign-on experience with your choice of identity system. Rather than manage identity in each account individually, IAM Identity Center provides one place to connect an existing IdP, Microsoft Active Directory Domain Services (AD DS), or workforce users that you create directly in AWS. Because IAM Identity Center integrates with AWS Organizations, it also provides a central place to define your roles, assign them to your users and groups, and give your users a portal where they can access their assigned accounts.

With IAM Identity Center, you manage access to accounts by creating and assigning permission sets. These are AWS Identity and Access Management (IAM) role templates that define (among other things) which policies to include in a role. If you’re just getting started, you can attach AWS managed policies to the permission set. These policies, created by AWS service teams, enable you to get started without having to learn how to author IAM policies in JSON.

For more advanced cases, where you are unable to express policies sufficiently using AWS managed policies, you can create a custom policy in the permission set. When you assign a permission set to users or groups in a specified account, IAM Identity Center creates a role from the template and then controls single sign-on access to the role. During role creation, IAM Identity Center attaches any specified AWS managed policies, and adds any custom policy to the role as an inline policy. These custom policies must fit within the 10,240-character IAM quota for inline policies.

IAM provides two other types of custom policies that increase flexibility when managing access in AWS accounts. Customer managed policies (CMPs) are standalone policies that you create and can attach to roles in your AWS accounts to grant or deny access to AWS resources. Permissions boundaries (PBs) provide an advanced feature that specifies the maximum permissions that a role can have. For both CMPs and PBs, you create the custom policy in your account and then attach it to roles. IAM Identity Center now supports attaching both of these to permission sets so you can handle cases where AWS Managed Policies and inline policies may not be enough.

How CMPs and PBs work with IAM Identity Center

Although you can create IAM users to manage access to AWS accounts and resources, AWS recommends that you use roles instead of IAM users for this purpose. Roles act as an identity (sometimes called an IAM principal), and you assign permissions (identity-based policies) to the role. If you use the AWS Management Console or the AWS Command Line Interface to assume a role, you get the permissions of the role that you assumed. With its simpler way to maintain your users and groups in one AWS location and its ability to centrally manage and assign roles, AWS recommends that you use IAM Identity Center to manage access to your AWS accounts.

With this new IAM Identity Center release, you have the option to specify the names of CMPs and one PB in your permission set (role definition). Doing so modifies how IAM Identity Center provisions roles into accounts. When you assign a user or group to a permission set, IAM Identity Center checks the target account to verify that all specified CMPs and the PB are present. If they are all present, IAM Identity Center creates the role in the account and attaches the specified policies. If any of the specified CMPs or the PB are missing, IAM Identity Center fails the role creation.

This all sounds simple enough, but there are important implications to consider.

If you modify the permission set, IAM Identity Center updates the corresponding roles in all accounts to which you assigned the permission set. What is different when using CMPs and PBs is that IAM Identity Center is uninvolved in the creation or maintenance of the CMPs or PBs. It’s your responsibility to make sure that the CMPs and PBs are created and managed in all of the accounts to which you assign permission sets that use the CMPs and PBs. This means that you must be careful in how you name, create, and maintain these policies in your accounts, to avoid unintended consequences. For example, if you do not apply changes to CMPs consistently across all your accounts, the behavior of an IAM Identity Center created role will vary between accounts.

What CMPs do for you

By using CMPs with permission sets, you gain four main benefits:

  1. If you federate to your accounts directly and have CMPs already, you can reuse your CMPs with permission sets in IAM Identity Center. We describe exceptions later in this post.
  2. If you are running out of space in your permission set inline policies, you can add CMPs to increase the aggregate size of your policies.
  3. Policies often need to refer to account-specific resources by Amazon Resource Name (ARN). Designing an inline policy that does this correctly across all your accounts can be challenging and, in some cases, may not be possible. By specifying a CMP in a permission set, you can tailor the CMPs in each of your accounts to reference the resources of the account. When IAM Identity Center creates the role and attaches the CMPs of the account, the policies used by the IAM Identity Center–generated role are now specific to the account. We highlight this example later in this post.
  4. You get the benefit of a central location to define your roles, which gives you visibility of all the policies that are in use across the accounts where you assigned permission sets. This enables you to have a list of CMP and PB names that you should monitor for change across your accounts. This helps you ensure that you are maintaining your policies correctly.

Considerations and best practices

Start simple, avoid complex – If you’re just starting out, try using AWS managed policies first. With managed policies, you don’t need to know JSON policy syntax to get started. If you need more advanced policies, start by creating identity-based inline custom policies in the permission set. These policies are provisioned as inline policies, and they will be identical in all your accounts. If you need larger policies or more advanced capabilities, use CMPs as your next option. In most cases, you can accomplish what you need with inline and customer managed policies. When you can’t achieve your objective by using CMPs, use PBs. For information about intended use cases for PBs, see the blog post When and where to use IAM permissions boundaries.

Permissions boundaries don’t constrain IAM Identity Center admins who create permission sets – IAM Identity Center administrators (your staff) that you authorize to create permission sets can create inline policies and attach CMPs and PBs to permission sets, without restrictions. Permissions boundary policies set the maximum permissions of a role and the maximum permissions that the role can grant within an account through IAM only. For example, PBs can set the maximum permissions of a role that uses IAM to create other roles for use by code or services. However, a PB doesn’t set maximum permissions of the IAM Identity Center permission set creator. What does that mean? Suppose you created an IAM Identity Center Admin permission set that has a PB attached, and you assigned it to John Doe. John Doe can then sign in to IAM Identity Center and modify permission sets with any policy, regardless of what you put in the PB. The PB doesn’t restrict the policies that John Doe can put into a permission set.

In short, use PBs only for roles that need to create IAM roles for use by code or services. Don’t use PBs for permission sets that authorize IAM Identity Center admins who create permission sets.

Create and use a policy naming plan – IAM Identity Center doesn’t consider the content of a named policy that you attach to a permission set. If you assign a permission set in multiple accounts, make sure that all referenced policies have the same intent. Failure to do this will result in unexpected and inconsistent role behavior between different accounts. Imagine a CMP named “S3” that grants S3 read access in account A, and another CMP named “S3” that grants S3 administrative permissions over all S3 buckets in account B. A permission set that attaches the S3 policy and is assigned in accounts A and B will be confusing at best, because the level of access is quite different in each of the accounts. It’s better to use more specific names, such as “S3Reader” and “S3Admin,” for your policies and to ensure that they are identical except for the account-specific resource ARNs.

Use automation to provision policies in accounts – Using tools such as AWS CloudFormation StackSets, or other infrastructure-as-code tools, can help ensure that naming and policies are consistent across your accounts. It also helps reduce the potential for administrators to modify policies in undesirable ways.
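For example, a minimal CloudFormation template like the following sketch could be deployed to every member account through StackSets. The policy name and statement are illustrative only, and in practice you would scope the Resource element to each account’s specific ARNs rather than use a wildcard.

    {
        "Resources": {
            "S3ReaderPolicy": {
                "Type": "AWS::IAM::ManagedPolicy",
                "Properties": {
                    "ManagedPolicyName": "S3Reader",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [
                            {
                                "Effect": "Allow",
                                "Action": ["s3:GetObject", "s3:ListBucket"],
                                "Resource": "*"
                            }
                        ]
                    }
                }
            }
        }
    }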

Policies must match the capabilities of IAM Identity Center – Although IAM Identity Center supports most IAM semantics, there are exceptions:

  1. If you use an identity provider as your identity source, IAM Identity Center passes only PrincipalTag attributes that come through SAML assertions to IAM. IAM Identity Center doesn’t process or forward other SAML assertions to IAM. If you have CMPs or PBs that rely on other information from SAML assertions, they won’t work. For example, IAM Identity Center doesn’t provide multi-factor authentication (MFA) context keys or SourceIdentity.
  2. Resource policies that reference role names or tags as part of trust policies don’t work with IAM Identity Center. You can use resource policies that use attribute-based access control (ABAC). IAM Identity Center role names are not static, and you can’t tag the roles that IAM Identity Center creates from its permission sets.

How to use CMPs with permission sets

Now that you understand permission sets and how they work with CMPs and PBs, let’s take a look at how you can configure a permission set to use CMPs.

In this example, we show you how to use one or more permission sets that attach a CMP that enables Amazon CloudWatch operations against the log group of specified accounts. Specifically, the AllowCloudWatch_PermissionSet permission set attaches a CMP named AllowCloudWatchForOperations. When we assign the permission set in two separate accounts, the assigned users can perform CloudWatch operations against the log groups of the assigned account only. Because the CloudWatch operations policies are in CMPs rather than inline policies, the log groups can be account-specific, and you can reuse the CMPs in other permission sets if you want to make CloudWatch operations available through multiple permission sets.

Note: For this blog post, we demonstrate using CMPs by working in the AWS Management Console to create policies and assignments. We recommend that after you learn how to do this, you create your policies through automation for production environments, for example by using AWS CloudFormation. The intent of this example is to demonstrate how you can have a policy in two separate accounts that refers to different resources, something that is harder to accomplish using inline policies. The use case itself is not that advanced, but the use of CMPs to reference different resources in each account is a more advanced idea. We kept this simple to make it easier to focus on the feature rather than the use case.

Prerequisites

In this example, we assume that you know how to use the AWS Management Console, create accounts, navigate between accounts, and create customer managed policies. You also need administrative privileges to enable IAM Identity Center and to create policies in your accounts.

Before you begin, enable IAM Identity Center in your AWS Organizations management account in an AWS Region of your choice. You need to create at least two accounts within your AWS Organization. In this example, the account names are member-account and member-account-1. After you set up the accounts, you can optionally configure IAM Identity Center for administration in a delegated member account.

Configure an IAM Identity Center permission set to use a CMP

Follow these four procedures to use a CMP with a permission set:

  1. Create CMPs with consistent names in your target accounts
  2. Create a permission set that references the CMP that you created
  3. Assign groups or users to the permission set in accounts where you created CMPs
  4. Test your assignments

Step 1: Create CMPs with consistent names in your target accounts

In this step, you create a customer managed policy named AllowCloudWatchForOperations in two member accounts. The policy allows your cloud operations users to access a predefined CloudWatch log group in the account.

To create CMPs in your target accounts

  1. Sign in to AWS.

    Note: You can sign in to IAM Identity Center if you have existing permission sets that enable you to create policies in member accounts. Alternatively, you can sign in using IAM federation or as an IAM user that has access to roles that enable you to navigate to other accounts where you can create policies. Your sign-in should also give you access to a role that can administer IAM Identity Center permission sets.

  2. Navigate to an AWS Organizations member account.

    Note: If you signed in through IAM Identity Center, use the user portal page to navigate to the account and role. If you signed in by using IAM federation or as an IAM user, choose your sign-in name that is displayed in the upper right corner of the AWS Management Console and then choose switch role, as shown in Figure 1.

    Figure 1: Switch role for IAM user or IAM federation

  3. Open the IAM console.
  4. In the navigation pane, choose Policies.
  5. In the upper right of the page, choose Create policy.
  6. On the Create Policy page, choose the JSON tab.
  7. Paste the following policy into the JSON text box. Replace <account-id> with the ID of the account in which the policy is created.

    Tip: To copy your account number, choose your sign-in name that is displayed in the upper right corner of the AWS Management Console, and then choose the copy icon next to the account ID, as shown in Figure 2.

    Figure 2: Copy account number

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "logs:CreateLogStream",
                    "logs:DescribeLogStreams",
                    "logs:PutLogEvents",
                    "logs:GetLogEvents"
                ],
                "Effect": "Allow",
                "Resource": "arn:aws:logs:us-east-1:<account-id>:log-group:OperationsLogGroup:*"
            },
            {
                "Action": [
                    "logs:DescribeLogGroups"
                ],
                "Effect": "Allow",
                "Resource": "arn:aws:logs:us-east-1:<account-id>:log-group::log-stream:*"
            }
        ]
    }

  8. Choose Next:Tags, and then choose Next:Review.
  9. On the Create Policy/Review Policy page, in the Name field, enter AllowCloudWatchForOperations, and then choose Create policy. This is the name that you will use when you attach the CMP to the permission set in the next procedure (Step 2).
  10. Repeat steps 2 through 9 in at least one other member account. Be sure to replace the <account-id> element in the policy with the account ID of each account where you create the policy. The only difference between the policies in each account is the <account-id> in the policy.
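If you prefer to script this step rather than use the console, the following CLI sketch creates the same policy; the file name is a placeholder for wherever you saved the policy JSON, and you would run it once in each member account:

    aws iam create-policy \
        --policy-name AllowCloudWatchForOperations \
        --policy-document file://allow-cloudwatch-for-operations.json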

Step 2: Create a permission set that references the CMP that you created

At this point, you have at least two member accounts containing the same policy with the same policy name. However, the ResourceARN in each policy refers to log groups that belong to the respective accounts. In this step, you create a permission set and attach the policy to the permission set. Importantly, you attach only the name of the policy to the permission set. The actual attachment of the policy to the role that IAM Identity Center creates, happens when you assign the permission set to a user or group in Step 3.

To create a permission set that references the CMP

  1. Sign in to the Organizations management account or the IAM Identity Center delegated administration account.
  2. Open the IAM Identity Center console.
  3. In the navigation pane, choose Permission sets, and then choose Create permission set.
  4. On the Select permission set type screen, select Custom permission set and choose Next.
    Figure 3: Select custom permission set

  5. On the Specify policies and permissions boundary page, expand the Customer managed policies option, and choose Attach policies.
    Figure 4: Specify policies and permissions boundary

  6. For Policy names, enter the name of the policy. This name must match the name of the policy that you created in Step 1. In our example, the name is AllowCloudWatchForOperations. Choose Next.
  7. On the Permission set details page, enter a name for your permission set. In this example, use AllowCloudWatch_PermissionSet. You can also specify additional details for your permission set, such as session duration and relay state (a link to a specific AWS Management Console page of your choice).
    Figure 5: Permission set details

  8. Choose Next, and then choose Create.
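The console steps above also map to AWS CLI calls, sketched below with placeholder instance and permission set ARNs:

    # Create the permission set
    aws sso-admin create-permission-set \
        --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
        --name AllowCloudWatch_PermissionSet

    # Attach the CMP to the permission set by name (the policy itself stays in each account)
    aws sso-admin attach-customer-managed-policy-reference-to-permission-set \
        --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
        --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
        --customer-managed-policy-reference Name=AllowCloudWatchForOperations,Path=/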

Step 3: Assign groups or users to the permission set in accounts where you created your CMPs

In the preceding steps, you created a customer managed policy in two or more member accounts, and a permission set with the customer managed policy attached. In this step, you assign users to the permission set in your accounts.

To assign groups or users to the permission set

  1. Sign in to the Organizations management account or the IAM Identity Center delegated administration account.
  2. Open the IAM Identity Center console.
  3. In the navigation pane, choose AWS accounts.
    Figure 6: AWS account

  4. For testing purposes, in the AWS Organization section, select all the accounts where you created the customer managed policy. This means that any users or groups that you assign during the process will have access to the AllowCloudWatch_PermissionSet role in each account. Then, on the top right, choose Assign users or groups.
  5. Choose the Users or Groups tab and then select the users or groups that you want to assign to the permission set. You can select multiple users and multiple groups in this step. For this example, we recommend that you select a single user for which you have credentials, so that you can sign in as that user to test the setup later. After selecting the users or groups that you want to assign, choose Next.
    Figure 7: Assign users and groups to AWS accounts

  6. Select the permission set that you created in Step 2 and choose Next.
  7. Review the users and groups that you are assigning and choose Submit.
  8. You will see a message that IAM Identity Center is configuring the accounts. In this step, IAM Identity Center creates a role in each of the accounts that you selected. For each account, IAM Identity Center looks for the CMP that you specified in the permission set. If the name of the CMP in the permission set matches the name that you provided when creating the CMP, IAM Identity Center creates the role from the permission set. If the names don’t match, or if the CMP isn’t present in the account to which you assigned the permission set, you see an error message associated with that account. After successful submission, you will see the following message: We reprovisioned your AWS accounts successfully and applied the updated permission set to the accounts.
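For automation, the assignment in this step corresponds to a CLI call like the following sketch (placeholder ARNs, account ID, and principal ID):

    aws sso-admin create-account-assignment \
        --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
        --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
        --target-type AWS_ACCOUNT \
        --target-id 111122223333 \
        --principal-type USER \
        --principal-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111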

Step 4: Test your assignments

Congratulations! You have successfully created CMPs in multiple AWS accounts, created a permission set and attached the CMPs by name, and assigned the permission set to users and groups in the accounts. Now it’s time to test the results.

To test your assignments

  1. Go to the IAM Identity Center console.
  2. Navigate to the Settings page.
  3. Copy the user portal URL, and then paste the user portal URL into your browser.
  4. At the sign-in prompt, sign in as one of the users that you assigned to the permission set.
  5. The IAM Identity Center user portal shows the accounts and roles that you can access. In the example shown in Figure 8, the user has access to the AllowCloudWatch_PermissionSet created in two accounts.
    Figure 8: User portal

    If you choose AllowCloudWatch_PermissionSet in member-account, you will have access to the CloudWatch log group in the member-account account. If you choose the role in member-account-1, you will have access to the CloudWatch log group in member-account-1.

  6. Test the access by choosing Management Console for the AllowCloudWatch_PermissionSet in the member-account.
  7. Open the CloudWatch console.
  8. In the navigation pane, choose Log groups. You should be able to access log groups, as shown in Figure 9.
    Figure 9: CloudWatch log groups

  9. Open the IAM console. You shouldn’t have permissions to see the details on this console, as shown in Figure 10. This is because AllowCloudWatch_PermissionSet only provides CloudWatch log access.
    Figure 10: Blocked access to the IAM console

  10. Return to the IAM Identity Center user portal.
  11. Repeat steps 4 through 8 using member-account-1.

Answers to key questions

What happens if I delete a CMP or PB that is attached to a role that IAM Identity Center created?
IAM prevents you from deleting policies that are attached to IAM roles.

How can I delete a CMP or PB that is attached to a role that IAM Identity Center created?
Remove the CMP or PB reference from all your permission sets. Then re-provision the roles in your accounts. This detaches the CMP or PB from IAM Identity Center–created roles. If the policies are unused by other IAM roles in your account or by IAM users, you can delete the policy.
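You can re-provision from the console, or with a CLI call along these lines (placeholder ARNs):

    aws sso-admin provision-permission-set \
        --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
        --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
        --target-type ALL_PROVISIONED_ACCOUNTS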

What happens if I modify a CMP or PB that is attached to an IAM Identity Center provisioned role?
The IAM Identity Center role picks up the policy change the next time that someone assumes the role.

Conclusion

In this post, you learned how IAM Identity Center works with customer managed policies and permissions boundaries that you create in your AWS accounts. You learned different ways that this capability can help you, and some of the key considerations and best practices to succeed in your deployments. That includes the principle of starting simple and avoiding unnecessarily complex configurations. Remember these four principles:

  1. In most cases, you can accomplish everything you need by starting with custom (inline) policies.
  2. Use customer managed policies for more advanced cases.
  3. Use permissions boundary policies only when necessary.
  4. Use CloudFormation to manage your customer managed policies and permissions boundaries rather than having administrators deploy them manually in accounts (a minimal sketch follows this list).
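
As a sketch of principle 4, you could define the CMP once in code and deploy it to every target account (for example, through CloudFormation StackSets). The following minimal AWS CDK (Python) example synthesizes to a CloudFormation template; the policy name and actions are illustrative assumptions, and the name must exactly match the CMP name referenced in your permission set.

    # A minimal AWS CDK (Python) sketch: define the CMP once in code and
    # deploy it to each target account. The policy name and actions are
    # illustrative; the name must match the permission set reference exactly.
    from aws_cdk import App, Stack, aws_iam as iam
    from constructs import Construct

    class CmpStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            iam.ManagedPolicy(
                self,
                "AllowCloudWatchCmp",
                managed_policy_name="AllowCloudWatch",  # referenced by name in the permission set
                statements=[
                    iam.PolicyStatement(
                        actions=["logs:Describe*", "logs:Get*", "logs:List*"],
                        resources=["*"],
                    )
                ],
            )

    app = App()
    CmpStack(app, "CmpStack")
    app.synth()

Deploying this stack to each account before you assign the permission set helps you avoid the name-mismatch error described in Step 8 earlier.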

To learn more about this capability, see the IAM Identity Center User Guide. If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ron Cully

Ron is a Principal Product Manager at AWS, where he leads feature and roadmap planning for workforce identity products. Ron has over 20 years of industry experience in product and program management of networking and directory-related products. He is passionate about delivering secure, reliable solutions that help make it easier for customers to migrate directory-aware applications and workloads to the cloud.

Nitin Kulkarni

Nitin is a Solutions Architect on the AWS Identity Solutions team. He helps customers build secure and scalable solutions on the AWS platform. He also enjoys hiking, baseball and linguistics.

AWS launches AWS Wickr ATAK Plugin

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/aws-launches-aws-wickr-atak-plugin/

AWS is excited to announce the launch of the AWS Wickr ATAK Plugin, which makes it easier for ATAK users to maintain secure communications.

The Android Team Awareness Kit (ATAK)—also known as Android Tactical Assault Kit (ATAK) for military use—is a smartphone geospatial infrastructure and situational awareness application. It provides mapping, messaging, and geofencing capabilities to enable safe collaboration over geography.

ATAK users, referred to as operators, can view the location of other operators and potential hazards—a major advantage over relying on hand-held radio transmissions. While ATAK was initially designed for use in combat zones, the technology has been adapted to fit the missions of local, state, and federal agencies.

ATAK is currently in use by over 40,000 US Department of Defense (DoD) users—including the Air Force, Army, Special Operations, and National Guard—along with the Department of Justice (DOJ), the Department of Homeland Security (DHS), and 32,000 nonfederal users.

Using AWS Wickr with ATAK

AWS Wickr is a secure collaboration service that provides enterprises and government agencies with advanced security and administrative controls to help them meet security and compliance requirements. The AWS Wickr service is now in preview.

With AWS Wickr, communication mechanisms such as one-to-one and group messaging, audio and video calling, screen sharing, and file sharing are protected with 256-bit end-to-end encryption (E2EE). Encryption takes place locally, on the endpoint. Every message, call, and file is encrypted with a new random key, and no one but the intended recipients can decrypt them. Flexible administrative features enable organizations to deploy at scale, and facilitate information governance.

AWS Wickr supports many agencies that use ATAK. However, until now, ATAK operators have had to leave the ATAK application in order to use AWS Wickr, which creates operational risk.

AWS Wickr ATAK Plugin

AWS Wickr has developed a plugin that enhances ATAK with secure communications features. ATAK operators are provided with a Wickr Enterprise or Wickr Pro account, so they can use AWS Wickr within ATAK for secure messaging, calling, and file transfer. This helps reduce interruptions and the complexity of configuring ATAK chat features.

Use cases

The AWS Wickr ATAK Plugin has multiple use cases.

Military

The military uses ATAK for blue force tracking to locate team members, red force tracking to locate enemies, and terrain and weather analysis, as well as to visually communicate movements to friendly forces.

The AWS Wickr ATAK Plugin enhances the ability of military personnel to maintain the situational awareness that ATAK provides while quickly receiving and reacting to Wickr communications. Ephemeral messaging options allow unit leaders to send mission plans and GPS points of interest, and to set burn-on-read and expiration timers. Information can be deleted from the device while being retained on the AWS Wickr service to help meet compliance requirements and facilitate the creation of after-action reports.

Law enforcement

ATAK is a powerful tool for team tracking and mission planning that promotes a safer and better response to critical law enforcement and public-safety events.

The AWS Wickr ATAK Plugin adds to the capabilities of ATAK by supporting secure communications between tactical, negotiation, and investigative teams.

First responders

ATAK aids in search-and-rescue and multi-jurisdictional natural disaster responses, such as hurricane relief efforts.

The AWS Wickr ATAK Plugin provides secure, uninterrupted communication between all levels of first responders to help them get oriented quickly, and support complex coordination needs.

Getting started

AWS customers can sign up to use AWS Wickr at no cost during the preview period. For more information about the AWS Wickr ATAK Plugin, email [email protected], and visit the AWS Wickr web page.

If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS based in Chicago. She has more than a decade of experience in the security industry, and has a strong focus on privacy risk management. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Randy Brumfield

Randy leads technology business for new initiatives and the Cloud Support Engineering team at Wickr, an AWS Company. Prior to Wickr (and AWS), Randy spent close to two and a half decades in Silicon Valley across several start-ups, networking companies, and system integrators in various corporate development, product management, and operations roles. Randy currently resides in San Jose, California.

How to incorporate ACM PCA into your existing Windows Active Directory Certificate Services

Post Syndicated from Geoff Sweet original https://aws.amazon.com/blogs/security/how-to-incorporate-acm-pca-into-your-existing-windows-active-directory-certificate-services/

Using certificates to authenticate and encrypt data is vital to enterprise security. For example, companies rely on certificates to provide TLS encryption for web applications so that client data is protected. However, not all certificates need to be issued from a publicly trusted certificate authority (CA). A privately trusted CA can issue certificates that help protect data in transit on resources such as load balancers, and can also support device authentication for endpoints and IoT devices. Many organizations already run such a privately trusted CA in their Microsoft Active Directory architecture via Active Directory Certificate Services (ADCS).

This post outlines how you can use Microsoft's Windows Server 2019 ADCS to sign an AWS Certificate Manager (ACM) Private Certificate Authority (Private CA) instance, extending your existing ADCS system into your AWS environment. This will allow you to issue certificates via ACM, for resources like an Application Load Balancer, that are trusted by your Active Directory members. The ACM PCA documentation describes how to use an external CA to sign the ACM PCA certificate, but it leaves the details of configuring the external CA out of scope.

Why use ACM PCA?

AWS Certificate Manager Private Certificate Authority (ACM Private CA or ACM PCA) is a private CA service that extends ACM certificate management capabilities to both public and private certificates. ACM PCA provides a highly available private CA service without the upfront investment and ongoing maintenance costs of operating your own private CA. ACM PCA allows developers to be more agile by providing them with APIs to create and deploy private certificates programmatically. You also have the flexibility to create private certificates for applications that require custom certificate lifetimes or resource names.

Why use ACM PCA with Windows Active Directory?

Many enterprises already use Active Directory to manage their IT resources. Whether it is on-premises or built into your AWS accounts, Active Directory’s built-in CA can be extended by ACM PCA. Using your ADCS to sign an ACM PCA means that members of your Active Directory automatically trust certificates issued by that ACM PCA. Keep in mind that these are still private certificates, and they are intended to be used just like certificates from ADCS itself. They will not be trusted by unmanaged devices, because these are not signed by a publicly trusted external CA. Therefore, systems like Mac and Linux may require that you manually deploy the ADCS certificate chain in order to trust certificates issued by your new ACM PCA.

This makes it more efficient for you to rapidly deploy certificates to your endpoint workstations for authentication, or to protect internal-only workloads with certificates that are constrained to your internal domain namespace. These tasks can be done conveniently through the AWS APIs and SDKs.
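
For example, once your private CA is signed (as described later in this post), a request for a private certificate can be scripted. Here is a minimal boto3 sketch; the domain name, Region, account ID, and CA identifier are placeholders.

    # A minimal sketch of issuing a private certificate through ACM, backed
    # by the private CA. All identifiers below are placeholder values.
    import boto3

    acm = boto3.client("acm")

    response = acm.request_certificate(
        DomainName="internal-app.example.corp",  # internal-only host name
        CertificateAuthorityArn=(
            "arn:aws:acm-pca:us-east-1:111122223333:"
            "certificate-authority/11111111-2222-3333-4444-555555555555"
        ),
    )
    print(response["CertificateArn"])  # attach this certificate to a load balancer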

Solution overview

In the following sections, we will configure Microsoft ADCS to be able to sign a subordinate CA, deploy and sign ACM PCA, and then test the solution using a private website that is protected by a TLS certificate issued from the ACM PCA.

Configure Microsoft ADCS

Microsoft ADCS is normally deployed as part of your Windows Active Directory architecture. It can be extended to perform multiple types of certificate signing, depending on your environment's needs. Each certificate type is defined by a template that you must enable and configure; each template contains configuration information about how Microsoft ADCS will issue that certificate type. You can copy and configure templates differently depending on your environment's requirements. The specifics of each template type are outside the scope of this blog post.

To configure ADCS to sign subordinate CAs

  1. On the CA server that will be signing the private CA certificate, open the Certification Authority Microsoft Management Console (MMC).
  2. In the left-side tree view, expand the name of the server.
  3. Open the context (right-click) menu for Certificate Templates and choose Manage.
    Figure 1: Navigating to the Manage option for the certificate templates

    This opens the Certificate Template Console, which is populated with the list of optional templates.

  4. Scroll down, open the context (right-click) menu for Subordinate Certification Authority, and choose Duplicate Template, as shown in Figure 2. This will create a duplicate of the template that you can alter for your needs, while leaving the original template unaltered for future use. Selecting Duplicate Template immediately opens the configuration for the new template.
    Figure 2: Select Duplicate Template to create a copy of the Subordinate Certification Authority template

To configure and use the new template

  1. On the new template configuration page, choose the General tab, and change the template display name to something that uniquely identifies it. The example in this post uses the name Subordinate Certification Authority – Private CA.
  2. Select the check box for Publish certificate in Active Directory, and then choose OK. The new template appears in the list of available templates. Close the Certificate Templates Console.
  3. Return to the Certification Authority MMC. Open the context (right-click) menu for Certificate Templates again, but this time choose New -> Certificate Template to Issue.
    Figure 3: Issue the new certificate template that you created for subordinate CAs

  4. In the dialog box that appears, choose the new template you created in Step 1 of this procedure, and then choose OK.

That’s all that’s needed! Your CA is now ready to issue certificates for subordinate CAs in your public key infrastructure. Open a browser from either the ADCS CA server itself or through a network connection to the ADCS CA server, and use the following URL to access the certificate server’s certificate signing interface.

http://<hostname-of-your-ca-with-domain>/certsrv/certrqxt.asp

Now you can see that in the Certificate Templates list, you can choose the Subordinate Certification Authority template that you created, as shown in Figure 4.

Figure 4: The interface to sign certificates on your CA now shows the new certificate template you created

Deploy and sign the ACM Private CA’s certificate

In this step, you will deploy the ACM PCA, which is the first step to create a subordinate CA to deploy in your AWS account. The process of deploying the ACM PCA is well documented, so this post will not go into depth about the deployment itself. Instead, this procedure focuses on the steps for taking the certificate signing request (CSR) and signing it against the ADCS, and then covers the additional steps to convert the certificates that ADCS produces into the certificate format that ACM PCA expects.

After the ACM PCA is initially deployed, it needs to have a certificate signed to authenticate it. ACM PCA offers two options for signing the new instance's certificate: through another ACM PCA instance, or through an external CA. Because you are using ADCS in this walkthrough, you will follow the external CA process. The ACM PCA deployment is now at the point where it needs its CSR signed by Microsoft ADCS; you should see that it is ready in the AWS Management Console for ACM PCA.
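
If you prefer to fetch the CSR from the command line rather than copying it from the console, a minimal boto3 sketch such as the following can retrieve it; the CA ARN is a placeholder.

    # A minimal sketch that fetches the pending CSR for the new private CA
    # so it can be submitted to the ADCS signing page. The ARN is a placeholder.
    import boto3

    pca = boto3.client("acm-pca")

    csr = pca.get_certificate_authority_csr(
        CertificateAuthorityArn=(
            "arn:aws:acm-pca:us-east-1:111122223333:"
            "certificate-authority/11111111-2222-3333-4444-555555555555"
        )
    )["Csr"]

    with open("acm-pca.csr", "w") as f:
        f.write(csr)  # paste this CSR into the ADCS certsrv page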

To deploy and sign the ACM PCA’s certificate

  1. When the ACM PCA is ready, in the ACM PCA console, begin the Install subordinate CA certificate process by choosing External private CA for the CA type.
    Figure 5: Options for signing the new instance's certificate

  2. You will then be provided with the certificate signing request (CSR) for the ACM PCA. Copy and paste the CSR content into the ADCS CA signing page that you visited earlier on the CA server. Then choose Next. The next page is where, in a later step, you will paste in the newly signed certificate and certificate chain.
  3. From the ADCS CA URL, be sure that the new Subordinate Certification Authority template is selected, and then choose Submit. The new certificate will be issued to you. The ADCS issuing page provides two different formats for the certificate, either as Distinguished Encoding Rules (DER) or base64-encoded.
  4. Copy the base64-encoded files for both the certificate and the certchain to your local computer. The certificate is already in Privacy Enhanced Mail (PEM) format, and its contents can be pasted into the ACM PCA certificate input in the console. However, you must convert the certchain into the format required by the ACM PCA by following these steps:
    1. To convert the format of the certchain, use the openssl tool from the command line. The process of installing the openssl tool is outside the scope of this blog post. Refer to the OpenSSL site documentation for installation options for your operating system.
    2. Use the following command to convert the certchain file from Public Key Cryptographic Standards #7 (PKCS7) to PEM.

      openssl pkcs7 -print_certs -in certnew.p7b -out certchain.pem

    3. Using a text editor, open the certchain.pem file and copy the last certificate block from the file, starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----. You will notice that the file begins with the signed certificate and includes subject= and issuer= statements; ACM PCA accepts only the content that is the certificate chain. (A scripted version of this extraction is sketched after this procedure.)
  5. Return to the ACM PCA console page from Step 1, and paste the text that you just copied into the input area provided for the certificate chain. After this step is complete, the private CA is now signed by your corporate PKI.
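
If you'd rather script the chain extraction from Step 4 and the import from Step 5, the following Python sketch shows one way to do it, assuming a single-level chain as in this walkthrough; the file names and CA ARN are placeholders.

    # A minimal sketch of the chain extraction and import. ADCS puts the
    # signed certificate (with subject=/issuer= lines) first in the file
    # and the chain certificate last. File names and the ARN are placeholders.
    import re

    import boto3

    with open("certchain.pem") as f:
        pem = f.read()

    # Grab every BEGIN/END CERTIFICATE block, delimiters included.
    blocks = re.findall(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        pem,
        flags=re.DOTALL,
    )
    chain = blocks[-1]  # the last block is the certificate chain content

    with open("certnew.cer") as f:  # the base64-encoded signed certificate from ADCS
        certificate = f.read()

    boto3.client("acm-pca").import_certificate_authority_certificate(
        CertificateAuthorityArn=(
            "arn:aws:acm-pca:us-east-1:111122223333:"
            "certificate-authority/11111111-2222-3333-4444-555555555555"
        ),
        Certificate=certificate.encode(),
        CertificateChain=chain.encode(),
    )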

Test the solution

Now that the ACM PCA is online, one of the things it can do is issue certificates via ACM that are trusted by your corporate Active Directory joined clients. These certificates can be used in services such as Application Load Balancers to provide TLS protected endpoints that are unique to your organization and trusted only by your internal clients.

From a client joined to our test Active Directory, Internet Explorer shows that it trusts the TLS certificate issued by AWS Certificate Manager and used on the Application Load Balancer for a private site.

Figure 6: Internet Explorer showing that it trusts the TLS certificate

For this demo, we created a test web server that is hosting an example webpage. The web server is behind an AWS Application Load Balancer. The TLS certificate attached to the Application Load Balancer is issued from the new ACM PCA.

Conclusion

Organizations that have Microsoft Active Directory deployed can use Active Directory’s Certificate Services to issue certificates for private resources. This blog post shows how you can extend that certificate trust to AWS Certificate Manager Private CA. This provides a way for your developers to issue private certificates automatically, which are trusted by your Active Directory domain-joined clients or clients that have the ADCS certificate chain installed.

For more information on hybrid public key infrastructure (PKI) on AWS, refer to these blog posts:

For more information on certificates for Mac and Linux, refer to the following resources:

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Geoff Sweet

Geoff has been in the industry for over 20 years, and began his career in electrical engineering. Starting in IT during the dot-com boom, he has held a variety of diverse roles, such as systems architect, network architect, and, for the past several years, security architect. Geoff specializes in infrastructure security.

AWS co-announces release of the Open Cybersecurity Schema Framework (OCSF) project

Post Syndicated from Mark Ryland original https://aws.amazon.com/blogs/security/aws-co-announces-release-of-the-open-cybersecurity-schema-framework-ocsf-project/

In today’s fast-changing security environment, security professionals must continuously monitor, detect, respond to, and mitigate new and existing security issues. To do so, security teams must be able to analyze security-relevant telemetry and log data by using multiple tools, technologies, and vendors. The complex and heterogeneous nature of this task drives up costs and may slow down detection and response times. Our mission is to innovate on behalf of our customers so they can more quickly analyze and protect their environment when the need arises.

With that goal in mind, alongside a number of partner organizations, we’re pleased to announce the release of the Open Cybersecurity Schema Framework (OCSF) project, which includes an open specification for the normalization of security telemetry across a wide range of security products and services, as well as open-source tools that support and accelerate the use of the OCSF schema. As a co-founder of the OCSF effort, we’ve helped create the specifications and tools that are available to all industry vendors, partners, customers, and practitioners. Joining us in this announcement is an array of key security vendors, beginning with Splunk, the co-founder with AWS of the OCSF project, and also including Broadcom, Salesforce, Rapid7, Tanium, Cloudflare, Palo Alto Networks, DTEX, CrowdStrike, IBM Security, JupiterOne, Zscaler, Sumo Logic, IronNet, Securonix, and Trend Micro. Going forward, anyone can participate in the evolution of the specification and tooling at https://github.com/ocsf.

Our customers have told us that interoperability and data normalization between security products is a challenge for them. Security teams have to correlate and unify data across multiple products from different vendors in a range of proprietary formats; that work has a growing cost associated with it. Instead of focusing primarily on detecting and responding to events, security teams spend time normalizing this data as a prerequisite to understanding and response. We believe that use of the OCSF schema will make it easier for security teams to ingest and correlate security log data from different sources, allowing for greater detection accuracy and faster response to security events. We see value in contributing our engineering efforts and also projects, tools, training, and guidelines to help standardize security telemetry across the industry. These efforts benefit our customers and the broader security community.
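
To make the normalization idea concrete, here is an illustrative (not authoritative) Python sketch that maps a hypothetical vendor-specific login record onto an OCSF-style Authentication event. The field names and numeric identifiers follow the OCSF schema repository's published Authentication class, but treat the specifics as assumptions and consult https://github.com/ocsf for the current definitions.

    # An illustrative sketch of normalizing a hypothetical vendor-specific
    # login record into an OCSF-style event. Field names and numeric
    # identifiers are assumptions based on the published OCSF schema.
    def to_ocsf_authentication(vendor_event: dict) -> dict:
        return {
            "class_uid": 3002,      # Authentication event class
            "category_uid": 3,      # Identity & Access Management
            "activity_id": 1,       # Logon
            "severity_id": 1,       # Informational
            "time": vendor_event["timestamp_ms"],
            "user": {"name": vendor_event["username"]},
            "src_endpoint": {"ip": vendor_event["source_ip"]},
            "status_id": 1 if vendor_event["login_ok"] else 2,  # Success / Failure
            "metadata": {"product": {"vendor_name": vendor_event["vendor"]}},
        }

    # Two different vendor formats can be mapped onto this one shape, so a
    # downstream detection rule only has to understand OCSF fields.
    print(to_ocsf_authentication({
        "timestamp_ms": 1660000000000,
        "username": "alice",
        "source_ip": "203.0.113.10",
        "login_ok": True,
        "vendor": "ExampleVendor",
    }))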

Although we as an industry can’t directly control the behavior of threat actors, we can improve our collective defenses by making it easier for security teams to do their jobs more efficiently. At AWS, we are excited to see the industry come together to use the OCSF project to make it easier for security professionals to focus on the things that are important to their business: identifying and responding to events, then using that data to proactively improve their security posture.

To learn more about the OCSF project, visit https://github.com/ocsf.

Want more AWS Security news? Follow us on Twitter.

Mark Ryland

Mark is the director of the Office of the CISO for AWS. He has over 30 years of experience in the technology industry and has served in leadership roles in cybersecurity, software engineering, distributed systems, technology standardization, and public policy. Previously, he served as the Director of Solution Architecture and Professional Services for the AWS World Public Sector team.