Tag Archives: Security, Identity & Compliance

The three most important AWS WAF rate-based rules

Post Syndicated from Artem Lovan original https://aws.amazon.com/blogs/security/three-most-important-aws-waf-rate-based-rules/

In this post, we explain what the three most important AWS WAF rate-based rules are for proactively protecting your web applications against common HTTP flood events, and how to implement these rules. We share what the Shield Response Team (SRT) has learned from helping customers respond to HTTP floods and show how all AWS WAF customers can benefit from these learnings.

When you have business-critical applications that are internet-facing, you need to protect them from risks such as distributed denial of service (DDoS) attacks. AWS Shield Advanced is a managed DDoS protection service that safeguards applications that are running behind Amazon Web Services (AWS) internet-facing resources. The backend origin of your application can exist anywhere, including on premises, and Shield Advanced can protect it. Shield Advanced provides DDoS protection for Layers 3–7. It also includes 24/7 access to the SRT to help you quickly respond to sophisticated unauthorized activity scenarios that might be unique to your application. To learn more about which resource types you can associate with AWS WAF, see AWS WAF.

Increasingly, the SRT has been assisting customers in protecting against Layer 7 HTTP flood occurrences that negatively impact application availability or performance by overloading the application with an unusually high number of HTTP requests. In many cases, these malicious events can be automatically mitigated by using AWS WAF. In addition, AWS WAF has an easy-to-configure native rate-based rule capability, which detects source IP addresses that make large numbers of HTTP requests within a 5-minute time span, and automatically blocks requests from the offending source IP until the rate of requests falls below a set threshold. In this post, we show how you can pull insights from the AWS WAF logs to determine what your rate-based rule threshold should be.

The three most important AWS WAF rate-based rules are:

  • A blanket rate-based rule to protect your application from large HTTP floods.
  • A rate-based rule to protect specific URIs at more restrictive rates than the blanket rate-based rule.
  • A rate-based rule to protect your application against known malicious source IPs.

Solution overview

AWS WAF is a web application firewall that helps protect your web applications against common web exploits that might affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over which web traffic reaches your applications. If you already know the request rates for your application, you have all the necessary information to start creating your AWS WAF rate-based rules. To learn more about how to create rules, see Creating a rule and adding conditions. However, if you don’t have this data and want to learn how to get started, this solution helps you determine appropriate rates for your applications, and how to create AWS WAF rate-based rules.

Figure 1 shows how incoming request information is captured so that the operations team can use it to determine rate-based rules.

Figure 1: The workflow to collect and query logs and apply rate-based rules

Let’s go through the flow to better understand what’s happening at each step:

  1. An application user makes requests to the application.
  2. AWS WAF captures information about the incoming requests and sends this to Amazon Kinesis Data Firehose.
  3. Kinesis Data Firehose delivers the logs to an Amazon Simple Storage Service (Amazon S3) bucket, where they will be stored.
  4. The operations team uses Amazon Athena to analyze the logs with SQL queries.
  5. Athena queries the logs in the S3 bucket and shows the query results.
  6. The operations team uses the query results to determine the appropriate AWS WAF rate-based rule.

The three rate-based rules in detail

Each of these rules focuses on a specific aspect of protecting web applications from unauthorized activity. The rules complement each other, so when they’re combined they offer broader protection for your web application. We’ll look at each rule to understand what it does.

Blanket rate-based rule

A blanket rate-based rule is designed to prevent any single source IP address from negatively impacting the availability of a website. For example, if the threshold for the rate-based rule is set to 2,000, the rule will block all IPs that are making more than 2,000 requests in a rolling 5-minute period. This is the most basic rate-based rule, and one of the most valuable for AWS WAF customers to implement. The SRT often helps customers who are actively under a DDoS attack to quickly implement this rule. In past HTTP flood cases, if this rule had been proactively in place, the customer would have been protected and wouldn’t have needed to reach out to the SRT for assistance. The blanket rate-based rule would have automatically blocked the attempt without any human intervention.

URI-specific rate-based rule

Some application URI endpoints typically receive a high request volume, but for others it would be unusual and suspicious to see a high request count. For example, a high number of requests in a 5-minute period to an application’s login page is suspicious and can indicate a brute force or credential-stuffing attack against the application. A URI-specific rule can limit a single source IP address to as few as 100 requests to the login page per 5-minute period, while still allowing a much higher request volume to the rest of the application. Some applications naturally have computationally expensive URIs that, when called, require considerably more resources to process the request. An example of this could be a database query or search function. If a bad actor targets these computationally expensive URIs, this can quickly lead to application performance or availability issues. If you assign a URI-specific rate-based rule to these portions of your site, you can configure a much lower threshold than the blanket rate-based rule. It’s beyond the scope of this blog post, but some customers use Application Load Balancer access logs and the target_processing_time information to determine precisely which portions of the site are the slowest to respond and might represent a computationally expensive call. These customers then put additional rate-based rule protections on calls that are made to these URIs.

IP reputation rate-based rule

Many of the DDoS events the SRT assists customers with include HTTP floods that originate from known malicious source IPs. The AWS WAF Security Automations solution provides AWS WAF customers with a subscription to four open-source threat intelligence lists. Rate-based rules with low thresholds can be applied to requests coming from these suspect sources. Some customers feel comfortable completely blocking web requests from these IPs, but at the very least, requests from these IPs should be rate-limited to protect the application from these well-known malicious sources.

It’s also common to see HTTP floods originate from IP addresses within certain countries. You can use AWS WAF geographical matching rules to assign lower rate-based rule thresholds to requests that originate from certain countries, or countries that don’t contain your web application’s primary user base. For example, suppose your application primarily serves users in the United States. In that case, it could be beneficial to create a rate-based rule with a low threshold for requests that come from any country other than the United States. HTTP floods are also commonly seen originating from IP addresses classified as cloud hosting provider IPs. You can use AWS WAF’s “HostingProviderIPList” Managed Rule to label these requests and then assign a lower rate-based rule threshold to them as well.
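
For illustration, a rate-based rule that applies a lower limit only to requests originating outside the United States could look like the following JSON sketch. The rule name, priority, country list, and 500-request limit are assumptions that you would tune to your own traffic analysis.

{
  "Name": "NonUSRateBasedRule",
  "Priority": 3,
  "Action": {
    "Block": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "NonUSRateBasedRule"
  },
  "Statement": {
    "RateBasedStatement": {
      "Limit": 500,
      "AggregateKeyType": "IP",
      "ScopeDownStatement": {
        "NotStatement": {
          "Statement": {
            "GeoMatchStatement": {
              "CountryCodes": ["US"]
            }
          }
        }
      }
    }
  }
}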

Prerequisites

Before you implement the solution, verify that:

  • AWS WAF is deployed in your AWS account and is associated with an Amazon CloudFront distribution or an Application Load Balancer.
  • Your AWS WAF default action is set to Block. When you create and configure a web ACL, you set the web ACL default action, which determines how AWS WAF handles web requests that don’t match any rules in the web ACL. To learn more about default action for a web ACL, see Deciding on the default action for a web ACL.
  • AWS WAF logging is configured and logs are being stored in an S3 bucket.

    Note: You can follow these instructions to configure delivery of AWS WAF logs to your S3 bucket, and you can also use AWS Firewall Manager to configure centralized AWS WAF logging in a multi-account environment.

Set up Athena to analyze AWS WAF logs

Amazon Athena is an interactive query service that you can use to analyze data in Amazon S3 by using standard SQL. For this solution, you’ll use Athena to connect to the S3 bucket where AWS WAF logs are stored and query the AWS WAF logs. The first step is to open the Athena console and create a database.

Note: The Athena database and table creation is a one-time configuration process. You can then come back, rerun the queries, and see the query results based on your latest AWS WAF log data.

To create an Athena database, you’ll use a data definition language (DDL) statement. Paste the following query in the Athena query editor, replacing values as described here:

  • Replace <your-bucket-name> with the S3 bucket name that holds your AWS WAF logs.
  • For <bucket-prefix-if-exist>, if AWS WAF logs are stored in an S3 bucket prefix, replace with your prefix name. Otherwise, remove this part from the query, including the slash “/” at the end.
CREATE DATABASE IF NOT EXISTS wafrulesdb
  COMMENT 'AWS WAF logs'
  LOCATION 's3://<your-bucket-name>/<bucket-prefix-if-exist>/';

Choose Run query to run the query and create the database. Successful completion will be indicated by the query result, as shown below.

Results
Query successful. 

Next, you’ll create a table inside the database. Paste the following query in the Athena query editor, replacing values as described here:

  • Replace <your-bucket-name> with the S3 bucket name that holds your AWS WAF logs.
  • For <bucket-prefix-if-exist>, if AWS WAF logs are stored in an S3 bucket prefix, replace with your prefix name. Otherwise, you can remove this part from the query, including the slash “/” at the end.
  • For has_encrypted_data, if your AWS WAF log data is encrypted at rest, change the value to true, otherwise false is the correct value.
CREATE EXTERNAL TABLE IF NOT EXISTS wafrulesdb.waftable (
  `terminatingRuleId` string,
  `httpSourceName` string,
  `action` string,
  `httpSourceId` string,
  `terminatingRuleType` string,
  `webaclId` string,
  `timestamp` float,
  `formatVersion` int,
  `ruleGroupList` array<string>,
  `httpRequest` struct<`headers`:array<struct<name:string,value:string>>,clientIp:string,args:string,requestId:string,httpVersion:string,httpMethod:string,country:string,uri:string>,
  `rateBasedRuleList` string,
  `nonTerminatingMatchingRules` string,
  `terminatingRuleMatchDetails` string 
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
) LOCATION 's3://<your-bucket-name>/<bucket-prefix-if-exist>/'
TBLPROPERTIES ('has_encrypted_data'='false');

Run the query in the Athena console. After the query completes, Athena registers the waftable table, which makes the data in it available for queries.

Run SQL queries to identify rate-based rule thresholds

Now that you have a table in Athena, know where the data is located, and have the correct schema, you can run SQL queries for each of the rate-based rules and see the query results.

Blanket rate-based rule for all application endpoints

You’ll start with a SQL query that helps you identify the threshold for the blanket rule. The critical factor in determining the blanket rule is to run the query against AWS WAF log data that represents a healthy, high request volume. The following query defines a time window of 6 hours in the evening, expressed as 2020-12-01 16:00:00 and 2020-12-01 22:00:00. Time windows can span a few hours or several days; however, this time window must be a good representation of your traffic volume, which you will use as the basis to identify the threshold. For example, if your application is busier during certain periods, you should evaluate the log data for that time. In the example shown here, we limit the query results to the top 100 IPs in our SQL queries. You can adjust the limit to your needs by updating the LIMIT value.

SELECT
  httprequest.clientip,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable
WHERE from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
GROUP BY httprequest.clientip, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100; 

Update the time window to your needs and run the query in the Athena console. The results will show the top requesting IPs in any 5-minute period between two dates, as illustrated in Figure 2.

Figure 2: The top requesting IP in any 5-minute period between dates

You can visualize the results data to see a holistic view of the request count per IP. The chart in Figure 3 illustrates the SQL query results.

Figure 3: Chart: Top requesting IP in any 5-minute period between dates

The results are sorted by showing the IPs with the highest request volume for every 5-minute period. This means that the same IP could appear multiple times, if most of the requests were made within that 5-minute interval. In our example, looking at the result, an excellent first blanket rule would limit the request volume to about 7,000 requests within a 5-minute time period. You can either create the AWS WAF rule by using the following JSON and the JSON rule editor, or by using the AWS WAF visual rule editor and following these instructions. If you’re using the following JSON, make sure to replace the Limit value with the value that you identified by running the SQL query earlier.

{
  "Name": "BlanketRule",
  "Priority": 2,
  "Action": {
    "Block": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "BlanketRule"
  },
  "Statement": {
    "RateBasedStatement": {
      "Limit": 7000,
      "AggregateKeyType": "IP"
    }
  }
}

Sometimes a client connects to an application through an HTTP proxy or a content delivery network (CDN), which obscures the client origin IP. It’s important to identify the client IP instead of the one from the proxy or CDN, because blocking source IPs can cause a wider unwanted impact. You can use many tools to help you identify whether the source IP might be a CDN. In this case, you would need to query and filter on the X-Forwarded-For, True-Client-IP, or other custom headers. CDN providers typically publish which headers they add to the requests, but X-Forwarded-For and True-Client-IP are common. The following query shows how you can reference these headers, illustrating with the X-Forwarded-For header, to write rate-based rules. You can replace X-Forwarded-For with the header you expect to hold the client IP.

SELECT
  header.value,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable, UNNEST(httprequest.headers) as t(header)
WHERE
    from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
  AND
    header.name = 'X-Forwarded-For'
GROUP BY header.value, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100;

URI-based rule for specific application endpoints

Suppose that you want to further limit requests to the login page on your website. To do this, you could add the following string match condition to a rate-based rule:

  • The part of the request to filter on is URI
  • The Match Type is Starts with
  • A Value to match is /login (this needs to be whatever identifies the login page in the URI portion of the web request)

Next, you need to identify the typical request volume to the /login URI for the application. The following SQL query does exactly that.

SELECT
  httprequest.clientip,
  httprequest.uri,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable
WHERE 
  from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
AND
  httprequest.uri = '/login'
GROUP BY httprequest.clientip, httprequest.uri, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100;

Replace the time window 2020-12-01 16:00:00 and 2020-12-01 22:00:00 and the httprequest.uri value, if applicable, and run the query in the Athena console. The results show the highest requesting IP and /login URI for every 5-minute period between dates, as illustrated in Figure 4.

Figure 4: The highest requesting IP and /login URI for every 5-minute period between dates

Figure 5 illustrates a chart based on the query results for the highest requesting IP and /login URI for every 5-minute period between dates.

Figure 5: Chart: The highest requesting IP and /login URI for every 5-minute period between dates

Based on the SQL query results, you would specify a rate limit of 150 requests per 5 minutes. Adding this rate-based rule to a web ACL will limit requests to your login page per IP address without affecting the rest of your site. Once again, you can either create the AWS WAF rule by using the following JSON and the JSON rule editor, or by using the AWS WAF visual rule editor and following these instructions. If you’re using the following JSON, make sure to replace the Limit value with the value that you identified by running the SQL query earlier.

{
  "Name": "UriBasedRule",
  "Priority": 1,
  "Action": {
    "Block": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "UriBasedRule"
  },
  "Statement": {
    "RateBasedStatement": {
      "Limit": 150,
      "AggregateKeyType": "IP",
      "ScopeDownStatement": {
        "ByteMatchStatement": {
          "FieldToMatch": {
            "UriPath": {}
          },
          "PositionalConstraint": "STARTS_WITH",
          "SearchString": "/login",
          "TextTransformations": [
            {
              "Type": "NONE",
              "Priority": 0
            }
          ]
        }
      }
    }
  }
}

AWS WAF rules with a lower value for Priority are evaluated before rules with a higher value. For the AWS WAF rules to work as expected (first evaluating the more specific rule—the URI-based rule, and only after that, the more general blanket rule) you have to set the AWS WAF rule priority. You can do that by updating the JSON and setting the Priority value to 1 for the blanket rule and 0 for the URI-based rule, or by using the AWS WAF visual rule editor. The expected AWS WAF rule priority should be as illustrated in Figure 6.
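
If you manage the web ACL as JSON, the Rules array after this adjustment could look like the following sketch. These are the same two rules shown earlier; only the Priority values change.

[
  {
    "Name": "UriBasedRule",
    "Priority": 0,
    "Action": {
      "Block": {}
    },
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "UriBasedRule"
    },
    "Statement": {
      "RateBasedStatement": {
        "Limit": 150,
        "AggregateKeyType": "IP",
        "ScopeDownStatement": {
          "ByteMatchStatement": {
            "FieldToMatch": {
              "UriPath": {}
            },
            "PositionalConstraint": "STARTS_WITH",
            "SearchString": "/login",
            "TextTransformations": [
              {
                "Type": "NONE",
                "Priority": 0
              }
            ]
          }
        }
      }
    }
  },
  {
    "Name": "BlanketRule",
    "Priority": 1,
    "Action": {
      "Block": {}
    },
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "BlanketRule"
    },
    "Statement": {
      "RateBasedStatement": {
        "Limit": 7000,
        "AggregateKeyType": "IP"
      }
    }
  }
]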

Figure 6: AWS WAF rules with priority for UriBasedRule

If you want to know the request volume across all application URIs, the following SQL will accomplish that.

SELECT
  httprequest.clientip,
  httprequest.uri,
  COUNT(*) AS "count"
FROM wafrulesdb.waftable
WHERE from_unixtime(timestamp/1000) BETWEEN TIMESTAMP '2020-12-01 16:00:00' AND TIMESTAMP '2020-12-01 22:00:00'
GROUP BY httprequest.clientip, httprequest.uri, FLOOR("timestamp"/(1000*60*5))
ORDER BY count DESC
LIMIT 100;

Figure 7 shows a chart of what the SQL query results might look like.

Figure 7: The highest requesting IP and URI for every 5-minute period between dates

IP reputation rule groups to block bots or other threats

You can use IP reputation rules to block requests based on their source. AWS WAF offers a wide selection of managed rule groups, and Amazon IP reputation list is the one that will help to reduce your exposure to bot traffic or exploitation attempts.
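
If you manage your web ACL as JSON instead of through the console, the managed rule group reference could look like the following sketch. The priority shown is illustrative; the console steps that follow reorder the rules for you.

{
  "Name": "AWS-AWSManagedRulesAmazonIpReputationList",
  "Priority": 0,
  "OverrideAction": {
    "None": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "AmazonIpReputationList"
  },
  "Statement": {
    "ManagedRuleGroupStatement": {
      "VendorName": "AWS",
      "Name": "AWSManagedRulesAmazonIpReputationList"
    }
  }
}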

To add the Amazon IP reputation list rule to your web ACL

  1. Open the AWS WAF console and navigate to the managed rule groups view.

    Figure 8: The managed rule group view in AWS WAF

  2. Expand AWS managed rule groups, and for Amazon IP reputation list, choose Add to web ACL.

    Figure 9: Add the Amazon IP reputation list to the web ACL

  3. Scroll to the bottom of the page and choose Add rule.
  4. At this point, you should see the Set rule priority view. Move up the Amazon managed rule so that it has the highest priority. If a request originates from a bot, you want to deny the request as early as possible, and you achieve exactly that by assigning the highest priority to the Amazon IP reputation list rule. Your final AWS WAF rules order should be as shown in Figure 10.

    Figure 10: Final AWS WAF rules ordered by priority

Considerations for rate-based rules

It’s important to note that the more specific AWS WAF rules should have a higher priority, because you want these rules to limit the request volume first. In our example, the rules strategy is first based on a specific URI, and then on a blanket rule that limits requests across the whole application.

The rate-based rules that we discussed here provide a solid foundation to help you protect your internet-facing applications from common basic HTTP request floods. However, the solution in this blog post shouldn’t be seen as a one-time setup but rather as an iterative activity.

You should determine a healthy time frame in which to rerun the Amazon Athena queries and identify new rate-based rule thresholds that align with the application’s growth and increasing request volume. Reviewing the rate-based rules on an iterative basis, and incorporating that review into your existing processes such as the software development life cycle, is a great way to keep the thresholds current. Each AWS WAF rule can publish Amazon CloudWatch metrics, which can be used to trigger alerts before thresholds are crossed. You can use alerts to create tickets for operations teams based on thresholds you set, so that they can review the situation and determine whether it’s a DDoS attack being thwarted or legitimate traffic being dropped.
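
As a sketch of that alerting approach, the following boto3 call creates an alarm on the BlockedRequests metric for the blanket rule. The web ACL name, Region, threshold, and SNS topic ARN are assumptions that you would replace with your own values.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the blanket rule blocks more than 1,000 requests
# in a 5-minute period (values are illustrative).
cloudwatch.put_metric_alarm(
    AlarmName="waf-blanket-rule-blocked-requests",
    Namespace="AWS/WAFV2",
    MetricName="BlockedRequests",
    Dimensions=[
        {"Name": "WebACL", "Value": "my-web-acl"},   # assumed web ACL name
        {"Name": "Rule", "Value": "BlanketRule"},
        {"Name": "Region", "Value": "us-east-1"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:waf-alerts"],  # assumed SNS topic
)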

After you determine your request threshold, add a buffer to allow for growth. Rate-based rules should have a reasonable buffer to account for near-future application growth. For instance, when an Athena query result shows a request volume of 500 requests, a rate-based rule with a limit of 1,000 requests gives a buffer of an additional 500 requests to account for application growth.

Summary

In this post, we introduced you to the three most important AWS WAF rate-based rules for protecting your web applications from common HTTP flood events. We also covered how to implement these rate-based rules and how to determine an appropriate request threshold for your application by using AWS WAF logs and Amazon Athena queries. To learn more about best practices that help you protect your websites and web applications against various attack vectors by using AWS WAF, see our whitepaper, Guidelines for Implementing AWS WAF.

You can learn more about AWS WAF in other AWS WAF–related Security Blog posts.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Artem Lovan

Artem is a Senior Solutions Architect based in New York. He helps customers architect and optimize applications on AWS. He has been involved in IT at many levels, including infrastructure, networking, security, DevOps, and software development.

Author

Jesse Lepich

Jesse is a Senior Security Solutions Architect at AWS based in Lake St. Louis, Missouri, focused on helping customers implement native AWS security services. Outside of cloud security, his interests include relaxing with family, barefoot waterskiing, snowboarding/snow skiing, surfing, boating/sailing, and mountain climbing.

How to restrict IAM roles to access AWS resources from specific geolocations using AWS Client VPN

Post Syndicated from Artem Lovan original https://aws.amazon.com/blogs/security/how-to-restrict-iam-roles-to-access-aws-resources-from-specific-geolocations-using-aws-client-vpn/

You can improve your organization’s security posture by enforcing access to Amazon Web Services (AWS) resources based on IP address and geolocation. For example, users in your organization might bring their own devices, which might require additional security authorization checks and posture assessment in order to comply with corporate security requirements. Enforcing access to AWS resources based on geolocation can help you to automate compliance with corporate security requirements by auditing the connection establishment requests. In this blog post, we walk you through the steps to allow AWS Identity and Access Management (IAM) roles to access AWS resources only from specific geographic locations.

Solution overview

AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and your on-premises network resources. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client. A client VPN session terminates at the Client VPN endpoint, which is provisioned in your Amazon Virtual Private Cloud (Amazon VPC) and therefore enables a secure connection to resources running inside your VPC network.

This solution uses Client VPN to implement geolocation authentication rules. When a client VPN connection is established, authentication is implemented at the first point of entry into the AWS Cloud. It’s used to determine if clients are allowed to connect to the Client VPN endpoint. You configure an AWS Lambda function as the client connect handler for your Client VPN endpoint. You can use the handler to run custom logic that authorizes a new connection. When a user initiates a new client VPN connection, the custom logic is the point at which you can determine the geolocation of this user. In order to enforce geolocation authorization rules, you need:

  • AWS WAF to determine the user’s geolocation based on their IP address.
  • A network address translation (NAT) gateway to serve as the public origin IP address for all requests to your AWS resources.
  • An IAM policy, attached to the IAM role, that AWS evaluates so that requests are allowed only when the request origin IP address matches the IP address of the NAT gateway.

One of the key features of AWS WAF is the ability to allow or block web requests based on country of origin. When the client connection handler Lambda function is invoked by your Client VPN endpoint, the Client VPN service invokes the Lambda function on your behalf. The Lambda function receives the device, user, and connection attributes. The user’s public IP address is one of the device attributes that are used to identify the user’s geolocation by using the AWS WAF geolocation feature. Only connections that are authorized by the Lambda function are allowed to connect to the Client VPN endpoint.

Note: The accuracy of the IP address to country lookup database varies by region. Based on recent tests, the overall accuracy for the IP address to country mapping is 99.8 percent. We recommend that you work with regulatory compliance experts to decide if your solution meets your compliance needs.

A NAT gateway allows resources in a private subnet to connect to the internet or other AWS services, but prevents a host on the internet from connecting to those resources. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. Since an Elastic IP address is static, any request originating from a private subnet will be seen with a public IP address that you can trust, because it will be the Elastic IP address of your NAT gateway.

AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. In an IAM policy, you can define the global condition key aws:SourceIp to restrict API calls to your AWS resources from specific IP addresses.

Note: Throughout this post, the user is authenticating with a SAML identity provider (IdP) and assumes an IAM role.

Figure 1 illustrates the authentication process when a user tries to establish a new Client VPN connection session.

Figure 1: Enforce connection to Client VPN from specific geolocations

Let’s look at how the process illustrated in Figure 1 works.

  1. The user device initiates a new client VPN connection session.
  2. The Client VPN service redirects the user to authenticate against an IdP.
  3. After user authentication succeeds, the client connects to the Client VPN endpoint.
  4. The Client VPN endpoint invokes the Lambda function synchronously. The function is invoked after device and user authentication, and before the authorization rules are evaluated.
  5. The Lambda function extracts the public-ip device attribute from the input and makes an HTTPS request to the Amazon API Gateway endpoint, passing the user’s public IP address in the X-Forwarded-For header. Because you’re using AWS WAF to protect API Gateway and have geographic match conditions configured, a response with status code 200 is returned only if the user’s public IP address originates from an allowed country. Additionally, AWS WAF has another rule configured that blocks all requests to API Gateway that don’t originate from one of the NAT gateway IP addresses. Because the Lambda function is deployed in a VPC, its requests come from a NAT gateway IP address and therefore aren’t blocked by AWS WAF. To learn more about running a Lambda function in a VPC, see Configuring a Lambda function to access resources in a VPC. The code example that follows step 7 shows Lambda code that performs this step.

    Note: Optionally, you can implement additional controls by creating specific authorization rules. Authorization rules act as firewall rules that grant access to networks. You should have an authorization rule for each network for which you want to grant access. To learn more, see Authorization rules.

  6. The Lambda function returns the authorization request response to Client VPN.
  7. When the Lambda function—shown following—returns an allow response, Client VPN establishes the VPN session.
import os
import http.client


cloud_front_url = os.getenv("ENDPOINT_DNS")
endpoint = os.getenv("ENDPOINT")
success_status_codes = [200]


def build_response(allow, status):
    return {
        "allow": allow,
        "error-msg-on-failed-posture-compliance": "Error establishing connection. Please contact your administrator.",
        "posture-compliance-statuses": [status],
        "schema-version": "v1"
    }


def handler(event, context):
    # The Client VPN service passes the device's public IP address
    # in the connection handler event.
    ip = event['public-ip']

    # Call the protected API Gateway endpoint, forwarding the user's public IP
    # address so that the AWS WAF geographic match rule can evaluate it.
    conn = http.client.HTTPSConnection(cloud_front_url)
    conn.request("GET", f'/{endpoint}', headers={'X-Forwarded-For': ip})
    r1 = conn.getresponse()
    conn.close()

    status_code = r1.status

    # A 200 response means AWS WAF allowed the request, so the IP address
    # originates from an allowed country.
    if status_code in success_status_codes:
        print("User's IP is from an allowed country. Allowing the connection to VPN.")
        return build_response(True, 'compliant')

    print("User's IP is NOT from an allowed country. Blocking the connection to VPN.")
    return build_response(False, 'quarantined')

After the client VPN session is established successfully, the request from the user device flows through the NAT gateway. The originating source IP address is recognized, because it is the Elastic IP address associated with the NAT gateway. An IAM policy is defined that denies any request to your AWS resources that doesn’t originate from the NAT gateway Elastic IP address. By attaching this IAM policy to users, you can control which AWS resources they can access.

Figure 2 illustrates the process of a user trying to access an Amazon Simple Storage Service (Amazon S3) bucket.

Figure 2: Enforce access to AWS resources from specific IPs

Let’s look at how the process illustrated in Figure 2 works.

  1. A user signs in to the AWS Management Console by authenticating against the IdP and assumes an IAM role.
  2. Using the IAM role, the user makes a request to list Amazon S3 buckets. The IAM policy of the user is evaluated to form an allow or deny decision.
  3. If the request is allowed, an API request is made to Amazon S3.

The aws:SourceIp condition key is used in a policy to deny requests from principals if the origin IP address isn’t the NAT gateway IP address. However, this policy also denies access if an AWS service makes calls on a principal’s behalf. For example, when you use AWS CloudFormation to provision a stack, it provisions resources by using its own IP address, not the IP address of the originating request. In this case, you use aws:SourceIp with the aws:ViaAWSService key to ensure that the source IP address restriction applies only to requests made directly by a principal.

IAM deny policy

This IAM policy doesn’t allow any actions. Instead, it denies any action, on any resource, if the source IP address doesn’t match any of the IP addresses in the condition. Use this policy in combination with other policies that allow specific actions.
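
A minimal sketch of such a deny policy is shown below. The NAT gateway Elastic IP addresses are placeholders; the CloudFormation template described later in this post creates a policy like this with your actual addresses.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "198.51.100.10/32",
            "198.51.100.11/32"
          ]
        },
        "BoolIfExists": {
          "aws:ViaAWSService": "false"
        }
      }
    }
  ]
}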

Prerequisites

Make sure that you have the following in place before you deploy the solution:

Implementation and deployment details

In this section, you create a CloudFormation stack that creates AWS resources for this solution. To start the deployment process, select the following Launch Stack button.

Select the Launch Stack button to launch the template

You also can download the CloudFormation template if you want to modify the code before the deployment.

The template in Figure 3 takes several parameters. Let’s go over the key parameters.

Figure 3: CloudFormation stack parameters

The key parameters are:

  • AuthenticationOption: Information about the authentication method to be used to authenticate clients. You can choose either AWS Managed Microsoft AD or IAM SAML identity provider for authentication.
  • AuthenticationOptionResourceIdentifier: The ID of the AWS Managed Microsoft AD directory to use for Active Directory authentication, or the Amazon Resource Number (ARN) of the SAML provider for federated authentication.
  • ServerCertificateArn: The ARN of the server certificate. The server certificate must be provisioned in ACM.
  • CountryCodes: A string of comma-separated country codes. For example: US,GB,DE. The country codes must be alpha-2 country ISO codes of the ISO 3166 international standard.
  • LambdaProvisionedConcurrency: Provisioned concurrency for the client connection handler. We recommend that you configure provisioned concurrency for the Lambda function to enable it to scale without fluctuations in latency.

All other input fields have default values that you can either accept or override. Once you provide the parameter input values and reach the final screen, choose Create stack to deploy the CloudFormation stack.

This template creates several resources in your AWS account, as follows:

  • A VPC and associated resources, such as InternetGateway, Subnets, ElasticIP, NatGateway, RouteTables, and SecurityGroup.
  • A Client VPN endpoint, which provides connectivity to your VPC.
  • A Lambda function, which is invoked by the Client VPN endpoint to determine the country origin of the user’s IP address.
  • An API Gateway for the Lambda function to make an HTTPS request.
  • AWS WAF in front of API Gateway, which only allows requests to go through to API Gateway if the user’s IP address is based in one of the allowed countries.
  • A deny policy with a NAT gateway IP address condition. Attaching this policy to a role or user enforces that the user can’t access your AWS resources unless they are connected to your client VPN.

Note: CloudFormation stack deployment can take up to 20 minutes to provision all AWS resources.

After creating the stack, there are two outputs in the Outputs section, as shown in Figure 4.

Figure 4: CloudFormation stack outputs

  • ClientVPNConsoleURL: The URL where you can download the client VPN configuration file.
  • IAMRoleClientVpnDenyIfNotNatIP: The IAM policy to be attached to an IAM role or IAM user to enforce access control.

Attach the IAMRoleClientVpnDenyIfNotNatIP policy to a role

This policy is used to enforce access to your AWS resources based on geolocation. Attach this policy to the role that you are using for testing the solution. You can use the steps in Adding IAM identity permissions to do so.
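
If you prefer to script this step instead of using the console, a boto3 sketch could look like the following. The role name and policy ARN are assumptions; substitute the values from your environment (the policy is identified in the CloudFormation outputs).

import boto3

iam = boto3.client("iam")

# Attach the deny policy created by the CloudFormation stack to a test role.
iam.attach_role_policy(
    RoleName="MyFederatedTestRole",  # assumed role used for testing
    PolicyArn="arn:aws:iam::111122223333:policy/IAMRoleClientVpnDenyIfNotNatIP",  # assumed ARN
)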

Configure the AWS client VPN desktop application

When you open the URL that you see in ClientVPNConsoleURL, you see the newly provisioned Client VPN endpoint. Select Download Client Configuration to download the configuration file.

Figure 5: Client VPN endpoint

Confirm the download request by selecting Download.

Figure 6: Client VPN Endpoint – Download Client Configuration

To connect to the Client VPN endpoint, follow the steps in Connect to the VPN. After a successful connection is established, you should see the message Connected. in your AWS Client VPN desktop application.

Figure 7: AWS Client VPN desktop application – established VPN connection

Troubleshooting

If you can’t establish a Client VPN connection, here are some things to try:

  • Confirm that the Client VPN connection has been successfully established. It should be in the Connected state. To troubleshoot connection issues, you can follow this guide.
  • If the connection isn’t establishing, make sure that your machine has TCP port 35001 available. This is the port used for receiving the SAML assertion.
  • Validate that the user you’re using for testing is a member of the correct SAML group on your IdP.
  • Confirm that the IdP is sending the right details in the SAML assertion. You can use browser plugins, such as SAML-tracer, to inspect the information received in the SAML assertion.

Test the solution

Now that you’re connected to Client VPN, open the console, sign in to your AWS account, and navigate to the Amazon S3 page. Since you’re connected to the VPN, your origin IP address is one of the NAT gateway IPs, and the request is allowed. You can see your S3 buckets, if any exist.

Figure 8: Amazon S3 service console view – user connected to AWS Client VPN

Now that you’ve verified that you can access your AWS resources, go back to the Client VPN desktop application and disconnect your VPN connection. Once the VPN connection is disconnected, go back to the Amazon S3 page and reload it. This time you should see an error message that you don’t have permission to list buckets, as shown in Figure 9.

Figure 9: Amazon S3 service console view – user is disconnected from AWS Client VPN

Access has been denied because your origin public IP address is no longer one of the NAT gateway IP addresses. As mentioned earlier, since the policy denies any action on any resource without an established VPN connection to the Client VPN endpoint, access to all your AWS resources is denied.

Scale the solution in AWS Organizations

With AWS Organizations, you can centrally manage and govern your environment as you grow and scale your AWS resources. You can use Organizations to apply policies that give your teams the freedom to build with the resources they need, while staying within the boundaries you set. By organizing accounts into organizational units (OUs), which are groups of accounts that serve an application or service, you can apply service control policies (SCPs) to create targeted governance boundaries for your OUs. To learn more about Organizations, see AWS Organizations terminology and concepts.

SCPs help you ensure that the accounts across your OUs stay within your organization’s access control guidelines. In particular, these are the key benefits of using SCPs with AWS Organizations:

  • You don’t have to create an IAM policy with each new account, but instead create one SCP and apply it to one or more OUs as needed.
  • You don’t have to apply the IAM policy to every IAM user or role, existing or new.
  • This solution can be deployed in a separate account, such as a shared infrastructure account. This helps to decouple infrastructure tooling from business application accounts.

The following figure, Figure 10, illustrates the solution in an Organizations environment.

Figure 10: Use SCPs to enforce policy across many AWS accounts

The Client VPN account is the account the solution is deployed into. This account can also be used for other networking related services. The SCP is created in the Organizations root account and attached to one or more OUs. This allows you to centrally control access to your AWS resources.

Let’s review the new condition that’s added to the IAM policy:

"ArnNotLikeIfExists": {
    "aws:PrincipalARN": [
    "arn:aws:iam::*:role/service-role/*"
    ]
}

The aws:PrincipalARN condition key exempts service roles, so your AWS services can still communicate with other AWS services even though those calls won’t have the NAT gateway IP address as their source IP address. For instance, a Lambda function might need to read a file from your S3 bucket.
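
Putting it together, an SCP based on this approach could look like the following sketch, which combines the source IP restriction from the earlier deny policy with the service-role exemption. The IP addresses are placeholders for your NAT gateway Elastic IP addresses.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideClientVpn",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["198.51.100.10/32", "198.51.100.11/32"]
        },
        "BoolIfExists": {
          "aws:ViaAWSService": "false"
        },
        "ArnNotLikeIfExists": {
          "aws:PrincipalARN": ["arn:aws:iam::*:role/service-role/*"]
        }
      }
    }
  ]
}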

Note: Appending policies to existing resources might cause an unintended disruption to your application. Consider testing your policies in a test environment or on non-critical resources before applying them to production resources. You can do that by attaching the SCP to a specific OU or to an individual AWS account.

Cleanup

After you’ve tested the solution, you can clean up all the created AWS resources by deleting the CloudFormation stack.

Conclusion

In this post, we showed you how to restrict IAM roles so that they can access AWS resources only from specific geographic locations. You used Client VPN to allow users to establish a client VPN connection from a desktop. You used a client connect handler (a Lambda function), together with API Gateway and AWS WAF, to identify the user’s geolocation. NAT gateway IPs served as trusted source IPs, and an IAM policy protects access to your AWS resources. Lastly, you learned how to scale this solution to many AWS accounts with Organizations.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Artem Lovan

Artem is a Senior Solutions Architect based in New York. He helps customers architect and optimize applications on AWS. He has been involved in IT at many levels, including infrastructure, networking, security, DevOps, and software development.

Author

Faiyaz Desai

Faiyaz leads a solutions architecture team supporting cloud-native customers in New York. His team guides customers in their modernization journeys through business and technology strategies, architectural best practices, and customer innovation. Faiyaz’s focus areas include unified communication, customer experience, network design, and mobile endpoint security.

Implement a centralized patching solution across multiple AWS Regions

Post Syndicated from Akash Kumar original https://aws.amazon.com/blogs/security/implement-a-centralized-patching-solution-across-multiple-aws-regions/

In this post, I show you how to implement a centralized patching solution across Amazon Web Services (AWS) Regions by using AWS Systems Manager in your AWS account. This helps you to initiate, track, and manage your patching events across AWS Regions from one centralized place.

Enterprises with large, multi-Region hybrid environments must decide whether to centralize patching by using Systems Manager to map all their instances under one Region, or to decentralize patching to each Region where instances are deployed. Both approaches involve trade-offs in cost and operational overhead. For centralized patching under one Region, you must enable the Systems Manager advanced-instances tier if your running instance count exceeds the registration maximum for on-premises servers or VMs per AWS account per Region (at the time of this blog post, the maximum count is 1,000). This tier is priced at a higher pay-as-you-go rate, but provides additional features on top of the standard-instances tier, such as the ability to connect to your hybrid machines by using Systems Manager Session Manager, Microsoft application patching, and other capabilities. With a decentralized patching approach, if you aren’t interested in advanced-tier features and have more instances than the per-Region registration maximum allowed at the standard tier, you can distribute your instances across Regions and run them at the standard-instances tier, which is priced at a lower rate than the advanced tier.

Solution overview

Figure 1 shows the architecture of the centralized patching solution across multiple Regions.

Figure 1: Solution architecture

The automated solution I provide in this post is focused on scheduling and patching managed instances across AWS Regions. Systems Manager Maintenance Windows initiates a series of steps for automated patching for the instances, regardless of which Regions the instances are in.

Here are the key building blocks for this solution:

AWS Systems Manager Maintenance Windows is a feature you can use to define a schedule for when to perform potentially disruptive actions on your instances, such as patching an operating system, updating drivers, or installing software. Maintenance Windows also makes it possible for you to schedule actions on other AWS resource types, such as Amazon Simple Storage Service (Amazon S3) buckets, Amazon Simple Queue Service (Amazon SQS) queues, AWS Key Management Service (AWS KMS) keys, and others that are out of scope for this blog post.
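
For illustration, creating a maintenance window similar to the one this solution provisions could look like the following boto3 sketch. The window name and cutoff are assumptions; the schedule, duration, and time zone match the template defaults described later in this post.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # the central management Region

# Create a weekly maintenance window that opens Sundays at 04:00 for 5 hours.
response = ssm.create_maintenance_window(
    Name="central-patching-window",       # assumed name
    Schedule="cron(0 4 ? * SUN *)",
    ScheduleTimezone="US/Eastern",
    Duration=5,                           # hours
    Cutoff=1,                             # stop initiating new tasks 1 hour before close (assumed value)
    AllowUnassociatedTargets=True,        # allow tasks that run on resources not registered as window targets
)
print(response["WindowId"])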

AWS Lambda automatically runs your code without requiring you to provision or manage infrastructure. It can automatically scale your application by running code in response to each event. Also, you only pay for the compute time you consume, so you’re never paying for over-provisioned infrastructure.

AWS Systems Manager Automation simplifies common maintenance and deployment tasks for Amazon Elastic Compute Cloud (Amazon EC2) instances and other AWS resources, without the need for human action.

An AWS Systems Manager document (SSM document) defines the actions that Systems Manager performs on your managed instances.
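
A minimal sketch of such a custom automation document, expressed in JSON, could look like the following. It simply wraps AWS-RunPatchBaseline in an aws:runCommand step and accepts the operation and role as parameters. The parameter names mirror the payload shown later in this post, but the exact document created by the CloudFormation template may differ.

{
  "schemaVersion": "0.3",
  "assumeRole": "{{ AutomationAssumeRole }}",
  "parameters": {
    "InstanceIds": { "type": "StringList" },
    "Operation": { "type": "String", "default": "Scan" },
    "AutomationAssumeRole": { "type": "String" }
  },
  "mainSteps": [
    {
      "name": "runPatchBaseline",
      "action": "aws:runCommand",
      "inputs": {
        "DocumentName": "AWS-RunPatchBaseline",
        "InstanceIds": "{{ InstanceIds }}",
        "Parameters": {
          "Operation": "{{ Operation }}"
        }
      }
    }
  ]
}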

Solution details

Figure 2 shows the centralized patching solution for a multi-Region hybrid workflow in detail.

Figure 2: Detailed workflow diagram: Centralized patching solution for multi-Region and hybrid instances

You implement the solution as follows:

  1. In a central management Region, configure a maintenance window with a custom Lambda function as a target, with a JSON payload input that defines your target Regions, custom SSM document information, and target resource groups.
  2. Configure the Lambda function to first filter out the target Regions that have no instances mapped to the resource groups, and then start a Systems Manager Automation execution in each remaining Region (see the sketch after this list).
  3. Configure the Systems Manager Automation API to initiate Run Command in all target Regions according to the custom SSM document.
  4. Configure the custom automation document to call the AWS-RunPatchBaseline document against all instances in the resource group defined in the input payload JSON.
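
The following Python sketch illustrates the core of such a Lambda handler: it loops over the Regions in the payload and starts an automation execution in each one. The Region-filtering step and error handling are omitted for brevity, and the payload field names follow the example JSON shown later in this post; the actual function deployed by the CloudFormation template may differ.

import boto3

def handler(event, context):
    """Start the custom automation document in every target Region."""
    for region in event["Regions"]:
        ssm = boto3.client("ssm", region_name=region)
        ssm.start_automation_execution(
            DocumentName=event["Document"]["Name"],
            DocumentVersion=event["Document"]["Version"],
            Parameters=event["Document"]["Parameters"],
            TargetParameterName=event["TargetParameterName"],
            Targets=event["Targets"],              # e.g., the ResourceGroup target from the payload
            MaxConcurrency=event["MaxConcurrency"],
            MaxErrors=event["MaxErrors"],
        )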

Solution deployment

To deploy the solution, you perform these steps:

  1. Verify prerequisites in your AWS account
  2. Deploy an AWS CloudFormation template
  3. Create a test patching event

Step 1: Verify prerequisites in your AWS account

The sample solution provided by this blog requires that you set up Systems Manager in your account and resource groups in the target Regions. Before you get started, make sure you’ve completed all of the following steps:

Step 2: Deploy the CloudFormation template

In this next step, you deploy a CloudFormation template to implement the centralized patching solution across Regions in your account. Make sure you deploy the template within the AWS account and Region from which you want to centralize patching coordination.

To deploy the CloudFormation stack

  1. Choose the following Launch Stack button to launch a CloudFormation stack in your account.

    Select the Launch Stack button to launch the template

Note: The stack will launch in the N. Virginia (us-east-1) Region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template and deploy it to the selected Region.

 

  2. In the AWS CloudFormation console, choose the Select Template form, and then choose Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    • Duration – The duration for the maintenance window automation job, in hours. The default is 5.
    • OwnerInformation – The owner information for the maintenance window. The default is Patch Management Team.
    • Schedule – The schedule for the maintenance window, in the form of either a cron or rate expression. The default is cron(0 4 ? * SUN *).
    • TimeZone – The time zone for the maintenance window automation job. The default is US/Eastern.
    Figure 4: An example of the values entered for the template parameters

  4. After you’ve entered values for all of the input parameters, choose Next.
  5. On the Options page, keep the defaults, and then choose Next.
  6. On the Review page, under Capabilities, select the check box next to I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then choose Create stack.

    Figure 5: CloudFormation capabilities acknowledgement

 

After the Status field for the CloudFormation stack changes to CREATE_COMPLETE, as shown in Figure 6, the solution is implemented and is ready for testing.

Figure 6: Completed deployment of the AWS CloudFormation stack

Step 3: Create a test patching event

After the CloudFormation stack has completed deployment, an AWS maintenance window is created. To test the centralized patching solution, you can use this maintenance window to initiate patching across Regions.

(Optional) To create a test patching event, edit the Lambda task as follows. On the Tasks tab, add the following JSON data as the payload, and update the following parameters with your own data as needed for the target environment: resource group, AutomationAssumeRole ARN, MaxConcurrency, MaxErrors, Operation (Scan/Install), and Regions.

{
  "WindowId": "{{WINDOW_ID}}",
  "TaskExecutionId": "{{TASK_EXECUTION_ID}}",
  "Document": {
    "Name": "CustomAutomationDocument",
    "Version": "1",
    "Parameters": {
      "AutomationAssumeRole": [
        "arn:aws:iam::111222333444:role/AWS-SystemsManager-AutomationAdministrationRole"
      ],
      "Operation": [
        "Scan"
      ]
    }
  },
  "TargetParameterName": "InstanceIds",
  "Targets": [
    {
      "Key": "ResourceGroup",
      "Values": [
        "DevGroup"
      ]
    }
  ],
  "MaxConcurrency": "10",
  "MaxErrors": "1",
  "Regions": ["us-east-2","us-east-1"]
}

Wait for the next execution time for the maintenance window. On the History tab, you should see status Success to indicate that patching is complete, as shown in Figure 7.

Figure 7: The History tab for the maintenance window, showing successful patching

To see more details related to the completed automations, look on the Automation Executions tab, shown in Figure 8.

Figure 8: The Executions tab showing details

Congratulations! You’ve successfully deployed and tested a centralized patching solution for an AWS multi-Region hybrid environment. In order to fully implement this solution, you’ll need to add the resource groups in all your target Regions and update the payload JSON in Systems Manager Maintenance Windows.

Summary

You’ve learned how to use Systems Manager to centralize patching across multiple AWS Regions and to include on-premises instances in your patching solution. All of the code for this solution is available as part of a CloudFormation template. Feel free to play around with the code; we hope it helps you learn more about automated security remediation. You can adjust the code to better fit your unique environment, or extend the code with additional steps. For example, you could extend it across accounts and also create a custom Systems Manager document to run across Regions.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about using this solution, contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Akash Kumar

Akash is a Cloud Migration Specialist with AWS Professional Services. He is passionate about re-architecting, designing, and developing modern IT solutions for the cloud.

OSPAR 2021 report now available with 127 services in scope

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/ospar-2021-report-now-available-with-127-services-in-scope/

We are excited to announce the completion of the third Outsourced Service Provider Audit Report (OSPAR) audit cycle on July 1, 2021. The latest OSPAR certification includes the addition of 19 new services in scope, bringing the total number of services to 127 in the Asia Pacific (Singapore) Region.

You can download our latest OSPAR report in AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. The list of services in scope for OSPAR is available in the report, and is also available on the AWS Services in Scope by Compliance Program webpage.

Some of the newly added services in scope include the following:

  • AWS Outposts, a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to any datacenter, co-location space, or an on-premises facility for a consistent hybrid experience.
  • Amazon Connect, an easy-to-use omnichannel cloud contact center that helps customers provide superior customer service across voice, chat, and tasks at lower cost than traditional contact center systems.
  • Amazon Lex, a service for building conversational interfaces into any application using voice and text.
  • Amazon Macie, a fully managed data security and data privacy service that uses machine learning and pattern matching to help customers discover, monitor, and protect customers’ sensitive data in AWS.
  • Amazon Quantum Ledger Database (QLDB), a fully managed ledger database that provides a transparent, immutable and cryptographically verifiable transaction log owned by a central trusted authority.

The OSPAR assessment is performed by an independent third-party auditor, selected from the ABS list of approved auditors. The assessment demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers. Our alignment with the ABS guidelines demonstrates to customers our commitment to meeting the security expectations for cloud service providers set by the financial services industry in Singapore. You can use the OSPAR assessment report to conduct due diligence, and to help reduce the effort and costs required for compliance. The AWS OSPAR reporting period is now updated in the ABS list of OSPAR Audited Outsourced Service Providers.

As always, we are committed to bringing new services into the scope of our OSPAR program based on your architectural and regulatory needs. Please reach out to your AWS account team if you have questions about the OSPAR report.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the Audit Program Manager for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

Multi-Cloud and Hybrid Threat Protection with Sumo Logic Cloud SIEM Powered by AWS

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/hybrid-threat-protection-with-sumo-logic-cloud-siem-powered-by-aws/

IT security teams need to have a real-time understanding of what’s happening with their infrastructure and applications. They need to be able to find and correlate data in this continuous flood of information to identify unexpected behaviors or patterns that can lead to a security breach.

To simplify and automate this process, many solutions have been implemented over the years. For example:

  • Security information management (SIM) systems collect data such as log files into a central repository for analysis.
  • Security event management (SEM) platforms simplify data inspection and the interpretation of logs or events.

Many years ago, these two approaches were merged to address both information analysis and the interpretation of events. These security information and event management (SIEM) platforms provide real-time analysis of security alerts generated by applications, network hardware, and domain-specific security tools (such as firewalls and endpoint protection tools).

Today, I’d like to introduce a solution created by one of our partners: Sumo Logic Cloud SIEM powered by AWS. Sumo Logic Cloud SIEM provides deep security analytics and contextualized threat data across multi-cloud and hybrid environments to reduce the time to detect and respond to threats. You can use this solution to quickly detect and respond to the higher-priority issues, including malicious activities that negatively impact your business or brand. The solution comes with more than 300 out-of-the-box integrations, including key AWS services, and can help reduce time and effort required to conduct compliance audits for regulations such as PCI and HIPAA.

Sumo Logic Cloud SIEM is available in AWS Marketplace and you can use a free trial to evaluate the solution. Before having a look at how this works in practice, let’s see how it’s being used.

Customer Case Study – Medidata
I had the chance to meet a very interesting customer to talk about how they use Sumo Logic Cloud SIEM. Scott Sumner is the VP and CISO at Medidata, a company that is redefining what’s possible in clinical trials. Medidata is processing patient data for clinical trials of the Moderna and Johnson & Johnson COVID-19 vaccines, so you can see why security is a priority for them. With such critical workloads, the company must keep the trust of the people participating in those trials.

Scott told me, “There is an old saying: If you can’t measure it, you can’t manage it.” In fact, when he joined Medidata in 2015, one of the first things Scott did was to implement a SIEM. Medidata has been using Sumo Logic for more than five years now. They appreciate that it’s a cloud-native solution, and that has made it easier for them to follow the evolution of the tool over the years.

“Not having transparency in the environment you process data is not good for security professionals.” Scott wanted his team to be able to respond quickly and, to do so, they needed to be able to look at a single screen that displays all IP calls, network flows, and any relevant information. For example, Medidata has very aggressive checks for security scans and any kind of external access. To do so, they have to look not just at the perimeter, but at the entire environment. Sumo Logic Cloud SIEM allows them to react without breaking anything in their corporate environment, including the resources they have in more than 45 AWS accounts.

“One of the metrics that is floated around by security specialists is that you have up to five hours to respond to a tentative intrusion,” Scott says. “With Sumo Logic Cloud SIEM, we can match that time aggressively, and most of the times we respond within five minutes.” The ability to respond quickly allows Medidata to keep patient trust. This is very important for them, because delaying a clinical trial can affect people’s health.

Medidata security response is managed by a global team through three levels. Level 1 is covered by a partner who is empowered to block simple attacks. For the next level of escalation, Level 2, Medidata has a team in each region. Beyond that there is Level 3: a hardcore team of forensics examiners distributed across the US, Europe, and Asia who deal with more complicated attacks.

Availability is also important to Medidata. In addition to providing the Cloud SIEM functionality, Sumo Logic helps them monitor availability issues, such as web server failover, and very quickly figure out possible problems as they happen. Interestingly, they use Sumo Logic to better understand how applications talk with each other. These different use cases don’t complicate their setup because the security and application teams are segregated and use Sumo Logic as a single platform to share information seamlessly between the two teams when needed.

I asked Scott Sumner if Medidata was affected by the move to remote work in 2020 due to the COVID-19 pandemic. It’s an important topic for them because at that time Medidata was already involved in clinical trials to fight the pandemic. “We were a mobile environment anyway. A significant part of the company was mobile before. So, we were ready and not being impacted much by working remotely. All our tools are remote, and that helped a lot. Not sure we’d done it easily with an on-premises solution.”

Now, let’s see how this solution works in practice.

Setting Up Sumo Logic Cloud SIEM
In AWS Marketplace I search for “Sumo Logic Cloud SIEM” and look at the product page. I can either subscribe or start the one-month free trial. The free trial includes 1 GB of log ingest for security and observability. There is no automatic conversion to a paid offer when the free trial expires. After the free trial, I have the option to either buy Sumo Logic Cloud SIEM from AWS Marketplace or remain a free user. I create and accept the contract and set up my Sumo Logic account.

Console screenshot.

In the setup, I choose the Sumo Logic deployment region to use. The Sumo Logic documentation provides a table that describes the AWS Regions used by each Sumo Logic deployment. I need this information later when I set up the integration between AWS security services and Sumo Logic Cloud SIEM. For now, I select US2, which corresponds to the US West (Oregon) Region in AWS.

When my Sumo Logic account is ready, I use the Sumo Logic Security Integrations on AWS Quick Start to deploy the required integrations in my AWS account. You’ll find the source files used by this Quick Start in this GitHub repository. I open the deployment guide and follow along.

This architecture diagram shows the environment deployed by this Quick Start:

Architectural diagram.

Following the steps in the deployment guide, I create an access key and access ID in my Sumo Logic account, and write down my organization ID. Then, I launch the Quick Start to deploy the integrations.

The Quick Start uses an AWS CloudFormation template to create a stack. First, I check that I am in the right AWS Region. I use US West (Oregon) because I am using US2 with Sumo Logic. Then, I leave all default values and choose Next. In the parameters, I select US2 as my Sumo Logic deployment region and enter my Sumo Logic access ID, access key, and organization ID.

Console screenshot.

After that, I enable and configure the integrations with AWS security services and tools such as AWS CloudTrail, Amazon GuardDuty, VPC Flow Logs, and AWS Config.

Console screenshot.

If you have a delegated administrator with GuardDuty, you can enable multiple member accounts as long as they are part of the same AWS organization. With that, all findings from member accounts are sent to the delegated administrator and then to Sumo Logic Cloud SIEM through the GuardDuty events processor.

In the next step, I leave the stack options to their default values. I review the configuration and acknowledge the additional capabilities required by this stack (such as the creation of IAM resources) and then choose Create stack.

When the stack creation is complete, the integration with Sumo Logic Cloud SIEM is ready. If I had a hybrid architecture, I could connect those resources for a single point of view and analysis of my security events.

Using Sumo Logic Cloud SIEM
To see how the integration with AWS security services works, and how security events are handled by the SIEM, I use the amazon-guardduty-tester open-source scripts to generate security findings.

First, I use the included CloudFormation template to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance in an Amazon Virtual Private Cloud (VPC) private subnet. The stack also includes a bastion host to provide external access. When the stack has been created, I write down the IP addresses of the two instances from the stack output.

Console screenshot.

Then, I use SSH to connect to the EC2 instance in the private subnet through the bastion host. There are easy-to-follow instructions in the README file. I use the guardduty_tester.sh script, installed in the instance by CloudFormation, to generate security findings for my AWS account.
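
The exact connection steps are in the tester’s README; as a rough sketch, assuming an ec2-user login, a key pair named lab-key.pem, and the bastion and private IP addresses noted from the stack output, the jump looks like this:

# Load the key into the agent so it can be used for both hops (hypothetical key name)
ssh-add ~/.ssh/lab-key.pem

# Hop through the bastion host to reach the instance in the private subnet
ssh -J ec2-user@<BASTION-PUBLIC-IP> ec2-user@<PRIVATE-INSTANCE-IP>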

$ ./guardduty_tester.sh

SSH screenshot.

GuardDuty processes these findings and the events are sent to Sumo Logic through the integration I just set up. In the Sumo Logic GuardDuty dashboard, I see the threats ready to be analyzed and addressed.

Console screenshot.

Availability and Pricing
Sumo Logic Cloud SIEM powered by AWS is a multi-tenant Software as a Service (SaaS) available in AWS Marketplace that ingests data over HTTPS/TLS 1.2 on the public internet. You can connect data from any AWS Region and from multi-cloud and hybrid architectures for a single point of view of your security events.

Start a free trial of Sumo Logic Cloud SIEM and see how it can help your security team.

Read the Sumo Logic team’s blog post here for more information on the service. 

Danilo

How AWS is helping EU customers navigate the new normal for data protection

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/how-aws-is-helping-eu-customers-navigate-the-new-normal-for-data-protection/

Achieving compliance with the European Union’s data protection regulations is critical for hundreds of thousands of Amazon Web Services (AWS) customers. Many of them are subject to the EU’s General Data Protection Regulation (GDPR), which ensures individuals’ fundamental right to privacy and the protection of personal data. In February, we announced strengthened commitments to protect customer data, such as challenging law enforcement requests for customer data that conflict with EU law.

Today, we’re excited to announce that we’ve launched two new online resources to help customers more easily complete data transfer assessments and comply with the GDPR, taking into account the European Data Protection Board (EDPB) recommendations. These resources will also assist AWS customers in other countries to understand whether their use of AWS services involves a data transfer.

Using AWS’s new “Privacy Features for AWS Services,” customers can determine whether their use of an individual AWS service involves the transfer of customer data (the personal data they’ve uploaded to their AWS account). Knowing this information enables customers to choose the right action for their applications, such as opting out of the data transfer or creating an appropriate disclosure of the transfer for end user transparency.

We’re also providing additional information on the processing activities and locations of the limited number of sub-processors that AWS engages to provide services that involve the processing of customer data. AWS engages three types of sub-processors:

  • Local AWS entities that provide the AWS infrastructure.
  • AWS entities that process customer data for specific AWS services.
  • Third parties that AWS contracts with to provide processing activities for specific AWS services.

The enhanced information available on our updated Sub-processors page enables customers to assess if a sub-processor is relevant to their use of AWS services and AWS Regions.

These new resources make it easier for AWS customers to conduct their data transfer assessments as set out in the EDPB recommendations and, as a result, comply with GDPR. After completing their data transfer assessments, customers will also be able to determine whether they need to implement supplemental measures in line with the EDPB’s recommendations.

These resources support our ongoing commitment to giving customers control over where their data is stored, how it’s stored, and who has access to it.

Since we opened our first Region in the EU in 2007, customers have been able to choose to store customer data with AWS in the EU. Today, customers can store their data in our AWS Regions in France, Germany, Ireland, Italy, and Sweden, and we’re adding Spain in 2022. AWS will never transfer data outside a customer’s selected AWS Region without the customer’s agreement.

AWS customers control how their data is stored, and we have a variety of tools at their disposal to enhance security. For example, AWS CloudHSM and AWS Key Management Service (AWS KMS) allow customers to encrypt data in transit and at rest and securely generate and manage encryption keys that they control.

Finally, our customers control who can access their data. We never use customer data for marketing or advertising purposes. We also prohibit, and our systems are designed to prevent, remote access by AWS personnel to customer data for any purpose, including service maintenance, unless requested by a customer, required to prevent fraud and abuse, or to comply with the law.

As previously mentioned, we challenge law enforcement requests for customer data from governmental bodies, whether inside or outside the EU, where the request conflicts with EU law, is overbroad, or we otherwise have any appropriate grounds to do so.

Earning customer trust is the foundation of our business at AWS, and we know protecting customer data is key to achieving this. We also know that helping customers protect their data in a world with constantly changing regulations, technology, and risks takes teamwork. We would never expect our customers to go it alone.

As we continue to enhance the capabilities customers have at their fingertips, they can be confident that choosing AWS will ensure they have the tools necessary to help them meet the most stringent security, privacy, and compliance requirements.

If you have questions or need more information, visit our EU Data Protection page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Author

Donna Dodson

Donna is a Senior Principal Scientist at AWS focusing on security and privacy capabilities including cryptography, risk management, standards, and assessments. Before joining AWS, Donna was the Chief Cybersecurity Advisor at the National Institute of Standards and Technology (NIST). She led NIST’s comprehensive cybersecurity research and development to cultivate trust in technology for stakeholders nationally and internationally.

TLS-enabled Kubernetes clusters with ACM Private CA and Amazon EKS

Post Syndicated from Param Sharma original https://aws.amazon.com/blogs/security/tls-enabled-kubernetes-clusters-with-acm-private-ca-and-amazon-eks-2/

In this blog post, we show you how to set up end-to-end encryption on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Certificate Manager Private Certificate Authority. For this example of end-to-end encryption, traffic originates from your client and terminates at an Ingress controller server running inside a sample app. By following the instructions in this post, you can set up an NGINX ingress controller on Amazon EKS. As part of the example, we show you how to configure an AWS Network Load Balancer (NLB) with HTTPS using certificates issued via ACM Private CA in front of the ingress controller.

AWS Private CA supports an open source plugin for cert-manager that offers a more secure certificate authority solution for Kubernetes containers. cert-manager is a widely-adopted solution for TLS certificate management in Kubernetes. Customers who use cert-manager for application certificate lifecycle management can now use this solution to improve security over the default cert-manager CA, which stores keys in plaintext in server memory. Customers with regulatory requirements for controlling access to and auditing their CA operations can use this solution to improve auditability and support compliance.

Solution components

  • Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
  • Amazon EKS is a managed service that you can use to run Kubernetes on Amazon Web Services (AWS) without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
  • cert-manager is an add-on to Kubernetes that provides TLS certificate management. cert-manager requests certificates, distributes them to Kubernetes containers, and automates certificate renewal. cert-manager ensures certificates are valid and up to date, and attempts to renew certificates at an appropriate time before expiry.
  • ACM Private CA enables the creation of private CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA. With ACM Private CA, you can issue certificates for authenticating internal users, computers, applications, services, servers, and other devices, and for signing computer code. The private keys for private CAs are stored in AWS managed hardware security modules (HSMs), which are FIPS 140-2 certified, providing a better security profile compared to the default CAs in Kubernetes. Private certificates help identify and secure communication between connected resources on private networks such as servers, mobile and IoT devices, and applications.
  • AWS Private CA Issuer plugin. Kubernetes containers and applications use digital certificates to provide secure authentication and encryption over TLS. With this plugin, cert-manager requests TLS certificates from Private CA. The integration supports certificate automation for TLS in a range of configurations, including at the ingress, on the pod, and mutual TLS between pods. You can use the AWS Private CA Issuer plugin with Amazon Elastic Kubernetes Service, self managed Kubernetes on AWS, and Kubernetes on-premises.
  • The AWS Load Balancer controller manages AWS Elastic Load Balancers for a Kubernetes cluster. The controller provisions the following resources.
    • An AWS Application Load Balancer (ALB) when you create a Kubernetes Ingress.
    • An AWS Network Load Balancer (NLB) when you create a Kubernetes Service of type LoadBalancer.

Different points for terminating TLS in Kubernetes

How and where you terminate your TLS connection depends on your use case, security policies, and need to comply with regulatory requirements. This section talks about four different use cases that are regularly used for terminating TLS. The use cases are illustrated in Figure 1 and described in the text that follows.

Figure 1: Terminating TLS at different points

  1. At the load balancer: The most common use case for terminating TLS at the load balancer level is to use publicly trusted certificates. This use case is simple to deploy and the certificate is bound to the load balancer itself. For example, you can use ACM to issue a public certificate and bind it with AWS NLB. You can learn more from How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?
  2. At the ingress: If there is no strict requirement for end-to-end encryption, you can offload this processing to the ingress controller or the NLB. This helps you to optimize the performance of your workloads and make them easier to configure and manage. We examine this use case in this blog post.
  3. On the pod: In Kubernetes, a pod is the smallest deployable unit of computing and it encapsulates one or more applications. End-to-end encryption of the traffic from the client all the way to a Kubernetes pod provides a secure communication model where the TLS is terminated at the pod inside the Kubernetes cluster. This could be useful for meeting certain security requirements. You can learn more from the blog post Setting up end-to-end TLS encryption on Amazon EKS with the new AWS Load Balancer Controller.
  4. Mutual TLS between pods: This use case focuses on encryption in transit for data flowing inside Kubernetes cluster. For more details on how this can be achieved with Cert-manager using an Istio service mesh, please see the Securing Istio workloads with mTLS using cert-manager blog post. You can use the AWS Private CA Issuer plugin in conjunction with cert-manager to use ACM Private CA to issue certificates for securing communication between the pods.

In this blog post, we use a scenario where there is a requirement to terminate TLS at the ingress controller level, demonstrating the second example above.

Figure 2 provides an overall picture of the solution described in this blog post. The components and steps illustrated in Figure 2 are described fully in the sections that follow.

Figure 2: Overall solution diagram

Prerequisites

Before you start, you need the following:

Verify that you have the latest versions of these tools installed before you begin.

Provision an Amazon EKS cluster

If you already have a running Amazon EKS cluster, you can skip this step and move on to install NGINX Ingress.

You can use the AWS Management Console or AWS CLI, but this example uses eksctl to provision the cluster. eksctl is a tool that makes it easier to deploy and manage an Amazon EKS cluster.

This example uses the us-east-2 Region and the t2.medium node type. Select the node type and Region that are appropriate for your environment. Cluster provisioning takes approximately 15 minutes.

To provision an Amazon EKS cluster

  1. Run the following eksctl command to create an Amazon EKS cluster in the us-east-2 Region with Kubernetes version 1.19 and two nodes. You can change the Region to the one that best fits your use case.
    eksctl create cluster \
    --name acm-pca-lab \
    --version 1.19 \
    --nodegroup-name acm-pca-nlb-lab-workers \
    --node-type t2.medium \
    --nodes 2 \
    --region us-east-2
    

  2. Once your cluster has been created, verify that your cluster is running correctly by running the following command:
    $ kubectl get pods --all-namespaces
    NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
    kube-system   aws-node-t94rp             1/1     Running   0          3m4s
    kube-system   aws-node-w7dm6             1/1     Running   0          3m19s
    kube-system   coredns-56b458df85-6tgjl   1/1     Running   0          10m
    kube-system   coredns-56b458df85-8gp94   1/1     Running   0          10m
    kube-system   kube-proxy-2pjx7           1/1     Running   0          3m19s
    kube-system   kube-proxy-hz8wq           1/1     Running   0          3m4s 
    

You should see output similar to the above, with all pods in a running state.

Install NGINX Ingress

NGINX Ingress is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.

To install NGINX Ingress

  1. Use the following command to install NGINX Ingress:
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/aws/deploy.yaml
    

  2. Run the following command to determine the address that AWS has assigned to your NLB:
    $ kubectl get service -n ingress-nginx
    NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                      AGE
    ingress-nginx-controller             LoadBalancer   10.100.214.10   a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com   80:32598/TCP,443:30624/TCP   14s
    ingress-nginx-controller-admission   ClusterIP      10.100.118.1    <none>                                                                          443/TCP                      14s
    
    

  3. It can take up to 5 minutes for the load balancer to be ready. Once the external IP is created, run the following command to verify that traffic is being correctly routed to ingress-nginx:
    curl http://a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com
    <html>
    <head><title>404 Not Found</title></head>
    <body>
    <center><h1>404 Not Found</h1></center>
    <hr><center>nginx</center>
    </body>
    </html>
    

Note: Even though it returns an HTTP 404 error code, in this case curl is still reaching the ingress controller and getting the expected HTTP response back.

Configure your DNS records

Once your load balancer is provisioned, the next step is to point the application’s DNS record to the URL of the NLB.

You can use your DNS provider’s console, for example Route 53, and set a CNAME record pointing to your NLB. See CNAME record type for more details on how to set up a CNAME record using Route 53.

This scenario uses the sample domain rsa-2048.example.com.

rsa-2048.example.com CNAME a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com

As you go through the scenario, replace rsa-2048.example.com with your registered domain.
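
If your domain is hosted in Route 53, one way to create the record is with the AWS CLI. This is a sketch only; the hosted zone ID is a placeholder, and the NLB hostname comes from the earlier kubectl get service output:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "rsa-2048.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "a3ebe22e7ca0522d1123456fbc92605c-8ac7f1d49be2fc42.elb.us-east-2.amazonaws.com"}]
      }
    }]
  }'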

Install cert-manager

cert-manager is a Kubernetes add-on that you can use to automate the management and issuance of TLS certificates from various issuing sources. It runs within your Kubernetes cluster and will ensure that certificates are valid and attempt to renew certificates at an appropriate time before they expire.

You can use the regular installation on Kubernetes guide to install cert-manager on Amazon EKS.
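
As a sketch, the regular installation comes down to applying the release manifest and confirming the pods start. The version shown is only an example; check the cert-manager releases page for the release you want to pin:

# Install cert-manager (example version; pick the release that matches your cluster)
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml

# The cert-manager, cainjector, and webhook pods should reach the Running state
kubectl get pods -n cert-manager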

After you’ve deployed cert-manager, you can verify the installation by following these instructions. If all the above steps have completed without error, you’re good to go!

Note: If you’re planning to use Amazon EKS with Kubernetes pods running on AWS Fargate, please follow the cert-manager Fargate instructions to make sure cert-manager installation works as expected. AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers.

Install aws-privateca-issuer

The AWS Private CA Issuer plugin acts as an add-on (see external cert configuration) to cert-manager that signs certificate requests using ACM Private CA.

To install aws-privateca-issuer

  1. For installation, use the following helm commands:
    kubectl create namespace aws-pca-issuer
    
    helm repo add awspca https://cert-manager.github.io/aws-privateca-issuer
    helm repo update
    helm install awspca/aws-pca-issuer --generate-name --namespace aws-pca-issuer
    

  2. Verify that the AWS Private CA Issuer is configured correctly by running the following command and ensure that it is in READY state with status as Running:
    $ kubectl get pods --namespace aws-pca-issuer
    NAME                                         READY   STATUS    RESTARTS   AGE
    aws-pca-issuer-1622570742-56474c464b-j6k8s   1/1     Running   0          21s
    

  3. You can check the chart configuration in the default values file.
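
If you want to review those defaults before overriding anything, a quick way to print them (assuming the awspca repo added above) is:

# Print the default values shipped with the aws-pca-issuer chart
helm show values awspca/aws-pca-issuer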

Create an ACM Private CA

In this scenario, you create a private certificate authority in ACM Private CA with RSA 2048 selected as the key algorithm. You can create a CA using the AWS console, AWS CLI, or AWS CloudFormation.

To create an ACM Private CA
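
One way to do this from the CLI, shown here only as a sketch with a placeholder common name, is to create a root CA with an RSA 2048 key. Note that a newly created CA isn’t active until you issue and import its root certificate, which you can do from the console:

aws acm-pca create-certificate-authority \
  --certificate-authority-type ROOT \
  --certificate-authority-configuration '{
    "KeyAlgorithm": "RSA_2048",
    "SigningAlgorithm": "SHA256WITHRSA",
    "Subject": { "CommonName": "acm-pca-lab.example.com" }
  }' \
  --region us-east-2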

Download the CA certificate using the following command. Replace the <CA_ARN> and <Region> values with the values from the CA you created earlier and save it to a file named cacert.pem:

aws acm-pca get-certificate-authority-certificate --certificate-authority-arn <CA_ARN> --region <Region> --output text > cacert.pem

Once your private CA is active, you can proceed to the next step. Your private CA will look similar to the CA in Figure 3.

Figure 3: Sample ACM Private CA

Set EKS node permission for ACM Private CA

In order to issue a certificate from ACM Private CA, add the IAM policy from the prerequisites to your EKS NodeInstanceRole. Replace the <CA_ARN> value with the value from the CA you created earlier:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "awspcaissuerpolicy",
            "Effect": "Allow",
            "Action": [
                "acm-pca:GetCertificate",
                "acm-pca:DescribeCertificateAuthority",
                "acm-pca:IssueCertificate"
            ],
            "Resource": "<CA_ARN>"
        }
        
    ]
}
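
As a sketch, you can attach the policy above as an inline policy on the node role with the AWS CLI. The role name is a placeholder for the NodeInstanceRole that eksctl created for your node group:

# Save the policy above to pca-policy.json, then attach it to the node role
aws iam put-role-policy \
  --role-name <EKS-NodeInstanceRole-name> \
  --policy-name AWSPCAIssuerPolicy \
  --policy-document file://pca-policy.json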

Create an Issuer in Amazon EKS

Now that the ACM Private CA is active, you can begin requesting private certificates that can be used by Kubernetes applications. Use the aws-privateca-issuer plugin to create the ClusterIssuer, which will be used with ACM Private CA to issue certificates.

Issuers (and ClusterIssuers) represent a certificate authority from which signed x509 certificates can be obtained, such as ACM Private CA. You need at least one Issuer or ClusterIssuer before you can start requesting certificates in your cluster. There are two custom resources that can be used to create an Issuer inside Kubernetes using the aws-privateca-issuer add-on:

  • AWSPCAIssuer is a regular namespaced issuer that can be used as a reference in your Certificate custom resources.
  • AWSPCAClusterIssuer is specified in exactly the same way, but it doesn’t belong to a single namespace and can be referenced by certificate resources from multiple different namespaces.

To create an Issuer in Amazon EKS

  1. For this scenario, you create an AWSPCAClusterIssuer. Start by creating a file named cluster-issuer.yaml and save the following text in it, replacing <CA_ARN> and <Region> information with your own.
    apiVersion: awspca.cert-manager.io/v1beta1
    kind: AWSPCAClusterIssuer
    metadata:
      name: demo-test-root-ca
    spec:
      arn: <CA_ARN>
      region: <Region>
    

  2. Deploy the AWSPCAClusterIssuer:
    kubectl apply -f cluster-issuer.yaml
    

  3. Verify the installation and make sure that the following command returns a Kubernetes service of kind AWSPCAClusterIssuer:
    $ kubectl get AWSPCAClusterIssuer
    NAME                AGE
    demo-test-root-ca   51s
    

Request the certificate

Now, you can begin requesting certificates which can be used by Kubernetes applications from the provisioned issuer. For more details on how to specify and request Certificate resources, please check Certificate Resources guide.

To request the certificate

  1. As a first step, create a new namespace that contains your application and secret:
    $ kubectl create namespace acm-pca-lab-demo
    namespace/acm-pca-lab-demo created
    

  2. Next, create a basic X509 private certificate for your domain.
    Create a file named rsa-2048.yaml and save the following text in it. Replace rsa-2048.example.com with your domain.
    kind: Certificate
    apiVersion: cert-manager.io/v1
    metadata:
      name: rsa-cert-2048
      namespace: acm-pca-lab-demo
    spec:
      commonName: www.rsa-2048.example.com
      dnsNames:
        - www.rsa-2048.example.com
        - rsa-2048.example.com
      duration: 2160h0m0s
      issuerRef:
        group: awspca.cert-manager.io
        kind: AWSPCAClusterIssuer
        name: demo-test-root-ca
      renewBefore: 360h0m0s
      secretName: rsa-example-cert-2048
      usages:
        - server auth
        - client auth
      privateKey:
        algorithm: "RSA"
        size: 2048

 

  • For a certificate with a key algorithm of RSA 2048, create the resource:
    kubectl apply -f rsa-2048.yaml -n acm-pca-lab-demo
    

  • Verify that the certificate is issued and in READY state by running the following command:
    $ kubectl get certificate -n acm-pca-lab-demo
    NAME            READY   SECRET                  AGE
    rsa-cert-2048   True    rsa-example-cert-2048   12s
    

  • Run the command kubectl describe certificate -n acm-pca-lab-demo to check the progress of your certificate.
  • Once the certificate status shows as issued, you can use the following command to check the issued certificate details:
    kubectl get secret rsa-example-cert-2048 -n acm-pca-lab-demo -o 'go-template={{index .data "tls.crt"}}' | base64 --decode | openssl x509 -noout -text
    

 

Deploy a demo application

For the purpose of this scenario, you can create a new service: a simple “hello world” website that uses echoheaders to respond with the HTTP request headers along with some cluster details.

To deploy a demo application

  1. Create a new file named hello-world.yaml with below content:
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world
      namespace: acm-pca-lab-demo
    spec:
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: hello-world
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world
      namespace: acm-pca-lab-demo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - name: echoheaders
            image: k8s.gcr.io/echoserver:1.10
            args:
            - "-text=Hello World"
            imagePullPolicy: IfNotPresent
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
            ports:
            - containerPort: 8080
    

  2. Create the service using the following command:
    $ kubectl apply -f hello-world.yaml
    

Expose and secure your application

Now that you’ve issued a certificate, you can expose your application using a Kubernetes Ingress resource.

To expose and secure your application

  1. Create a new file called example-ingress.yaml and add the following content:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: acm-pca-demo-ingress
      namespace: acm-pca-lab-demo
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      tls:
      - hosts:
        - www.rsa-2048.example.com
        secretName: rsa-example-cert-2048
      rules:
      - host: www.rsa-2048.example.com
        http:
          paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: hello-world
                port:
                  number: 80
    

  2. Create a new Ingress resource by running the following command:
    kubectl apply -f example-ingress.yaml 
    

Access your application using TLS

After completing the previous step, you can access this service from any computer connected to the internet.

To access your application using TLS

  1. Log in to a terminal window on a machine that has access to the internet, and run the following:
    $ curl https://rsa-2048.example.com --cacert cacert.pem 
    

  2. You should see an output similar to the following:
    Hostname: hello-world-d8fbd49c6-9bczb
    
    Pod Information:
    	-no pod information available-
    
    Server values:
    	server_version=nginx: 1.13.3 - lua: 10008
    
    Request Information:
    	client_address=192.162.32.64
    	method=GET
    	real path=/
    	query=
    	request_version=1.1
    	request_scheme=http
    	request_uri=http://rsa-2048.example.com:8080/
    
    Request Headers:
    	accept=*/*
    	host=rsa-2048.example.com
    	user-agent=curl/7.62.0
    	x-forwarded-for=52.94.2.2
    	x-forwarded-host=rsa-2048.example.com
    	x-forwarded-port=443
    	x-forwarded-proto=https
    	x-real-ip=52.94.2.2
    	x-request-id=371b6fc15a45d189430281693cccb76e
    	x-scheme=https
    
    Request Body:
    	-no body in request-…
    

    This response is returned from the service running behind the Kubernetes Ingress controller and demonstrates that a successful TLS handshake happened at port 443 with https protocol.

  3. You can use the following command to verify that the certificate issued earlier is being used for the SSL handshake:
    echo | openssl s_client -showcerts -servername www.rsa-2048.example.com -connect www.rsa-2048.example.com:443 2>/dev/null | openssl x509 -inform pem -noout -text
    

Cleanup

To avoid incurring future charges on your AWS account, perform the following steps to remove the scenario.

Delete the ACM Private CA

You can delete the ACM Private CA by following the instructions in Deleting your private CA.

As an alternative, you can use the following commands to delete the ACM Private CA, replacing the <CA_ARN> and <Region> with your own:

  1. Disable the CA:
    aws acm-pca update-certificate-authority \
    --certificate-authority-arn <CA_ARN> \
    --region <Region> \
    --status DISABLED
    

  2. Call the DeleteCertificateAuthority API:
    aws acm-pca delete-certificate-authority \
    --certificate-authority-arn <CA_ARN> \
    --region <Region> \
    --permanent-deletion-time-in-days 7
    

Continue the cleanup

Once the ACM Private CA has been deleted, continue the cleanup by running the following commands.

  1. Delete the services:
    kubectl delete -f hello-world.yaml
    

  2. Delete the Ingress controller:
    kubectl delete -f example-ingress.yaml
    

  3. Delete the IAM NodeInstanceRole; replace the role name with the EKS node instance role that was created for the demo:
    aws iam delete-role --role-name eksctl-acm-pca-lab-nodegroup-acm-pca-nlb-lab-workers-NodeInstanceProfile-XXXXXXX
    

  4. Delete the Amazon EKS cluster using the eksctl command:
    eksctl delete cluster --name acm-pca-lab --region us-east-2
    

You can also clean up from the CloudFormation console by deleting the stacks named eksctl-acm-pca-lab-nodegroup-acm-pca-nlb-lab-workers and eksctl-acm-pca-lab-cluster.
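
If you prefer the CLI, a sketch of the same cleanup is shown below. Delete the node group stack first and wait for it to finish before removing the cluster stack:

aws cloudformation delete-stack --stack-name eksctl-acm-pca-lab-nodegroup-acm-pca-nlb-lab-workers --region us-east-2
aws cloudformation wait stack-delete-complete --stack-name eksctl-acm-pca-lab-nodegroup-acm-pca-nlb-lab-workers --region us-east-2

aws cloudformation delete-stack --stack-name eksctl-acm-pca-lab-cluster --region us-east-2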

Conclusion

In this blog post, we showed you how to set up a Kubernetes ingress controller with a service running in an Amazon EKS cluster, using the AWS Load Balancer Controller with a Network Load Balancer, and how to set up HTTPS using private certificates issued by ACM Private CA. If you have questions or want to contribute, join the aws-privateca-issuer add-on project on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Param Sharma

Param is a Senior Software Engineer with AWS. She is passionate about PKI, security, and privacy. She works with AWS customers to design, deploy, and manage their PKI infrastructures, helping customers improve their security, risk, and compliance in the cloud. In her spare time, she enjoys traveling, reading, and watching movies.

Author

Arindam Chatterji

Arindam is a Senior Solutions Architect with AWS SMB.

Protect public clients for Amazon Cognito by using an Amazon CloudFront proxy

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/protect-public-clients-for-amazon-cognito-by-using-an-amazon-cloudfront-proxy/

In Amazon Cognito user pools, an app client is an entity that has permission to call unauthenticated API operations (that is, operations that don’t have an authenticated user), such as operations to sign up, sign in, and handle forgotten passwords. In this post, I show you a solution designed to protect these API operations from unwanted bots and distributed denial of service (DDoS) attacks.

To protect Amazon Cognito services and customers, Amazon Cognito applies request rate quotas on all API categories, and throttles rapid calls that exceed the assigned quota. For that reason, you must ensure your applications control who can call unauthenticated API operations and at what rate, so that user calls aren’t throttled because of unwanted or misconfigured clients that call these API operations at high rates.

App clients fall into one of two categories: public clients (used from web or mobile applications) and private or confidential clients (used from a secured backend). Public clients shouldn’t have secrets, because it isn’t possible to protect secrets in these types of clients. Confidential clients, on the other hand, use a secret to authorize calls to unauthenticated operations. In these clients, the secret can be protected in the backend.

The benefit of using a confidential app client with a secret in Amazon Cognito is that unauthenticated API operations will accept only the calls that include the secret hash for this client, and will drop calls with an invalid or missing secret. In this way, you control who calls these API operations. Public applications can use a confidential app client by implementing a lightweight proxy layer in front of the Amazon Cognito endpoint, and then using this proxy to add a secret hash in relevant requests before passing the requests to Amazon Cognito.

There are multiple options that you can use to implement this proxy. One option is to use Amazon CloudFront and Lambda@Edge to add the secret hash to the incoming requests. When you use a CloudFront proxy, you can also use AWS WAF, which gives you tools to detect and block unwanted clients. From Lambda@Edge, you can also integrate with other services (like Amazon Fraud Detector or third-party bot detection services) to help you detect possible fraudulent requests and block them. The CloudFront proxy, with the right set of security tools, helps protect your Amazon Cognito user pool from unwanted clients.

Solution overview

To implement this lightweight proxy pattern, you need to create an application client with a secret. Unauthenticated API calls to this client must include the secret hash which is added to the request from the proxy layer. Client applications use an SDK like AWS Amplify, the Amazon Cognito Identity SDK, or a mobile SDK to communicate with Amazon Cognito. By default, the SDK sends requests to the Regional Amazon Cognito endpoint. Your application must override the default endpoint by manually adding an “Endpoint” property in the app configuration. See the Integrate the client application with the proxy section later in this post for more details.

Figure 1 shows how this works, step by step.
 

Figure 1: A proxy solution to the Amazon Cognito Regional endpoint

The workflow is as follows:

  1. You configure the client application (mobile or web client) to use a CloudFront endpoint as a proxy to an Amazon Cognito Regional endpoint. You also create an application client in Amazon Cognito with a secret. This means that any unauthenticated API call must have the secret hash.
  2. Clients that send unauthenticated API calls to the Amazon Cognito endpoint directly are blocked and dropped because of the missing secret.
  3. You use Lambda@Edge to add a secret hash to the relevant incoming requests before passing them on to the Amazon Cognito endpoint.
  4. From Lambda@Edge, you must have the app client secret to be able to calculate the secret hash and add it to the request. It’s recommended that you keep the secret in AWS Secrets Manager and cache it for the lifetime of the function.
  5. You use AWS WAF with CloudFront distribution to enforce rate limiting, allow and deny lists, and other rule groups according to your security requirements.

When to use this pattern

It’s a best practice to use this proxy pattern with clients that use SDKs to integrate with Amazon Cognito user pools. Examples include mobile applications that use the iOS or Android SDK, or web applications that use client-side libraries like Amplify or the Amazon Cognito Identity SDK to integrate with Amazon Cognito.

You don’t need to use a proxy pattern with server-side applications that use an AWS SDK to integrate with Amazon Cognito user pools from a protected backend, because server-side applications can natively use confidential clients and protect the secret in the backend.

You can’t use this solution with applications that use Hosted UI and OAuth 2.0 endpoints to integrate with Amazon Cognito user pools. This includes federation scenarios where users sign in with an external identity provider (IdP).

Implementation and deployment details

Before you deploy this solution, you need a user pool and an application client that has the client secret. When you have these in place, choose the following Launch Stack button to launch a CloudFormation stack in your account and deploy the proxy solution.

Select the Launch Stack button to launch the template

Note: The CloudFormation stack must be created in the us-east-1 AWS Region, but the user pool itself can exist in any supported Region.

The template takes the parameters shown in Figure 2 below.
 

Figure 2: CloudFormation stack creation with initial parameters

The parameters in Figure 2 include:

  • AdvancedSecurityEnabled is a flag that indicates whether advanced security is enabled in the user pool. This flag determines which version of the Lambda function is deployed. Note that if you change this flag as part of a stack update, it overrides the function code, so back up any manual changes first.
  • AppClientSecret is the secret for your application client. This secret is stored in Secrets Manager and accessed from Lambda@Edge as needed.
  • LambdaS3BucketName is the bucket that hosts the Lambda code package. You don’t need to change this parameter unless you have a requirement to modify or extend the solution with your own Lambda function.
  • RateLimit is the maximum number of calls from a single IP address that are allowed within a 5-minute period. Values between 100 requests and 20 million requests are valid for RateLimit. Important: provide a value suitable for your application and security requirements.
  • UserPoolId is the ID of your user pool. This value is used by Lambda@Edge when needed (for example, to call admin APIs, which require the user pool ID).
  • UserPoolRegion is the AWS Region where you created your user pool. This value is used to determine which Amazon Cognito Regional endpoint to proxy the calls to.

This template creates several resources in your AWS account, as follows:

  1. A CloudFront distribution that serves as a proxy to an Amazon Cognito Regional endpoint.
  2. An AWS WAF web access control list (ACL) with rules for the allow list, deny list, and rate limit.
  3. A Lambda function to be deployed at the edge and assigned to the origin request event.
  4. A secret in Secrets Manager, to hold the values of the application client secret and user pool ID.

After you create the stack, the CloudFront distribution domain name is available on the Outputs tab of the created CloudFormation stack, as shown in Figure 3. This is the value that’s used as the Endpoint property in your client-side application. You can optionally add an alternative domain name to the CloudFront distribution if you prefer to use your own custom domain.
 

Figure 3: The output of the CloudFormation stack creation, displaying the CloudFront domain name

Use Lambda@Edge to add a secret hash to the request

As explained earlier, the purpose of having this proxy is to be able to inject the secret hash in unauthenticated API calls before passing them to the Amazon Cognito endpoint. This injection is achieved by a Lambda function that intercepts incoming requests at the edge (the CloudFront distribution) before passing them to the origin (the Amazon Cognito Regional endpoint).
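
The deployed Lambda function computes this hash in its own code. Purely to illustrate the calculation, the secret hash that Amazon Cognito expects is a Base64-encoded HMAC-SHA256 of the username concatenated with the app client ID, keyed with the client secret. A minimal shell sketch with placeholder values:

USERNAME="alice"                       # placeholder username from the incoming request
CLIENT_ID="<APP-CLIENT-ID>"            # your app client ID
CLIENT_SECRET="<APP-CLIENT-SECRET>"    # the secret stored in Secrets Manager

# SECRET_HASH = Base64( HMAC-SHA256( key = client secret, message = username + client ID ) )
echo -n "${USERNAME}${CLIENT_ID}" \
  | openssl dgst -sha256 -hmac "${CLIENT_SECRET}" -binary \
  | base64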

The Lambda function that is deployed to the edge has two versions. One is a simple pass-through proxy that only adds the secret hash, and this version is used if Amazon Cognito advanced security isn’t enabled. The other version is a proxy that uses the AdminInitiateAuth and AdminRespondToAuthChallenge API operations instead of unauthenticated API operations for the user authentication and challenge response. This allows the proxy layer to propagate the client IP address to the Amazon Cognito endpoint, which guides the adaptive authentication features of advanced security. The version that is deployed by the stack is determined by the AdvancedSecurityEnabled flag when you create or update the CloudFormation stack.

You can extend this solution by manually modifying the Lambda function with your own processing logic. For example, you can integrate with fraud detection or bot detection services to evaluate the request and decide to proceed or reject the call. Note that after making any change to the Lambda function code, you must deploy a new version to the edge location. To do that from the Lambda console, navigate to Actions, choose Deploy to Lambda@Edge, and then choose Use existing CloudFront trigger on this function.

Important: If you update the stack from CloudFormation and change the value of the AdvancedSecurityEnabled flag, the new value overrides the Lambda code with the default version for the choice. In that case, all manual changes are lost.

Allow or block requests

The template that is provided in this blog post creates a web ACL with three rules: AllowList, DenyList, and RateLimit. These rules are evaluated in order and determine which requests are allowed or blocked. The template also creates four IP sets, as shown in Figure 4, to hold the values of allowed or blocked IPs for both IPv4 and IPv6 address types.
 

Figure 4: The CloudFormation template creates IP sets in the AWS WAF console for allow and deny lists

If you want to always allow requests from certain clients, for example, trusted enterprise clients or server-side clients in cases where a large volume of requests is coming from the same IP address like a VPN gateway, add these IP addresses to the corresponding AllowList IP set. Similarly, if you want to always block traffic from certain IPs, add those IPs to the corresponding DenyList IP set.

Requests from sources that aren’t on the allow list or deny list are evaluated based on the volume of calls within 5 minutes, and sources that exceed the defined rate limit within 5 minutes are automatically blocked. If you want to change the defined rate limit, you can do so by updating the CloudFormation stack and providing a different value for the RateLimit parameter. Or you can modify this value directly in the AWS WAF console by editing the RateLimit rule.
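
For example, a stack update that changes only the rate limit and keeps every other parameter as-is might look like the following sketch. The stack name is a placeholder, and the update runs in us-east-1, where the stack was created:

# Include the IAM capabilities only if the stack required them when it was created
aws cloudformation update-stack \
  --region us-east-1 \
  --stack-name <cognito-proxy-stack-name> \
  --use-previous-template \
  --parameters \
      ParameterKey=RateLimit,ParameterValue=2000 \
      ParameterKey=AdvancedSecurityEnabled,UsePreviousValue=true \
      ParameterKey=AppClientSecret,UsePreviousValue=true \
      ParameterKey=LambdaS3BucketName,UsePreviousValue=true \
      ParameterKey=UserPoolId,UsePreviousValue=true \
      ParameterKey=UserPoolRegion,UsePreviousValue=true \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM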

Note: You can also use AWS Managed Rules for AWS WAF to add additional protection according to your security needs.

Integrate the client application with the proxy

You can integrate the client application with the proxy by changing the Endpoint in your client application to use the CloudFront distribution domain name. The domain name is located in the Outputs section of the CloudFormation stack.

You then need to edit your client-side code to forward calls to Amazon Cognito through the proxy endpoint. For example, if you’re using the Identity SDK, you should change this property as follows.

var poolData = {
  UserPoolId: '<USER-POOL-ID>',
  ClientId: '<APP-CLIENT-ID>',
  endpoint: 'https://<CF-DISTRIBUTION-DOMAIN>'
};

If you’re using AWS Amplify, you can change the endpoint in the aws-exports.js file by overriding the property aws_cognito_endpoint. Or, if you configure Amplify Auth in your code, you can provide the endpoint as follows.

Amplify.Auth.configure({
  userPoolId: '<USER-POOL-ID>',
  userPoolWebClientId: '<APP-CLIENT-ID>',
  endpoint: 'https://<CF-DISTRIBUTION-DOMAIN>'
});

If you have a mobile application that uses the Amplify mobile SDK, you can override the endpoint in your configuration as follows (don’t include AppClientSecret parameter in your configuration). Note that the Endpoint value contains the domain name only, not the full URL. This feature is available in the latest releases of the iOS and Android SDKs.

"CognitoUserPool": {
  "Default": {
    "AppClientId": "<APP-CLIENT-ID>",
    "Endpoint": "<CF-DISTRIBUTION-DOMAIN>",
    "PoolId": "<USER-POOL-ID>",
    "Region": "<REGION>"
  }
}

Warning: The Amplify CLI overwrites customizations to the awsconfiguration.json and amplifyconfiguration.json files if you do an amplify push or amplify pull operation. You must manually re-apply the Endpoint customization and remove the AppClientSecret if you use the CLI to modify your cloud backend.

Solution limitations

This solution has these limitations:

  • If advanced security features are enabled for the user pool, Amazon Cognito calculates risk for user events. If you use this proxy pattern, the IP address that is propagated in user events is the proxy IP address, which causes risk calculation for SignUp, ForgotPassword, and ResendCode events to be inaccurate. On the other hand, Sign-In events still have the client IP address propagated correctly, and risk calculation and adaptive authentication for Sign-In events aren’t affected by the use of this proxy.
  • This solution is not applicable to Hosted UI, OAuth 2.0 endpoints, and federation flows.
  • Authenticated and admin API operations (which require developer credentials or an access token) aren’t covered in this solution. These API operations don’t require a secret hash, and they use other authentication mechanisms.
  • Using this proxy solution with mobile apps requires an update to the application. The update might take time to be available in the relevant app store, and you must depend on end users to update their app. Plan ahead of time to use the solution with mobile apps.

How to detect unusual behavior

In this section, I share with you the steps to detect, quickly analyze and respond to unwanted clients. It’s a best practice to configure monitoring and alarms that help you to detect unexpected spikes in activity. Additionally, I show you how to be ready to quickly identify clients that are calling your resources at a higher-than-usual rate.

Monitor utilization compared to quotas

Amazon Cognito integrates with Service Quotas, which monitor service utilization compared to quotas. These metrics help you detect unexpected spikes and be alerted if you’re approaching your quota for a certain API category. Approaching your quota indicates that there is a risk that calls from legitimate users will be throttled.

To view utilization versus quota metrics

  1. In the Service Quotas console, choose Service Quotas, choose AWS Services, and then choose Amazon Cognito User Pools.
  2. Under Service quotas, enter the search term rate of. This shows you the list of API categories and the assigned quotas for each category.
     
    Figure 5: The Service Quotas console showing Amazon Cognito API category rate quotas

    Figure 5: The Service Quotas console showing Amazon Cognito API category rate quotas

  3. Choose any of the API categories to see utilization versus quota metrics.
     
    Figure 6: The Service Quotas console showing utilization vs quota metrics for Amazon Cognito UserCreation APIs

    Figure 6: The Service Quotas console showing utilization vs quota metrics for Amazon Cognito UserCreation APIs

  4. You can also create alarms from this page to alert you if utilization exceeds a predefined threshold. Alarms can be created starting at 50 percent utilization, and it's a good practice to create multiple alarms (for example, at the 50 percent, 70 percent, and 90 percent thresholds) so that you're warned progressively as usage climbs. A boto3 sketch for creating such an alarm follows this procedure.
     
    Figure 7: Creating an alarm for the utilization of the UserCreation API category

    Figure 7: Creating an alarm for the utilization of the UserCreation API category
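If you prefer to create these alarms programmatically, the following is a minimal boto3 sketch that creates a single alarm at 70 percent utilization by using CloudWatch metric math with the SERVICE_QUOTA function. The dimension values and the SNS topic ARN are placeholders; copy the exact dimensions from the AWS/Usage metric that the Service Quotas console shows for the API category you're monitoring.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder dimensions: copy the exact values from the usage metric shown in the
# Service Quotas console for the Amazon Cognito API category you want to monitor.
usage_metric = {
    "Namespace": "AWS/Usage",
    "MetricName": "CallCount",
    "Dimensions": [
        {"Name": "Service", "Value": "Amazon Cognito User Pools"},  # assumed value
        {"Name": "Type", "Value": "API"},
        {"Name": "Resource", "Value": "UserCreation"},              # API category
        {"Name": "Class", "Value": "None"},
    ],
}

cloudwatch.put_metric_alarm(
    AlarmName="cognito-usercreation-quota-70pct",
    # m1 is the raw call count; e1 converts it to a percentage of the applied quota.
    Metrics=[
        {"Id": "m1",
         "MetricStat": {"Metric": usage_metric, "Period": 60, "Stat": "Sum"},
         "ReturnData": False},
        {"Id": "e1",
         "Expression": "(m1 / SERVICE_QUOTA(m1)) * 100",
         "Label": "Utilization (%)",
         "ReturnData": True},
    ],
    ComparisonOperator="GreaterThanThreshold",
    Threshold=70,
    EvaluationPeriods=3,
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # placeholder topic
)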

Analyze CloudTrail logs with Athena

If you detect an unexpected spike in traffic to a certain API category, the next step is to identify the sources of this spike. You can do that by using CloudTrail logs or, after you deploy and use this proxy solution, CloudFront logs as sources of information. You can then analyze these logs by using Amazon Athena queries.

The first step is to create Athena tables from CloudTrail and CloudFront logs. You can do that by following these steps for CloudTrail and similar steps for CloudFront. After you have these tables created, you can create a set of queries that help you identify unwanted clients. Here are a couple of examples:

  • Use the following query to identify the clients that made the highest number of calls to the InitiateAuth API operation within the timeframe when you noticed the spike (change the eventtime values to reflect the attack window).
    SELECT sourceipaddress, count(*) AS call_count
    FROM "default"."cloudtrail_logs"
    WHERE eventname = 'InitiateAuth'
    AND eventtime >= '2021-03-01T00:00:00Z' AND eventtime < '2021-03-31T00:00:00Z'
    GROUP BY sourceipaddress
    ORDER BY call_count DESC
    LIMIT 10
    

  • Use the following query to identify the clients that come through CloudFront with the highest number of server-side (5xx) errors.
    SELECT count(*) AS error_count, request_ip
    FROM "default"."cloudfront_logs"
    WHERE status >= 500
    GROUP BY request_ip
    ORDER BY error_count DESC
    LIMIT 10
    

After you identify sources that are calling your service with a higher-than-usual rate, you can block these clients by adding them to the DenyList IP set that was created in AWS WAF.
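If you want to automate that last step, a minimal boto3 sketch such as the following appends the offending addresses to an existing IP set. The IP set name, ID, and scope are placeholders for the DenyList IP set in your AWS WAF configuration; for the CLOUDFRONT scope, the client must be created in the us-east-1 Region.

import boto3

# CLOUDFRONT-scoped IP sets must be managed from us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Placeholders: use the name, ID, and scope of the DenyList IP set in your deployment.
IP_SET_NAME = "DenyList"
IP_SET_ID = "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
SCOPE = "CLOUDFRONT"  # use "REGIONAL" for regional resources

def block_addresses(new_cidrs):
    """Add CIDR blocks (for example, ["203.0.113.5/32"]) to the deny-list IP set."""
    current = wafv2.get_ip_set(Name=IP_SET_NAME, Scope=SCOPE, Id=IP_SET_ID)
    addresses = set(current["IPSet"]["Addresses"]) | set(new_cidrs)
    wafv2.update_ip_set(
        Name=IP_SET_NAME,
        Scope=SCOPE,
        Id=IP_SET_ID,
        Addresses=sorted(addresses),
        LockToken=current["LockToken"],  # required for optimistic locking
    )

block_addresses(["203.0.113.5/32"])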

Analyze CloudTrail events with CloudWatch Logs Insights

It’s a best practice to configure your trail to send events to CloudWatch Logs. After you do this, you can interactively search and analyze your Amazon Cognito CloudTrail events with CloudWatch Logs Insights to identify errors, unusual activity, or unusual user behavior in your account.

Conclusion

In this post, I showed you how to implement a lightweight proxy to an Amazon Cognito endpoint, which can be used with an application client secret to control access to unauthenticated API operations. This approach, together with security tools such as AWS WAF, helps provide protection for these API operations from unwanted clients. I also showed you strategies to help detect an ongoing attack and quickly analyze, identify, and block unwanted clients.

For more strategies for DDoS mitigation, see the AWS Best Practices for DDoS Resiliency.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Implement tenant isolation for Amazon S3 and Aurora PostgreSQL by using ABAC

Post Syndicated from Ashutosh Upadhyay original https://aws.amazon.com/blogs/security/implement-tenant-isolation-for-amazon-s3-and-aurora-postgresql-by-using-abac/

In software as a service (SaaS) systems, which are designed to be used by multiple customers, isolating tenant data is a fundamental responsibility for SaaS providers. The practice of isolation of data in a multi-tenant application platform is called tenant isolation. In this post, we describe an approach you can use to achieve tenant isolation in Amazon Simple Storage Service (Amazon S3) and Amazon Aurora PostgreSQL-Compatible Edition databases by implementing attribute-based access control (ABAC). You can also adapt the same approach to achieve tenant isolation in other AWS services.

ABAC in Amazon Web Services (AWS), which uses tags to store attributes, offers advantages over the traditional role-based access control (RBAC) model. You can use fewer permissions policies, update your access control more efficiently as you grow, and last but not least, apply granular permissions for various AWS services. These granular permissions help you to implement an effective and coherent tenant isolation strategy for your customers and clients. Using the ABAC model helps you scale your permissions and simplify the management of granular policies. The ABAC model reduces the time and effort it takes to maintain policies that allow access to only the required resources.

The solution we present here uses the pool model of data partitioning. The pool model helps you avoid the higher costs of duplicated resources for each tenant and the specialized infrastructure code required to set up and maintain those copies.

Solution overview

In a typical customer environment where this solution is implemented, the tenant request for access might land at Amazon API Gateway, together with the tenant identifier, which in turn calls an AWS Lambda function. The Lambda function is envisaged to be operating with a basic Lambda execution role. This Lambda role should also have permissions to assume the tenant roles. As the request progresses, the Lambda function assumes the tenant role and makes the necessary calls to Amazon S3 or to an Aurora PostgreSQL-Compatible database. This solution helps you to achieve tenant isolation for objects stored in Amazon S3 and data elements stored in an Aurora PostgreSQL-Compatible database cluster.

Figure 1 shows the tenant isolation architecture for both Amazon S3 and Amazon Aurora PostgreSQL-Compatible databases.

Figure 1: Tenant isolation architecture diagram

Figure 1: Tenant isolation architecture diagram

As shown in the numbered diagram steps, the workflow for Amazon S3 tenant isolation is as follows:

  1. AWS Lambda sends an AWS Security Token Service (AWS STS) assume role request to AWS Identity and Access Management (IAM).
  2. IAM validates the request and returns the tenant role.
  3. Lambda sends a request to Amazon S3 with the assumed role.
  4. Amazon S3 sends the response back to Lambda.

The diagram also shows the workflow steps for tenant isolation for Aurora PostgreSQL-Compatible databases, as follows:

  1. Lambda sends an STS assume role request to IAM.
  2. IAM validates the request and returns the tenant role.
  3. Lambda sends a request to IAM for database authorization.
  4. IAM validates the request and returns the database password token.
  5. Lambda sends a request to the Aurora PostgreSQL-Compatible database with the database user and password token.
  6. Aurora PostgreSQL-Compatible database returns the response to Lambda.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  1. An AWS account for your workload.
  2. An Amazon S3 bucket.
  3. An Aurora PostgreSQL-Compatible cluster with a database created.

    Note: Make sure to note down the default master database user and password, and make sure that you can connect to the database from your desktop or from another server (for example, from Amazon Elastic Compute Cloud (Amazon EC2) instances).

  4. A security group and inbound rules that are set up to allow an inbound PostgreSQL TCP connection (Port 5432) from Lambda functions. This solution uses regular non-VPC Lambda functions, and therefore the security group of the Aurora PostgreSQL-Compatible database cluster should allow an inbound PostgreSQL TCP connection (Port 5432) from anywhere (0.0.0.0/0).

Make sure that you’ve completed the prerequisites before proceeding with the next steps.

Deploy the solution

The following sections describe how to create the IAM roles, IAM policies, and Lambda functions that are required for the solution. These steps also include guidelines on the changes that you’ll need to make to the prerequisite components Amazon S3 and the Aurora PostgreSQL-Compatible database cluster.

Step 1: Create the IAM policies

In this step, you create two IAM policies with the required permissions for Amazon S3 and the Aurora PostgreSQL database.

To create the IAM policies

  1. Open the AWS Management Console.
  2. Choose IAM, choose Policies, and then choose Create policy.
  3. Use the following JSON policy document to create the policy. Replace sts-ti-demo-<111122223333> in the resource ARN with the name of the S3 bucket in your account that you'll use for this walkthrough.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:Get*",
                    "s3:List*"
                ],
                "Resource": "arn:aws:s3:::sts-ti-demo-<111122223333>/${aws:PrincipalTag/s3_home}/*"
            }
        ]
    }
    

  4. Save the policy with the name sts-ti-demo-s3-access-policy.

    Figure 2: Create the IAM policy for Amazon S3 (sts-ti-demo-s3-access-policy)

    Figure 2: Create the IAM policy for Amazon S3 (sts-ti-demo-s3-access-policy)

  5. Open the AWS Management Console.
  6. Choose IAM, choose Policies, and then choose Create policy.
  7. Use the following JSON policy document to create a second policy. This policy grants an IAM role permission to connect to an Aurora PostgreSQL-Compatible database through a database user that is IAM authenticated. Replace the placeholders with the appropriate Region, account number, and cluster resource ID of the Aurora PostgreSQL-Compatible database cluster, respectively.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "rds-db:connect"
                ],
                "Resource": [
                    "arn:aws:rds-db:<us-west-2>:<111122223333>:dbuser:<cluster-ZTISAAAABBBBCCCCDDDDEEEEL4>/${aws:PrincipalTag/dbuser}"
                ]
            }
        ]
    }

  8. Save the policy with the name sts-ti-demo-dbuser-policy.

    Figure 3: Create the IAM policy for Aurora PostgreSQL database (sts-ti-demo-dbuser-policy)

    Figure 3: Create the IAM policy for Aurora PostgreSQL database (sts-ti-demo-dbuser-policy)

Note: Make sure that you use the cluster resource ID for the clustered database. However, if you intend to adapt this solution for your Aurora PostgreSQL-Compatible non-clustered database, you should use the instance resource ID instead.

Step 2: Create the IAM roles

In this step, you create two IAM roles for the two different tenants, and also apply the necessary permissions and tags.

To create the IAM roles

  1. In the IAM console, choose Roles, and then choose Create role.
  2. On the Trusted entities page, choose the EC2 service as the trusted entity.
  3. On the Permissions policies page, select sts-ti-demo-s3-access-policy and sts-ti-demo-dbuser-policy.
  4. On the Tags page, add two tags with the following keys and values.

    Tag key Tag value
    s3_home tenant1_home
    dbuser tenant1_dbuser
  5. On the Review screen, name the role assumeRole-tenant1, and then choose Save.
  6. In the IAM console, choose Roles, and then choose Create role.
  7. On the Trusted entities page, choose the EC2 service as the trusted entity.
  8. On the Permissions policies page, select sts-ti-demo-s3-access-policy and sts-ti-demo-dbuser-policy.
  9. On the Tags page, add two tags with the following keys and values.

    Tag key Tag value
    s3_home tenant2_home
    dbuser tenant2_dbuser
  10. On the Review screen, name the role assumeRole-tenant2, and then choose Save.
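If you'd rather script this step, the following boto3 sketch creates the two tenant roles with their ABAC tags and attaches the two policies from Step 1. The account number is a placeholder, and the initial EC2 trust policy mirrors the console steps above (Step 3 replaces it with a trust on the Lambda role).

import json
import boto3

iam = boto3.client("iam")
ACCOUNT_ID = "111122223333"  # placeholder

# Initial trust policy; Step 3 replaces this with a trust on sts-ti-demo-lambda-role.
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def create_tenant_role(tenant):
    role_name = f"assumeRole-{tenant}"
    # These tags drive the ${aws:PrincipalTag/...} conditions in the IAM policies.
    iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(ec2_trust_policy),
        Tags=[
            {"Key": "s3_home", "Value": f"{tenant}_home"},
            {"Key": "dbuser", "Value": f"{tenant}_dbuser"},
        ],
    )
    for policy in ("sts-ti-demo-s3-access-policy", "sts-ti-demo-dbuser-policy"):
        iam.attach_role_policy(
            RoleName=role_name,
            PolicyArn=f"arn:aws:iam::{ACCOUNT_ID}:policy/{policy}",
        )

for tenant in ("tenant1", "tenant2"):
    create_tenant_role(tenant)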

Step 3: Create and apply the IAM policies for the tenants

In this step, you create a policy and a role for the Lambda functions. You also create two separate tenant roles, and establish a trust relationship with the role that you created for the Lambda functions.

To create and apply the IAM policies for tenant1

  1. In the IAM console, choose Policies, and then choose Create policy.
  2. Use the following JSON policy document to create the policy. Replace the placeholder <111122223333> with your AWS account number.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Resource": [
                    "arn:aws:iam::<111122223333>:role/assumeRole-tenant1",
                    "arn:aws:iam::<111122223333>:role/assumeRole-tenant2"
                ]
            }
        ]
    }
    

  3. Save the policy with the name sts-ti-demo-assumerole-policy.
  4. In the IAM console, choose Roles, and then choose Create role.
  5. On the Trusted entities page, select the Lambda service as the trusted entity.
  6. On the Permissions policies page, select sts-ti-demo-assumerole-policy and AWSLambdaBasicExecutionRole.
  7. On the review screen, name the role sts-ti-demo-lambda-role, and then choose Save.
  8. In the IAM console, go to Roles, and enter assumeRole-tenant1 in the search box.
  9. Select the assumeRole-tenant1 role and go to the Trust relationship tab.
  10. Choose Edit the trust relationship, and replace the existing value with the following JSON document. Replace the placeholder <111122223333> with your AWS account number, and choose Update trust policy to save the policy.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<111122223333>:role/sts-ti-demo-lambda-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    

To verify that the policies are applied correctly for tenant1

In the IAM console, go to Roles, and enter assumeRole-tenant1 in the search box. Select the assumeRole-tenant1 role and on the Permissions tab, verify that sts-ti-demo-dbuser-policy and sts-ti-demo-s3-access-policy appear in the list of policies, as shown in Figure 4.

Figure 4: The assumeRole-tenant1 Permissions tab

Figure 4: The assumeRole-tenant1 Permissions tab

On the Trust relationships tab, verify that sts-ti-demo-lambda-role appears under Trusted entities, as shown in Figure 5.

Figure 5: The assumeRole-tenant1 Trust relationships tab

Figure 5: The assumeRole-tenant1 Trust relationships tab

On the Tags tab, verify that the following tags appear, as shown in Figure 6.

Tag key Tag value
dbuser tenant1_dbuser
s3_home tenant1_home

 

Figure 6: The assumeRole-tenant1 Tags tab

Figure 6: The assumeRole-tenant1 Tags tab

To create and apply the IAM policies for tenant2

  1. In the IAM console, go to Roles, and enter assumeRole-tenant2 in the search box.
  2. Select the assumeRole-tenant2 role and go to the Trust relationship tab.
  3. Edit the trust relationship, replacing the existing value with the following JSON document. Replace the placeholder <111122223333> with your AWS account number.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<111122223333>:role/sts-ti-demo-lambda-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    

  4. Choose Update trust policy to save the policy.

To verify that the policies are applied correctly for tenant2

In the IAM console, go to Roles, and enter assumeRole-tenant2 in the search box. Select the assumeRole-tenant2 role and on the Permissions tab, verify that sts-ti-demo-dbuser-policy and sts-ti-demo-s3-access-policy appear in the list of policies, as you did for tenant1. On the Trust relationships tab, verify that sts-ti-demo-lambda-role appears under Trusted entities.

On the Tags tab, verify that the following tags appear, as shown in Figure 7.

Tag key Tag value
dbuser tenant2_dbuser
s3_home tenant2_home
Figure 7: The assumeRole-tenant2 Tags tab

Figure 7: The assumeRole-tenant2 Tags tab

Step 4: Set up an Amazon S3 bucket

Next, you’ll set up an S3 bucket that you’ll use as part of this solution. You can either create a new S3 bucket or re-purpose an existing one. The following steps show you how to create two user homes (that is, S3 prefixes, which are also known as folders) in the S3 bucket.

  1. In the AWS Management Console, go to Amazon S3 and select the S3 bucket you want to use.
  2. Create two prefixes (folders) with the names tenant1_home and tenant2_home.
  3. Place two test objects with the names tenant.info-tenant1_home and tenant.info-tenant2_home in the prefixes that you just created, respectively.
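A quick boto3 sketch that creates the same prefixes and test objects follows; the bucket name is a placeholder for the bucket you chose above.

import boto3

s3 = boto3.client("s3")
BUCKET = "sts-ti-demo-111122223333"  # placeholder: your bucket name

for tenant in ("tenant1", "tenant2"):
    prefix = f"{tenant}_home"
    # Writing the object implicitly creates the prefix (folder) in the S3 console view.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"{prefix}/tenant.info-{prefix}",
        Body=f"sample data for {tenant}".encode("utf-8"),
    )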

Step 5: Set up test objects in Aurora PostgreSQL-Compatible database

In this step, you create a table in Aurora PostgreSQL-Compatible Edition, insert tenant metadata, create a row level security (RLS) policy, create tenant users, and grant permission for testing purposes.

To set up Aurora PostgreSQL-Compatible

  1. Connect to Aurora PostgreSQL-Compatible through a client of your choice, using the master database user and password that you obtained at the time of cluster creation.
  2. Run the following commands to create a table for testing purposes and to insert a couple of testing records.
    CREATE TABLE tenant_metadata (
        tenant_id VARCHAR(30) PRIMARY KEY,
        email     VARCHAR(50) UNIQUE,
        status    VARCHAR(10) CHECK (status IN ('active', 'suspended', 'disabled')),
        tier      VARCHAR(10) CHECK (tier IN ('gold', 'silver', 'bronze')));
    
    INSERT INTO tenant_metadata (tenant_id, email, status, tier) 
    VALUES ('tenant1_dbuser','tenant1@example.com','active','gold');
    INSERT INTO tenant_metadata (tenant_id, email, status, tier) 
    VALUES ('tenant2_dbuser','tenant2@example.com','suspended','silver');
    ALTER TABLE tenant_metadata ENABLE ROW LEVEL SECURITY;
    

  3. Run the following command to query the newly created database table.
    SELECT * FROM tenant_metadata;
    

    Figure 8: The tenant_metadata table content

    Figure 8: The tenant_metadata table content

  4. Run the following command to create the row level security policy.
    CREATE POLICY tenant_isolation_policy ON tenant_metadata
    USING (tenant_id = current_user);
    

  5. Run the following commands to establish two tenant users and grant them the necessary permissions.
    CREATE USER tenant1_dbuser WITH LOGIN;
    CREATE USER tenant2_dbuser WITH LOGIN;
    GRANT rds_iam TO tenant1_dbuser;
    GRANT rds_iam TO tenant2_dbuser;
    
    GRANT select, insert, update, delete ON tenant_metadata to tenant1_dbuser, tenant2_dbuser;
    

  6. Run the following commands to verify the newly created tenant users.
    SELECT usename AS role_name,
      CASE
         WHEN usesuper AND usecreatedb THEN
           CAST('superuser, create database' AS pg_catalog.text)
         WHEN usesuper THEN
            CAST('superuser' AS pg_catalog.text)
         WHEN usecreatedb THEN
            CAST('create database' AS pg_catalog.text)
         ELSE
            CAST('' AS pg_catalog.text)
      END role_attributes
    FROM pg_catalog.pg_user
    WHERE usename LIKE 'tenant%'
    ORDER BY role_name desc;
    

    Figure 9: Verify the newly created tenant users output

    Figure 9: Verify the newly created tenant users output

Step 6: Set up the AWS Lambda functions

Next, you’ll create two Lambda functions for Amazon S3 and Aurora PostgreSQL-Compatible. You also need to create a Lambda layer for the Python package PG8000.

To set up the Lambda function for Amazon S3

  1. Navigate to the Lambda console, and choose Create function.
  2. Choose Author from scratch. For Function name, enter sts-ti-demo-s3-lambda.
  3. For Runtime, choose Python 3.7.
  4. Change the default execution role to Use an existing role, and then select sts-ti-demo-lambda-role from the drop-down list.
  5. Keep Advanced settings as the default value, and then choose Create function.
  6. Copy the following Python code into the lambda_function.py file that is created in your Lambda function.
    import json
    import os
    import time 
    
    def lambda_handler(event, context):
        import boto3
        bucket_name     =   os.environ['s3_bucket_name']
    
        try:
            login_tenant_id =   event['login_tenant_id']
            data_tenant_id  =   event['s3_tenant_home']
        except:
            return {
                'statusCode': 400,
                'body': 'Error in reading parameters'
            }
    
        prefix_of_role  =   'assumeRole'
        file_name       =   'tenant.info' + '-' + data_tenant_id
    
        # create an STS client object that represents a live connection to the STS service
        sts_client = boto3.client('sts')
        account_of_role = sts_client.get_caller_identity()['Account']
        role_to_assume  =   'arn:aws:iam::' + account_of_role + ':role/' + prefix_of_role + '-' + login_tenant_id
    
        # Call the assume_role method of the STSConnection object and pass the role
        # ARN and a role session name.
        RoleSessionName = 'AssumeRoleSession' + str(time.time()).split(".")[0] + str(time.time()).split(".")[1]
        try:
            assumed_role_object = sts_client.assume_role(
                RoleArn         = role_to_assume, 
                RoleSessionName = RoleSessionName, 
                DurationSeconds = 900) #15 minutes
    
        except:
            return {
                'statusCode': 400,
                'body': 'Error in assuming the role ' + role_to_assume + ' in account ' + account_of_role
            }
    
        # From the response that contains the assumed role, get the temporary 
        # credentials that can be used to make subsequent API calls
        credentials=assumed_role_object['Credentials']
        
        # Use the temporary credentials that AssumeRole returns to make a connection to Amazon S3  
        s3_resource=boto3.resource(
            's3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
    
        try:
            obj = s3_resource.Object(bucket_name, data_tenant_id + "/" + file_name)
            return {
                'statusCode': 200,
                'body': obj.get()['Body'].read()
            }
        except:
            return {
                'statusCode': 400,
                'body': 'error in reading s3://' + bucket_name + '/' + data_tenant_id + '/' + file_name
            }
    

  7. Under Basic settings, edit Timeout to increase the timeout to 29 seconds.
  8. Edit Environment variables to add a key called s3_bucket_name, with the value set to the name of your S3 bucket.
  9. Configure a new test event with the following JSON document, and save it as testEvent.
    {
      "login_tenant_id": "tenant1",
      "s3_tenant_home": "tenant1_home"
    }
    

  10. Choose Test to test the Lambda function with the newly created test event testEvent. You should see status code 200, and the body of the results should contain the data for tenant1.

    Figure 10: The result of running the sts-ti-demo-s3-lambda function

    Figure 10: The result of running the sts-ti-demo-s3-lambda function

Next, create another Lambda function for Aurora PostgreSQL-Compatible. To do this, you first need to create a new Lambda layer.

To set up the Lambda layer

  1. Use the following commands to create a .zip file for Python package pg8000.

    Note: This example is created by using an Amazon EC2 instance running the Amazon Linux 2 Amazon Machine Image (AMI). If you’re using another version of Linux or don’t have the Python 3 or pip3 packages installed, install them by using the following commands.

    sudo yum update -y 
    sudo yum install python3 
    sudo pip3 install pg8000 -t build/python/
    cd build 
    sudo zip -r pg8000.zip python/
    

  2. Download the pg8000.zip file you just created to your local desktop machine or into an S3 bucket location.
  3. Navigate to the Lambda console, choose Layers, and then choose Create layer.
  4. For Name, enter pgdb, and then upload pg8000.zip from your local desktop machine or from the S3 bucket location.

    Note: For more details, see the AWS documentation for creating and sharing Lambda layers.

  5. For Compatible runtimes, choose python3.6, python3.7, and python3.8, and then choose Create.

To set up the Lambda function with the newly created Lambda layer

  1. In the Lambda console, choose Function, and then choose Create function.
  2. Choose Author from scratch. For Function name, enter sts-ti-demo-pgdb-lambda.
  3. For Runtime, choose Python 3.7.
  4. Change the default execution role to Use an existing role, and then select sts-ti-demo-lambda-role from the drop-down list.
  5. Keep Advanced settings as the default value, and then choose Create function.
  6. Choose Layers, and then choose Add a layer.
  7. Choose Custom layer, select pgdb with Version 1 from the drop-down list, and then choose Add.
  8. Copy the following Python code into the lambda_function.py file that was created in your Lambda function.
    import boto3
    import pg8000
    import os
    import time
    import ssl
    
    connection = None
    assumed_role_object = None
    rds_client = None
    
    def assume_role(event):
        global assumed_role_object
        try:
            RolePrefix  = os.environ.get("RolePrefix")
            LoginTenant = event['login_tenant_id']
        
            # create an STS client object that represents a live connection to the STS service
            sts_client      = boto3.client('sts')
            # Prepare input parameters
            role_to_assume  = 'arn:aws:iam::' + sts_client.get_caller_identity()['Account'] + ':role/' + RolePrefix + '-' + LoginTenant
            RoleSessionName = 'AssumeRoleSession' + str(time.time()).split(".")[0] + str(time.time()).split(".")[1]
        
            # Call the assume_role method of the STSConnection object and pass the role ARN and a role session name.
            assumed_role_object = sts_client.assume_role(
                RoleArn         =   role_to_assume, 
                RoleSessionName =   RoleSessionName,
                DurationSeconds =   900) #15 minutes 
            
            return assumed_role_object['Credentials']
        except Exception as e:
            print({'Role assumption failed!': {'role': role_to_assume, 'Exception': 'Failed due to :{0}'.format(str(e))}})
            return None
    
    def get_connection(event):
        global rds_client
        creds = assume_role(event)
    
        try:
            # create an RDS client using assumed credentials
            rds_client = boto3.client('rds',
                aws_access_key_id       = creds['AccessKeyId'],
                aws_secret_access_key   = creds['SecretAccessKey'],
                aws_session_token       = creds['SessionToken'])
    
            # Read the environment variables and event parameters
            DBEndPoint   = os.environ.get('DBEndPoint')
            DatabaseName = os.environ.get('DatabaseName')
            DBUserName   = event['dbuser']
    
            # Generates an auth token used to connect to a database with IAM credentials.
            pwd = rds_client.generate_db_auth_token(
                DBHostname=DBEndPoint, Port=5432, DBUsername=DBUserName, Region='us-west-2'
            )
    
            ssl_context             = ssl.SSLContext()
            ssl_context.verify_mode = ssl.CERT_REQUIRED
            ssl_context.load_verify_locations('rds-ca-2019-root.pem')
    
            # create a database connection
            conn = pg8000.connect(
                host        =   DBEndPoint,
                user        =   DBUserName,
                database    =   DatabaseName,
                password    =   pwd,
                ssl_context =   ssl_context)
            
            return conn
        except Exception as e:
            print ({'Database connection failed!': {'Exception': "Failed due to :{0}".format(str(e))}})
            return None
    
    def execute_sql(connection, query):
        try:
            cursor = connection.cursor()
            cursor.execute(query)
            columns = [str(desc[0]) for desc in cursor.description]
            results = []
            for res in cursor:
                results.append(dict(zip(columns, res)))
            cursor.close()
            retry = False
            return results    
        except Exception as e:
            print ({'Execute SQL failed!': {'Exception': "Failed due to :{0}".format(str(e))}})
            return None
    
    
    def lambda_handler(event, context):
        global connection
        try:
            connection = get_connection(event)
            if connection is None:
                return {'statusCode': 400, "body": "Error in database connection!"}
    
            response = {'statusCode':200, 'body': {
                'db & user': execute_sql(connection, 'SELECT CURRENT_DATABASE(), CURRENT_USER'), \
                'data from tenant_metadata': execute_sql(connection, 'SELECT * FROM tenant_metadata')}}
            return response
        except Exception as e:
            try:
                connection.close()
            except Exception as e:
                connection = None
            return {'statusCode': 400, 'statusDesc': 'Error!', 'body': 'Unhandled error in Lambda Handler.'}
    

  9. Add a certificate file called rds-ca-2019-root.pem into the Lambda project root by downloading it from https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem.
  10. Under Basic settings, edit Timeout to increase the timeout to 29 seconds.
  11. Edit Environment variables to add the following keys and values.

    Key Value
    DBEndPoint Enter the database cluster endpoint URL
    DatabaseName Enter the database name
    RolePrefix assumeRole
    Figure 11: Example of environment variables display

    Figure 11: Example of environment variables display

  12. Configure a new test event with the following JSON document, and save it as testEvent.
    {
      "login_tenant_id": "tenant1",
      "dbuser": "tenant1_dbuser"
    }
    

  13. Choose Test to test the Lambda function with the newly created test event testEvent. You should see status code 200, and the body of the results should contain the data for tenant1.

    Figure 12: The result of running the sts-ti-demo-pgdb-lambda function

    Figure 12: The result of running the sts-ti-demo-pgdb-lambda function

Step 7: Perform negative testing of tenant isolation

You already performed positive tests of tenant isolation during the Lambda function creation steps. However, it’s also important to perform some negative tests to verify the robustness of the tenant isolation controls.

To perform negative tests of tenant isolation

  1. In the Lambda console, navigate to the sts-ti-demo-s3-lambda function. Update the test event to the following, to mimic a scenario where tenant1 attempts to access other tenants’ objects.
    {
      "login_tenant_id": "tenant1",
      "s3_tenant_home": "tenant2_home"
    }
    

  2. Choose Test to test the Lambda function with the updated test event. You should see status code 400, and the body of the results should contain an error message.

    Figure 13: The results of running the sts-ti-demo-s3-lambda function (negative test)

    Figure 13: The results of running the sts-ti-demo-s3-lambda function (negative test)

  3. Navigate to the sts-ti-demo-pgdb-lambda function and update the test event to the following, to mimic a scenario where tenant1 attempts to access other tenants’ data elements.
    {
      "login_tenant_id": "tenant1",
      "dbuser": "tenant2_dbuser"
    }
    

  4. Choose Test to test the Lambda function with the updated test event. You should see status code 400, and the body of the results should contain an error message.

    Figure 14: The results of running the sts-ti-demo-pgdb-lambda function (negative test)

    Figure 14: The results of running the sts-ti-demo-pgdb-lambda function (negative test)
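If you prefer to drive these negative tests from a script instead of the console test events, a minimal boto3 sketch such as the following invokes both functions with cross-tenant payloads. The function names match the ones created earlier in this walkthrough.

import json
import boto3

lambda_client = boto3.client("lambda")

negative_tests = [
    ("sts-ti-demo-s3-lambda",   {"login_tenant_id": "tenant1", "s3_tenant_home": "tenant2_home"}),
    ("sts-ti-demo-pgdb-lambda", {"login_tenant_id": "tenant1", "dbuser": "tenant2_dbuser"}),
]

for function_name, payload in negative_tests:
    response = lambda_client.invoke(
        FunctionName=function_name,
        Payload=json.dumps(payload).encode("utf-8"),
    )
    body = json.loads(response["Payload"].read())
    # Each cross-tenant request should come back with statusCode 400.
    print(function_name, body.get("statusCode"), body.get("body"))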

Cleaning up

To de-clutter your environment, remove the roles, policies, Lambda functions, Lambda layers, Amazon S3 prefixes, database users, and the database table that you created as part of this exercise. You can choose to delete the S3 bucket, as well as the Aurora PostgreSQL-Compatible database cluster that we mentioned in the Prerequisites section, to avoid incurring future charges.

Update the security group of the Aurora PostgreSQL-Compatible database cluster to remove the inbound rule that you added to allow a PostgreSQL TCP connection (Port 5432) from anywhere (0.0.0.0/0).

Conclusion

By taking advantage of attribute-based access control (ABAC) in IAM, you can more efficiently implement tenant isolation in SaaS applications. The solution we presented here helps to achieve tenant isolation in Amazon S3 and Aurora PostgreSQL-Compatible databases by using ABAC with the pool model of data partitioning.

If you run into any issues, you can use Amazon CloudWatch and AWS CloudTrail to troubleshoot. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ashutosh Upadhyay

Ashutosh works as a Senior Security, Risk, and Compliance Consultant in AWS Professional Services (GFS) and is based in Singapore. Ashutosh started his career as a developer and has been working in the security, risk, and compliance field for many years. Ashutosh loves to spend his free time learning and playing with new technologies.

Author

Chirantan Saha

Chirantan works as a DevOps Engineer in AWS Professional Services (GFS) and is based in Singapore. Chirantan has 20 years of development and DevOps experience working for large banking, financial services, and insurance organizations. Outside of work, Chirantan enjoys traveling, spending time with family, and helping his children with their science projects.

How to create auto-suppression rules in AWS Security Hub

Post Syndicated from BK Das original https://aws.amazon.com/blogs/security/how-to-create-auto-suppression-rules-in-aws-security-hub/

AWS Security Hub gives you a comprehensive view of your security alerts and security posture across your AWS accounts. With Security Hub, you have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services. Security Hub lets you assign workflow statuses to these findings, which are NEW, NOTIFIED, SUPPRESSED, or RESOLVED. These statuses allow you to categorize which findings are open and need your attention.

In this blog post, we show how you can create automated suppression rules for specific types of findings in AWS Security Hub, such as ones that are an accepted risk by design, or have a compensating control. By automatically suppressing these findings that don’t require follow-up action from your security team, you can concentrate on investigating and remediating findings that are not yet resolved.

As an example of a finding that you may want to suppress, suppose that your development environment doesn’t need to have Amazon Virtual Private Cloud (VPC) Flow Logs enabled because it does not contain any sensitive data (that is, it is an accepted risk). However, your production environment must have VPC Flow Logs enabled. You can use this solution to automatically suppress the development environment findings regarding VPC Flow Logs not being enabled. Then, you can focus on responding to and remediating findings regarding the production environment VPC Flow Logs that are not enabled.

This solution uses an Amazon EventBridge rule to evaluate Security Hub findings based on predefined filters. An AWS Lambda function is the target of the rule, and is triggered to perform the suppression. The Lambda function calls the Security Hub BatchUpdateFindings API action to set the finding of interest to the SUPPRESSED status.

Prerequisites

This solution assumes that you have Security Hub and AWS Config enabled in your administrator and member AWS accounts. AWS Config is required to execute the rules that will generate the findings. You will also need to enable the AWS Foundational Security Best Practices standard, because the examples in this post rely on those findings. You should ensure that you have configured your administrator account to aggregate your Security Hub findings from across your AWS accounts.

Solution overview

In Security Hub, the status of an investigation of a finding is tracked using the workflow status attribute. The workflow status for new findings is initially set to NEW. You can change the workflow status of a finding either by selecting it in the AWS Security Hub console, or by automating the change of workflow status by using AWS CLI or Security Hub API. After the owner of the finding’s resource is notified to take action, you can set the workflow status to NOTIFIED. After a finding is remediated, you can set the workflow status to RESOLVED. If the finding is not a concern for your given environment and does not require any action, then you can set the workflow status to SUPPRESSED.

In this solution, we show you how to automatically set the workflow status to SUPPRESSED for expected findings, by using EventBridge event patterns that trigger on Security Hub findings that match your defined criteria. The event pattern can match on fields of the findings such as account number, AWS Region, and Amazon Resource Names (ARNs). The Lambda function triggers on findings that match all defined criteria, and then sets the workflow status to SUPPRESSED for all matched findings using the BatchUpdateFindings Security Hub API action.

Solution architecture

 

Figure 1: Solution architecture overview

Figure 1: Solution architecture overview

Figure 1 shows the administrator account aggregating the Security Hub findings from the member accounts.

  1. Security Hub generates findings in the member accounts, then forwards the findings to the administrator account to be evaluated.
  2. In the administrator account, Security Hub evaluates every finding (whether generated or forwarded) against EventBridge rules.
  3. If a finding satisfies any of the defined EventBridge rule conditions, EventBridge triggers a Lambda function in the same Region. The EventBridge event bus delivers the finding to the Lambda function.
  4. The Lambda function in the administrator account performs the finding suppression evaluation, and sets the Security Hub workflow status of the finding to SUPPRESSED.

This architecture uses one Lambda function per Region. You can group together multiple suppression rules into the same EventBridge pattern when they apply to the same group of AWS accounts. You can also configure multiple separate EventBridge event patterns when a suppression rule shouldn’t apply to an account.

Implementation

First, we show how to write the EventBridge event pattern. You use the CDK to define the event rule and pattern. The following example pattern matches Security Hub findings that originate in the development accounts for the EC2.6 control (VPC flow logs not enabled), and it filters on new findings only.

In the following example, replace <account-id-1> and <account-id-2> with your own information. The snippets assume that the CDK aws_events and aws_events_targets modules are imported and aliased as events and lambda_targets, respectively.

event_pattern_obj = events.EventPattern(
            source=["aws.securityhub"],
            detail_type=["Security Hub Findings - Imported"],
            detail= {
                "findings": {
                    "GeneratorId": [
                        "aws-foundational-security-best-practices/v/1.0.0/EC2.6"
                    ],
                    "AwsAccountId": [
                        "<account-id-1>",
                        "<account-id-2>"
                    ],
                    "Workflow": {
                        "Status": [
                            "NEW"
                        ]
                    }
                }
            }
        )

Second, you define the EventBridge rule that will match on the defined pattern.

        vpc_flow_log_dev_account_event_rule = events.Rule(
                    self,
                    'vpc-flow-logs-development-account-eventbridge-rule',
                    description='VPC flow logs in development account finding suppression',
                    rule_name='vpc-flow-logs-development-account-sechub-rule',
                    event_pattern=event_pattern_obj
                    )

Finally, the EventBridge rule triggers the suppression Lambda function.

 vpc_flow_log_dev_account_event_rule.add_target(lambda_targets.LambdaFunction(security_hub_suppression_lambda))
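The CDK snippets above attach a suppression Lambda function as the rule target but don't show the function code itself. A minimal sketch of that handler, assuming the matched findings arrive in the EventBridge event under detail.findings, might look like the following.

import boto3

securityhub = boto3.client("securityhub")

def lambda_handler(event, context):
    # EventBridge delivers the matched Security Hub findings under detail.findings.
    findings = event.get("detail", {}).get("findings", [])
    identifiers = [
        {"Id": f["Id"], "ProductArn": f["ProductArn"]} for f in findings
    ]

    # BatchUpdateFindings accepts up to 100 finding identifiers per call.
    for i in range(0, len(identifiers), 100):
        securityhub.batch_update_findings(
            FindingIdentifiers=identifiers[i:i + 100],
            Workflow={"Status": "SUPPRESSED"},
            Note={
                "Text": "Suppressed automatically: accepted risk for this environment.",
                "UpdatedBy": "sechub-finding-suppression",
            },
        )

    return {"suppressed": len(identifiers)}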

Solution deployment

You can deploy the solution through either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

To deploy the solution by using the AWS Management Console

In your security account, launch the template by choosing the following Launch Stack button.
Select the Launch Stack button to launch the template

To deploy the solution by using the AWS CDK

You can find the latest code on GitHub, where you can also contribute to the sample code. The following commands show how to deploy the solution by using the AWS CDK. First, cdk bootstrap initializes your environment and uploads the Lambda assets to Amazon Simple Storage Service (Amazon S3). Then, you deploy the solution to your account. For <generator_ids>, specify the Security Hub GeneratorId values that you want to suppress (for example, aws-foundational-security-best-practices/v/1.0.0/EC2.6). For <account_ids>, specify the account number, or a comma-separated list of account numbers, that you want the suppression rule to apply to.

cdk bootstrap

cdk deploy sechub-finding-suppression --parameters GeneratorIds=<generator_ids> --parameters AccountNumbers=<account_ids>

To test the solution

  1. Create a VPC that does not have flow logs enabled. We have included a test VPC that you can deploy with the following command:
    cdk deploy vpc-test-suppression
    

  2. Verify that the Security Hub finding EC2.6 has been suppressed in the parent account and the target account. You might need to wait a few minutes for the AWS Config recorder to detect the newly created resource, and you might also need to manually trigger an evaluation of the following AWS Config rule:
    securityhub-vpc-flow-logs-enabled-* 
    

  3. After verifying the suppression, delete the test VPC you created to test the suppression rule:
    cdk destroy vpc-test-suppression
    

Next steps

You can configure EventBridge rules and patterns to suppress all of your findings that are an accepted risk by design or that have a compensating control. For example, if you perform IAM authentication by using Amazon RDS Proxy, you could consider suppressing the control [RDS.10] IAM authentication should be configured for RDS instances. You can also consider creating event patterns that filter based on resource tags, such as filtering VPCs based on tags rather than account numbers for [EC2.6] VPC flow logging should be enabled in all VPCs.
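As a sketch of the tag-based approach, the following event pattern matches EC2.6 findings only for resources that carry a hypothetical Environment=development tag. It relies on EventBridge matching fields inside the finding's Resources array; the tag key and value are assumptions about how your resources are tagged.

# Hypothetical: suppress EC2.6 findings only for resources tagged Environment=development.
tag_based_event_pattern = events.EventPattern(
    source=["aws.securityhub"],
    detail_type=["Security Hub Findings - Imported"],
    detail={
        "findings": {
            "GeneratorId": [
                "aws-foundational-security-best-practices/v/1.0.0/EC2.6"
            ],
            "Resources": {
                "Tags": {
                    "Environment": ["development"]
                }
            },
            "Workflow": {"Status": ["NEW"]}
        }
    }
)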

Summary

In this blog post, we showed how you can automatically suppress specific findings by using the Security Hub BatchUpdateFindings API action. We showed you how to configure EventBridge patterns and rules in order to trigger a Lambda function that calls this API action to suppress your expected findings. After you follow the steps in this blog post for automatic Security Hub suppression, your console view in Security Hub will only show findings that are not suppressed.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

BK Das

BK works as a Senior Security Architect with AWS Professional Services. He loves to solve security problems for his customers and to help them feel comfortable within AWS. Outside of work, BK loves to play computer games and go on long drives.

Author

Josh Joy

Josh is a Senior Security Consultant with the AWS Global Security Practice, a part of our Worldwide Professional Services Organization. Josh helps customers improve their security posture as they migrate their most sensitive workloads to AWS. Josh enjoys diving deep and working backwards in order to help customers achieve positive outcomes.

Author

Moumita Saha

Moumita is a Security Consultant with AWS Professional Services working to help enterprise customers secure their workloads in the cloud. She assists customers in secure cloud migration, designing automated solutions to protect against cyber threats in the cloud. She is passionate about cyber security, data privacy, and new, emerging cloud-security technologies.

Configure SAML single sign-on for Kibana with AD FS on Amazon Elasticsearch Service

Post Syndicated from Sajeev Attiyil Bhaskaran original https://aws.amazon.com/blogs/security/configure-saml-single-sign-on-for-kibana-with-ad-fs-on-amazon-elasticsearch-service/

It’s a common use case for customers to integrate identity providers (IdPs) with Amazon Elasticsearch Service (Amazon ES) to achieve single sign-on (SSO) with Kibana. This integration makes it possible for users to leverage their existing identity credentials and offers administrators a single source of truth for user and permissions management. In this blog post, we’ll discuss how you can configure Security Assertion Markup Language (SAML) authentication for Kibana by using Amazon ES and Microsoft Active Directory Federation Services (AD FS).

Amazon ES now natively supports SSO authentication that uses the SAML protocol. With SAML authentication for Kibana, users can integrate directly with their existing third-party IdPs, such as Okta, Ping Identity, OneLogin, Auth0, AD FS, AWS Single Sign-on, and Azure Active Directory. SAML authentication for Kibana is powered by Open Distro for Elasticsearch, an Apache 2.0-licensed distribution of Elasticsearch, and is available to all Amazon ES customers who have enabled fine-grained access controls.

When you set up SAML authentication with Kibana, you can configure authentication that uses either service provider (SP)-initiated SSO or IdP-initiated SSO. The SP-initiated SSO flow occurs when a user directly accesses any SAML-configured Kibana endpoint, at which time Amazon ES redirects the user to their IdP for authentication, followed by a redirect back to Amazon ES after successful authentication. An IdP-initiated SSO flow typically occurs when a user chooses a link that first initiates the sign-in flow at the IdP, skipping the redirect between Amazon ES and the IdP. This blog post will focus on the SAML SP-initiated SSO flow.

Prerequisites

To complete this walkthrough, you must have the following:

  • An Amazon ES domain with fine-grained access control enabled (SAML authentication for Kibana requires it).
  • A Microsoft AD FS server that is joined to your Active Directory domain and reachable from the network where your users sign in.
  • An Active Directory group (this walkthrough uses a group named admins) whose members will sign in to Kibana.

Solution overview

For the solution presented in this post, you use your existing AD FS as an IdP for the user’s authentication. The SAML federation uses a claim-based authentication model in which user attributes (in this case stored in Active Directory) are passed from the IdP (AD FS) to the SP (Kibana).

Let’s walk through how a user would use the SAML protocol to access Amazon ES Kibana (the SP) while using AD FS as the IdP. In Figure 1, the user authentication request comes from an on-premises network, which is connected to Amazon VPC through a VPN connection—in this case, this could also be over AWS Direct Connect. The Amazon ES domain and AD FS are created in the same VPC.

Figure 1: A high-level view of a SAML transaction between Amazon ES and AD FS

Figure 1: A high-level view of a SAML transaction between Amazon ES and AD FS

The initial sign-in flow is as follows:

  1. Open a browser on the on-premises computer and navigate to the Kibana endpoint for your Amazon ES domain in the VPC.
  2. Amazon ES generates a SAML authentication request for the user and redirects it back to the browser.
  3. The browser redirects the SAML authentication request to AD FS.
  4. AD FS parses the SAML request and prompts user to enter credentials.
    1. User enters credentials and AD FS authenticates the user with Active Directory.
    2. After successful authentication, AD FS generates a SAML response and returns the encoded SAML response to the browser. The SAML response contains the destination (the Assertion Consumer Service (ACS) URL), the authentication response issuer (the AD FS entity ID URL), the digital signature, and the claim (which user is authenticated with AD FS, the user’s NameID, the group, the attribute used in SAML assertions, and so on).
  5. The browser sends the SAML response to the Kibana ACS URL, and then Kibana redirects to Amazon ES.
  6. Amazon ES validates the SAML response. If all the validations pass, you are redirected to the Kibana front page. Authorization is performed by Kibana based on the role mapped to the user. The role mapping is performed based on attributes of the SAML assertion being consumed by Kibana and Amazon ES.

Deploy the solution

Now let’s walk through the steps to set up SAML authentication for Kibana single sign-on by using Amazon ES and Microsoft AD FS.

Enable SAML for Amazon Elasticsearch Service

The first step in the configuration setup process is to enable SAML authentication in the Amazon ES domain.

To enable SAML for Amazon ES

  1. Sign in to the Amazon ES console and choose any existing Amazon ES domain that meets the criteria described in the Prerequisites section of this post.
  2. Under Actions, select Modify Authentication.
  3. Select the Enable SAML authentication check box.
    Figure 2: Enable SAML authentication

    Figure 2: Enable SAML authentication

    When you enable SAML, it automatically creates and displays the different URLs that are required to configure SAML support in your IdP.

    Figure 3: URLs for configuring the IdP

    Figure 3: URLs for configuring the IdP

  4. Look under Configure your Identity Provider (IdP), and note down the URL values for Service provider entity ID and SP-initiated SSO URL.

Set up and configure AD FS

During the SAML authentication process, the browser receives the SAML assertion token from AD FS and forwards it to the SP. In order to pass the claims to the Amazon ES domain, AD FS (the claims provider) and the Amazon ES domain (the relying party) have to establish a trust between them. Then you define the rules for what type of claims AD FS needs to send to the Amazon ES domain. The Amazon ES domain authorizes the user with internal security roles or backend roles, according to the claims in the token.

To configure Amazon ES as a relying party in AD FS

  1. Sign in to the AD FS server. In Server Manager, choose Tools, and then choose AD FS Management.
  2. In the AD FS management console, open the context (right-click) menu for Relying Party Trust, and then choose Add Relying Party Trust.

    Figure 4: Set up a relying party trust

    Figure 4: Set up a relying party trust

  3. In the Add Relying Party Trust Wizard, select Claims aware, and then choose Start.

    Figure 5: Create a claims aware application

    Figure 5: Create a claims aware application

  4. On the Select Data Source page, choose Enter data about the relying party manually, and then choose Next.

    Figure 6: Enter data about the relying party manually

    Figure 6: Enter data about the relying party manually

  5. On the Specify Display Name page, type in the display name of your choice for the relying party, and then choose Next. Choose Next again to move past the Configure Certificate screen. (Configuring a token encryption certificate is optional and at the time of writing, Amazon ES doesn’t support SAML token encryption.)

    Figure 7: Provide a display name for the relying party

    Figure 7: Provide a display name for the relying party

  6. On the Configure URL page, do the following steps.
    1. Choose the Enable support for the SAML 2.0 WebSSO protocol check box.
    2. In the URL field, add the SP-initiated SSO URL that you noted when you enabled SAML authentication in Amazon ES earlier.
    3. Choose Next.

      Figure 8: Enable SAML support and provide the SP-initiated SSO URL

      Figure 8: Enable SAML support and provide the SP-initiated SSO URL

  7. On the Configure Identifiers page, do the following:
      1. For Relying party trust identifier, provide the service provider entity ID that you noted when you enabled SAML authentication in Amazon ES.
      2. Choose Add, and then choose Next.

     

    Figure 9: Provide the service provider entity ID

    Figure 9: Provide the service provider entity ID

  8. On the Choose Access Control Policy page, choose the appropriate access for your domain. Depending on your requirements, choose one of these options:
    • Choose Permit Specific Group to restrict access to one or more groups in your Active Directory domain based on the Active Directory group.
    • Choose Permit Everyone to allow all Active Directory domain users to access Kibana.

    Note: This step only provides access for the users to authenticate into Kibana. You have not yet set up Open Distro security roles and permissions.

     

    Figure 10: Choose an access control policy

    Figure 10: Choose an access control policy

  9. On the Ready to Add Trust page, choose Next, and then choose Close.

Now you’ve finished adding Amazon ES as a relying party trust.

Next, configure the claim issuance rules for the relying party. During the authentication process, AD FS sends user attributes (claims) to the relying party. With claim rules, you define which claims AD FS sends to the Amazon ES domain. In the following procedure, you create two claim rules: one sends the incoming Windows account name as the Name ID, and the other sends Active Directory groups as roles.

To configure claim issuance rules

  1. On the Relying Party Trusts page, right-click the relying party trust (in this case, AWS_ES_Kibana) and choose Edit Claim Issuance Policy.

    Figure 11: Edit the claim issuance policy

    Figure 11: Edit the claim issuance policy

  2. Configure the claim rule to send the Windows account name as the Name ID, using these steps.
    1. In the Edit Claim Issuance Policy dialog box, choose Add Rule. The Add Transform Claim Rule Wizard opens.
    2. For Rule Type, choose Transform an Incoming Claim, and then choose Next.
    3. On the Configure Rule page, enter the following information:
      • Claim rule name: NameId
      • Incoming claim type: Windows account name
      • Outgoing claim type: Name ID
      • Outgoing name ID format: Unspecified
      • Pass through all claim values: Select this option
    4. Choose Finish.

     

    Figure 12: Set the claim rule for Name ID

    Figure 12: Set the claim rule for Name ID

  3. Configure Active Directory groups to send as roles, using the following steps.
    1. In the Edit Claim Issuance Policy dialog box, choose Add Rule. The Add Transform Claim Rule Wizard opens.
    2. For Rule Type, choose Send LDAP Attributes as Claims, and then choose Next.
    3. On the Configure Rule page, enter or choose the following settings:
      • Claim rule name: Send-Groups-as-Roles
      • Attribute store: Active Directory
      • LDAP attribute: Token-Groups – Unqualified Names (to select the group name)
      • Outgoing claim type: Roles (the value for Roles should match the Roles Key that you will set in the Configure SAML in the Amazon ES domain step later in this process)
    4. Choose Finish.

      Figure 13: Set claim rule for Active Directory groups as Roles

      Figure 13: Set claim rule for Active Directory groups as Roles

The configuration of AD FS is now complete and you can download the SAML metadata file from AD FS. The SAML metadata is in XML format and is needed to configure SAML in the Amazon ES domain. The AD FS metadata file (the IdP metadata) can be accessed from the following link (replace <AD FS FQDN> with the domain name of your AD FS server). Copy the XML and note down the value of entityID from the XML, as shown in Figure 14. You will need this information in the next steps.

https://<AD FS FQDN>/FederationMetadata/2007-06/FederationMetadata.xml
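
If you prefer to pull the entityID programmatically instead of copying it from the browser, the following is a minimal sketch. It assumes the AD FS metadata endpoint is reachable from your workstation and presents a certificate your machine trusts; the FQDN value is a placeholder.

import urllib.request
import xml.etree.ElementTree as ET

ADFS_FQDN = "adfs.example.com"  # placeholder; replace with your AD FS server's FQDN
url = f"https://{ADFS_FQDN}/FederationMetadata/2007-06/FederationMetadata.xml"

# Download the federation metadata and read the entityID attribute of the root element
with urllib.request.urlopen(url) as resp:
    root = ET.fromstring(resp.read())

print(root.attrib.get("entityID"))  # the value you note down for the Amazon ES SAML configuration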

 

Figure 14: The value of entityID in the XML file

Configure SAML in the Amazon ES domain

Next, you configure SAML settings in the Amazon Elasticsearch Service console. You need to import the IdP metadata, configure the IdP entity ID, configure the backend role, and set up the Roles key.

To configure SAML settings in the Amazon ES domain

    1. Sign in to the Amazon Elasticsearch Service console. On the Actions menu, choose Modify authentication.
    2. Import the IdP metadata, using the following steps.
      1. Choose Import IdP metadata, and then choose Metadata from IdP.
      2. Paste the contents of the FederationMetadata XML file (the IdP metadata) that you copied earlier in the Add or edit metadata field. You can also choose the Import from XML file button if you have the metadata file on the local disk.

        Figure 15: The imported identity provider metadata

    3. Copy and paste the value of entityID from the XML file to the IdP entity ID field, if that field isn’t autofilled.
    4. For SAML manager backend role (the console may refer to this as master backend role), enter the name of the group you created in AD FS as part of the prerequisites for this post. In this walkthrough, we set the name of the group as admins, and therefore the backend role is admins.

Optionally, you can also provide the user name instead of the backend role.

  5. Set up the Roles key, using the following steps.
    1. Under Optional SAML settings, for Roles key, enter Roles. This value must match the value for Outgoing claim type, which you set when you configured claims rules earlier.

      Figure 16: Set the Roles key

    2. Leave the Subject key field empty to use the NameID element of the SAML assertion for the user name. Keep the defaults for everything else, and then choose Submit.

It can take a few minutes for the SAML settings to update and for the domain to return to the active state.
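
If you manage domain configuration as code, the same SAML settings can also be applied with the AWS SDK. The following is a hedged sketch using boto3; the field names are based on the UpdateElasticsearchDomainConfig API and should be verified against the current SDK documentation, and the domain name and file path are placeholders.

import boto3

es = boto3.client("es")

# Placeholder values; adjust to your environment
DOMAIN_NAME = "my-es-domain"
with open("FederationMetadata.xml") as f:  # the IdP metadata you downloaded from AD FS
    metadata_content = f.read()

es.update_elasticsearch_domain_config(
    DomainName=DOMAIN_NAME,
    AdvancedSecurityOptions={
        "SAMLOptions": {
            "Enabled": True,
            "Idp": {
                "MetadataContent": metadata_content,
                "EntityId": "http://<AD FS FQDN>/adfs/services/trust",  # entityID noted from the metadata
            },
            "MasterBackendRole": "admins",  # the AD FS group created in the prerequisites
            "RolesKey": "Roles",            # must match the outgoing claim type configured in AD FS
        }
    },
)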

Congratulations! You’ve completed all the SP and IdP configurations.

Sign in to Kibana

When the domain comes back to the active state, choose the Kibana URL in the Amazon ES console. You will be redirected to the AD FS sign-in page for authentication. Provide the user name and password for any of the users in the admins group. The example in Figure 17 uses the credentials for the user user1@example.com, who is a member of the admins group.

Figure 17: The AD FS sign-in screen with user credentials

AD FS authenticates the user and redirects the page to Kibana. If the user has at least one role mapped, you go to the Kibana home page, as shown in Figure 18. In this walkthrough, you mapped the AD FS group admins as a backend role to the manager user. Internally, the Open Distro security plugin maps the backend role admins to the security roles all_access and security_manager. Therefore, the Active Directory user in the admins group is authorized with the privileges of the manager user in the domain. For more granular access, you can create different AD FS groups and map the group names (backend roles) to internal security roles by using Role Mappings in Kibana.

Figure 18: The AD FS user user1@example.com is successfully logged in to Kibana

Note: At the time of writing this blog post, if you specify the <SingleLogoutService /> details in the AD FS metadata XML, Kibana calls AD FS directly when you sign out and tries to sign the user out. This doesn't currently work, because AD FS expects the sign-out request to be signed with a certificate that Amazon ES doesn't currently support. If you remove <SingleLogoutService /> from the metadata XML file, Amazon ES uses its own internal sign-out mechanism and signs the user out on the Amazon ES side. No calls are made to AD FS for signing out.

Conclusion

In this post, we covered setting up SAML authentication for Kibana single sign-on by using Amazon ES and Microsoft AD FS. The integration of IdPs with your Amazon ES domain provides a powerful way to control fine-grained access to your Kibana endpoint and integrate with existing identity lifecycle processes for create/update/delete operations, which reduces the operational overhead required to manage users.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Elasticsearch Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sajeev Attiyil Bhaskaran

Sajeev works closely with AWS customers to provide them architectural and engineering assistance and guidance. He dives deep into big data technologies and streaming solutions and leads onsite and online sessions for customers to design best solutions for their use cases. Outside of work, he enjoys spending time with his wife and daughter.

Author

Jagadeesh Pusapadi

Jagadeesh is a Senior Solutions Architect with AWS working with customers on their strategic initiatives. He helps customers build innovative solutions on the AWS Cloud by providing architectural guidance to achieve desired business outcomes.

Automate resolution for IAM Access Analyzer cross-account access findings on IAM roles

Post Syndicated from Ramesh Balajepalli original https://aws.amazon.com/blogs/security/automate-resolution-for-iam-access-analyzer-cross-account-access-findings-on-iam-roles/

In this blog post, we show you how to automatically resolve AWS Identity and Access Management (IAM) Access Analyzer findings generated in response to unintended cross-account access for IAM roles. The solution automates the resolution by responding to the Amazon EventBridge event generated by IAM Access Analyzer for each active finding.

You can use identity-based policies and resource-based policies to granularly control access to a specific resource and how you use it across the entire AWS Cloud environment. It is important to ensure that policies you create adhere to your organization’s requirements on data/resource access and security best practices. IAM Access Analyzer is a feature that you can enable to continuously monitor policies for changes, and generate detailed findings related to access from external entities to your AWS resources.

When you enable Access Analyzer, you create an analyzer for your entire organization or your account. The organization or account you choose is known as the zone of trust for the analyzer. The zone of trust determines what type of access is considered trusted by Access Analyzer. Access Analyzer continuously monitors all supported resources to identify policies that grant public or cross-account access from outside the zone of trust, and generates findings. In this post, we will focus on an IAM Access Analyzer finding that is generated when an IAM role grants access to an external AWS principal that is outside your zone of trust. To resolve the finding, we will show you how to automatically block such unintended access by adding an explicit deny statement to the IAM role trust policy.

Prerequisites

To ensure that the solution only prevents unintended cross-account access for IAM roles, we highly recommend that you do the following within your AWS environment before you deploy the solution described in this blog post:

Note: This solution adds an explicit deny in the IAM role trust policy to block the unintended access, which overrides any existing allow actions. We recommend that you carefully evaluate that this is the resolution action you want to apply.

Solution overview

To demonstrate this solution, we will take a scenario where you are asked to grant access to an external AWS account. In order to grant access, you create an IAM role named Audit_CrossAccountRole on your AWS account 123456789012. You grant permission to assume the role Audit_CrossAccountRole to an AWS principal named Alice in AWS account 999988887777, which is outside of your AWS organization. The following is an example of the trust policy for the IAM role Audit_CrossAccountRole:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::999988887777:user/Alice"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

Assuming the principal arn:aws:iam::999988887777:user/Alice was not archived previously in Access Analyzer, you see an active finding in Access Analyzer as shown in Figure 1.

Figure 1: Sample IAM Access Analyzer finding in AWS Console

Typically, you will review this finding and determine whether this access is intended or not. If the access is unintended, you can block access to the principal 999988887777/Alice by adding an explicit deny to the IAM role trust policy, and then follow up with the IAM role owner to find out if there is a reason to allow this cross-account access. If the access is intended, you can create an archive rule that will archive the finding and suppress such findings in the future.

We will walk through the solution to automate this resolution process in the remainder of this blog post.

Solution walkthrough

Access Analyzer sends an event to Amazon EventBridge for every active finding. This solution configures an event rule in EventBridge to match an active finding, and triggers a resolution AWS Lambda function. The Lambda function checks that the resource type in the finding is an IAM role, and then adds a deny statement to the associated IAM role trust policy as a resolution. The Lambda function also sends an email through Amazon Simple Notification Service (Amazon SNS) to the email address configured in the solution. The individual or group who receives the email can then review the automatic resolution and the IAM role. They can then decide either to delete the role if the access is unintended, or to remove the deny statement from the IAM role trust policy and create an archive rule in Access Analyzer to suppress such findings in the future.

Figure 2: Automated resolution followed by human review

Figure 2 shows the following steps of the resolution solution.

  1. Access Analyzer scans resources and generates findings based on the zone of trust and the archive rules configuration. The following is an example of an Access Analyzer active finding event sent to Amazon EventBridge:
    { 
        "version": "0",
        "id": "22222222-dcba-4444-dcba-333333333333",
        "detail-type": "Access Analyzer Finding",
        "source": "aws.access-analyzer",
        "account": "123456789012",
        "time": "2020-05-13T03:14:33Z",
        "region": "us-east-1",
        "resources": [
            "arn:aws:access-analyzer:us-east-1: 123456789012:analyzer/AccessAnalyzer"
        ],
        "detail": {
            "version": "1.0",
            "id": "a5018210-97c4-46c4-9456-0295898377b6",
            "status": "ACTIVE",
            "resourceType": "AWS::IAM::Role",
            "resource": "arn:aws:iam::123456789012:role/ Audit_CrossAccountRole",
            "createdAt": "2020-05-13T03:14:32Z",
            "analyzedAt": "2020-05-13T03:14:32Z",
            "updatedAt": "2020-05-13T03:14:32Z",
            "accountId": "123456789012",
            "region": "us-east-1",
            "principal": {
                "AWS": "aws:arn:iam::999988887777:user/Alice"
            },
            "action": [
                "sts:AssumeRole"
            ],
            "condition": {},
            "isDeleted": false,
            "isPublic": false
        }
    }
    

  2. EventBridge receives an event for the Access Analyzer finding, and triggers the AWS Lambda function based on the event rule configuration. The following is an example of the EventBridge event pattern to match active Access Analyzer findings:
    {
      "source": [
        "aws.access-analyzer"
      ],
      "detail-type": [
        "Access Analyzer Finding"
      ],
      "detail": {
        "status": [ "ACTIVE" ],
        "resourceType": [ "AWS::IAM::Role" ]
      }
    }
    

  3. The Lambda function processes the event when ResourceType is equal to AWS::IAM::Role, as shown in the following example Python code:
    # Normalize the resource type string, then act only on IAM role findings
    ResourceType = event['detail']['resourceType']
    ResourceType = "".join(ResourceType.split())
    if ResourceType == 'AWS::IAM::Role':
    

    Then, the Lambda function adds an explicit deny statement in the trust policy of the IAM role where the Sid of the new statement references the Access Analyzer finding ID.

    def disable_iam_access(resource_name, ext_arn, finding_id):
        # Add an explicit deny statement to the role's trust policy. The Sid is set
        # to the Access Analyzer finding ID so the statement can be traced back.
        try:
            ext_arn = ext_arn.strip()
            policy = {
                "Sid": finding_id,
                "Effect": "Deny",
                "Principal": {
                    "AWS": ext_arn},
                "Action": "sts:AssumeRole"
            }
            response = iam.get_role(RoleName=resource_name)
            current_policy = response['Role']['AssumeRolePolicyDocument']
            current_policy['Statement'].append(policy)
            new_policy = json.dumps(current_policy)
            logger.debug(new_policy)
            response = iam.update_assume_role_policy(
                PolicyDocument=new_policy,
                RoleName=resource_name)
            logger.info(response)
        except Exception as e:
            logger.error(e)
            logger.error('Unable to update IAM Policy')
    

    As a result, the IAM role trust policy looks like the following example:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::999988887777:user/Alice"
                },
                "Action": "sts:AssumeRole"
            },
            {
                "Sid": "22222222-dcba-4444-dcba-333333333333",
                "Effect": "Deny",
                "Principal": {
                    "AWS": "arn:aws:iam::999988887777:user/Alice"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
    

    Note: After the Lambda function adds the deny statement, on the next scan Access Analyzer finds that the resource is no longer shared outside of your zone of trust. Access Analyzer then changes the status of the finding to Resolved, and the finding appears in the Resolved findings table.

  4. The Lambda function sends a notification to an SNS topic that sends an email to the configured email address (which should be the business owner or security team) subscribed to the SNS topic. The email notifies them that a specific IAM role has been blocked from the cross-account access. The following is an example of the SNS code for the notification.
    def send_notifications(sns_topic, principal, resource_arn, finding_id):
        # Publish a summary of the automatic resolution to the configured SNS topic
        sns_client = boto3.client("sns")
        message = ("The IAM Role resource {} allows access to the principal {}. "
                   "Trust policy for the role has been updated to deny the external access. "
                   "Please review the IAM Role and its trust policy. If this access is intended, "
                   "update the IAM Role trust policy to remove the statement with a Sid matching "
                   "the finding id {} and mark the finding as archived or create an archive rule. "
                   "If this access is not intended, then delete the IAM Role.").format(
            resource_arn, principal, finding_id)
        subject = "Access Analyzer finding {} was automatically resolved".format(finding_id)
        sns_response = sns_client.publish(
            TopicArn=sns_topic,
            Message=message,
            Subject=subject
        )
    

    Figure 3 shows an example of the email notification.

    Figure 3: Sample resolution email generated by the solution

  5. The security team or business owner who receives the email reviews the role and does one of the following steps:
    • If you find that the cross-account access for the IAM role is intended, remove the deny statement that was added to the trust policy, through the AWS CLI or the AWS Management Console. As mentioned above, the solution adds the Access Analyzer finding ID as the Sid of the deny statement. The following commands remove the deny statement for role_name through the AWS CLI, using the finding ID available in the email notification.
      POLICY_DOCUMENT=`aws iam get-role --role-name '<role_name>' --query "Role.AssumeRolePolicyDocument.{Version: Version, Statement: Statement[?Sid!='<finding_id>']}"`
      aws iam update-assume-role-policy --role-name '<role_name>' --policy-document "$POLICY_DOCUMENT"
      

      Further, you can create an archive rule with criteria such as AWS Account ID, resource type, and principal, to automatically archive new findings that match the criteria.

    • If you find that the IAM role provides unintended cross-account access, you can delete the IAM role. You should also investigate who created the IAM role by checking the relevant AWS CloudTrail events, such as CreateRole, so that you can plan preventive actions (a boto3 sketch of this lookup follows this list).
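
The following is a hedged boto3 sketch of that CloudTrail lookup; the role name is the example role from this post, and the lookup only covers the retention window of the CloudTrail event history.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Find recent CreateRole events and print who created the example role
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateRole"}],
    MaxResults=50,
)

for event in response["Events"]:
    resource_names = [r.get("ResourceName") for r in event.get("Resources", [])]
    if "Audit_CrossAccountRole" in resource_names:
        print(event.get("Username"), event.get("EventTime"), resource_names)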

Solution deployment

You can deploy the solution by using either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

To deploy the solution by using the AWS Management Console

  1. In your AWS account, launch the template by choosing the Launch Stack button, which creates the stack in the us-east-1 Region.
    Select the Launch Stack button to launch the template
  2. On the Quick create stack page, for Stack name, enter a unique stack name for this account; for example, iam-accessanalyzer-findings-resolution, as shown in Figure 4.

    Figure 4: Deploy the solution using CloudFormation template

  3. For NotificationEmail, enter the email address to receive notifications for any resolution actions taken by the solution.
  4. Choose Create stack.

Additionally, you can find the latest code on the aws-iam-permissions-guardrails GitHub repository, where you can also contribute to the sample code. The following procedure shows how to deploy the solution by using the AWS Cloud Development Kit (AWS CDK).

To deploy the solution by using the AWS CDK

  1. Install the AWS CDK.
  2. Deploy the solution to your account using the following commands:
    git clone git@github.com:aws-samples/aws-iam-permissions-guardrails.git
    cd aws-iam-permissions-guardrails/access-analyzer/iam-role-findings-resolution/ 
    cdk bootstrap
    cdk deploy --parameters NotificationEmail=<YOUR_EMAIL_ADDRESS_HERE>
    

After deployment, you must confirm the Amazon SNS email subscription to get the notifications from the solution.

To confirm the email address for notifications

  1. Check your email inbox and choose Confirm subscription in the email from Amazon SNS.
  2. Amazon SNS opens your web browser and displays a subscription confirmation with your subscription ID.

To test the solution

Create an IAM role with a trust policy that specifies another AWS account as the principal, where that account is neither covered by an archive rule nor within your zone of trust. Also, for this test, do not attach any permission policies to the IAM role. You will receive an email notification after a few minutes, similar to the one shown previously in Figure 3.

As a next step, review the resolution action as described in step 5 in the solution walkthrough section above.
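
If you want to create the test role from code instead of the console, the following is a minimal sketch; the role name is hypothetical, and 999988887777 is the example external account used earlier in this post.

import json

import boto3

iam = boto3.client("iam")

# Trust policy that trusts an external account outside your zone of trust
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="Test_AccessAnalyzerFinding",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Temporary role used to test the Access Analyzer resolution solution",
)
# Per the test instructions above, do not attach any permission policies to this role.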

Clean up

If you launched the solution in the AWS Management Console by using the Launch Stack button, you can delete the stack by navigating to the CloudFormation console, selecting the stack by its name, and then choosing Delete.

If you deployed the solution using AWS CDK, you can perform the cleanup using the following CDK command from the local directory where the solution was cloned from GitHub.

cdk destroy

Cost estimate

Deploying the solution alone will not incur any costs, but there is a cost associated with the AWS Lambda executions and the Amazon SNS email notifications that are sent when findings generated by IAM Access Analyzer match the EventBridge event rule. AWS Lambda and Amazon SNS have a perpetual free tier, and you will be charged only when your usage exceeds the free tier each month.

Summary

In this blog post, we showed you how to automate the resolution of IAM Access Analyzer findings for unintended cross-account access on IAM roles. As a resolution, the solution adds a deny statement to the IAM role's trust policy.

You can expand the solution to resolve Access Analyzer findings for Amazon S3 and AWS KMS by modifying the associated resource policies. You can also include capabilities like automating the rollback of the resolution if the access is intended, or introducing an approval workflow to resolve the finding to suit your organization's process requirements. Also, IAM Access Analyzer now enables you to preview and validate public and cross-account access before deploying permissions changes.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ramesh Balajepalli

Ramesh is a Senior Solutions Architect at AWS. He enjoys working with customers to solve their technical challenges using AWS services. In his spare time, you can find him spending time with family and cooking.

Author

Siva Rajamani

Siva is a Boston-based Enterprise Solutions Architect for AWS. Siva enjoys working closely with customers to accelerate their AWS cloud adoption and improve their overall security posture.

Author

Sujatha Kuppuraju

Sujatha is a Senior Solutions Architect at AWS. She works with ISV customers to help design secured, scalable, and well-architected solutions on the AWS Cloud. She is passionate about solving complex business problems with the ever-growing capabilities of technology.

Automatically update AWS WAF IP sets with AWS IP ranges

Post Syndicated from Fola Bolodeoku original https://aws.amazon.com/blogs/security/automatically-update-aws-waf-ip-sets-with-aws-ip-ranges/

Note: This blog post describes how to automatically update AWS WAF IP sets with the most recent AWS IP ranges for AWS services. This related blog post describes how to perform a similar update for Amazon CloudFront IP ranges that are used in VPC Security Groups.

You can use AWS Managed Rules for AWS WAF to quickly create baseline protections for your web applications, including setting up lists of IP addresses to be blocked. In some cases, you might need to create an IP set in AWS WAF with the IP address ranges of Amazon Web Services (AWS) services that you use, so that traffic from these services is allowed. In this blog post, we provide a solution that automatically updates an AWS WAF IP set with the IP address ranges of the AWS services Amazon CloudFront, Amazon Route 53 health checks, and Amazon EC2 (and also the services that share the same IP address ranges, such as AWS Lambda, Amazon CloudWatch, and so on). These services are present in the AWS Managed Rules Anonymous IP list, and blocking them may cause inadvertent service impairment for applications that expect traffic from the services.

As an application owner, you can improve your security posture by using the Anonymous IP list in your AWS WAF web access control lists (web ACLs) to block source IP addresses from specific hosting providers and anonymization services, such as VPNs, proxies, and Tor nodes. Due to the generic nature of these rules, when you use the Anonymous IP list, you might want to exclude certain IPs from the list of IPs to be blocked, in order to allow web traffic from those sources. For example, you can allow traffic that originates from the AWS network.

Alternatively, you might want to permit only IP addresses from certain AWS services in a web ACL. This is a common requirement when you protect an Application Load Balancer by restricting all incoming traffic to CloudFront IP ranges. Creating your own custom list to allow expected traffic from the AWS network requires some effort, because you need to periodically update the list by using the IP ranges that we provide. With the solution we present here, you don’t have to manually manage the exclusion list. When the new AWS IP ranges are published, this solution will automatically fetch and update the list.

Note: This solution only works with AWS WAF, and will not work with AWS WAF Classic.

Solution overview

Figure 1 shows the solution architecture.

Figure 1: Automatic update process for service IPs

AWS sends Amazon Simple Notification Service (Amazon SNS) notifications to subscribers of the AmazonIPSpaceChanged SNS topic when updates are made to the public IP addresses for AWS services. This solution uses an AWS CloudFormation template to deploy an AWS Lambda function that is triggered by these SNS notifications. The function updates the AWS WAF IP sets for IPv4 and IPv6 address ranges that you reference in your web ACL.

The solution workflow is as follows:

  1. In the CloudFormation template, you select the services that you want the AWS WAF IP set to be updated with.
  2. The template deploys the required AWS resources with the configuration that specifies what services to fetch from an AWS public IP address update.
  3. The AWS Lambda function is invoked manually one time to populate the AWS WAF IP sets with the selected IP ranges from the AWS IP address list.
  4. When the AWS IP ranges are updated, an Amazon SNS notification is sent to subscribers of the SNS topic.
  5. The SNS notification triggers the AWS Lambda function.
  6. The Lambda function fetches the selected IP ranges and updates the IP sets for IPv4 addresses and IPv6 addresses.
  7. The application owner adds a custom AWS WAF web ACL rule that uses the IP sets to allow traffic from the AWS services that you've selected. This way, the web ACL references AWS WAF IP sets that are always up to date, with no further action required on your part. A simplified sketch of the update logic appears after this list.
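
To make the update step more concrete, the following is a simplified, hedged sketch of the kind of logic the solution's Lambda function performs. It is not the project's update_aws_waf_ipset.py code, and the IP set name and ID are hypothetical placeholders.

import json
import urllib.request

import boto3

# Hypothetical values; the deployed solution derives these from its configuration.
SERVICES = {"CLOUDFRONT", "ROUTE53_HEALTHCHECKS"}
IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def fetch_ipv4_ranges(services):
    # Download the published AWS IP ranges and keep only the selected services
    with urllib.request.urlopen(IP_RANGES_URL) as resp:
        data = json.load(resp)
    return sorted({p["ip_prefix"] for p in data["prefixes"] if p["service"] in services})

def replace_ip_set_addresses(name, ip_set_id, addresses, scope="REGIONAL"):
    # Overwrite the IP set contents; update_ip_set requires the current lock token
    wafv2 = boto3.client("wafv2")
    current = wafv2.get_ip_set(Name=name, Scope=scope, Id=ip_set_id)
    wafv2.update_ip_set(
        Name=name,
        Scope=scope,
        Id=ip_set_id,
        Addresses=addresses,
        LockToken=current["LockToken"],
    )

if __name__ == "__main__":
    cidrs = fetch_ipv4_ranges(SERVICES)
    replace_ip_set_addresses("MyStack-IPv4Set", "11111111-2222-3333-4444-555555555555", cidrs)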

Solution prerequisites

The solution is automatically created when you deploy the AWS CloudFormation template that is available on the solution’s GitHub page. There are three resources that you must have in place before you deploy the template:

  • The Python code that will be used as the Lambda function.
    • Download the update_aws_waf_ipset.py Python code from the project’s AWS Lambda directory in GitHub. This function is responsible for constantly checking AWS IPs and making sure that your AWS WAF IP sets are always updated with the most recent set of IPs in use by the AWS service of choice.
  • An Amazon Simple Storage Service (Amazon S3) bucket that you will use to store the compressed Python code.
    • Compress the file to a .zip file and upload it to an Amazon Simple Storage Service (Amazon S3) bucket in the same AWS Region where you will deploy the template. For instructions on how to create an S3 bucket, see Creating a bucket.
  • An AWS WAF web ACL to filter requests that come in from trusted sources. The web ACL uses the IP sets that the solution creates and updates with the necessary IP addresses.

Deploy the AWS CloudFormation template

The CloudFormation template deploys the required resources for this solution in your account. The following resources are deployed:

  • Two AWS WAF IP sets, IPv4Set and IPv6Set, which are used to store the IPv4 and IPv6 addresses of the services you want to allow. These IP sets are visible in the AWS WAF console in the same Region where the template is deployed.
    • Note: The IP address 192.0.2.0/24 that appears in the template is a placeholder for the IP addresses that will be populated by the solution, and it is used for documentation purposes only.
  • The update_aws_waf_ipset.py Python code is used in an AWS Lambda function called UpdateWAFIPSet. This is the function that reads which services the solution should collect IPs from, and which IP sets should be populated. If you don't change those parameters, the function will use default IP set suffixes. By default, the solution will select ROUTE53_HEALTHCHECKS and CLOUDFRONT as the services for which to download IPs. You can update the list of IP addresses as needed, by referring to the AWS IP JSON document for a list of service names and IP ranges.
  • A Lambda execution role with permissions restricted to the least privilege required.
  • The Lambda function is automatically subscribed to the AmazonIPSpaceChanged SNS topic, which is responsible for monitoring changes in the list of AWS IPs.
  • A Lambda permission resource to allow the previously created SNS topic to invoke the template’s Lambda function.

Solution deployment through the console

You can download the AWS CloudFormation template, called template.yml, from the solution’s GitHub page.

After you’ve downloaded the template, access the CloudFormation console to create the stack. See the CloudFormation User Guide for instructions on selecting a downloaded template in the CloudFormation console to deploy a stack.

Note: The Region that you use when you deploy the template is where resources will be created.

On the Specify stack details page, you can enter the stack name, which will be the name used as a reference for resources created by the template, as well as six other stack parameters, shown in Figure 2.

Figure 2: Template parameters

The parameters are as follows:

  • EC2REGIONS – This is the Region that the solution will use as a reference when it updates its list of IPs. Select all for all Regions, but you can also specify a Region of interest.
  • IPV4SetNameSuffix – The solution will create an AWS WAF IPv4 IP set with the stack name as its name, but you can also add a suffix of your choice to the name.
  • IPV6SetNameSuffix – Like the AWS WAF IPv4 IP set, the IPv6 IP set can also have a suffix of your choice.
  • LambdaCodeS3Bucket – As mentioned in the Prerequisites section, you need to have previously uploaded the Lambda function Python code to an Amazon S3 bucket in the same Region where you’re deploying the stack. Enter the bucket name here, for example, mybucket.
  • LambdaCodeS3Object – Enter the name of the .zip file of the compressed Lambda function in the S3 bucket, for example, myfunction.zip.
  • SERVICES – Enter the list of AWS services for which you want the IP addresses populated in the AWS WAF IP sets. By default, this solution uses ROUTE53_HEALTHCHECKS and CLOUDFRONT, but you can change this parameter and add any service name, according to the list in the AWS IP ranges JSON.

After you deploy the template, its status will change to CREATE_COMPLETE.

Solution deployment through the AWS CLI

You can also deploy the solution template through the AWS Command Line Interface (AWS CLI). On the solution’s GitHub page, in the Setup section, follow the instructions for deploying the solution by using AWS CLI commands.

Note: To use the AWS CLI, you must have set it up in your environment. To set up the AWS CLI, follow the instructions in the AWS CLI installation documentation.

Invoke the Lambda function for the first time

After you successfully deploy the CloudFormation stack, you must run an initial Lambda invocation so that the AWS WAF IP sets are populated with the AWS service IP addresses. This Lambda invocation is only required once; after this initial call, the solution will handle future updates on your behalf.

To invoke this Lambda call through the AWS Management Console, open the Lambda console, select the Lambda function that was created by the template, and use the following event to create a test event. See Invoke the Lambda function in the AWS Lambda Developer Guide for step-by-step guidance on how to run a test event.

{
  "Records": [
    {
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws:sns:EXAMPLE",
      "EventSource": "aws:sns",
      "Sns": {
        "SignatureVersion": "1",
        "Timestamp": "1970-01-01T00:00:00.000Z",
        "Signature": "EXAMPLE",
        "SigningCertUrl": "EXAMPLE",
        "MessageId": "12345678-1234-1234-1234-123456789012",
        "Message": "{\"create-time\": \"yyyy-mm-ddThh:mm:ss+00:00\", \"synctoken\": \"0123456789\", \"md5\": \"test-hash\", \"url\": \"https://ip-ranges.amazonaws.com/ip-ranges.json\"}",
        "Type": "Notification",
        "UnsubscribeUrl": "EXAMPLE",
        "TopicArn": "arn:aws:sns:EXAMPLE",
        "Subject": "TestInvoke"
      }
    }
  ]
}

The success of the event will mean that the newly created AWS WAF IP sets now have the updated list of IPs from the services you’re working with.

You can also invoke the Lambda function through the AWS CLI by using the following command, where test_event.json contains the test event mentioned earlier.

aws lambda invoke \
  --function-name $CFN_STACK_NAME-UpdateWAFIPSets \
  --region $REGION \
  --payload file://lambda/test_event.json lambda_return.json

You can use the documentation for invoking a Lambda function in the AWS CLI to explore this command and its parameters.

After a successful invocation, the AWS CLI returns status code 200, indicating that the invocation happened as expected. At this point, the AWS WAF IP sets are updated.

Use the solution IP sets in your AWS WAF web ACL

Now the AWS WAF IPv4 and IPv6 IP sets are populated, and you can obtain the IP lists either by using the AWS WAF console, or by calling the GetIPSet API through the AWS CLI command get-ip-set.
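
For example, here is a hedged boto3 sketch that lists the regional IP sets and prints the addresses of the ones created by the stack; the name prefix is an assumption based on the stack name you chose.

import boto3

wafv2 = boto3.client("wafv2")

# Assumption: the solution's IP sets start with your CloudFormation stack name.
STACK_NAME_PREFIX = "MyStack"

for summary in wafv2.list_ip_sets(Scope="REGIONAL")["IPSets"]:
    if not summary["Name"].startswith(STACK_NAME_PREFIX):
        continue
    ip_set = wafv2.get_ip_set(Name=summary["Name"], Scope="REGIONAL", Id=summary["Id"])
    print(summary["Name"], ip_set["IPSet"]["Addresses"])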

To use AWS WAF IP sets in your web ACL, see Creating and managing an IP set in the AWS WAF Developer Guide. You can use these IP sets in the same web ACL or rule group that contains the AWS Managed Rules Anonymous IP list and that is associated with the AWS resource that AWS WAF is protecting. The AWS WAF evaluation order and where to position this solution within the web ACL are discussed in a later section.

To associate your web ACL with an AWS resource, see Associating or disassociating a web ACL with an AWS resource.

Validate the solution

To validate the solution, let’s consider a scenario where you would like to allow requests from CloudFront to come through, while blocking any other anonymous and hosting provider sources. In this scenario, consider the following requests that are filtered by AWS WAF.

In the first one, a customer has the AWSManagedRulesAnonymousIpList rule group, and a request coming from an Amazon EC2 instance IP is blocked.

{
    "timestamp": 1619175030566,
    "formatVersion": 1,
    "webaclId": "arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/managedRuleValidation/11fd1e32-ae25-45f8-811f-3c1485f76ceb",
    "terminatingRuleId": "AWS-AWSManagedRulesAnonymousIpList",
    "terminatingRuleType": "MANAGED_RULE_GROUP",
    "action": "BLOCK",
    (...)
    "ruleGroupList": [
        {
            "ruleGroupId": "AWS#AWSManagedRulesAnonymousIpList",
            "terminatingRule": {
                "ruleId": "HostingProviderIPList",
                "action": "BLOCK",
                "ruleMatchDetails": null
            },
            (...)
    ],
    (...)
    "httpRequest": {
        "clientIp": "203.0.113.176",
        (...)
    }
}

In the second request, this time coming in from CloudFront, you can see that AWS WAF didn’t block the request.

{
    "timestamp": 1619175149405,
    "formatVersion": 1,
    "webaclId": "arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/managedRuleValidation/11fd1e32-ae25-45f8-811f-3c1485f76ceb",
    "terminatingRuleId": "Default_Action",
    "terminatingRuleType": "REGULAR",
    "action": "ALLOW",
    "terminatingRuleMatchDetails": [],
    (...)
    "httpRequest": {
       "clientIp": "130.176.96.86",
       (...)
    }
}

To achieve this result, you need to edit AWSManagedRulesAnonymousIpList and add a scope-down statement so that the rule set only blocks requests that aren’t sent from sources within this solution’s IPv4 and IPv6 IP sets.

To create a scope-down statement for AWSManagedRulesAnonymousIpList

  1. In the AWS WAF console, access your web ACL.
  2. Open the Rules tab.
  3. Select AWSManagedRulesAnonymousIpList rule set, and then choose Edit.
  4. Choose the arrow next to Scope-down statement – optional. You will see two options, Rule visual editor and Rule JSON editor.
  5. Choose Rule JSON editor and enter the following JSON, replacing <IPv4-IPSET-ARN> and <IPv6-IPSET-ARN> with the Amazon Resource Names (ARNs) of the respective IP sets.

Note: You can use the AWS WAF ListIPSets action or the list-ip-sets CLI command to obtain the IP set Amazon Resource Names (ARNs) and enter that information in the provided JSON.

{
  "NotStatement": {
    "Statement": {
      "OrStatement": {
        "Statements": [
          {
            "IPSetReferenceStatement": {
              "ARN": "<IPv4-IPSET-ARN>"
            }
          },
          {
            "IPSetReferenceStatement": {
              "ARN": "<IPv6-IPSET-ARN>"
            }
          }
        ]
      }
    }
  }
}

After making this change, your rule editing page will look like the following.

Figure 3: AWSManagedRulesAnonymousIpList scope-down statement

When you set the rule priority, consider giving the AWSManagedRulesAnonymousIpList rule group a lower priority value than other rules within the web ACL; AWS WAF evaluates rules with lower priority values first. This causes the rule group to be evaluated before rules that are configured with terminating actions (that is, Allow and Block actions). The scope-down statement will match the request and allow traffic from the IP addresses within the IP set, and pass every other IP on to the next rule for further evaluation. Figure 4 shows an example of the suggested priority.

Figure 4: Example with suggested use of AWS WAF web ACL priority

Summary

This blog post provides you with a solution that is capable of automatically updating AWS WAF IP sets with the list of current IP ranges for one or more AWS services. You can use this solution in various ways, such as to allow requests from Amazon CloudFront when you’re using the AWS Managed Rules Anonymous IP List.

For best practices on AWS WAF implementation, see Guidelines for Implementing AWS WAF. For further reading on AWS WAF, see the AWS WAF Developer Guide.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Fola Bolodeoku

Fola is a Security Engineer on the AWS Shield Team, where he focuses on helping customers improve their application security posture against DDoS, and other application threats. When he is not working, he enjoys spending time on road trips in the Western Cape, and beyond.

Author

Mario Pinho

Mário is a Security Engineer at AWS. He has a background in network engineering and consulting, and feels at his best when breaking apart complex topics and processes into their simpler components. In his free time, he pretends to be an artist by playing piano and doing landscape photography.

Author

Davidson Junior

Davidson is a Cloud Support Engineer at AWS in Cape Town, South Africa. He is a subject matter expert in AWS WAF, and is focused on helping customers troubleshoot and protect their network in the cloud. Outside of work, he enjoys listening to music, outdoor photography, and hiking the Western Cape.

Build an end-to-end attribute-based access control strategy with AWS SSO and Okta

Post Syndicated from Louay Shaat original https://aws.amazon.com/blogs/security/build-an-end-to-end-attribute-based-access-control-strategy-with-aws-sso-and-okta/

This blog post discusses the benefits of using an attribute-based access control (ABAC) strategy and also describes how to use ABAC with AWS Single Sign-On (AWS SSO) when you’re using Okta as an identity provider (IdP).

Over the past two years, Amazon Web Services (AWS) has invested heavily in making ABAC available across the majority of our services. With ABAC, you can simplify your access control strategy by granting access to groups of resources, which are specified by tags, instead of managing long lists of individual resources. Each tag is a label that consists of a user-defined key and value, and you can use these to assign metadata to your AWS resources. Tags can help you manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria. To learn more about tags and AWS best practices for tagging, see Tagging AWS resources.

The ability to include tags in sessions—combined with the ability to tag AWS Identity and Access Management (IAM) users and roles—means that you can now incorporate user attributes from your identity provider as part of your tagging and authorization strategy. Additionally, user attributes help organizations to make permissions more intuitive, because the attributes are easier to relate to teams and functions. A tag that represents a team or a job function is easier to audit and understand.

For more information on ABAC in AWS, see our ABAC documentation.

Why use ABAC?

ABAC is a strategy that can help organizations to innovate faster. Implementing a purely role-based access control (RBAC) strategy requires identity and security teams to define a large number of RBAC policies, which can lead to complexity and time delays. With ABAC, you can make use of attributes to build more dynamic policies that provide access based on matching the attribute conditions. AWS supports both RBAC and ABAC as co-existing strategies, so you can use ABAC alongside your existing RBAC strategy.

A good example that uses ABAC is the scenario where you have two teams that require similar access to their secrets in AWS Secrets Manager. By using ABAC, you can build a single role or policy with a condition based on the Department attribute from your IdP. When the user is authenticated, you can pass the Department attribute value and use a condition to provide access to resources that have the identical tag, as shown in the following code snippet. In this post, I show how to use ABAC for this example scenario.

"Condition": {
                "StringEquals": {
                    "secretsmanager:ResourceTag/Department": "${aws:PrincipalTag/Department}"

ABAC provides organizations with a more dynamic way of working with permissions. There are four main benefits for organizations that use ABAC:

  • Scale your permissions as you innovate: As developers create new project resources, administrators can require specific attributes to be applied when resources are created. This can include applying tags with attributes that give developers immediate access to the new resources they create, without requiring an update to their own permissions.
  • Help your teams to change and grow quickly: Because permissions are based on user attributes from a corporate identity source such as an IdP, changing user attributes in the IdP that you use for access control in AWS automatically updates your permissions in AWS.
  • Create fewer AWS SSO permission sets and IAM roles: With ABAC, multiple users who are using the same AWS SSO permission set and IAM role can still get unique permissions, because permissions are now based on user attributes. Administrators can author IAM policies that grant users access only to AWS resources that have matching attributes. This helps to reduce the number of IAM roles you need to create for various use cases in a single AWS account.
  • Efficiently audit who performed an action: By using attributes that are logged in AWS CloudTrail next to every action that is performed in AWS by using an IAM role, you can make it easier for security administrators to determine the identity that takes actions in a role session.

Prerequisites

In this section, I describe some higher-level prerequisites for using ABAC effectively. ABAC in AWS relies on the use of tags for access-control decisions, so it’s important to have in place a tagging strategy for your resources. To help you develop an effective strategy, see the AWS Tagging Strategies whitepaper.

Organizations that implement ABAC can enhance the use of tags across their resources for the purpose of identity access. Making sure that tagging is enforced and secure is essential to an enterprise-wide strategy. For more information about enforcing a tagging policy, see the blog post Enforce Centralized Tag Compliance Using AWS Service Catalog, DynamoDB, Lambda, and CloudWatch Events.

You can use the service AWS Resource Groups to identify untagged resources and to find resources to tag. You can also use Resource Groups to remediate untagged resources.

Use AWS SSO with Okta as an IdP

AWS SSO gives you an efficient way to centrally manage access to multiple AWS accounts and business applications, and to provide users with single sign-on access to all their assigned accounts and applications from one place. With AWS SSO, you can manage access and user permissions to all of your accounts in AWS Organizations centrally. AWS SSO configures and maintains all the necessary permissions for your accounts automatically, without requiring any additional setup in the individual accounts.

AWS SSO supports access control attributes from any IdP. This blog post focuses on how you can use ABAC attributes with AWS SSO when you’re using Okta as an external IdP.

Use other single sign-on services with ABAC

This post describes how to turn on ABAC in AWS SSO. To turn on ABAC with other federation services, see these links:

Implement the solution

Follow these steps to set up Okta as an IdP in AWS SSO and turn on ABAC.

To set up Okta and turn on ABAC

  1. Set up Okta as an IdP for AWS SSO. To do so, follow the instructions in the blog post Single Sign-On Between Okta Universal Directory and AWS. For more information on the supported actions in AWS SSO with Okta, see our documentation.
  2. Enable attributes for access control (in other words, turn on ABAC) in AWS SSO by using these steps:
    1. In the AWS Management Console, navigate to AWS SSO in the AWS Region you selected for your implementation.
    2. On the Dashboard tab, select Choose your identity source.
    3. Next to Attributes for access control, choose Enable.

      Figure 1: Turn on ABAC in AWS SSO

    You should see the message “Attributes for access control has been successfully enabled.”

  3. Enable updates for user attributes in Okta provisioning. Now that you've turned on ABAC in AWS SSO, you need to verify that automatic provisioning for Okta has attribute updates enabled. Log in to Okta as an administrator and locate the application you created for AWS SSO. Navigate to the Provisioning tab, choose Edit, and verify that Update User Attributes is enabled.

    Figure 2: Enable automatic provisioning for ABAC updates

  4. Configure user attributes in Okta for use in AWS SSO by following these steps:
    1. From the same application that you created earlier, navigate to the Sign On tab.
    2. Choose Edit, and then expand the Attributes (optional) section.
    3. In the Attribute Statements (optional) section, for each attribute that you will use for access control in AWS SSO, do the following:
      1. For Name, enter https://aws.amazon.com/SAML/Attributes/AccessControl:<AttributeName>. Replace <AttributeName> with the name of the attribute you’re expecting in AWS SSO, for example https://aws.amazon.com/SAML/Attributes/AccessControl:Department.
      2. For Name Format, choose URI reference.
      3. For Value, enter user.<AttributeName>. Replace <AttributeName> with the Okta default user profile variable name, for example user.department. To view the Okta default user profile, see these instructions.

     

    Figure 3: Configure two attributes for users in Okta

    In the example shown here, I added two attributes, Department and Division. The result should be similar to the configuration shown in Figure 3.

  5. Add attributes to your users by using these steps:
    1. In your Okta portal, log in as administrator. Navigate to Directory, and then choose People.
    2. Locate a user, navigate to the Profile tab, and then choose Edit.
    3. Add values to the attributes you selected.
    Figure 4: Addition of user attributes in Okta

  6. Confirm that attributes are mapped. Because you’ve enabled automatic provisioning updates from Okta, you should be able to see the attributes for your user immediately in AWS SSO. To confirm this:
    1. In the console, navigate to AWS SSO in the Region you selected for your implementation.
    2. On the Users tab, choose a user that has attributes from Okta. You should be able to see the attributes that you mapped from Okta.
    Figure 5: User attributes in Okta

Now that you have ABAC attributes for your users in AWS SSO, you can create permission sets based on those attributes.

Note: Step 4 ensures that users will not be successfully authenticated unless the attributes configured are present. If you don’t want this enforcement, do not perform step 4.

Build an ABAC permission set in AWS SSO

For demonstration purposes, I’ll show how you can build a permission set that is based on ABAC attributes for AWS Secrets Manager. The permission set will match resource tags to user tags, in order to control which resources can be managed by Secrets Manager administrators. You can apply this single permission set to multiple teams.

To build the ABAC permission set

  1. In the console, navigate to AWS SSO, and choose AWS Accounts.
  2. Choose the Permission sets tab.
  3. Choose Create permission set, and then choose Create a custom permission set.
  4. Fill in the fields as follows.
    1. For Name, enter a name for your permission set that will be visible to your users, for example, SecretsManager-Profile.
    2. For Description, enter ABAC SecretsManager Profile.
    3. Select the appropriate session duration.
    4. For Relay State, for my example I will enter the URL for Secrets Manager: https://console.aws.amazon.com/secretsmanager/home. This will give a better user experience when the user signs in to AWS SSO, with an automatic redirect to the Secrets Manager console.
    5. For the field What policies do you want to include in your permission set?, choose Create a custom permissions policy.
    6. Under Create a custom permissions policy, paste the following policy.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "SecretsManagerABAC",
                  "Effect": "Allow",
                  "Action": [
                      "secretsmanager:DescribeSecret",
                      "secretsmanager:PutSecretValue",
                      "secretsmanager:CreateSecret",
                      "secretsmanager:ListSecretVersionIds",
                      "secretsmanager:UpdateSecret",
                      "secretsmanager:GetResourcePolicy",
                      "secretsmanager:GetSecretValue",
                      "secretsmanager:ListSecrets",
                      "secretsmanager:TagResource"
                  ],
                  "Resource": "*",
                  "Condition": {
                      "StringEquals": {
                          "secretsmanager:ResourceTag/Department": "${aws:PrincipalTag/Department}"
                      }
                  }
              },
              {
                  "Sid": "NeededPermissions",
                  "Effect": "Allow",
                  "Action": [
             "kms:ListKeys",
             "kms:ListAliases",
                      "rds:DescribeDBInstances",
                      "redshift:DescribeClusters",
                      "rds:DescribeDBClusters",
                      "secretsmanager:ListSecrets",
                      "tag:GetResources",
                      "lambda:ListFunctions"
                  ],
                  "Resource": "*"
              }
          ]
      }
      

    This policy grants users the ability to create and list secrets that belong to their department. The policy is configured to allow Secrets Manager users to manage only the resources that belong to their department. You can modify this policy to perform matching on more attributes, in order to have more granular permissions.

    Note: The RDS permissions in the policy enable users to select an RDS instance for the secret, and the Lambda permissions enable custom rotation.

    If you look closely at the condition

    "secretsmanager:ResourceTag/Department": "${aws:PrincipalTag/Department}"

    …the condition states that the user can only access Secrets Manager resources that have a Department tag, where the value of that tag is identical to the value of the Department tag from the user.

  5. Choose Next: Tags.
  6. Tag your permission set. For my example, I’ll add Key: Service and Value: SecretsManager.
  7. Choose Next: Review and create.
  8. Assign the permission set to a user or group and to the appropriate accounts that you have in AWS Organizations.
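
If you prefer to create this permission set programmatically, the following is a hedged sketch using the AWS SSO Admin API in boto3. The instance ARN and policy file name are placeholders (look up the instance ARN with list_instances), and the parameters should be verified against the current SDK documentation.

import boto3

sso_admin = boto3.client("sso-admin")

INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"  # placeholder; see list_instances()

# Create the permission set with the same name, description, and relay state as above
permission_set = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="SecretsManager-Profile",
    Description="ABAC SecretsManager Profile",
    SessionDuration="PT4H",
    RelayState="https://console.aws.amazon.com/secretsmanager/home",
)["PermissionSet"]

# Attach the custom ABAC policy shown earlier, saved locally as a JSON file (placeholder name)
with open("secretsmanager_abac_policy.json") as f:
    policy_document = f.read()

sso_admin.put_inline_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=permission_set["PermissionSetArn"],
    InlinePolicy=policy_document,
)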

Test an ABAC permission set

Now you can test the ABAC permission set that you just created for Secrets Manager.

To test the ABAC permission set

  1. In the AWS SSO console, on the Dashboard page, navigate to the User Portal URL.
  2. Sign in as a user who has the attributes that you configured earlier in AWS SSO. You will assume the permission set that you just created.
  3. Choose Management console. This will take you to the console that you specified in the Relay State setting for the permission set, which in my example is the Secrets Manager console.

    Figure 6: AWS SSO ABAC profile access

  4. Try to create a secret with no tags:
    1. Choose Store a new secret.
    2. Choose Other type of secrets.
    3. You can add any values you like for the other options, and then choose Next.
    4. Give your secret a name, but don’t add any tags. Choose Next.
    5. On the Configure automatic rotation page, choose Next, and then choose Store.

    You should receive an error stating that the user failed to create the secret, because the user is not authorized to perform the secretsmanager:CreateSecret action.

    Figure 7: Failure to create a secret (no attributes)

  5. Choose Previous twice, and then add the appropriate tag. For my example, I’ll add a tag with the key Department and the value Serverless.

    Figure 8: Adding tags for a secret

  6. Choose Next twice, and then choose Store. You should see a message that your secret creation was successful.

    Figure 9: Successful secret creation

Now administrators who assume this permission set can view, create, and manage only the secrets that belong to their team or department, based on the tags that you defined. You can reuse this permission set across a large number of teams, which can reduce the number of permission sets you need to create and manage.
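
You can reproduce the same test from code. The following is a hedged sketch that assumes you are running with credentials obtained through the SecretsManager-Profile permission set and that your principal carries a Department=Serverless session tag; the secret names are placeholders.

import boto3

sm = boto3.client("secretsmanager")

# This call is expected to fail with AccessDeniedException, because the new secret
# has no Department tag, so the ABAC condition in the permission set does not match.
# sm.create_secret(Name="untagged-secret", SecretString="demo")

# This call is expected to succeed, because the resource tag matches the
# Department tag on the principal.
sm.create_secret(
    Name="serverless-team-secret",
    SecretString="demo",
    Tags=[{"Key": "Department", "Value": "Serverless"}],
)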

Summary

In this post, I’ve talked about the benefits organizations can gain from embracing an ABAC strategy, and walked through how to turn on ABAC attributes in Okta and AWS SSO. I’ve also shown how you can create ABAC-driven permission sets to simplify your permission set management. For more information on AWS services that support ABAC—in other words, authorization based on tags—see our updated AWS services that work with IAM page.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Louay Shaat

Louay is a Security Solutions Architect with AWS. He spends his days working with customers, from startups to the largest of enterprises helping them build cool new capabilities and accelerating their cloud journey. He has a strong focus on security and automation helping customers improve their security, risk, and compliance in the cloud.

How to monitor and track failed logins for your AWS Managed Microsoft AD

Post Syndicated from Tekena Orugbani original https://aws.amazon.com/blogs/security/how-to-monitor-and-track-failed-logins-for-your-aws-managed-microsoft-ad/

AWS Directory Service for Microsoft Active Directory provides customers with the ability to review security logs on their AWS Managed Microsoft AD domain controllers by either using a domain management Amazon Elastic Compute Cloud (Amazon EC2) instance or by forwarding domain controller security event logs to Amazon CloudWatch Logs.

You can further improve visibility by monitoring Windows login activities on your AWS Managed Microsoft AD domain-joined EC2 instances, and in this blog post, I show you how. Monitoring and tracking Windows security events on these instances can reveal unexpected activities so that you can take proactive remedial action.

For example, every time there is an unsuccessful attempt to log in to a domain-joined EC2 instance or on-premises server by using an AWS Managed Microsoft AD user or a local account, an “Audit Failure” Windows security event with ID 4625 is recorded on the EC2 instance itself. The event data includes details of the account name, workstation name, and source network address. Unsuccessful attempts to log in to non–domain-joined EC2 instances and servers are handled the same way. You can track and monitor these events on an ongoing basis across your fleet of Windows EC2 instances by using the solution described here.

Solution overview

Figure 1 shows the workflow for the solution.
 

Figure 1: Solution architecture

The workflow steps are as follows:

  1. An Amazon CloudWatch agent that is running on the EC2 instances sends the Windows security event logs to Amazon CloudWatch.
  2. CloudWatch filters the logs based on the filter you specify. When the configured threshold is met, CloudWatch posts an alert to an SNS topic.
  3. Amazon Simple Notification Service (Amazon SNS) invokes an AWS Lambda function.
  4. The Lambda function scans through the events and determines which EC2 instance(s) generated the security events at a frequency that satisfies the configured threshold. It discards any other instances listed in the events that don’t meet the specified criteria. The function sends an email to the configured email address with a high-level description of the event logs and the instance(s) that generated them.
  5. Amazon Simple Email Service (Amazon SES) delivers the emails to the specified mailbox.

Note: Although this example uses email notification via Amazon SES to monitor failed logins, there are opportunities to extend the solution. For example, you can integrate with a Security Information and Event Management (SIEM) tool that may potentially be integrated with a ticketing service and/or some automation or incident response process when a set threshold for failed logins is breached.

Prerequisites

Before you deploy the solution, you must complete the following steps:

  1. Create AWS Identity and Access Management (IAM) roles for use with the CloudWatch agent
  2. Sign up for Amazon SES
  3. Verify the sender and recipient email addresses that you’ll use to send and receive email notifications

Deploy the solution

The solution I present here involves four main steps:

  1. Install and configure the CloudWatch agent for your EC2 instances.
  2. Create a metric filter in CloudWatch.
  3. Create a CloudWatch alarm based on the metric filter and add SNS notification.
  4. Create a Lambda function and subscribe the function to the SNS topic.

Step 1: Install and configure the CloudWatch agent for all your EC2 instances

The first step is to create an AWS Systems Manager parameter to contain the JSON configuration for the CloudWatch agent that runs on the EC2 instances. You’ll then use Systems Manager Run Command to install the CloudWatch agent on the instances and to apply the configuration in the Parameter Store to the CloudWatch agent.

To install and configure the CloudWatch agent

  1. Open the AWS Systems Manager console and in the navigation pane, choose Parameter Store to create a new Systems Manager parameter.
  2. Give your parameter a name. In my example, I named my parameter AmazonCloudWatch-Windows.
  3. For Tier, choose Standard. For Type, choose String. For Data type, choose Text.
  4. For the value of the parameter, enter the following JSON configuration and choose Create Parameter.

    Note: This JSON configuration creates a log group in CloudWatch with the name /aws/SecurityAuditLogs. If you would prefer to use another log group name, you can modify the JSON configuration. Also, if you already have a Systems Manager parameter named AmazonCloudWatch-Windows, you can use any other name of your choice.

    { "logs": { "logs_collected": { "windows_events": { "collect_list": [ { "event_format": "xml", "event_levels": [ "VERBOSE", "INFORMATION", "WARNING", "ERROR", "CRITICAL" ], "event_name": "Security", "log_group_name": "/aws/SecurityAuditLogs", "log_stream_name": "{instance_id}" } ] } } }, "metrics": { "metrics_collected": { "statsd": { "metrics_aggregation_interval": 60, "metrics_collection_interval": 10, "service_address": ":8125" } } } }
    

    The Parameter details page should look similar to the following.
     

    Figure 2: Create the System Manager parameter for the CloudWatch agent

  5. Next, you’ll use Run Command to install and configure the CloudWatch agent. In the navigation pane, choose Run Command.
  6. On the Run a command page, in the search box, enter Document name prefix: Equals: AWS-ConfigureAWSPackage. Press Enter and select the document that appears.
  7. Under Command parameters, for Name, enter AmazonCloudWatchAgent.
     
    Figure 3: Install the CloudWatch agent on the instances

  8. Under Targets, specify your EC2 instances based on their tags, or choose them manually, and then choose Run.
  9. To configure the CloudWatch agent, choose Run Command again. On the Run a command screen, enter Document name prefix: Equals: AmazonCloudWatch-ManageAgent. Press Enter and select the document that appears.
  10. Under Command parameters, for Optional Configuration Location, enter the name of the Systems Manager parameter you created earlier. In my example, I used the name AmazonCloudWatch-Windows. Keep the defaults for the other settings.
     
    Figure 4: Configure the CloudWatch agent on the instances

  11. Under Targets, specify your EC2 instances based on their tags, or choose them manually, and then choose Run.
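
If you prefer to script this step, the following is a minimal sketch that issues the same two Run Command invocations by using the AWS SDK for JavaScript. The Environment=Production tag used for targeting is an assumption; replace it with however you identify your instances. The parameter name should match the Systems Manager parameter that you created (AmazonCloudWatch-Windows in my example).

var aws = require('aws-sdk');
var ssm = new aws.SSM();

// Assumption: instances are identified by an Environment=Production tag
var targets = [{ Key: 'tag:Environment', Values: ['Production'] }];

// Install the CloudWatch agent package (equivalent to running AWS-ConfigureAWSPackage)
ssm.sendCommand({
    DocumentName: 'AWS-ConfigureAWSPackage',
    Targets: targets,
    Parameters: {
        action: ['Install'],
        name: ['AmazonCloudWatchAgent']
    }
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Install command ID:', data.Command.CommandId);
});

// Apply the configuration stored in the AmazonCloudWatch-Windows parameter
// (equivalent to running AmazonCloudWatch-ManageAgent)
ssm.sendCommand({
    DocumentName: 'AmazonCloudWatch-ManageAgent',
    Targets: targets,
    Parameters: {
        action: ['configure'],
        mode: ['ec2'],
        optionalConfigurationSource: ['ssm'],
        optionalConfigurationLocation: ['AmazonCloudWatch-Windows'],
        optionalRestart: ['yes']
    }
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Configure command ID:', data.Command.CommandId);
});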

Step 2: Create a metric filter in CloudWatch

After the completion of the tasks in Step 1, your EC2 instances should now be sending logs to a log group in Amazon CloudWatch called /aws/SecurityAuditLogs. The log group should have log streams named after the EC2 instances that are sending the logs to CloudWatch. The next step is to create a metric filter to filter the noise from the logs.

To create a metric filter

  1. Open the CloudWatch console and in the left navigation menu, choose Log Groups.
  2. Select the check box next to the /aws/SecurityAuditLogs log group, choose Actions, and then choose Create metric filter.
  3. On the Define pattern page, enter Audit Failure, keep the defaults for the other settings, and then choose Next.
  4. Enter values for Filter name, Metric namespace, Metric name, and Metric value (in my example, I named the filter WindowsSecurityAuditFailures), and then choose Next to create the metric filter.

 

Figure 5: Create a CloudWatch metric filter
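
As an alternative to the preceding console steps, here is a sketch that creates the same metric filter programmatically with the AWS SDK for JavaScript. The filter name, metric name, and metric namespace are example values (assumptions); the two-word pattern is quoted so that it is treated as a single term.

var aws = require('aws-sdk');
var cwl = new aws.CloudWatchLogs();

// Count every "Audit Failure" event that arrives in the security log group
cwl.putMetricFilter({
    logGroupName: '/aws/SecurityAuditLogs',
    filterName: 'WindowsSecurityAuditFailures',
    filterPattern: '"Audit Failure"',
    metricTransformations: [{
        metricName: 'WindowsSecurityAuditFailureCount',
        metricNamespace: 'WindowsSecurityAuditLogs',
        metricValue: '1'
    }]
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Metric filter created');
});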

Step 3: Create a CloudWatch alarm based on the metric filter and add SNS notification

In this step, you set a threshold for how many “Audit Failure” events you want to allow within a period of time before triggering an alarm.

To create the CloudWatch alarm and add SNS notification

  1. Open the Amazon Simple Notification Service console and in the left navigation menu, choose Topics.
  2. Choose Create topic, and then choose Standard.
  3. Provide a name for your topic, and then choose Create topic. In my example, I named the topic WindowsSecurityLogsAlarmNotifications.
  4. Open the CloudWatch console, choose Log groups, and select the /aws/SecurityAuditLogs log group.
  5. Choose the Metric filters tab, select the check box next to the WindowsSecurityAuditFailures filter you just created, and choose Create alarm.
  6. On the Specify metric and conditions page, set the parameters as follows:
    1. For Statistic, choose Sample count.
    2. For Period, choose 5 minutes.
    3. For Threshold type, choose Static.
    4. For Define the alarm condition, choose Greater > threshold.
    5. For Define the threshold value, specify the threshold number of failed login attempts that will cause a notification to be sent.

      Note: In my example, I’ve specified to be notified after five failed login attempts. You should determine the appropriate threshold to use, based on your organization’s security policies.

     

    Figure 6: Create a CloudWatch alarm

  7. Choose Next. On the Configure actions page, choose In alarm, choose Select an existing SNS topic, select the SNS topic you created earlier in this procedure, and then choose Next.
  8. Specify a name for the alarm, choose Next, and then choose Create alarm.
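
For completeness, here is a sketch of the equivalent alarm created with the AWS SDK for JavaScript. The alarm name, the metric name and namespace (which must match your metric filter), the threshold of five, and the placeholder SNS topic ARN are example values from my walkthrough; adjust them to your environment.

var aws = require('aws-sdk');
var cloudwatch = new aws.CloudWatch();

// Alarm when more than five "Audit Failure" events are counted within a 5-minute period
cloudwatch.putMetricAlarm({
    AlarmName: 'WindowsSecurityAuditFailureAlarm',
    Namespace: 'WindowsSecurityAuditLogs',
    MetricName: 'WindowsSecurityAuditFailureCount',
    Statistic: 'SampleCount',
    Period: 300,
    EvaluationPeriods: 1,
    Threshold: 5,
    ComparisonOperator: 'GreaterThanThreshold',
    TreatMissingData: 'notBreaching',
    AlarmActions: ['<arn-of-your-SNS-topic>']
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Alarm created');
});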

Step 4: Create a Lambda function and subscribe the function to the SNS topic

CloudWatch alarm messages are predefined, can’t be modified, and don’t provide details based on CloudWatch streams. Additionally, a CloudWatch alarm will trigger when a combination of failed login attempts on two or more instances meets the threshold. For instance, in my example, when there are three failed attempts on one instance and two failed attempts on a second instance all within a 5-minute period, a CloudWatch alarm will be triggered.

The purpose of the Lambda function that you’ll create in this step is to validate whether the triggered alarms meet the specified threshold on a per-instance basis before the function sends an email notification to the designated email address. When a CloudWatch alarm is triggered, the function reads through the CloudWatch logs and filters the logs based on CloudWatch log streams that meet the specified threshold for the alarm. If no individual CloudWatch log stream (that is, no individual instance or server) meets the threshold, the function won’t send a notification. The function only sends a notification if it determines that one or more instances have each met the specified threshold. The function also provides more information about the failed login attempts when it does send you an email.

To create the Lambda function and subscribe it to the SNS topic

  1. Open the AWS Lambda console and choose Create function.
  2. Choose Author from scratch, and provide a name for your function. Under Runtime, select Node.js 14.x, and then choose Create function.
  3. Double-click index.js, replace the code with the following code, and then choose Deploy.
    var aws = require('aws-sdk');
    var cwl = new aws.CloudWatchLogs();
    var ses = new aws.SES();
    let alarmThreshold = process.env.ALARM_THRESHOLD;
    
    exports.handler = function(event, context) {
        var message = JSON.parse(event.Records[0].Sns.Message);
        var alarmName = message.AlarmName;
        var oldState = message.OldStateValue;
        var newState = message.NewStateValue;
        var reason = message.NewStateReason;
        var requestParams = {
            metricName: message.Trigger.MetricName,
            metricNamespace: message.Trigger.Namespace
        };
        cwl.describeMetricFilters(requestParams, function(err, data) {
            if(err) console.error('Error is:', err);
            else {
                console.log('Metric Filter data is:', data);
        	    getInstanceIdsAndSendEmail(message, data);
            }
        });
    };
    
    function getInstanceIdsAndSendEmail(message, metricFilterData) {
        var timestamp = Date.parse(message.StateChangeTime);
        var offset = message.Trigger.Period * message.Trigger.EvaluationPeriods * 1000;
        var metricFilter = metricFilterData.metricFilters[0];
        var dictInstances = {};
        var arrayInstances = [];
        var instancesFinalList = [];
        var key;
        var val;
        // Getting the Instance Ids
        var paramsForInstanceId = {
            'logGroupName' : metricFilter.logGroupName,
            'filterPattern' : metricFilter.filterPattern ? metricFilter.filterPattern : "",
             'startTime' : timestamp - offset,
             'endTime' : timestamp
        };
        cwl.filterLogEvents(paramsForInstanceId, function (err, data){
            if (err) {
                console.error('Filtering failure:', err);
            } else {
                var events = data.events;
                for (var i in events) {
                    var InstanceId = JSON.stringify(events[i]['logStreamName']);
                    arrayInstances.push(InstanceId);
                }
                console.log('Array Instance is:', arrayInstances);
                for (var i = 0; i < arrayInstances.length; i++) {
                    var instId = arrayInstances[i];
                    dictInstances[instId] = dictInstances[instId] ? dictInstances[instId] + 1 : 1;
                }
                console.log('Instance(s) and number of audit failure occurrences:', dictInstances);
                for([key, val] of Object.entries(dictInstances)) {
                    if (val > alarmThreshold) {
                        instancesFinalList.push(key.replace(/['"]+/g, ''));
                    }
                }
                console.log('Instance(s) with failure audit that exceed the threshold:', instancesFinalList);
        	    getLogsAndSendEmail(message, metricFilterData, instancesFinalList);
            }
        });
    }
    
    function getLogsAndSendEmail(message, metricFilterData, logStreamNames_Instance) {
        var timestamp = Date.parse(message.StateChangeTime);
        var offset = message.Trigger.Period * message.Trigger.EvaluationPeriods * 1000;
        var metricFilter = metricFilterData.metricFilters[0];
    
        // Send Email to the Instances
        var paramsForEmail = {
            'logGroupName' : metricFilter.logGroupName,
            'filterPattern' : metricFilter.filterPattern ? metricFilter.filterPattern : "",
             'startTime' : timestamp - offset,
             'endTime' : timestamp,
             'logStreamNames' : logStreamNames_Instance
        };
        cwl.filterLogEvents(paramsForEmail, function (err, data){
            if (err) {
                console.error('Filtering failure:', err);
            } else {
                console.log("===SENDING EMAIL===");
    			var email = ses.sendEmail(generateEmailContent(data, message), function(err, data){
                    if(err) console.error(err);
                    else {
                        console.log("===EMAIL SENT===");
                        console.log(data);
                    }
                });
            }
        });
    }
    
    function generateEmailContent(data, message) {
        var events = data.events;
        let senderEmail = process.env.SENDER_EMAIL;
        let recipientEmail = process.env.RECIPIENT_EMAIL.split(",");
        console.log('Recipient is: ', recipientEmail);
        console.log('Events are:', events);
        var style = '<style> pre {color: red;} </style>';
        var logData = '<br/>Logs:<br/>' + style;
        for (var i in events) {
            logData += '<pre>Instance:' + JSON.stringify(events[i]['logStreamName'])  + '</pre>';
            logData += '<pre>Message:' + JSON.stringify(events[i]['message']) + '</pre><br/>';
        }
        var date = new Date(message.StateChangeTime);
        var text = 'Alarm Name: ' + '<b>' + message.AlarmName + '</b><br/>' + 
                   'Message: ' + 'There has been an unusually high number of Windows Security Audit Failure events for the instance(s) with details below. Please review the event logs <br/>' +
                   'Account ID: ' + message.AWSAccountId + '<br/>'+
                   'Region: ' + message.Region + '<br/>'+
                   'Alarm Time: ' + date.toString() + '<br/>'+
                   logData;
        var subject = 'Alarm Triggered - ' + message.AlarmName;
        var emailContent = {
            Destination: {
                ToAddresses: recipientEmail
            },
    
            Message: {
                Body: {
                    Html: {
                        Data: text
                    }
                },
                Subject: {
                    Data: subject
                }
            },
            Source: senderEmail
        };
        return emailContent;
    }
    

  4. Choose Add trigger, and in the drop-down list, choose SNS.
  5. Under SNS topic, select the SNS topic you created in Step 3, and then choose Add.
     
    Figure 7: Create the AWS Lambda function

  6. Choose the Configuration tab, and then choose Environment variables. Choose Edit to add the environment variables for ALARM_THRESHOLD, RECIPIENT_EMAIL, and SENDER_EMAIL, and then choose Save.
     
    Figure 8: The Lambda environment variables

Note: The variables’ keys must be set exactly as ALARM_THRESHOLD, RECIPIENT_EMAIL, and SENDER_EMAIL, because otherwise the code will fail. For the recipient, you can specify a single email or multiple email addresses that are separated by commas, as shown in Figure 8, provided that the emails are verified as specified in the Prerequisites section.

Next, create an IAM policy, which you’ll attach to a role that will be assumed by the Lambda function. This policy provides permissions to perform the DescribeMetricFilters, FilterLogEvents, and SendEmail API calls that are necessary for the function to work. It also provides permissions to create a log group and log stream in CloudWatch for the Lambda function, so that you can review the logs if the Lambda function fails to run properly.

To create the IAM policy

  1. Sign in to the IAM console, and in the navigation bar, choose Policies.
  2. In the content pane, choose Create policy, and then choose JSON.
  3. Replace the content with the following script. Make sure to replace the placeholders with the ARN of your Lambda function, the ARNs of the CloudWatch log groups, and the ARN of the verified Amazon SES email address that you'll use as the sender.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ses:SendEmail",
                "Resource": "<arn-of-verified-ses-email-sender>"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:DescribeMetricFilters"
                ],
                "Resource": "<arn-for-CloudWatch-log-groups>"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:FilterLogEvents"
                ],
                "Resource": "<arn-of-CloudWatch-log-group-created-in-step-1>"
    
            },
            {
                "Effect": "Allow",
                "Action": "logs:CreateLogGroup",
                "Resource": "<arn-for-CloudWatch-log-groups>"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": [
                    "<arn-of-lambda-function:*>"
                ]
            }
        ]
    }
    

    Here is how it appears in my example. Note that FilterSendSecurityEvents is the name of my Lambda function, and /aws/SecurityAuditLogs is the name of the log group I created in Step 1.
     

    Figure 9: Example policy for the IAM role to be attached to the Lambda function

  4. Choose Review policy, specify a name and a description for the policy, and then choose Create policy.

Next, create an IAM role and attach this policy.

To create the IAM role and attach the policy

  1. In the IAM console navigation bar, choose Roles, and then choose Create role.
  2. Under Choose the service that will use this role, choose Lambda, and then choose Next: Permissions.
  3. On the next page, select the policy you just created, and then choose Next: Tags. Add an optional tag, and then choose Next: Review.
  4. Specify a name and description for the role, and then choose Create role.
  5. To attach this role to the Lambda function, go to the AWS Lambda console. Navigate to the Lambda function, and choose the Configuration tab.
  6. Choose Permissions, and then under Execution role, choose Edit.
  7. On the Edit basic settings page, under Existing role, select the role you just created, and then choose Save.
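
For reference, choosing Lambda as the trusted service in step 2 results in a role trust policy like the following, which is what allows the Lambda service to assume the role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}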

And that’s it! You will now be notified whenever there are “Audit Failure” events that reach the threshold you set on a per-instance basis for your AWS Managed Microsoft AD domain-joined instances. If you installed and configured the CloudWatch agent on non–domain-joined instances in Step 1, then you’ll also get notifications for “Audit Failure” events that are generated by failed login attempts that use local accounts.

Conclusion

In this post, I showed you how you can proactively track and monitor Windows security audit failures across your AWS Managed Microsoft AD domain-joined EC2 instances. This helps provide greater visibility into Windows login activities for administrators, so that they can take action to maintain the security of their server fleet. This solution can also be extended to potentially trigger an automation workflow or incident response process in the event of unexpected events.

Although this blog post has specifically targeted AWS Managed Microsoft AD domain-joined instances, the procedure also applies to standalone EC2 instances or on-premises servers that are configured to send logs to CloudWatch.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tekena Orugbani

Tekena is a Cloud Support Engineer at the AWS Cape Town office. He has many years of experience working with Windows Systems, virtualization/cloud technologies, and directory services. When he’s not helping customers make the most of their cloud investments, he enjoys hanging out with his family and watching Premier League football (soccer).

AWS achieves Spain’s ENS High certification across 149 services

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-certification-across-149-services/

Gaining and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). We continually add more services to the scope of our ENS certification. This helps assure public sector organizations in Spain that want to build secure applications and services on AWS that the expected ENS security standards are being met.

ENS certification establishes security standards that apply to all government agencies and public organizations in Spain, and to service providers that the public services are dependent on. Spain’s National Security Framework is regulated under Royal Decree 3/2010 and was developed through close collaboration between Entidad Nacional de Acreditación (ENAC), the Ministry of Finance and Public Administration, and the National Cryptologic Centre (CCN), as well as other administrative bodies.

We’re excited to announce the addition of 44 new services to the scope of our Spain Esquema Nacional de Seguridad (ENS) High certification, for a total of 149 services. The certification covers all AWS Regions. Some of the new security services included in ENS High scope are:

  • Amazon Macie is a data security and data privacy service that uses machine learning and pattern matching to help you discover, monitor, and protect your sensitive data in AWS.
  • AWS Control Tower is a service you can use to set up and govern a new, secure, multi-account AWS environment based on best practices established through AWS’s experience working with thousands of enterprises as they move to the cloud.
  • Amazon Fraud Detector is a fully managed machine learning (ML) fraud detection solution that provides everything needed to build, deploy, and manage fraud detection models.
  • AWS Network Firewall is a managed service that makes it easier to deploy essential network protections for all your Amazon Virtual Private Clouds (Amazon VPC).

AWS’s achievement of the ENS High certification is verified by BDO España, which conducted an independent audit and attested that AWS meets the required confidentiality, integrity, and availability standards.

For more information about ENS High, see the AWS Compliance page Esquema Nacional de Seguridad High. To view which services are covered, see the ENS High tab on the AWS Services in Scope by Compliance Program page. You can download the Esquema Nacional de Seguridad (ENS) Certificate from AWS Artifact in the AWS Console or from the Compliance page Esquema Nacional de Seguridad High.

As always, we’re committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. Please reach out to your AWS account team or [email protected] if you have questions about the ENS program.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Niyaz Noor

Niyaz is a Security Audit Program Manager at AWS. Niyaz leads multiple security certification programs across the Asia Pacific, Japan, and Europe Regions. During his professional career, he has helped multiple cloud service providers obtain global and regional security certifications. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

How to integrate third-party IdP using developer authenticated identities

Post Syndicated from Andrew Lee original https://aws.amazon.com/blogs/security/how-to-integrate-third-party-idp-using-developer-authenticated-identities/

Amazon Cognito identity pools enable you to create and manage unique identifiers for your users and provide temporary, limited-privilege credentials to your application to access AWS resources. Currently, there are several out-of-the-box external identity providers (IdPs) that integrate with Amazon Cognito identity pools, including Facebook, Google, and Apple. If your application's primary users use another social media network, such as Snapchat, and you would like to make it easier for them to authenticate with your application, you need to use developer authenticated identities and interface with that third-party IdP. This blog post describes what is needed from the third-party IdP, how to build a scalable backend for authentication, and how to access AWS services from the client.

As an example, this post will use Snapchat’s Login Kit to integrate with Amazon Cognito. The overall authentication flow for the integration is shown in Figure 1.

Figure 1: Overall authentication flow of integration

Prerequisites

The following are the prerequisites for integrating third-party IdP using developer authenticated identities.

  1. The client SDK to authenticate with third-party IdP. This will handle client authentication and access token retrieval. In the example in this post, we use Snapchat’s Login Kit.
  2. A method to authenticate access tokens that are retrieved from the third-party IdP. For this blog post, I use an endpoint provided by Snapchat that retrieves user data when you pass in an access token. A successful query of user data indicates that the access token is valid.
  3. Developer authenticated identities (identity pool) configured in Amazon Cognito. You will need to note the identity pool ID and the developer provider name you specify.

Client SDK

Follow the third-party client SDK instructions to implement authentication in your application. Snapchat's Login Kit provides an SDK to mount a login button in your app and to authenticate against your Snapchat account credentials. After a user clicks the login button, they are redirected to Snapchat to log in. After successfully logging in, they are redirected back to your application with an access token. The handleResponseCallback is where you can implement an API call to your developer backend, passing the access token so that you can retrieve credentials from Amazon Cognito to access AWS services. The following code example mounts a login button in your application to allow the user to authenticate with Snapchat and retrieve an access token.

var loginButtonIconId = "<HTML div id>";
// Mount Login Button
snap.loginkit.mountButton(loginButtonIconId, {
    clientId: "<Snapchat Client Id>",
    redirectURI: "<Developer backend url>",
    scopeList: [
        "user.display_name",
        "user.bitmoji.avatar",
        "user.external_id",
    ],
    handleResponseCallback: function (token) {
        // Implement an API call to your developer backend, passing the Snapchat access token
    }
});
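
As a sketch of what that callback might do (the backend URL and the payload shape are assumptions), the client could pass the access token to your developer backend like this:

handleResponseCallback: function (token) {
    // Hypothetical call to the developer backend (for example, an Amazon API Gateway endpoint)
    fetch("<Developer backend url>", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ access_token: token })
    })
        .then(function (response) { return response.json(); })
        .then(function (data) {
            // data is expected to contain the identity ID and OpenID Connect token
            // returned by the developer backend
            console.log("Cognito identity response:", data);
        })
        .catch(function (err) { console.error(err); });
}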

Developer backend

The developer backend is responsible for authenticating access tokens from the third-party IdP and exchanging them for an OpenID Connect token that can be used to access AWS services. For this example, you will use Amazon API Gateway with an AWS Lambda function that has the IAM permissions to call getOpenIdTokenForDeveloperIdentity.

The following is a code example to authenticate access tokens with Snapchat.

// axios is used to call the Snapchat Login Kit /v1/me endpoint
const axios = require('axios');

// body.access_token is the Snapchat access token that the client passed to this backend
let result = await axios({
    method: 'post',
    url: 'https://kit.snapchat.com/v1/me',
    headers: {'Content-Type': 'application/json',
              'Authorization': 'Bearer ' + body.access_token},
    data: {"query":"{me{displayName bitmoji{avatar} externalId}}"}
});

After successful authentication, you call getOpenIdTokenForDeveloperIdentity with the identity pool ID and a logins map. The logins map maps the developer provider name to an external ID from the IdP. An OpenID Connect token and the Amazon Cognito identifier (identity ID) are returned from the call and can be sent to the application. The identity ID and token can then be used to obtain credentials to access AWS services. The following is a code example that exchanges the verified Snapchat identity for an OpenID Connect token and identity ID.

// A CognitoIdentity client is used to exchange the verified Snapchat identity
// for an OpenID Connect token and identity ID
const aws = require('aws-sdk');
const cognitoidentity = new aws.CognitoIdentity();

if(result.status == 200) {
    // returnbody is sent back to the client as the API response body
    returnbody = JSON.stringify(await cognitoidentity.getOpenIdTokenForDeveloperIdentity({
        IdentityPoolId: '<Identity Pool Id>',
        Logins: {
            '<Developer provider name>': result.data.data.me.externalId,
        }
    }).promise());
}
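
The response from getOpenIdTokenForDeveloperIdentity contains an IdentityId and a Token. As a sketch of how the client application might then exchange that token for temporary AWS credentials (response below is a placeholder for whatever your developer backend returns):

var aws = require('aws-sdk'); // or the browser build of the AWS SDK for JavaScript
var cognitoidentity = new aws.CognitoIdentity({ region: '<your-region>' });

cognitoidentity.getCredentialsForIdentity({
    IdentityId: response.IdentityId,
    Logins: {
        // For developer authenticated identities, the OpenID Connect token is passed
        // under the cognito-identity.amazonaws.com key
        'cognito-identity.amazonaws.com': response.Token
    }
}, function (err, data) {
    if (err) console.error(err);
    else {
        // data.Credentials contains AccessKeyId, SecretKey, SessionToken, and Expiration
        console.log('Temporary AWS credentials:', data.Credentials);
    }
});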

Considerations

The following are considerations about Amazon Cognito identity pools that you should keep in mind when building your solution.

  1. The identity ID returned by getOpenIdTokenForDeveloperIdentity is mapped to the external ID provided in the logins map. This mapping is stored in Amazon Cognito. You can then use the identity ID to identify who is calling AWS services, which is especially useful for auditing purposes.
  2. This solution is suitable for multi-regional deployments. All that is required is that you copy the identity pool to another AWS Region. Please note that the identity ID will be different in each Region, but that should not affect functionality.

Conclusion

By using developer authenticated identities, you can integrate your application with Amazon Cognito and a third-party IdP with the proper prerequisites. For more examples of using developer authenticated identities, see Developer Authenticated Identities (Identity Pools) in the Amazon Cognito Developer Guide. If you have feedback about this post, submit comments in the Comments section below or start a new thread on one of our forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Andrew Lee

Andrew is a Solutions Architect at AWS. He is passionate about projects that drive collaboration and partnerships between Amazon and Snapchat. A builder at heart, he believes that only through building something can a vision be truly realized.

AWS Verified episode 6: A conversation with Reeny Sondhi of Autodesk

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-verified-episode-6-a-conversation-with-reeny-sondhi-of-autodesk/

I’m happy to share the latest episode of AWS Verified, where we bring you global conversations with leaders about issues impacting cybersecurity, privacy, and the cloud. We take this opportunity to meet with leaders from various backgrounds in security, technology, and leadership.

For our latest episode of Verified, I had the opportunity to meet virtually with Reeny Sondhi, Vice President and Chief Security Officer of Autodesk. In her role, Reeny drives security-related strategy and decisions across the company. She leads the teams responsible for the security of Autodesk’s infrastructure, cloud, products, and services, as well as the teams dedicated to security governance, risk & compliance, and security incident response.

Reeny and I touched on a variety of subjects, from her career journey, to her current stewardship of Autodesk’s security strategy based on principles of trust. Reeny started her career in product management, having conceptualized, created, and brought multiple software and hardware products to market. “My passion as a product manager was to understand customer problems and come up with either innovative products or features to help solve them. I tell my team I entered the world of security by accident from product management, but staying in this profession has been my choice. I’ve been able to take the same passion I had when I was a product manager for solving real world customer problems forward as a security leader. Even today, sitting down with my customers, understanding what their problems are, and then building a security program that directly solves these problems, is core to how I operate.”

Autodesk has customers across a wide variety of industries, so Reeny and her team work to align the security program with customer experience and expectations. Reeny has also worked to drive security awareness across Autodesk, empowering employees throughout the organization to act as security owners. “One lesson is consistency in approach. And another key lesson that I’ve learned over the last few years is to demystify security as much as possible for all constituents in the organization. We have worked pretty hard to standardize security practices across the entire organization, which has helped us in scaling security throughout Autodesk.”

Reeny and Autodesk are setting a great example on how to innovate on behalf of their customers, securely. I encourage you to learn more about her perspective on this, and other aspects of how to manage and scale a modern security program, by watching the interview.

Watch my interview with Reeny, and visit the Verified webpage for previous episodes, including conversations with security leaders at Netflix, Comcast, and Vodafone. If you have any suggestions for topics you’d like to see featured in future episodes, please leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Join us in person for AWS re:Inforce 2021

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/join-us-in-person-for-aws-reinforce-2021/

I’d like to personally invite you to attend our security conference, AWS re:Inforce 2021 in Houston, TX on August 24–25. This event will offer interactive educational content to address your security, compliance, privacy, and identity management needs.

As the Chief Information Security Officer of Amazon Web Services (AWS), my primary job is to help our customers navigate their security journey while keeping the AWS environment safe. AWS re:Inforce will help you understand how you can change to accelerate the pace of innovation in your business while staying secure. With recent headlines around ransomware, misconfigurations, and unintended privacy consequences, this is your chance to learn the tactical and strategic lessons that will help keep your systems and tools protected.

AWS re:Inforce 2021 will kick off with my keynote on Tuesday, August 24. You’ll hear about the latest innovations in cloud security from AWS and learn what you can do to foster a culture of security in your business. Take a look at my re:Invent 2020 presentation, AWS Security: Where we’ve been, where we’re going, or this short overview of the top 10 areas security groups should focus on for examples of the type of content to expect.

For those who are just getting started on AWS and for our more tenured customers, AWS re:Inforce offers you an opportunity to learn how to prioritize your security posture and investments. Using the Security pillar of the AWS Well-Architected Framework, sessions will address how you can build practical and prescriptive measures to protect your data, systems, and assets.

Sessions are offered at all levels and for all backgrounds, from business to technical, and there are learning opportunities in over 100 sessions across five tracks: Data Protection & Privacy; Governance, Risk & Compliance; Identity & Access Management; Network & Infrastructure Security; and Threat Detection & Incident Response. In these sessions, you’ll connect with and learn from AWS experts, customers, and partners who share actionable insights that you can apply in your everyday work. AWS re:Inforce is interactive, with sessions like chalk talks and lecture-style breakout content available to suit your learning style and goals. Sessions will be available from the intermediate (200) through expert (400) levels, so you can grow your skills, no matter where you are in your career. Finally, there will be a leadership session for each track, where AWS leaders will share best practices and trends in each of these areas.

At re:Inforce, AWS developers and experts will cover the latest advancements in AWS security, compliance, privacy, and identity solutions—including actionable insights your business can use right now. Plus, you’ll learn from AWS customers and partners who are using AWS services in innovative ways to protect their data, achieve security at scale, and stay ahead of bad actors in this rapidly evolving security landscape.

We hope you can join us in Houston, and we want you to feel safe if you do. The health and safety of our customers, partners, and employees remains our top priority. If you want to learn more about health measures that are being taken at re:Inforce, visit our Health Measures page on the conference website. If you’re not yet comfortable attending in person, or if local travel restrictions prevent you from doing so, register to access a livestream of my keynote for free. Also, a selection of sessions will be recorded and available to watch after the event. Keep checking the AWS re:Inforce website for additional updates.

A full conference pass is $1,099. However, if you register today with the code “RFSALUwi70xfx” you’ll receive a $300 discount (while supplies last).

We’re excited to get back to re:Inforce; it is emblematic of our commitment to giving customers direct access to the latest security research and trends. We’ll continue to release additional details about the event on our website, and we look forward to seeing you in Houston!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.