Tag Archives: Security, Identity & Compliance

How to automate the import of third-party threat intelligence feeds into Amazon GuardDuty

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-automate-import-third-party-threat-intelligence-feeds-into-amazon-guardduty/

Amazon GuardDuty is an AWS threat detection service that helps protect your AWS accounts and workloads by continuously monitoring them for malicious and unauthorized behavior. You can enable Amazon GuardDuty through the AWS Management Console with one click. It analyzes billions of events across your AWS accounts and uses machine learning to detect anomalies in account and workload activity. Then it references integrated threat intelligence feeds to identify suspected attackers. Within an AWS region, GuardDuty processes data from AWS CloudTrail Logs, Amazon Virtual Private Cloud (VPC) Flow Logs, and Domain Name System (DNS) Logs. All log data is encrypted in transit. GuardDuty extracts various fields from the logs for profiling and anomaly detection and then discards the logs. GuardDuty’s threat intelligence findings are based on ingested threat feeds from AWS threat intelligence and from third-party vendors CrowdStrike and Proofpoint.

Beyond these built-in threat feeds, you have two ways to customize your protection. Customization is useful if you need to enforce industry-specific threat feeds, such as those for the financial services or healthcare space. The first option is to provide your own list of whitelisted IPs. The second is to generate findings based on third-party threat intelligence feeds that you own or have the rights to share and upload into GuardDuty. However, keeping a third-party threat list ingested into GuardDuty up to date requires several manual steps. You would need to:

  • Authorize administrator access
  • Download the list from a third-party provider
  • Upload the generated file to the service
  • Replace outdated threat feeds

In this blog post, we'll show you how to automate these steps when using a third-party feed. We'll use FireEye iSIGHT Threat Intelligence as an example of how to upload a feed you have licensed to GuardDuty, but this solution can also work with other threat intelligence feeds. If you deploy this solution with the default parameters, it builds the following environment:
 

Figure 1: Diagram of the solution environment

The following resources are used in this solution:

  • An Amazon CloudWatch Event that periodically invokes an AWS Lambda function. By default, CloudWatch will invoke the function every six days, but you can alter this if you’d like.
  • An AWS Systems Manager Parameter Store that securely stores the public and private keys that you provide. These keys are required to download the threat feeds.
  • An AWS Lambda function that consists of a script that programmatically imports a licensed FireEye iSIGHT Threat Intelligence feed into Amazon GuardDuty (a sketch of the underlying GuardDuty API calls follows this list).
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following:
    1. GuardDuty, to list, create, obtain, and update threat lists.
    2. CloudWatch Logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload threat lists to Amazon S3 and ingest them into GuardDuty.
  • An Amazon Simple Storage Service (S3) bucket to store your threat lists. After the solution is deployed, the bucket is retained unless you delete it manually.
  • Amazon GuardDuty, which needs to be enabled in the same AWS region in which you want to deploy the solution.

    Note: It’s a security best practice to enable GuardDuty in all regions.
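
To make the Lambda function's role concrete, here is a minimal sketch of the GuardDuty calls it relies on, written with boto3. The bucket, key, and threat list name are placeholders, and the FireEye download, error handling, and scheduling are omitted; the deployed solution's own code is available on GitHub (see the deployment steps below).

# Minimal sketch of the GuardDuty calls used to register or refresh a threat list.
# Assumptions (not from the original post): bucket, key, and list name are illustrative.
import boto3

guardduty = boto3.client("guardduty")

def upsert_threat_intel_set(bucket, key, name="third-party-feed"):
    """Create the threat intel set on first run, update it afterwards."""
    detector_id = guardduty.list_detectors()["DetectorIds"][0]
    location = f"https://s3.amazonaws.com/{bucket}/{key}"  # S3 URL of the uploaded feed file

    # Look for an existing threat intel set with the same name.
    existing = guardduty.list_threat_intel_sets(DetectorId=detector_id)["ThreatIntelSetIds"]
    for set_id in existing:
        details = guardduty.get_threat_intel_set(DetectorId=detector_id, ThreatIntelSetId=set_id)
        if details["Name"] == name:
            guardduty.update_threat_intel_set(
                DetectorId=detector_id,
                ThreatIntelSetId=set_id,
                Location=location,
                Activate=True,
            )
            return set_id

    # No existing set: create a new one from the uploaded feed file.
    response = guardduty.create_threat_intel_set(
        DetectorId=detector_id,
        Name=name,
        Format="TXT",
        Location=location,
        Activate=True,
    )
    return response["ThreatIntelSetId"]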

Deploy the solution

Once you’ve taken care of the prerequisites, follow these steps:

  1. Select the Launch Stack button to launch a CloudFormation stack in your account. It takes approximately 5 minutes for the CloudFormation stack to complete:
     
    Select this image to open a link that starts building the CloudFormation stack
  2. Note: If you’ve invited other accounts to enable GuardDuty and become associated with your AWS account (such that you can view and manage their GuardDuty findings on their behalf), please run this solution from the master account. Find more information on managing master and member GuardDuty accounts here. Running this solution from the master account ensures that GuardDuty reports findings from all the member accounts as well, using the imported threat list.

    The template will launch in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the region selector in the console navigation bar; this matters because GuardDuty is a region-specific service.

    The code is available on GitHub.

  3. On the Select Template page, select Next.
  4. On the Specify Details page, give your solution stack a name.
  5. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    Parameter | Value | Description
    Public Key | <Requires input> | FireEye iSIGHT Threat Intelligence public key.
    Private Key | <Requires input> | FireEye iSIGHT Threat Intelligence private key.
    Days Requested | 7 | The maximum age (in days) of the threats you want to collect (min 1, max 30).
    Frequency | 6 | The number of days between executions, when the solution downloads a new threat feed (min 1, max 29).
  6. Select Next.
  7. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like, and then select Next.
  8. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  9. To deploy the stack, select Create.

After approximately 5 minutes, the stack creation should be complete. You can verify this on the Events tab:
 

Figure 2: Check the status of the stack creation on the "Events" tab

The Lambda function that updates your GuardDuty threat lists is invoked right after you provision the solution. It's also set to run periodically to keep your environment updated. However, in scenarios that require faster updates to your threat intelligence lists, such as the discovery of a new zero-day vulnerability, you can manually run the Lambda function instead of waiting for the next scheduled update. To manually run the Lambda function, follow the steps described here to create and ingest the newly downloaded threat feeds into Amazon GuardDuty.
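
If you'd rather trigger the update from code than from the Lambda console, a hedged example using boto3 follows; the function name is a placeholder for the one created by your CloudFormation stack.

# Manually invoke the feed-update Lambda function instead of waiting for the
# scheduled CloudWatch Event. The function name below is hypothetical; use the
# name created by your CloudFormation stack (see the stack's Resources tab).
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="guardduty-threat-feed-updater",  # hypothetical name
    InvocationType="Event",  # asynchronous; use "RequestResponse" to wait for the result
)
print(response["StatusCode"])  # 202 for asynchronous invocations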

Summary

We’ve described how to deploy an automated solution that downloads the latest threat intelligence feeds you have licensed from a third-party provider such as FireEye. This solution provides a large amount of individual threat intelligence data for GuardDuty to process and report findings on. Furthermore, as newer threat feeds are published by FireEye (or the threat intelligence feed provider of your choice), they will be automatically ingested into GuardDuty.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

How to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hosts

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-suspicious-hosts/

When you’re implementing security measures across your AWS resources, you should use a holistic approach that incorporates controls across multiple areas. In the Cloud Adoption Framework (CAF) Security perspective whitepaper, we define these controls across four categories.

  • Directive controls. Establish the governance, risk, and compliance models the environment will operate within.
  • Preventive controls. Protect your workloads and mitigate threats and vulnerabilities.
  • Detective controls. Provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive controls. Drive remediation of potential deviations from your security baselines.

The use of security automation is also a key principle outlined in the whitepaper. It helps reduce operational overhead and create repeatable, predictable approaches to monitoring and responding to events. You can take advantage of AWS services to build powerful solutions for the automated detection and remediation of threats against your AWS environments. For example, you can configure Amazon CloudWatch Events to invoke a Lambda action in response to suspicious or unexpected behavior in your AWS environment detected by Amazon GuardDuty. You can configure automated flows that use both detective and responsive controls and might also feed into preventative controls to help mitigate the threat in the future. Depending on the type of source event, you can automatically invoke specific actions, such as modifying access controls, terminating instances, or revoking credentials.

In this blog post, we’ll show you how to use Amazon GuardDuty to automatically update the AWS Web Application Firewall Web Access Control Lists (WebACLs) and VPC Network Access Control Lists (NACLs) in response to GuardDuty findings. After GuardDuty detects a suspicious activity, the solution updates these resources to block communication from the suspicious host while you perform additional investigation and remediation. Once communication has been blocked, further occurrences of a finding are reduced, allowing security and operations teams to focus more on higher priority tasks.

Amazon GuardDuty is a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect your AWS resources, including your AWS accounts. Amazon CloudWatch Events delivers a near-real-time stream of system events that describe changes in AWS resources. Amazon GuardDuty sends notifications based on Amazon CloudWatch Events when any change in the findings takes place. In the context of GuardDuty, such changes include newly generated findings and all subsequent occurrences of these existing findings. Using rules that you can quickly set up, you can match CloudWatch events and route them to one or more target actions. This solution routes matched events to AWS Lambda, which then performs updates to AWS Web Application Firewall (WAF) and VPC NACLs. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports both managed rules as well as a powerful rule language for custom rules. A Network Access Control List (NACL) is an optional layer of security for your Amazon Virtual Private Cloud (VPC) that acts as a firewall for controlling traffic in and out of one or more subnets.

Solution overview

The solution assumes that Amazon GuardDuty is enabled in your AWS account. If it isn’t enabled, you can find more info about the free trial and pricing here, and you can follow the steps in the GuardDuty documentation to set up the service and start monitoring your account.

Figure 1 shows how the CloudFormation template creates the sample solution:

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A GuardDuty finding is raised with suspected malicious activity.
  2. A CloudWatch Events rule is configured to filter for specific GuardDuty finding types.
  3. A Lambda function is invoked by the CloudWatch Event and parses the GuardDuty finding.
  4. State data for blocked hosts is stored in an Amazon DynamoDB table. The Lambda function checks the state table for an existing host entry.
  5. The Lambda function creates a block entry in AWS WAF and in a VPC NACL.
  6. A notification email is sent via Amazon Simple Notification Service (SNS).

A second Lambda function runs on a 5-minute recurring schedule and removes entries that are past the configurable retention period from the WAF IPSets (lists that contain the blacklisted IPs or CIDRs), the VPC NACLs, and the DynamoDB table.
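
To illustrate what the first Lambda function does in step 5, here is a minimal boto3 sketch of the blocking actions using the classic WAF APIs. The IPSet ID, NACL ID, and rule number are placeholders; the real solution also records state in DynamoDB, sends the SNS notification, and handles the global (CloudFront) IPSet.

# Minimal sketch of the blocking actions performed when a finding is matched.
# Placeholders: ip_set_id, nacl_id, rule_number. Error handling omitted.
import boto3

waf = boto3.client("waf-regional")   # use boto3.client("waf") for the CloudFront IPSet
ec2 = boto3.client("ec2")

def block_host(ip_address, ip_set_id, nacl_id, rule_number):
    cidr = f"{ip_address}/32"

    # 1. Add the suspicious host to the WAF IPSet (classic WAF API).
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=ip_set_id,
        ChangeToken=token,
        Updates=[{"Action": "INSERT",
                  "IPSetDescriptor": {"Type": "IPV4", "Value": cidr}}],
    )

    # 2. Add a DENY entry to the VPC NACL ahead of the ALLOW rules.
    ec2.create_network_acl_entry(
        NetworkAclId=nacl_id,
        RuleNumber=rule_number,   # the solution rotates through rule numbers 71-80
        Protocol="-1",            # all protocols
        RuleAction="deny",
        Egress=False,             # ingress rule
        CidrBlock=cidr,
    )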

GuardDuty findings referenced in this solution

This solution’s CloudWatch Event Rule pattern is configured to match the following GuardDuty Finding types:

  1. UnauthorizedAccess:EC2/SSHBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to SSH services on Linux-based systems.
  2. UnauthorizedAccess:EC2/RDPBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to RDP services on Windows-based systems.
  3. Recon:EC2/PortProbeUnprotectedPort
    This finding informs you that a port on an EC2 instance in your AWS environment isn’t blocked by a security group, access control list (ACL), or an on-host firewall (for example, Linux IPChains), and known scanners on the internet are actively probing it.
  4. Trojan:EC2/BlackholeTraffic
    This finding informs you that an EC2 instance in your AWS environment might be compromised because it’s trying to communicate with an IP address of a black hole. Black holes refer to places in the network where incoming or outgoing traffic is silently discarded without informing the source that the data didn’t reach its intended recipient.
  5. Backdoor:EC2/XORDDOS
    This finding informs you that an EC2 instance in your AWS environment is attempting to communicate with an IP address that’s associated with XOR DDoS malware. XOR DDoS is Trojan malware that hijacks Linux systems.
  6. UnauthorizedAccess:EC2/TorIPCaller
    This finding informs you that an EC2 instance in your AWS environment is receiving inbound connections from a Tor exit node. Tor is software for enabling anonymous communication. It encrypts and randomly bounces communications through relays between a series of network nodes.
  7. Trojan:EC2/DropPoint
    This finding informs you that an EC2 instance in your AWS environment is trying to communicate with an IP address of a remote host that’s known to hold credentials and other stolen data captured by malware.

When one of these GuardDuty finding types is matched by the CloudWatch Event Rule, an entry is created in the target ACLs to deny the suspicious host, and then a notification is sent to an email address by this solution’s Lambda. Blocking traffic from the suspicious host helps to mitigate the threat while you perform additional investigation and remediation. For more information, see Remediating a Compromised EC2 Instance.
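
For reference, the event pattern that matches these finding types can be expressed as follows. This is a hedged boto3 sketch only; the rule name and Lambda target ARN are placeholders, because the CloudFormation template creates the real rule and target for you.

# Sketch of a CloudWatch Events rule that matches the listed GuardDuty finding types.
# The rule name and target ARN below are placeholders.
import json
import boto3

events = boto3.client("events")

finding_types = [
    "UnauthorizedAccess:EC2/SSHBruteForce",
    "UnauthorizedAccess:EC2/RDPBruteForce",
    "Recon:EC2/PortProbeUnprotectedPort",
    "Trojan:EC2/BlackholeTraffic",
    "Backdoor:EC2/XORDDOS",
    "UnauthorizedAccess:EC2/TorIPCaller",
    "Trojan:EC2/DropPoint",
]

events.put_rule(
    Name="GuardDutyToACL",  # placeholder name
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"type": finding_types},
    }),
)

events.put_targets(
    Rule="GuardDutyToACL",
    Targets=[{"Id": "GuardDutyToACLLambda",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:GuardDutytoACL"}],  # placeholder ARN
)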

Solution deployment

This sample solution includes 6 main steps:

  1. Deploy the CloudFormation template.
  2. Create and run a Lambda GuardDuty finding test event.
  3. Confirm the entry in the VPC Network ACL.
  4. Confirm the entry in the AWS WAF IPSets.
  5. Confirm the SNS notification subscription.
  6. Apply the WAF Web ACLs to resources.

Step 1: Deploy the CloudFormation template

For this next step, make sure you deploy the template within the AWS account and region where you want to monitor GuardDuty findings.

  1. Select this link to launch a CloudFormation stack in your account.

    Note: The stack will launch in the N. Virginia (us-east-1) region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS regions, first upload the solution’s Lambda deployment packages (zip files with code) to an S3 bucket in the selected region. Once you have uploaded the zip files to the target region, update the CloudFormation ArtifactsBucket and ArtifactsPrefix parameters referenced in step 3 below.

  2. In the CloudFormation console, select the Select Template form, and then select Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    Input parameter | Description
    AdminEmail | Email address to receive notifications. Must be a valid email address.
    Retention | How long to retain IP addresses in the blacklist (in minutes). Default is 12 hours.
    CloudFrontIPSetId | ID for an existing WAF IPSet on CloudFront. Enter the ID here if there’s an existing WAF IPSet on CloudFront you want to use. Leave set to the default value of False if you want to create a new WebACL and IPSet.
    ALBIPSetId | ID for an existing WAF IPSet on ALB. Enter the ID if there is an existing WAF IPSet on ALB. Leave set to False for creation of a new WebACL and IPSet.
    ArtifactsBucket | S3 bucket with artifact files (Lambda functions, templates, html files, etc.). Leave set to the default value for deployment into the N. Virginia region.
    ArtifactsPrefix | Path in the S3 bucket containing artifact files. Leave set to the default value for deployment into the N. Virginia region.

    Note: AWS WAF is not currently available in all regions. For more information about where it’s available, refer to this page.

    Figure 2 shows an example of values entered on this screen:

    Figure 2: CloudFormation parameters on the "Specify Details" page

  4. Enter values for all of the input parameters, and then select Next.
  5. On the Options page, accept the defaults, and then select Next.
  6. On the Review page, confirm the details, and then select Create.
  7. While the stack is being created, check the email inbox for the value you gave for the AdminEmail address parameter. Look for an email message with the subject “AWS Notification – Subscription Confirmation”. Select the link to confirm the subscription to the SNS topic. You should see a message similar to this:

    Figure 3: Subscription confirmation

Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE, the solution is implemented and is ready for testing.

Figure 4: The "Status" displays "CREATE_COMPLETE"

Step 2: Create and run a Lambda GuardDuty finding test event

Once the CloudFormation stack has completed deployment, you can test the functionality using a Lambda test event.

  1. In the console, select Services > VPC > Subnets and locate a subnet suitable for testing the solution. On the Summary tab, copy the Subnet ID to the clipboard or to a text editor.

    Figure 5: The "Subnet ID" value on the "Summary" tab

  2. In the console, select Services > CloudFormation > GuardDutytoACL stack. In the stack Outputs tab, look for the GuardDutytoACLLambda entry, similar to Figure 6 below:

    Figure 6: The "GuardDutytoACLLambda" entry on the "Outputs" tab

  3. Select the link and you’ll be redirected to the Lambda console, with the Lambda function already open, similar to Figure 7:

    Figure 7: The Lambda function open in the Lambda console

  4. In the top right, select the Select a test event… drop-down list, and then select Configure test events.

    Figure 8: Select "Configure test events" from the drop-down list

  5. To facilitate testing, a test event file has been provided. On the Configure test event page, provide a name for Event name, and then paste the provided test event JSON in the body of the event.
  6. Update the value of the subnetId key (line 34 of the test event) to the value of your Subnet ID from step 2.1, and then select Create.

    Figure 9: Update the value of the "subnetId" key

  7. Select Test to invoke the Lambda with the test event. You should see a message “Execution result: succeeded” similar to below:

    Figure 10: The "Test" button and the "succeeded" message

Step 3: Confirm the entry in the VPC Network ACL (NACL)

In this step, you’ll confirm the DENY entry was created in the NACL. This solution is configured to create up to 10 entries in an ACL ranging between rule numbers 71 and 80. Since NACL rules are processed in order, it’s important that the DENY rule is placed before the ALLOW rule.

  1. In the console, select Services > VPC > Subnets and locate the subnet you provided for the test event.
  2. Select the Network ACL tab and confirm the new entry generated from the test event.
    Figure 11: Check the entry from the test event on the "Network" tab

    Note that VPC NACL entries are created in the rule number range between 71 and 80. Older entries are aged out to create a “sliding window” of blocked hosts.

Step 4: Confirm the entry in the AWS WAF IPSets

In this step, you’ll verify that the entry was added to the CloudFront WAF IPSet and to the ALB WAF IPSet.

  1. In the console, select Services > WAF & Shield, and then select IP addresses.
  2. For Filter, select Global (CloudFront), and then select the IPSet named GD2ACL CloudFront IPSet for Blacklisted IP addresses.

    Figure 12: Filter the list and then select "GD2ACL CloudFront IPSet for Blacklisted IP addresses"

  3. Confirm the IP address that was added to the list in the IPSet:

    Figure 13: Confirm the IP address was added

  4. In the console, select Services > WAF & Shield, and then select IP addresses.
  5. For Filter, select US East (N. Virginia), or the region in which you deployed this solution, and then select the IPSet named GD2ACL ALB IPSet for blacklisted IP addresses.
  6. Confirm the IP address added to the ALB IPSet:

    Figure 14: Make sure the IP address was added

There might be specific host addresses that you want to prevent from being added to the blacklist. You can do this within GuardDuty by using a trusted IP list. Trusted IP lists consist of IP addresses that you have whitelisted for secure communication with your AWS infrastructure and applications. GuardDuty doesn’t generate findings for IP addresses on trusted IP lists. For additional information, see Working with Trusted IP Lists and Threat Lists.
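
If you prefer to confirm the IPSet entries programmatically rather than in the console, a short sketch using the classic WAF APIs might look like the following; the IPSet IDs are placeholders that you can find in the WAF console or the stack outputs.

# Confirm the blocked addresses in both IPSets with boto3. IPSet IDs are placeholders.
import boto3

# Global (CloudFront) IPSet uses the "waf" client; the ALB IPSet uses the
# "waf-regional" client in the region where you deployed the solution.
checks = [
    ("waf", "us-east-1", "cloudfront-ipset-id"),    # placeholder IPSet ID
    ("waf-regional", "us-east-1", "alb-ipset-id"),  # placeholder IPSet ID
]

for service, region, ip_set_id in checks:
    client = boto3.client(service, region_name=region)
    ip_set = client.get_ip_set(IPSetId=ip_set_id)["IPSet"]
    print(ip_set["Name"], [d["Value"] for d in ip_set["IPSetDescriptors"]])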

Step 5: Confirm the SNS notification subscription

In this step, you’ll view the SNS notification that was sent to the email address you set up.

    1. Review the email inbox for the value you provided for the AdminEmail parameter and look for a message with the subject line “AWS GD2ACL Alert.” The contents of the message from SNS should be similar to this:

      Figure 15: SNS message example

Step 6: Apply the WAF Web ACLs to resources

The final task is to associate the Web ACL with the CloudFront Distributions and Application Load Balancers that you want to automatically update with this solution. To learn how to do this, see Associating or Disassociating a Web ACL with a CloudFront Distribution or an Application Load Balancer.

You can also use AWS Firewall Manager to associate the Web ACLs. AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. With Firewall Manager, you set up your firewall rules just once. The service automatically applies your rules across your accounts and resources, even as you add new resources.

Summary

You’ve learned how to use Amazon GuardDuty to automatically update AWS Web Application Firewall (AWS WAF) and VPC Network Access Control Lists (ACLs) in response to GuardDuty findings. With just a few steps, you can use this sample solution to help mitigate threats by blocking communication with suspicious hosts. You can explore additional solutions that use other GuardDuty finding types and CloudWatch Events target actions. This solution’s code is available on GitHub. Feel free to play around with the code to add more GuardDuty findings to this solution and also to build bigger and better solutions!

If you have comments about this blog post, submit them in the Comments section below. If you have questions about using this solution, start a thread in the GuardDuty, WAF, or CloudWatch forums, or contact AWS Support.

Using AWS CloudHSM-backed certificates with Microsoft Internet Information Server

Post Syndicated from Jeff Levine original https://aws.amazon.com/blogs/security/using-aws-cloudhsm-backed-certificates-with-microsoft-internet-information-server/

SSL/TLS certificates are used to create encrypted sessions to endpoints such as web servers. If you want to get an SSL certificate, you usually start by creating a private key and a corresponding certificate signing request (CSR). You then send the CSR to a certificate authority (CA) and receive a certificate. When a user seeks to securely browse your website, the web server presents the certificate to the user’s browser. The user’s browser verifies that the certificate is associated with a trusted root certificate, and then uses the certificate to encrypt the session data. The web server then uses the private key to decrypt the user’s data. The security of this protocol relies on the private key remaining private.

A common problem with web servers, however, is that the private key and the certificate reside on the web server itself. If an attacker compromises the server and steals both the private key and certificate, the attacker could potentially use them to intercept the session or create an imposter server. If your business requires the use of hardware security modules for key storage, you can use AWS CloudHSM to generate and hold the private keys for web server certificates rather than placing them on the web server directly, which reduces your exposure. On Windows, AWS CloudHSM integrates with Microsoft Internet Information Server (IIS) through the Key Storage Provider extensions to the Microsoft Cryptography Next Generation API (KSP/CNG). In this blog post, I will show you how to leverage the CloudHSM KSP/CNG capabilities to secure the private key within the hardware security module (HSM) if you are using Microsoft Internet Information Server (IIS) to host your website.

Prerequisites and assumptions

You need the following for this walkthrough.

  • An AWS account and permissions to provision Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Route 53, and Amazon CloudHSM. I recommend using an AWS user account ID that has full administrative privileges.
  • A region that supports CloudHSM. I’m using the eu-west-1 (Ireland) region.
  • A domain to use for certificate generation. I’m using the domain example.com.
  • A trusted public certificate authority (CA).
  • A working knowledge of the Amazon VPC, Amazon EC2, and Amazon CloudHSM services.
  • Familiarity with the administration of Windows Server 2012 R2

Important: You will incur charges for the services used in this example. You can find the cost of each service on that service’s pricing page. You can also use the Simple Monthly Calculator to estimate the cost.

Architectural overview

I’ll go over a few diagrams before I start building. I’ll start with a diagram of the infrastructure.

Figure 1: CloudHSM infrastructure

This diagram shows a VPC in the eu-west-1 (Ireland) region. An Amazon EC2 instance running Windows Server 2012 R2 resides on a public subnet. This instance will run both the CloudHSM client software and IIS to host a website. The instance has an Elastic IP and can be accessed via the Internet Gateway. I’ll also have security groups that enable RDP, HTTP, and HTTPS access. The private subnet hosts the Elastic Network Interface (ENI) for the CloudHSM cluster, which has a single HSM.

And here’s the certification hierarchy:

Figure 2: Certification hierarchy

The Amazon EC2 instance will run Windows Server 2012 R2 (WS2012R2) and IIS. I’ll use the KSP/CNG capabilities of CloudHSM to request a CSR from the WS2012R2 instance. The KSP/CNG integration will reach out to CloudHSM, which will, in turn, create a private/public key pair. The private key will remain within CloudHSM while the public key will be used to create the CSR. I’ll then submit the CSR to a CA. The CA will sign the CSR with the CA’s private key and return a certificate. I’ll then accept the certificate, which makes it available to IIS. I’ll then secure the default IIS website with the certificate. Finally, I’ll browse to the website and show how it uses the CNG/KSP integration.

Now that you’ve seen what I’m building, we can get started!

Build the CloudHSM infrastructure

You’ll need to set up a CloudHSM environment. This environment will consist of a VPC, a CloudHSM cluster with a single HSM, and a Windows 2012 R2 EC2 instance that will run both the CloudHSM client and IIS. For each of the steps below, I have provided a link to the relevant documentation, as well as any additional instructions to build out the infrastructure to match my sample environment.

  1. Select a region that offers AWS CloudHSM. I’m using the eu-west-1 (Ireland) region.
  2. Create a Virtual Private Cloud (VPC). Use the following values:
    VPC IPv4 CIDR block: 10.0.0.0/16
    VPC Name: cloudhsm-vpc
    Public subnet IPv4 CIDR block: 10.0.0.0/24
    Public subnet Availability Zone: eu-west-1b
    Public subnet name: cloudhsm-public
  3. Create a private subnet. Use the following values:
    IPv4 CIDR block: 10.0.1.0/24
    Availability Zone: eu-west-1b
    Name: cloudhsm-private
  4. Create a cluster. Use the following values:
    HSM Availability Zone: eu-west-1b
    Subnet: Use the private subnet you just created in eu-west-1b.
    Number of HSMs: 1
  5. Launch an Amazon EC2 client instance. Use the following values:
    AMI: Microsoft Windows Server 2012 R2 Base
    Instance type: t2.medium
    VPC: cloudhsm-vpc
    Subnet: cloudhsm-public
    Auto-assign Public IP: Enable
    Name tag: cloudhsm
    Security group: The CloudHSM cluster security group

    Once the instance is running, create an additional administrative Windows user because the built-in Administrator account has restrictions.

  6. Create an additional security group to allow you to connect to the instance and attach the security group to the instance in addition to the security group of the CloudHSM cluster.
  7. Allocate and associate an Elastic IP address to the instance so that the instance’s IP address persists if you need to stop and start the instance.
  8. Create a DNS “A record” to map the host name to the Elastic IP. In this example, I’m mapping www.example.com to the Elastic IP that I allocated.
  9. Create an HSM. After the HSM is created, note the HSM Id in the form of hsm- followed by an alphanumeric string; for example, hsm-1234abcd.
  10. Initialize the cluster.
  11. Install and configure the AWS CloudHSM client (Windows).
    You might need to use a command window with administrative privileges to do this.
  12. Activate the cluster.
  13. Create a crypto user. You will need to specify the ID and the password of the crypto user in the environment variables that you will create in the next section.

Define environment variables

We now need to define some Windows system environment variables (not user variables) used by the CNG/KSP integration. You can modify the environment variables by selecting System from the Control Panel, selecting Advanced System Settings, and selecting Environment Variables.

  1. Go into the Windows Control Panel and enter the following definitions:
    liquidsecurity_daemon_id = 1
    liquidsecurity_daemon_socket_type = TCPSOCKET
    liquidsecurity_daemon_tcp_port = 1111
    n3fips_username = USER
    n3fips_password = USER:PASSWORD
    n3fips_partition = hsm-ABCDEFGHIJK
    • Substitute the ID and password of the crypto user that you created in step 13 for the USER and PASSWORD values.
    • Substitute the HSM ID value you noted in step 9 for hsm-ABCDEFGHIJK.

    Your Windows Control Panel configuration should look like the image below, aside from the substitutions described above.

    Figure 3: Environment variable settings

  2. Reboot the EC2 instance to ensure that the environment variables are in place for all processes.

Launch the CloudHSM client

You must start the CloudHSM client in order to enable the CNG/KSP integration.

  1. Open a command prompt window on the Windows instance.
  2. Enter the following commands:

    cd \ProgramData\Amazon\CloudHSM\data
    "\Program Files\Amazon\CloudHSM\cloudhsm_client.exe" cloudhsm_client.cfg

    The cloudhsm_client.exe command will continue to run. Important note: Do not close the window! The output at the bottom of the window looks similar to the following:

    Figure 4: CloudHSM client output

Launch key_mgmt_util

We are now going to launch the key_mgmt_util client and confirm that no keys are present in the CloudHSM cluster.

  1. Open a new command window. Do not use the window that you opened in the previous section.
  2. Enter the following commands.

    cd "\Program Files\Amazon\CloudHSM"
    key_mgmt_util.exe
    loginHSM -u CU -s USER -p PASSWORD

    (replace USER and PASSWORD with the crypto user and password that you created above)

    findKey

You should see a message saying that there are no keys present, as shown in the figure below.

Figure 5: findKey output showing no keys present

Important note: Do not exit from key_mgmt_util! You’ll return to this window later.

Install IIS

Here’s how:

  1. Follow the steps of this tutorial to install IIS. When installing IIS, make the following changes:
    • Use the Windows Server 2012 instructions as a guideline.
    • Do not install MySQL and PHP because you won’t need these services for this walkthrough.
  2. From your workstation’s web browser, browse to the Elastic IP or the DNS “A Record” of the web server using the HTTP protocol. You should see the default Microsoft IIS page as shown below. If you don’t see the page, check the security groups for the server and verify that port 80 (HTTP) and port 443 (HTTPS) are open.
     
    Figure 6: http://www.example.com

  3. If you browse to the same IP using the HTTPS protocol, you should not get a response because you haven’t configured HTTPS on the EC2 instance.

Provision a server certificate using the CNG/KSP integration

I’m now going to show how to provision a server certificate. I will do this with the certreq program that’s included with Windows Server 2012 R2. Certreq generates a CSR that I’ll submit to a CA. Certreq supports the KSP/CNG standard and can place certificates into the Windows certificate store.

  1. Create a file named request.inf that contains the lines below. Note that the Subject line may wrap onto the following line. It begins with Subject and ends with Washington and a closing quotation mark (“).

    
    [Version]
    Signature = "$Windows NT$"
    [NewRequest]
    Subject = "C=US,CN=www.example.com,O=Information Technology,OU=Certificate Management,L=Seattle,S=Washington"
    HashAlgorithm = SHA256
    KeyAlgorithm = RSA
    KeyLength = 2048
    ProviderName = "Cavium Key Storage Provider"
    KeyUsage = 0xf0
    MachineKeySet = True
    SuppressDefaults = True
    SMIME = false
    RequestType = PKCS10
    [EnhancedKeyUsageExtension]
    OID=1.3.6.1.5.5.7.3.1
    [Extensions]
    2.5.29.17 = "{text}"
    _continue_ = "dns=www.example.com&"
    _continue_ = "dns=example.com&"
    

    Note the presence of the extensions section that allows you to specify SANs (Subject Alternative Names). SANs are now required by some browsers. Remember to substitute your own domain for example.com in the above example.

  2. Create a certificate request with certreq.exe.
    certreq.exe -new request.inf request.csr.pem

    Certreq returns a message saying that the certificate request has been created.

  3. Review the certreq.exe output to confirm that the CSR has been created:

    Figure 7: Output from certreq.exe showing that the CSR has been created

  4. In the same command window where you ran key_mgmt_util, run findKey again to see if there are any new keys. Use the findKey command by itself to display all keys. This shows that two keys have been added with handles 7 and 8. Use the -c 2 parameter to display only public keys (handle 262214) and -c 3 to display only private keys (handle 262154). The two keys are part of the public/private key pair that was just created with certreq.
     
    Figure 8: findKey output showing two new keys

  5. Submit the CSR you just created to the CA using the procedures provided by the CA. The CA will return a certificate that is named request.crt.pem (or similar).

    If the certificate authority doesn’t accept the CSR provided by certreq.exe, try the following:

    1. If the certificate authority asks you to provide the domains for the certificate, make sure you enter all of the SANs included in the request.inf file.
    2. Check the first and last lines of the CSR and see if the word “NEW” appears. If so, remove the word in both lines.
  6. Use the certreq command to accept the new certificate and insert it into the certificate store where IIS can access it.

    certreq.exe -accept request.crt.pem

Configure and test the secure website

  1. Start the IIS Manager. You can do this through the Server Manager or by running the following command:

    Inetmgr

  2. Under Connections, expand the local server, and then expand the Sites folder below. Select the site named Default Web Site.
     
    Figure 9: IIS Manager settings

  3. On the right, under Actions, select Edit Site -> Bindings, and then add a binding with the following fields, making sure to substitute your domain name in place of example.com:

    Type: https
    Host name: www.example.com
    SSL certificate: www.example.com

  4. Select OK to save the new binding, and then select Close.
     
    Figure 10: Adding site bindings

  5. Select Manage Website -> Restart. This will restart the website and ensure the new binding is active.
  6. Browse to https://www.example.com, remembering to substitute your own domain. You’ll see the default website with a secure connection as shown below. If you get a warning message from your workstation’s browser, make sure you added the root-level certificate to the trusted certificate store. (Note: This should not be an issue if you’re using a commercial CA supported by major web browsers.)
     
    Figure 11: https://www.example.com

  7. Examine the certificate information being reported to the browser. For my example, I’m using the Chrome browser so I’ll select the padlock icon, then Certificates, then Detail.
     
    Figure 12: Server certificate details

    Figure 12 shows that the Subject field contains the same information that I provided before. You can check out the other fields for information about the Subject Alternative Names and other fields and verify that the correct certificate is being used.

Conclusion

For projects that have special requirements regarding the storage of keys, AWS CloudHSM supports the Microsoft KSP/CNG standard that enables you to store the private key of an SSL/TLS certificate within an HSM. Since the private key no longer has to reside on the server, it’s no longer at risk if the server itself were to be compromised.

If you have feedback about this blog post, submit comments in the “Comments” section below. If you have questions about this blog post, start a new thread on the Amazon CloudHSM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Amazon ElastiCache for Redis now PCI DSS compliant, allowing you to process sensitive payment card data in-memory for faster performance

Post Syndicated from Manan Goel original https://aws.amazon.com/blogs/security/amazon-elasticache-redis-now-pci-dss-compliant-payment-card-data-in-memory/

Amazon ElastiCache for Redis has achieved Payment Card Industry Data Security Standard (PCI DSS) compliance. This means that you can now use ElastiCache for Redis for low-latency and high-throughput in-memory processing of sensitive payment card data, such as Customer Cardholder Data (CHD). ElastiCache for Redis is a Redis-compatible, fully-managed, in-memory data store and caching service in the cloud. It delivers sub-millisecond response times with millions of requests per second.

To create a PCI-compliant ElastiCache for Redis cluster, you must use Redis engine version 4.0.10 or higher and current-generation node types. The service offers various data security controls to store, process, and transmit sensitive financial data. These controls include in-transit encryption (TLS), at-rest encryption, and Redis AUTH. There’s no additional charge for PCI DSS compliant ElastiCache for Redis.
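
As a hedged illustration (not taken from the announcement), creating a cluster that meets these requirements with boto3 might look like the following; the replication group ID, node type, and AUTH token are example values only.

# Example: Redis 4.0.10 replication group with TLS, at-rest encryption, and Redis AUTH.
# All identifiers below are illustrative placeholders.
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="payments-cache",
    ReplicationGroupDescription="Redis cluster for cardholder data caching",
    Engine="redis",
    EngineVersion="4.0.10",
    CacheNodeType="cache.r4.large",          # a current-generation node type
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,           # TLS for data in transit
    AtRestEncryptionEnabled=True,            # encryption at rest
    AuthToken="replace-with-a-strong-token", # Redis AUTH
    Port=6379,
)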

In addition to PCI, ElastiCache for Redis is a HIPAA eligible service. If you want to use your existing Redis clusters that process healthcare information to also process financial information while meeting PCI requirements, you must upgrade your Redis clusters from 3.2.6 to 4.0.10. For more details, see Upgrading Engine Versions and ElastiCache for Redis Compliance.

Meeting these high bars for security and compliance means ElastiCache for Redis can be used for secure database and application caching, session management, queues, chat/messaging, and streaming analytics in industries as diverse as financial services, gaming, retail, e-commerce, and healthcare. For example, you can use ElastiCache for Redis to build an internet-scale, ride-hailing application and add digital wallets that store customer payment card numbers, thus enabling people to perform financial transactions securely and at industry standards.

To get started, see ElastiCache for Redis Compliance Documentation.

Want more AWS Security news? Follow us on Twitter.

Maintaining Transport Layer Security all the way to your container part 2: Using AWS Certificate Manager Private Certificate Authority

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/maintaining-transport-layer-security-all-the-way-to-your-container-part-2-using-aws-certificate-manager-private-certificate-authority/

This post contributed by AWS Senior Cloud Infrastructure Architect Anabell St Vincent and AWS Solutions Architect Alex Kimber.

The previous post, Maintaining Transport Layer Security All the Way to Your Container, covered how the layer 4 Network Load Balancer can be used to maintain Transport Layer Security (TLS) all the way from the client to running containers.

In this post, we discuss the various options available for ensuring that certificates can be securely and reliably made available to containers. By simplifying the process of distributing or generating certificates and other secrets, it’s easier for you to build inherently secure architectures without compromising scalability.

There are several ways to achieve this:

1. Storing the certificate and private key in the Docker image

Certificates and keys can be included in the Docker image and made available to the container at runtime. This approach makes the deployment of containers with certificates and keys simple and easy.

However, there are some drawbacks. First, the certificates and keys need to be created, stored securely, and then included in the Docker image. There are some manual or additional automation steps required to securely create, retrieve, and include them for every new revision of the Docker image.

The following example Docker file creates an NGINX container that has the certificate and the key included:

FROM nginx:alpine

# Copy in secret materials
RUN mkdir -p /root/certs/nginxdemotls.com
COPY nginxdemotls.com.key /root/certs/nginxdemotls.com/nginxdemotls.com.key
COPY nginxdemotls.com.crt /root/certs/nginxdemotls.com/nginxdemotls.com.crt
RUN chmod 400 /root/certs/nginxdemotls.com/nginxdemotls.com.key

# Copy in nginx configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginxdemo.conf /etc/nginx/conf.d
COPY nginxdemotls.conf /etc/nginx/conf.d

# Create folders to hold web content and copy in HTML files.
RUN mkdir -p /var/www/nginxdemo.com
RUN mkdir -p /var/www/nginxdemotls.com

COPY index.html /var/www/nginxdemo.com/index.html
COPY indextls.html /var/www/nginxdemotls.com/index.html

From a security perspective, this approach has additional drawbacks. Because certificates and private keys are bundled with the Docker images, anyone with access to a Docker image can also retrieve the certificate and private key.
The other drawback is that updated certificates are not replaced automatically; the Docker image must be re-created to include any updated certificates. Running containers must either be restarted with the new image, or have the certificates updated.

2. Storing the certificates in AWS Systems Manager Parameter Store and Amazon S3

The post Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks explains how you can use Systems Manager Parameter Store to store secrets. Some customers use Parameter Store to keep their secrets for simpler retrieval, as well as fine-grained access control. Parameter Store allows for securing data using AWS Key Management Service (AWS KMS) for the encryption. Each encryption key created in KMS can be accessed and controlled using AWS Identity and Access Management (IAM) roles in addition to key policy functionality within KMS. This approach allows for resource-level permissions to each item that is stored in Parameter Store, based on the KMS key used for the encryption.

Some certificates can be stored in Parameter Store using the ‘Secure String’ type and using KMS for encryption. With this approach, you can make an API call to retrieve the certificate when the container is deployed. As mentioned earlier, the access to the certificate can be based on the role used to retrieve the certificate. The advantage of this approach is that the certificate can be replaced. The next time the container is deployed, it picks up the new certificate and there is no need to update the Docker image.
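
A minimal sketch of this pattern with boto3 follows, assuming hypothetical parameter names and a hypothetical KMS key alias.

# Store a certificate as a SecureString and retrieve it when the container starts.
# The parameter name and KMS key alias are assumptions, not values from the post.
import boto3

ssm = boto3.client("ssm")

# Store the certificate (done once, outside the container).
ssm.put_parameter(
    Name="/myapp/tls/certificate",
    Value=open("server.crt").read(),
    Type="SecureString",
    KeyId="alias/myapp-secrets",  # KMS key used for encryption
    Overwrite=True,
)

# Retrieve and decrypt it at container startup.
cert = ssm.get_parameter(
    Name="/myapp/tls/certificate",
    WithDecryption=True,
)["Parameter"]["Value"]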

Currently, there is a limitation of 4,096 characters that can be stored in Parameter Store. This may not be sufficient for some types of certificates. For example, some x509 certs include the chain and so can exceed the 4,096 character limit. To avoid any character size limitation, Amazon S3 can be used together with Parameter Store. The certificate can be stored on Amazon S3, encrypted with KMS, while the private key or the password can be stored in Parameter Store.

With this approach, there is no limitation on certificate length and the private key remains secured with KMS. However, it does involve some additional complexity in setting up the process of creating the certificates, storing them in S3, and then storing the password or private keys in Parameter Store. That is in addition to securing, trusting, and auditing the system handling the private keys and certificates.

3. Storing the certificates in AWS Secrets Manager

AWS Secrets Manager offers a number of features to allow you to store and manage credentials, keys, and other secret materials. This eliminates the need to store these materials with the application code and instead allows them to be referenced on demand. By centralizing the management of secret materials, this single service can manage fine-grained access control through granular IAM policies as well as the revocation and rotation, all through API calls.

All materials stored in the AWS Secrets Manager are encrypted with the customer’s choice of KMS key. The post AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely shows how AWS Secrets Manager can be used to store RDS database credentials. However, the same process can apply to TLS certificates and keys.

Secrets currently have a limit of 4,096 characters. This approach may be unsuitable for some x509 certificates that include the chain and can exceed this limit. This limit applies to the sum of all key-value pairs within a single secret, so certificates and keys may need to be stored in separate secrets.

After the secure material is in place, it can be retrieved by the container instance at runtime via the AWS Command Line Interface (AWS CLI) or directly from within the application code. All that’s required is for the container task role to have the requisite permissions in IAM to read the secrets.
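
A minimal sketch of retrieving the materials from within the container with boto3, assuming hypothetical secret names and an in-memory mount point:

# Retrieve TLS material from AWS Secrets Manager at container startup. The secret
# names are hypothetical; the task role must allow secretsmanager:GetSecretValue.
import boto3

secrets = boto3.client("secretsmanager")

certificate = secrets.get_secret_value(SecretId="myapp/tls/certificate")["SecretString"]
private_key = secrets.get_secret_value(SecretId="myapp/tls/private-key")["SecretString"]

# Write the materials where the web server expects them (for example, an
# in-memory tmpfs mount rather than the container's writable layer).
with open("/certs/server.crt", "w") as f:
    f.write(certificate)
with open("/certs/server.key", "w") as f:
    f.write(private_key)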

With the exception of rotating RDS credentials, AWS Secrets Manager requires the user to provide Lambda function code, which is called on a configurable schedule to manage the rotation. This rotation would need to consider the generation of new keys and certificates and redeploying the containers.

4. Using self-signed certificates, generated as the Docker container is created

The advantage of this approach is that it allows the use of TLS communications without any of the complexity of distributing certificates or private keys. However, this approach does require implicit trust of the server. Some applications may generate warnings that there is no acceptable root of trust.

5. Building and managing a private certificate authority

A private certificate authority (CA) can offer greater security and flexibility than the solutions outlined earlier. Typically, a private CA solution would manage the following for each ‘Common name’:

  • A private key
  • A certificate, created with the private key
  • Lists of certificates issued and those that have been revoked
  • Policies for managing certificates, for example which services have the right to make a request for a new certificate
  • Audit logs to track the lifecycle of certificates, in particular to ensure timely renewal where necessary

It is possible for an organization to build and maintain their own certificate issuing platform. This approach requires the implementation of a platform that is highly available and secure. These types of systems add to the overall overhead of maintaining infrastructures from a security, availability, scalability, and maintenance perspective. Some customers have also implemented Lambda functions to achieve the same functionality when it comes to issuing private certificates.

While it’s possible to build a private CA for internal services, there are some challenges to be aware of. Any solution should provide a number of features that are key to ensuring appropriate management of the certificates throughout their lifecycle.

For instance, the solution must support the creation, tracking, distribution, renewal, and revocation of certificates. All of these operations must be provided with the requisite security and authentication controls to ensure that certificates are distributed appropriately.

Scalability is another consideration. As applications become increasingly stateless and elastic, it’s conceivable that certificates may be required for every new container instance or wildcard certificates created to support an environment. Whatever CA solution is implemented must be ready to accommodate such a load while also providing high availability.

These types of approaches have drawbacks from various perspectives:

  • Custom code can be hard to maintain
  • Additional security measures must be implemented
  • Certificate renewal and revocation mechanisms also must be implemented
  • The platform must be maintained and kept up-to-date from a patching perspective while maintaining high availability

6. Using the new ACM Private CA to issue private certificates

ACM Private CA offers a secure, managed infrastructure to support the issuance and revocation of private digital certificates. It supports RSA and ECDSA key types for CA keys used for the creation of new certificates, as well as certificate revocation lists (CRLs) to inform clients when a certificate should no longer be trusted. Currently, ACM Private CA does not offer root CA support.

The following screenshot shows a subordinate certificate that is available for use:

The private key for any private CA that you create with ACM Private CA is created and stored in a FIPS 140-2 Level 3 Hardware Security Module (HSM) managed by AWS. The ACM Private CA is also integrated with AWS CloudTrail, which allows you to record the audit trail of API calls made using the AWS Management Console, AWS CLI, and AWS SDKs.

Setting up ACM Private CA requires a root CA. This can be used to sign a certificate signing request (CSR) for the new subordinate CA, which is then imported into ACM Private CA. After this is complete, it’s possible for containers within your platform to generate their own key pairs at runtime using OpenSSL. They can then use the key pairs to make their own CSR and ultimately receive their own certificate.

More specifically, the container would complete the following steps at runtime:

  1. Add OpenSSL to the Docker image (if it is not already included).
  2. Generate a key pair (a cryptographically related private and public key).
  3. Use that private key to make a CSR.
  4. Call the ACM Private CA API or CLI issue-certificate operation, which issues a certificate based on the CSR.
  5. Call the ACM Private CA API or CLI get-certificate operation, which returns an issued certificate.

The following diagram shows these steps:
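
To make steps 2 through 5 concrete, here is a hedged boto3 sketch; the CA ARN, file names, and validity period are placeholders, and the OpenSSL key and CSR generation is assumed to have already happened inside the container.

# Sketch of requesting and retrieving a certificate from ACM Private CA.
# Placeholders: CA ARN, CSR file path, validity period.
import time
import boto3

pca = boto3.client("acm-pca")
ca_arn = "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE"  # placeholder

# Steps 2-3: the key pair and CSR are assumed to have been generated with OpenSSL,
# for example: openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr
csr = open("server.csr", "rb").read()

# Step 4: request a certificate for the CSR.
cert_arn = pca.issue_certificate(
    CertificateAuthorityArn=ca_arn,
    Csr=csr,
    SigningAlgorithm="SHA256WITHRSA",
    Validity={"Value": 30, "Type": "DAYS"},  # placeholder validity
)["CertificateArn"]

# Step 5: retrieve the issued certificate (it can take a few seconds to be ready).
time.sleep(5)
issued = pca.get_certificate(
    CertificateAuthorityArn=ca_arn,
    CertificateArn=cert_arn,
)
certificate = issued["Certificate"]
chain = issued.get("CertificateChain", "")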

The authorization to successfully request a certificate is controlled via IAM policies, which can be attached via a role to the Amazon ECS task. Containers require the ‘Allow’ effect for at least the acm-pca:GetCertificate and acm-pca:IssueCertificate actions. The following is a sample IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "acm-pca:*",
            "Resource": "arn:aws:acm-pca:us-east-1:1234567890:certificate-authority/2c4ccba1-215e-418a-a654-aaaaaaaa"
        }
    ]
}

For additional security, it is possible to store the certificate and keys in a temporary volume mounted in memory through the ‘tmpfs’ parameter. With this option enabled, the secure material is never written to the filesystem of the host machine.

Note: This feature is not currently available for containers run on AWS Fargate.
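
As a hedged sketch (with illustrative names, image, and sizes), enabling a tmpfs mount when registering the task definition with boto3 might look like this:

# Register a task definition with an in-memory tmpfs mount for key material.
# Family, image, and sizes are placeholders; as noted above, tmpfs is not
# available for tasks run on AWS Fargate.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="nginx-tls",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "nginx",
        "image": "nginx:alpine",
        "memory": 256,
        "linuxParameters": {
            "tmpfs": [{
                "containerPath": "/root/certs",  # in-memory mount for certificates and keys
                "size": 16,                      # size in MiB
                "mountOptions": ["rw", "noexec"],
            }]
        },
    }],
)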

The task now has the necessary materials and starts up. Clients should be able to establish the trust hierarchy from the server, through ACM Private CA to the root or intermediate CA.

One consideration to be aware of is that ACM Private CA currently has a limit of 50,000 certificates for each CA in each Region. If the requirement is for each short-lived container instance to have a separate certificate, then this limit could be reached.

Summary

The approaches outlined in this post describe the available options for ensuring that generation, storage, or distribution of sensitive material is done efficiently and securely. It should also be done in a way that supports the ephemeral, automatic scaling capabilities of container-based architectures. ACM Private CA provides a single interface to manage public and now private certificates, as well as seamlessly integrating with the AWS services.

If you have questions or suggestions, please comment below.

How to migrate your on-premises domain to AWS Managed Microsoft AD using ADMT

Post Syndicated from Danny Jenkins original https://aws.amazon.com/blogs/security/how-to-migrate-your-on-premises-domain-to-aws-managed-microsoft-ad-using-admt/

Customers often ask me how to migrate their on-premises Active Directory (AD) domain to AWS so they can be free of the operational management of their AD infrastructure. Frequently they are unsure how to make the migration easy. A common approach using the CSVDE utility doesn’t migrate attributes such as user passwords. This makes migration difficult and necessitates manual effort for a large part of the migration, which can cause operational and security challenges when moving to a new directory. So what’s changed?

You can now use the Active Directory Migration Toolkit (ADMT) along with the Password Export Service (PES) to migrate your self-managed AD to AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. This enables you to migrate AD objects and encrypted passwords for your users more easily.

AWS Managed Microsoft AD is a managed service built on actual Microsoft Active Directory. AWS provides operational management of the domain controllers, and you use standard AD tools to administer users, groups, and computers. AWS Managed Microsoft AD enables you to take advantage of built-in Active Directory features, such as Group Policy, trusts, and single sign-on, and helps make it easy to migrate AD-dependent workloads into the AWS Cloud. With AWS Managed Microsoft AD, you can join Amazon EC2 and Amazon RDS for SQL Server instances to a domain, and use AWS enterprise IT applications, such as Amazon WorkSpaces and AWS SSO, with Active Directory users and groups.

In this blog, I will show you how to migrate your existing AD objects to AWS Managed Microsoft AD. The source of the objects can be your self-managed AD running on EC2, on-premises, co-located, or even another cloud provider. I will show how to use ADMT and PES to migrate objects including users (and their passwords), groups, and computers.

The blog post assumes you are familiar with AD and with using the Remote Desktop Protocol client to sign in to and use EC2 Windows instances.

Background

In this post I will migrate user and computer objects, as well as passwords, to a new AWS Managed Microsoft AD directory. The source will be an on-premises domain.

This example migration will be for a fairly simple use case. Large customers with complex source domains or forests may have more complex processes involved to map users, groups, and computers to the single OU structure of AWS Managed Microsoft AD. For example, you may want to migrate an OU at a time. Customers with single domain forests may be able to migrate in fewer steps. Similarly, the options you might select in ADMT will vary based on what you are trying to accomplish.

To perform the migration, I will use the admin user account from my AWS Managed Microsoft AD. AWS creates the admin user account and delegates administrative permissions to the account for an organizational unit (OU) in the AWS Managed Microsoft AD domain. This account has most of the permissions required to manage your domain, and all the permissions required to complete this migration.

In this example, I have a Source domain called source.local that’s running in a 10.0.0.0/16 network range, and I want to migrate my users, groups, and computers to a destination domain in AWS Managed Microsoft AD called destination.local that’s running in a network range of 192.168.0.0/16.

To migrate users from source.local to destination.local, I need a migration computer that I join to the destination.local domain on which I will run ADMT. I also use this machine to perform administrative tasks on my AWS Managed Microsoft AD. As a prerequisite for ADMT, I must install Microsoft SQL Express 2016 on the migration computer. I also need an administrative account that has permissions in both the source and destination AD domains. To do this, I will use an AD trust and will add my AWS Managed Microsoft AD admin account from destination.local to my source.local domain. Next I will install ADMT on the migration computer, and will run the PES on one of the source.local domain controllers. Finally, I will migrate the users and computers.

For this example, I have a handful of users, groups, and computers, shown in the source domain in these screen shots, that I will migrate:
 

Figure 1: Example source users

 

Figure 2: Example client computers

In the remainder of this blog, I will show you how to do the migration in 5 main steps:

  1. Prepare the forests, migration computer, and administrative account.
  2. Install SQL Express and ADMT on the migration computer.
  3. Configure ADMT and PES.
  4. Migrate users and groups.
  5. Migrate computers.

Step 1: Prepare the forests, migration computer, and administrative account

To migrate users and passwords from the source domain to AWS Managed Microsoft AD, you must have a two-way forest trust. The trust from the source domain to AWS Managed Microsoft AD enables you to add the admin account from the AWS Managed Microsoft AD domain to the source domain. This is necessary so you can grant the AWS Managed Microsoft AD admin account permissions in your source AD directory so it can read the attributes to migrate. I’ve already created a two-way forest trust between these domains. You should do the same by following this guide. Once your trust has been created, it should show up in the AWS console as Verified.
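
If you prefer to verify the trust from the CLI rather than the console, the AWS Directory Service describe-trusts command shows the trust state (the directory ID below is a placeholder for your AWS Managed Microsoft AD directory ID):

$aws ds describe-trusts --directory-id d-1234567890 --query "Trusts[].{Domain:RemoteDomainName,Direction:TrustDirection,State:TrustState}"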

The ADMT tool should be installed on a computer that isn’t a domain controller in the destination domain, destination.local. For this, I will launch an EC2 instance in the same VPC as my domain controller and I will add it to the destination.local domain using the EC2 seamless domain join feature. This will act as my ADMT transfer machine.

  1. Launch a Microsoft Windows 2012 R2 instance.
  2. Complete a domain join to the target domain destination.local. You can complete this manually, or alternatively you can use AWS Systems Manager to complete a seamless domain join as covered here.
  3. Sign in to the instance using RDP and use Active Directory Users and Computers (ADUC) to add the AWS Managed Microsoft AD admin user from the destination.local domain to the source.local domain’s built-in Administrators group (you will not be able to add the admin user as a domain admin). For information on how to set up this instance to use ADUC, please see this documentation.
     
    Figure 3: The “Administrator’s Properties” dialog box

Step 2: Install SQL Express and ADMT on the migration computer

Next, I need to install SQL Express and ADMT on the migration computer by following these steps.

  1. Install Microsoft SQL Express 2016 on this computer with a default install.
  2. Download ADMT version 3.2 from the Microsoft website.
  3. Once it has downloaded, run the installer and, on the Database Selection page of the wizard, for Database (Server\Instance), type the name of the local Microsoft SQL Express instance you installed earlier for ADMT to use.
     
    Figure 4: Specify the “Database (Server\Instance)”

  4. On the Database Import page of the wizard, select No, do not import data from an existing database (Default).
     
    Figure 5: The “Database Import” dialog box

  5. Complete the rest of the installation using all of the default options.

Step 3: Configure ADMT and PES

I’ll use PES to take care of encrypted password synchronization, but before I configure that, I need to create an encryption key that will be used during this process to encrypt the password migration.

  1. On the ADMT transfer machine, open an elevated Command Prompt and use the following format to create the encryption key.
     
    admt key /option:create /sourcedomain:<SourceDomain> /keyfile:<KeyFilePath> /keypassword:{<password>|*}
     
    Here’s an example:
     
    admt key /option:create /sourcedomain:source.local /keyfile:c:\ /keypassword:password123

    Note: If you get an error stating that the command is not found, close and reopen Command Prompt to refresh the path locations to the ADMT executable, and then try again.

  2. Now, I can download and start the install for the Password Export Server.
  3. Start the install and, in the ADMT Password Migration DLL Setup window, browse to the encryption file you created in the previous step.
  4. When prompted, enter the password used in the ADMT encryption command.
  5. Run PES using the local system account. Note that this will prompt a restart of the domain controller you’re installing PES on.
  6. Once the domain controller has rebooted, open services.msc and start the Password Export Server Service, which is currently set to Manual. You might choose to set this to automatic if it’s likely your DC will be rebooted again before the end of your migration.
     
    Figure 6: Start the Password Export Server Service

  7. You can now open the Active Directory Migration Tool: Control Panel > System and Security > Administrative Tools > Active Directory Migration Tool.
  8. Right-click Active Directory Migration Tool to see the migration options:
     
    Figure 7: List of migration options

Step 4: Migrate users and groups

  1. In the Domain Selection page, select or type the Source and Target domains, and then select Next.
  2. On the User Selection page, select the users to migrate. You can use an include file if you have a large domain. Select Next.
  3. On the Organizational Unit Selection page, select the destination OU that you want to migrate your users across to, and then select Next. AWS Managed Microsoft AD gives you a managed OU where you can create your OU tree structure.

    In this example, I will place them in Users OU:
     
    LDAP://destination.local/OU=Users,OU=destination,DC=destination,DC=local
     

  4. On the Password Options page, select Migrate passwords, and then select Next. This will contact PES running on the source domain controller.
  5. On the Account Transition Options page, decide how to handle the migration of user objects. In this example, I’m going to replicate the state from the source domain. Migrating SID history is beneficial when you’re doing long, staged migrations where users may need to access resources in the source and destination domain before migration is complete. At this time, AWS Managed Microsoft AD doesn’t support migrating user SIDs. I select Target same as source, and then select Next. Again, what you choose to do might be different.
     
    Figure 8: The “Account Transition Options” dialog

  6. Now, let’s customize the transfer. The following screen shot shows the commonly selected options on the User Options page of the User Account Migration Wizard:
     
    Figure 9: Common user options, including “Update user rights,” “Migrate associated user groups,” and “Fix users’ group memberships”

It’s likely you’ll have more than one migration pass, so choosing how you handle existing objects is important. This will be a single run for us, but the default behavior is to not migrate if the object already exists (see the image of the Conflict Management page below). If you’re running multiple passes, you’ll want to look at options that involve merging conflicting objects. The method you select will depend on your use case. If you don’t know where to start, read this article.
 

Figure 10: The “Conflict Management” dialog box with the “Do not migrate source object...” option selected

In my example, you can see my 3 users, and any groups they were members of, have been migrated.
 

Figure 11: The “Migration Progress” window

We can verify this by checking the users exist in our destination.local domain:
 

Figure 12: Checking the users exist in the destination.local domain

Step 5: Migrate computers

Now, I’ll move on to computer objects.

  1. Open the Active Directory Migration Tool: Control Panel > System and Security > Administrative Tools > Active Directory Migration Tool.
  2. Right-click Active Directory Migration Tool and select Computer Migration Wizard.
  3. Select the computers you want to migrate to the new domain. I’ll select four computers for migration.
     
    Figure 13: Four computers that will be migrated

  4. On the Translate Objects page, select which access controls you want to reapply during the migration, and then select Next.
     
    Figure 14: The “Translate Objects” dialog box

    The migration process will quickly show as completed, but we need to make sure the entire process worked.

  5. To verify the migration was successful, select Close, and the migration tool will open a new window that has a link to the migration log. Check the log file to see that it has started the process of migrating these four computers:


    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-56SQFFFJCR1.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-IG2V2NAN1MU.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-QKQEJHUEV27.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-SE98KE4Q9CR.source.local

    If the admin user doesn’t have access to the C$ or admin$ share on the computer in the source domain, then installation of the agent will fail, as shown here:


    2017-08-11 04:09:29 ERR2:7006 Failed to install agent on \\WIN-IG2V2NAN1MU.source.local, rc=5 Access is denied.

    Once the agent is installed, it will perform a domain disjoin from source.local and perform a join to destination.local. The log file will update when this has been successful:


    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-SE98KE4Q9CR.source.local’. The new computer name is ‘WIN-SE98KE4Q9CR.destination.local’.

    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-QKQEJHUEV27.source.local’. The new computer name is ‘WIN-QKQEJHUEV27.destination.local’.

    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-56SQFFFJCR1.source.local’. The new computer name is ‘WIN-56SQFFFJCR1.destination.local’.

    You can then view the new computer objects in the destination domain.

  6. Log in to one of the old source.local computers and, by looking at the computer’s System Properties, confirm that the computer is now a member of the new destination.local domain.
     
    Figure 15: Confirm the computer is a member of the destination.local domain

Summary

In this simple example, I showed how to migrate users and their passwords, groups, and computer objects from an on-premises deployment of Active Directory to AWS Managed Microsoft AD. I created a management instance on which I ran SQL Express and ADMT, created a forest trust to grant permissions for an account to use ADMT to move users, configured ADMT and the PES tool, and then stepped through the migration using ADMT.

ADMT gives us a powerful way to migrate to the managed Microsoft AD service, with extensive control over how the migration runs, and it does so more securely through encrypted password synchronization. You may need to do additional investigation and planning if the complexity of your environment requires a different approach for some of these steps.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

U.K. National Health Services IGToolkit Assessment report now available

Post Syndicated from Stephen McDermid original https://aws.amazon.com/blogs/security/u-k-national-health-services-igtoolkit-assessment-report-now-available/

We know that customers often seek out third-party tools to allow for the baselining and benchmarking of their environment. Additionally, healthcare and life sciences (HCLS) customers have specific needs, which is why we continually strive to meet relevant global standards validating our security and compliance. Today, we’d like to take a look at our latest compliance report, from the National Health Service (NHS) Information Governance (IG) Toolkit for UK Health, which has a unique element: it allows you to assess us as well as compare AWS to other service providers.

The AWS IGToolkit Assessment Report gives you the ability to evaluate how we performed against IGToolkit requirements. Our overall score of 98 percent and grade of “Satisfactory,” which is the highest grade awarded, mean you can use our services to host and process NHS patient data with assurance that we are following exacting requirements for data security.

The NHS IGToolkit site is straightforward to use and reports can be downloaded into a spreadsheet. AWS is categorised as a “Commercial Third Party.” For more information about the NHS IGToolkit, you can download this pdf.

As always, security is a shared responsibility, so while this report allows you to use AWS, you are required to complete your own NHS IGToolkit assessment. Finally, we are also working on addressing the Data Security and Protection Toolkit, which will replace the IGToolkit, and will release further news over the coming months as we move towards submitting our assessment.

 

Accept a BAA with AWS for all accounts in your organization

Post Syndicated from Kristen Haught original https://aws.amazon.com/blogs/security/accept-a-baa-with-aws-for-all-accounts-in-your-organization/

I’m excited to announce to our healthcare customers and partners that you can now accept a single AWS Business Associate Addendum (BAA) for all accounts within your organization. Once accepted, all current and future accounts created or added to your organization will immediately be covered by the BAA.

Our team is always thinking about how we can reduce manual processes related to your compliance tasks. That’s why I’ve been looking forward to the release of AWS Artifact Organization Agreements, which was designed to simplify the BAA process and improve your experience when designating AWS accounts as HIPAA accounts. Previously, if you wanted to designate several AWS accounts, you had to sign in to each account individually to accept the BAA or email us. Now, an authorized master account user can accept the BAA once to automatically designate all existing and future member accounts in the organization as HIPAA accounts for use with protected health information (PHI). This release addresses a frequent customer request to be able to quickly designate multiple HIPAA accounts and confirm those accounts are covered under the terms of the BAA.

If you have a BAA in place already and want to leverage this new capability, a master account user can accept the new AWS Organizations BAA in AWS Artifact today. To get started, your organization must use AWS Organizations to manage your accounts, and “all features” needs to be enabled. Learn more about creating an organization here.
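
One quick way to confirm that all features are enabled for your organization (run this with credentials from the master account) is the AWS Organizations CLI; the output should be “ALL”:

$aws organizations describe-organization --query Organization.FeatureSet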

Once you are using AWS Organizations with all features enabled, and you have the necessary user permissions, then accepting the AWS Organizations BAA takes about two minutes. We’ve created a video that shows you the process, step-by-step.

If your organization prefers to continue managing HIPAA accounts individually, you can still do that.  We have streamlined the process for accepting an individual account BAA as well. It takes less than two minutes to designate a single account as a HIPAA account in AWS Artifact. You can watch the new video here to learn how.

As with all AWS Artifact features, there is no additional cost to use AWS Artifact to review, accept, and manage individual account BAAs or the new organization BAA. To learn more, go to the FAQ page.

 

Delegate permission management to developers by using IAM permissions boundaries

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/

Today, AWS released a new IAM feature that makes it easier for you to delegate permissions management to trusted employees. As your organization grows, you might want to allow trusted employees to configure and manage IAM permissions to help your organization scale permission management and move workloads to AWS faster. For example, you might want to grant a developer the ability to create and manage permissions for an IAM role required to run an application on Amazon EC2. This ability is powerful and might be used inappropriately or accidentally to attach an administrator access policy to obtain full access to all resources in an account. Now, you can set a permissions boundary to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.

A permissions boundary is an advanced feature that allows you to limit the maximum permissions that a principal can have. Before we walk you through a specific example, here is an overview of how permissions boundaries work. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. See the following diagram for a visual representation.
 

Figure 1: The intersection of permissions boundary and permissions policy

In this post, we’ll walk through an example that shows how to grant an employee permission to create IAM roles and assign permissions. We’ll also show how to ensure that these IAM roles can only access Amazon DynamoDB actions and resources in the AWS EU (Frankfurt) region. This solution requires the following steps.

IAM administrator tasks

  1. Define the permissions boundary by creating a customer-managed policy.
  2. Create and attach a permissions policy to allow an employee to create roles, but only with a permissions boundary and a name that meets a specific convention.
  3. Create and attach a permissions policy to allow an employee to pass this role to Amazon EC2.

Employee tasks

  1. Create a role with the required permissions boundary.
  2. Attach a permissions policy to the role.

Administrator step 1: Define the permissions boundary

As an IAM administrator, we’ll create a customer managed policy that grants permissions to put, update, and delete items on all DynamoDB tables in the AWS EU (Frankfurt) region. We’ll require employees to set this policy as the permissions boundary for the roles they create. To follow along, paste the following JSON policy in a file with the name DynamoDB_Boundary_Frankfurt_Text.json.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": "eu-central-1"
        }
      }
    }
  ]
}

Next, use the create-policy AWS CLI command to create the policy, DynamoDB_Boundary_Frankfurt.

$aws iam create-policy --policy-name DynamoDB_Boundary_Frankfurt --policy-document file://DynamoDB_Boundary_Frankfurt_Text.json

Note: You can also use an AWS managed policy as a permissions boundary.

Administrator step 2: Create and attach the permissions policy

Create a policy that grants permissions to create IAM roles with the DynamoDB_Boundary_Frankfurt permissions boundary, and a name that begins with the prefix MyTestApp. This policy also grants permissions to create and attach IAM policies to roles with this boundary and naming convention. The permissions boundary controls the maximum permissions these roles can have. The naming convention enables administrators to more effectively grant access to manage and use these roles, without updating the employee’s permissions when they create a role. The naming convention also makes it easier to audit and identify roles created by an employee. To create this policy, paste the following JSON policy document in a file with the name Permissions_Policy_For_Employee_Text.json. Make sure to replace the variable <ACCOUNT_NUMBER> with your own AWS account number. You can update the policy to grant additional permissions, such as launching EC2 instances in a specific subnet or allowing read-only access on items in a DynamoDB table.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SetPermissionsBoundary",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:DetachRolePolicy"
      ],
      "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt"
        }
      }
    },
    {
      "Sid": "CreateAndEditPermissionsPolicy",
      "Effect": "Allow",
      "Action": [
        "iam:CreatePolicy",
        "iam:CreatePolicyVersion"
      ],
      "Resource": "*"
    }
  ]
}

Next, use the create-policy command to create the customer managed policy, Permissions_Policy_For_Employee, and use the attach-role-policy command to attach this policy to the principal, MyEmployeeRole, used by your employee.

$aws iam create-policy --policy-name Permissions_Policy_For_Employee --policy-document file://Permissions_Policy_For_Employee_Text.json

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Permissions_Policy_For_Employee --role-name MyEmployeeRole

Administrator step 3: Create and attach the permissions policy for granting permissions to pass the role

Create a policy to allow the employee to pass the roles they created to AWS services, such as Amazon EC2, enabling these services to assume the roles and perform actions on the employee’s behalf. To do this, paste the following JSON policy document in a file with the name Pass_Role_Policy_Text.json.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*"
        }
    ]
}

Then, use the create-policy command to create the policy, Pass_Role_Policy, and the attach-role-policy command to attach this policy to the principal, MyEmployeeRole.

$aws iam create-policy --policy-name Pass_Role_Policy --policy-document file://Pass_Role_Policy_Text.json

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Pass_Role_Policy --role-name MyEmployeeRole

As the IAM administrator, we’ve successfully defined a permissions boundary. We’ve also granted our employee the ability to create IAM roles and attach permissions policies, while ensuring the permissions of the roles don’t exceed the boundary that we set.

Managing Permissions Boundaries

Changing and modifying a permissions boundary is a powerful permission. You should reserve this permission for full administrators in an account. You can do this by ensuring that policies you use as permissions boundaries don’t include the DeleteUserPermissionsBoundary and DeleteRolePermissionsBoundary actions. Or, if you allow “iam:*” actions, then you must explicitly deny those actions.
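
If your boundary policy does allow broad “iam:*” access, a sketch of the explicit deny described above is a statement like the following, included alongside your allow statements:

{
    "Sid": "DenyBoundaryRemoval",
    "Effect": "Deny",
    "Action": [
        "iam:DeleteUserPermissionsBoundary",
        "iam:DeleteRolePermissionsBoundary"
    ],
    "Resource": "*"
}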

Employee step 1: Create a role by providing the permissions boundary

Your employee can now use the create-role command to create a new IAM role with the DynamoDB_Boundary_Frankfurt permissions boundary and the attach-role-policy command to attach permissions policies to this role.

For this post, we assume that your employee operates an application, MyTestApp, on Amazon EC2 that requires access to the Amazon DynamoDB table, MyTestApp_DDB_Table. The employee can paste the following JSON policy document and save it as Role_Trust_Policy_Text.json to define the trust policy.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then, the employee can use the create-role command to create the IAM role, MyTestAppRole, and define the permissions boundary as DynamoDB_Boundary_Frankfurt. The create-role command will fail if the employee doesn’t provide the appropriate permissions boundary. Make sure the <ACCOUNT_NUMBER> variable is replaced with the employee’s account number in the command below.

$aws iam create-role --role-name MyTestAppRole \
--assume-role-policy-document file://Role_Trust_Policy_Text.json \
--permissions-boundary arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt

Next, the employee grants permissions to this role by using the attach-role-policy command to attach the following policy, MyTestApp_DDB_Permissions. This policy grants the ability to perform all actions on the DynamoDB table, MyTestApp_DDB_Table.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": [
        "arn:aws:dynamodb:eu-central-1:<ACCOUNT_NUMBER>:table/MyTestApp_DDB_Table"
      ]
    }
  ]
}

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/MyTestApp_DDB_Permissions \
--role-name MyTestAppRole

Although the employee granted full DynamoDB access, the effective permissions for this IAM role are the intersection of the permissions boundary, DynamoDB_Boundary_Frankfurt, and the permissions policy, MyTestApp_DDB_Permissions. This means the role only has access to put, update, and delete items on the MyTestApp_DDB_Table in the AWS EU (Frankfurt) region. See the following diagram for a visual representation.
 

Figure 2: Effective permissions for the IAM role
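
One way to spot-check this result is the IAM policy simulator, which evaluates the role’s permissions policy together with its permissions boundary. A sketch (the action and table names are the ones from this example; condition keys such as aws:RequestedRegion can be supplied with --context-entries if you want to simulate them as well):

$aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestAppRole --action-names dynamodb:PutItem dynamodb:CreateTable --resource-arns arn:aws:dynamodb:eu-central-1:<ACCOUNT_NUMBER>:table/MyTestApp_DDB_Table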

Summary

We demonstrated how to use permissions boundaries to delegate IAM permission management. Using permissions boundaries can help you scale permission management in your organization and move workloads to AWS faster. To learn more, see the IAM documentation for permissions boundaries.

If you have comments about this post, submit them in the Comments section below. If you have questions or suggestions, please start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

How to connect to AWS Secrets Manager service within a Virtual Private Cloud

Post Syndicated from Divya Sridhar original https://aws.amazon.com/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/

You can now use AWS Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink and keep traffic between your VPC and Secrets Manager within the AWS network.

AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. When your application running within an Amazon VPC communicates with Secrets Manager, this communication traverses the public internet. By using Secrets Manager with Amazon VPC endpoints, you can now keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity. You can start using Secrets Manager with Amazon VPC endpoints by creating an Amazon VPC endpoint for Secrets Manager with a few clicks on the VPC console or via AWS CLI. Once you create the VPC endpoint, you can start using it without making any code or configuration changes in your application.

The diagram demonstrates how Secrets Manager works with Amazon VPC endpoints. It shows how I retrieve a secret stored in Secrets Manager from an Amazon EC2 instance. When the request is sent to Secrets Manager, the entire data flow is contained within the VPC and the AWS network.

Figure 1: How Secrets Manager works with Amazon VPC endpoints

Solution overview

In this post, I show you how to use Secrets Manager with an Amazon VPC endpoint. In this example, we have an application running on an EC2 instance in a VPC named vpc-5ad42b3c. The application requires the password for an RDS database instance running in the same VPC, and I have stored that database password in Secrets Manager. I will now show how to:

  1. Create an Amazon VPC endpoint for Secrets Manager using the VPC console.
  2. Use the Amazon VPC endpoint via AWS CLI to retrieve the RDS database secret stored in Secrets Manager from an application running on an EC2 instance.

Step 1: Create an Amazon VPC endpoint for Secrets Manager

  1. Open the Amazon VPC console, select Endpoints, and then select Create Endpoint.
  2. Select AWS Services as the Service category, and then, in the Service Name list, select the Secrets Manager endpoint service named com.amazonaws.us-west-2.secretsmanager.
     
    Figure 2: Options to select when creating an endpoint

  3. Specify the VPC you want to create the endpoint in. For this post, I chose the VPC named vpc-5ad42b3c where my RDS instance and application are running.
  4. To create a VPC endpoint, you need to specify the private IP address range in which the endpoint will be accessible. To do this, select the subnet for each Availability Zone (AZ). This restricts the VPC endpoint to the private IP address range specific to each AZ and also creates an AZ-specific VPC endpoint. Specifying more than one subnet-AZ combination helps improve fault tolerance and make the endpoint accessible from a different AZ in case of an AZ failure. Here, I specify subnet IDs for availability zones us-west-2a, us-west-2b, and us-west-2c:
     
    Figure 3: Specifying subnet IDs

  5. Select the Enable Private DNS Name checkbox for the VPC endpoint. Private DNS resolves the standard Secrets Manager DNS hostname https://secretsmanager.<region>.amazonaws.com. to the private IP addresses associated with the VPC endpoint specific DNS hostname. As a result, you can access the Secrets Manager VPC Endpoint via the AWS Command Line Interface (AWS CLI) or AWS SDKs without making any code or configuration changes to update the Secrets Manager endpoint URL.
     
    Figure 4: The “Enable Private DNS Name” checkbox

  6. Associate a security group with this endpoint. The security group enables you to control the traffic to the endpoint from resources in your VPC. For this post, I chose to associate the security group named sg-07e4197d that I created earlier. This security group has been set up to allow all instances running within VPC vpc-5ad42b3c to access the Secrets Manager VPC endpoint. Select Create endpoint to finish creating the endpoint.
     
    Figure 5: Associate a security group and create the endpoint

  7. To view the details of the endpoint you created, select the link on the console.
     
    Figure 6: Viewing the endpoint details

  8. The Details tab shows all the DNS hostnames generated while creating the Amazon VPC endpoint that can be used to connect to Secrets Manager. I can now use the standard endpoint secretsmanager.us-west-2.amazonaws.com or one of the VPC-specific endpoints to connect to Secrets Manager within vpc-5ad42b3c where my RDS instance and application also resides.
     
    Figure 7: The “Details” tab
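
If you’d rather script this step, the same endpoint can be created with a single CLI call along these lines (the subnet IDs are placeholders; you can confirm the exact service name for your region with aws ec2 describe-vpc-endpoint-services):

$aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id vpc-5ad42b3c --service-name com.amazonaws.us-west-2.secretsmanager --subnet-ids subnet-0a1b2c3d subnet-4e5f6a7b subnet-8c9d0e1f --security-group-ids sg-07e4197d --private-dns-enabled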

Step 2: Access Secrets Manager through the VPC endpoint

Now that I have created the VPC endpoint, all traffic between my application running on an EC2 instance hosted within VPC named vpc-5ad42b3c and Secrets Manager will be within the AWS network. This connection will use the VPC endpoint and I can use it to retrieve my RDS database secret stored in Secrets Manager. I can retrieve the secret via the AWS SDK or CLI. As an example, I can use the CLI command shown below to retrieve the current version of my RDS database secret:

$aws secretsmanager get-secret-value --secret-id MyDatabaseSecret --version-stage AWSCURRENT

Since my AWS CLI is configured for the us-west-2 region, it uses the standard Secrets Manager endpoint URL https://secretsmanager.us-west-2.amazonaws.com. This standard endpoint automatically routes to the VPC endpoint because I enabled support for private DNS hostnames while creating the VPC endpoint. The above command results in the following output:


{
  "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyDatabaseSecret-a1b2c3",
  "Name": "MyDatabaseSecret",
  "VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE",
  "SecretString": "{\n  \"username\":\"david\",\n  \"password\":\"BnQw&XDWgaEeT9XGTT29\"\n}\n",
  "VersionStages": [
    "AWSCURRENT"
  ],
  "CreatedDate": 1523477145.713
} 
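
If you want to confirm that private DNS is doing its work, resolve the standard hostname from the EC2 instance (for example with dig or nslookup, depending on what’s installed); from inside the VPC it should return private IP addresses in your subnet ranges rather than public addresses:

dig +short secretsmanager.us-west-2.amazonaws.com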

Summary

I’ve shown you how to create a VPC endpoint for AWS Secrets Manager and retrieve an RDS database secret using the VPC endpoint. Secrets Manager VPC endpoints help you meet compliance and regulatory requirements to limit public internet connectivity from your VPC. They enable your applications running within a VPC to use Secrets Manager while keeping traffic between the VPC and Secrets Manager within the AWS network. You can start using Amazon VPC endpoints for Secrets Manager by creating endpoints in the VPC console or AWS CLI. Once created, your applications that interact with Secrets Manager do not require any code or configuration changes.

To learn more about connecting to Secrets Manager through a VPC endpoint, read the Secrets Manager documentation. For guidance about your overall VPC network structure, see Practical VPC Design.

If you have questions about this feature or anything else related to Secrets Manager, start a new thread in the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Recovering from a rough Monday morning: An Amazon GuardDuty threat detection and remediation scenario

Post Syndicated from Greg McConnel original https://aws.amazon.com/blogs/security/amazon-guardduty-threat-detection-and-remediation-scenario/

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. Given the many log types that Amazon GuardDuty analyzes (Amazon Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail, and DNS logs), you never know what it might discover in your AWS account. After enabling GuardDuty, you might quickly find serious threats lurking in your account or, preferably, just end up staring at a blank dashboard for weeks…or even longer.

A while back at an AWS Loft event, one of the customers enabled GuardDuty in their AWS account for a lab we were running. Soon after, GuardDuty alerts (findings) popped up that indicated multiple Amazon Elastic Compute Cloud (EC2) instances were communicating with known command and control servers. This means that GuardDuty detected activity commonly seen in the situation where an EC2 instance has been taken over as part of a botnet. The customer asked if this was part of the lab, and we explained it wasn’t and that the findings should be immediately investigated. This led to an investigation by that customer’s security team and luckily the issue was resolved quickly.

Then there was the time we spoke to a customer that had been running GuardDuty for a few days but had yet to see any findings in the dashboard. They were concerned that the service wasn’t working. We explained that the lack of findings was actually a good thing, and we discussed how to generate sample findings to test GuardDuty and their remediation pipeline.
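
If you just want to exercise GuardDuty and any downstream automation without waiting for real activity, sample findings can also be generated from the CLI (the detector ID below is a placeholder; list yours first):

$aws guardduty list-detectors

$aws guardduty create-sample-findings --detector-id 12abc34d567e8fa901bc2d34e56789f0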

This post, and the corresponding GitHub repository, will help prepare you for either type of experience by walking you through a threat detection and remediation scenario. The scenario will show you how to quickly enable GuardDuty, generate and examine test findings, and then review automated remediation examples using AWS Lambda.

Scenario overview

The instructions and AWS CloudFormation template for setting everything up are provided in a GitHub repository. The CloudFormation template sets up a test environment in your AWS Account, configures everything needed to run through the scenario, generates GuardDuty findings and provides automatic remediation for the simulated threats in the scenario. All you need to do is run the CloudFormation template in the GitHub repository and then follow the instructions to investigate what occurred.

The scenario presented is that you manage an IT organization and Alice, your security engineer, has enabled GuardDuty in a production AWS Account and configured a few automated remediations. In threat detection and remediation, the standard pattern starts with a threat which is then investigated and finally remediated. These remediations can be manual or automated. Alice focused on a few specific attack vectors, which represent a small sample of what GuardDuty is capable of detecting. Alice has set all this up on Thursday but isn’t in the office on Monday. Unfortunately, as soon as you arrive at the office, GuardDuty notifies you that multiple threats have been detected (and given the automated remediation setup, these threats have been addressed but you still need to investigate.) The documentation in GitHub will guide you through the analysis of the findings and discuss how the automatic remediation works. You will also have the opportunity to manually trigger a GuardDuty finding and view that automated remediation.

The GuardDuty findings generated in the scenario are listed here:

You can view all of the GuardDuty findings here.

You can get started immediately by browsing to the GitHub repository for this scenario where you will find the instructions and AWS CloudFormation template. This scenario will show you how easy it is to enable GuardDuty in addition to demonstrating some of the threats GuardDuty can discover. To learn more about Amazon GuardDuty please see the GuardDuty site and GuardDuty documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

New PCI DSS report now available, eight services added in scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-pci-dss-report-now-available-eight-services-added-in-scope/

We continue to expand the scope of our assurance programs to support your most important workloads. I’m pleased to tell you that eight services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. With these additions, you can now select from a total of 62 PCI-compliant services. You can see the full list on our Services in Scope by Compliance program page. The eight newly added services are:

Amazon ElastiCache for Redis

Amazon Elastic File System

Amazon Elastic Container Registry

Amazon Polly

AWS CodeCommit

AWS Firewall Manager

AWS Service Catalog

AWS Storage Gateway

We were evaluated by third-party auditors from Coalfire and their report is available on-demand through AWS Artifact. When you go to AWS Artifact, you’ll find something new. We’ve made the full Responsibility Summary, listing each requirement and control, available in a spreadsheet. This includes a breakdown of the shared responsibility for each control – yours and ours – with a mapping to our services. We hope this new format makes it easier to evaluate and use the information from the audit.

To learn more about our PCI program and other compliance and security programs, please go to the AWS Compliance Programs page. As always, we value your feedback and questions, reach out to the team through the Contact Us page.

AWS Online Tech Talks – July 2018

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-july-2018/

Join us this month to learn about AWS services and solutions featuring topics on Amazon EMR, Amazon SageMaker, AWS Lambda, Amazon S3, Amazon WorkSpaces, Amazon EC2 Fleet and more! We also have our third episode of the “How to re:Invent” where we’ll dive deep with the AWS Training and Certification team on Bootcamps, Hands-on Labs, and how to get AWS Certified at re:Invent. Register now! We look forward to seeing you. Please note – all sessions are free and in Pacific Time.

 

Tech talks featured this month:

 

Analytics & Big Data

July 23, 2018 | 11:00 AM – 12:00 PM PT – Large Scale Machine Learning with Spark on EMR – Learn how to do large scale machine learning on Amazon EMR.

July 25, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon QuickSight: Business Analytics for Everyone – Get an introduction to Amazon QuickSight, Amazon’s BI service.

July 26, 2018 | 11:00 AM – 12:00 PM PT – Multi-Tenant Analytics on Amazon EMR – Discover how to make an Amazon EMR cluster multi-tenant to have different processing activities on the same data lake.

 

Compute

July 31, 2018 | 11:00 AM – 12:00 PM PT – Accelerate Machine Learning Workloads Using Amazon EC2 P3 Instances – Learn how to use Amazon EC2 P3 instances, the most powerful, cost-effective and versatile GPU compute instances available in the cloud.

August 1, 2018 | 09:00 AM – 10:00 AM PT – Technical Deep Dive on Amazon EC2 Fleet – Learn how to launch workloads across instance types, purchase models, and AZs with EC2 Fleet to achieve the desired scale, performance and cost.

 

Containers

July 25, 2018 | 11:00 AM – 11:45 AM PT – How Harry’s Shaved Off Their Operational Overhead by Moving to AWS Fargate – Learn how Harry’s migrated their messaging workload to Fargate and reduced message processing time by more than 75%.

 

Databases

July 23, 2018 | 01:00 PM – 01:45 PM PT – Purpose-Built Databases: Choose the Right Tool for Each Job – Learn about purpose-built databases and when to use which database for your application.

July 24, 2018 | 11:00 AM – 11:45 AM PT – Migrating IBM Db2 Databases to AWS – Learn how to migrate your IBM Db2 database to the cloud database of your choice.

 

DevOps

July 25, 2018 | 09:00 AM – 09:45 AM PT – Optimize Your Jenkins Build Farm – Learn how to optimize your Jenkins build farm using the plug-in for AWS CodeBuild.

 

Enterprise & Hybrid

July 31, 2018 | 09:00 AM – 09:45 AM PT – Enable Developer Productivity with Amazon WorkSpaces – Learn how your development teams can be more productive with Amazon WorkSpaces.

August 1, 2018 | 11:00 AM – 11:45 AM PT – Enterprise DevOps: Applying ITIL to Rapid Innovation – Innovation doesn’t have to equate to more risk for your organization. Learn how Enterprise DevOps delivers agility while maintaining governance, security and compliance.

 

IoT

July 30, 2018 | 01:00 PM – 01:45 PM PT – Using AWS IoT & Alexa Skills Kit to Voice-Control Connected Home Devices – Hands-on workshop that covers how to build a simple backend service using AWS IoT to support an Alexa Smart Home skill.

 

Machine Learning

July 23, 2018 | 09:00 AM – 09:45 AM PT – Leveraging ML Services to Enhance Content Discovery and Recommendations – See how customers are using computer vision and language AI services to enhance content discovery & recommendations.

July 24, 2018 | 09:00 AM – 09:45 AM PT – Hyperparameter Tuning with Amazon SageMaker’s Automatic Model Tuning – Learn how to use Automatic Model Tuning with Amazon SageMaker to get the best machine learning model for your datasets, to tune hyperparameters.

July 26, 2018 | 09:00 AM – 10:00 AM PT – Build Intelligent Applications with Machine Learning on AWS – Learn how to accelerate development of AI applications using machine learning on AWS.

 

re:Invent

July 18, 2018 | 08:00 AM – 08:30 AM PT – Episode 3: Training & Certification Round-Up – Join us as we dive deep with the AWS Training and Certification team on Bootcamps, Hands-on Labs, and how to get AWS Certified at re:Invent.

 

Security, Identity, & Compliance

July 30, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Well-Architected Security Best Practices – Discover and walk through essential best practices for securing your workloads using a number of AWS services.

 

Serverless

July 24, 2018 | 01:00 PM – 02:00 PM PT – Getting Started with Serverless Computing Using AWS Lambda – Get an introduction to serverless and how to start building applications with no server management.

 

Storage

July 30, 2018 | 09:00 AM – 09:45 AM PT – Best Practices for Security in Amazon S3 – Learn about Amazon S3 security fundamentals and lots of new features that help make security simple.

Podcast: We developed Amazon GuardDuty to meet scaling demands, now it could assist with compliance considerations such as GDPR

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/podcast-we-developed-amazon-guardduty-to-meet-scaling-demands-now-it-could-assist-with-compliance-considerations-such-as-gdpr/

It isn’t simple to meet the scaling requirements of AWS when creating a threat detection monitoring service. Our service teams have to maintain the ability to deliver at a rapid pace. That led to the question: What can be done to make a security service as frictionless as possible to business demands?

Core parts of our internal solution can now be found in Amazon GuardDuty, which doesn’t require deployment of software or security infrastructure. Instead, GuardDuty uses machine learning to monitor metadata for access activity such as unusual API calls. This method turned out to be highly effective. Because it worked well for us, we thought it would work well for our customers, too. Additionally, when we externalized the service, we enabled it to be turned on with a single click. The customer response to Amazon GuardDuty has been positive with rapid adoption since launch in late 2017.

The service’s monitoring capabilities and threat detections could become increasingly helpful to customers concerned with data privacy or facing regulations such as the EU’s General Data Protection Regulation (GDPR). Listen to the podcast with Senior Product Manager Michael Fuller to learn how Amazon GuardDuty could be leveraged to meet your compliance considerations.

New guide helps explain cloud security with AWS for public sector customers in India

Post Syndicated from Meng Chow Kang original https://aws.amazon.com/blogs/security/new-guide-helps-explain-cloud-security-with-aws-for-public-sector-customers-in-india/

Our teams are continuing to focus on compliance enablement around the world and now that includes a new guide for public sector customers in India. The User Guide for Government Departments and Agencies in India provides information that helps government users at various central, state, district, and municipal agencies understand security and controls available with AWS. It also explains how to implement appropriate information security, risk management, and governance programs using AWS Services, which are offered in India by Amazon Internet Services Private Limited (AISPL).

The guide focuses on the Ministry of Electronics and Information Technology (Meity) requirements that are detailed in Guidelines for Government Departments for Adoption/Procurement of Cloud Services, addressing common issues that public sector customers encounter.

Our newest guide is part of a series diving into customer compliance issues across industries and jurisdictions, such as financial services guides for Singapore, Australia, and Hong Kong. We’ll be publishing additional guides this year to help you understand other regulatory requirements around the world.

Want more AWS Security news? Follow us on Twitter.

New data classification whitepaper available

Post Syndicated from Momena Cheema original https://aws.amazon.com/blogs/security/new-data-classification-whitepaper-available/

We’ve published a new whitepaper, Secure Cloud Adoption: Data Classification, to help governments address data classification. Data classification is a foundational step in cybersecurity risk management. It involves identifying the types of data that are being processed and stored in an information system owned or operated by an organization. It also involves making a determination about the sensitivity of the data and the likely impact arising from compromise, loss, or misuse.

While data classification has been used for decades to help organizations safeguard sensitive or critical data with appropriate levels of protection, some traditional classification approaches lacked specificity and would place large amounts of differing levels of data under the same strict tier. Regardless of whether data is processed or stored in traditional, on-premise systems or the cloud, data classification is a starting point for maintaining the confidentiality—and potentially the integrity and availability—of data based on the data’s risk impact level, so setting the right level of specificity matters.

This whitepaper is focused on best practices and the models governments can use to classify their data so they can more quickly move their computing workloads to the cloud. It describes the practices and models that have been implemented by early adopters, and it recommends practices to meet internationally recognized standards and frameworks.

If you have questions or want to learn more, contact your account executive or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

How AWS uses automated reasoning to help you achieve security at scale

Post Syndicated from Andrew Gacek original https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova/

At AWS, we focus on achieving security at scale to diminish risks to your business. Fundamental to this approach is ensuring your policies are configured in a way that helps protect your data, and the Automated Reasoning Group (ARG), an advanced innovation team at AWS, is using automated reasoning to do it.

What is automated reasoning, you ask? It’s a method of formal verification that automatically generates and checks mathematical proofs which help to prove the correctness of systems; that is, fancy math that proves things are working as expected. If you want a deeper understanding of automated reasoning, check out this re:Invent session. While the applications of this methodology are vast, in this post I’ll explore one specific aspect: analyzing policies using an internal Amazon service named Zelkova.

What is Zelkova? How will it help me?

Zelkova uses automated reasoning to analyze policies and the future consequences of policies. This includes AWS Identity and Access Management (IAM) policies, Amazon Simple Storage Service (S3) policies, and other resource policies. These policies dictate who can (or can’t) do what to which resources. Because Zelkova uses automated reasoning, you no longer need to think about what questions you need to ask about your policies. Using fancy math, as mentioned above, Zelkova will automatically derive the questions and answers you need to be asking about your policies, improving confidence in your security configuration(s).

How does it work?

Zelkova translates policies into precise mathematical language and then uses automated reasoning tools to check properties of the policies. These tools include automated reasoners called Satisfiability Modulo Theories (SMT) solvers, which use a mix of numbers, strings, regular expressions, dates, and IP addresses to prove and disprove logical formulas. Zelkova has a deep understanding of the semantics of the IAM policy language and builds upon a solid mathematical foundation. While tools like the IAM Policy Simulator let you test individual requests, Zelkova is able to use mathematics to talk about all possible requests. Other techniques guess and check, but Zelkova knows.

You may have noticed, as an example, the new “Public / Not public” checks in S3. These are powered by Zelkova:
 

Figure 1: The “Public/Not public” checks in S3

S3 uses Zelkova to check each bucket policy and warns you if an unauthorized user is able to read or write to your bucket. When a bucket is flagged as “Public”, there are some public requests that are allowed to access the bucket. However, when a bucket is flagged as “Not public”, all public requests are denied. Zelkova is able to make such statements because it has a precise mathematical representation of IAM policies. In fact, it creates a formula for each policy and proves a theorem about that formula.

Consider the following S3 bucket policy statement where my goal is to disallow a certain principal from accessing the bucket:


{
    "Effect": "Allow",
    "NotPrincipal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

Unfortunately, this policy statement does not capture my intentions. Instead, it allows access for everybody in the world who is not the given principal. This means almost everybody now has access to my bucket, including anonymous unauthorized users. Fortunately, as soon as I attach this policy, S3 flags my bucket as “Public”—warning me that there’s something wrong with the policy I wrote. How did it know?

Zelkova translates this policy into a mathematical formula:

(Resource = “arn:aws:s3:::test-bucket”) ∧ (Principal ≠ 111122223333)

Here, ∧ is the mathematical symbol for “and” which is true only when both its left and right side are true. Resource and Principal are variables just like you would use x and y in algebra class. The above formula is true exactly when my policy allows a request. The precise meaning of my policy has now been defined in the universal language of mathematics. The next step is to decide if this policy formula allows public access, but this is a hard problem. Now Zelkova really goes to work.

A counterintuitive trick sometimes used by mathematicians is to make a problem harder in order to make finding a solution easier. That is, solving a more difficult problem can sometimes lead to a simpler solution. In this case, Zelkova solves the harder problem of comparing two policies against each other to decide which is more permissive. If P1 and P2 are policy formulas, then suppose formula P1 ⇒ P2 is true. This arrow symbol is an implication that means whenever P1 is true, P2 must also be true. So, whenever policy 1 accepts a request, policy 2 must also accept the request. Thus, policy 2 is at least as permissive as policy 1. Suppose also that the converse formula P2 ⇒ P1 is not true. That means there’s a request which makes P2 true and P1 false. This request is allowed by policy 2 and is denied by policy 1. Combining all these results, policy 1 is strictly less permissive than policy 2.
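To make this concrete, here’s a toy sketch of the comparison idea using the open-source Z3 SMT solver’s Python bindings (pip install z3-solver). This is my own illustration, not Zelkova’s implementation: the two “policies” are deliberately simplified formulas over a request’s principal and resource.

from z3 import String, And, Not, Implies, Solver, sat

principal = String("principal")
resource = String("resource")

# P1: allow only one specific principal on the bucket.
p1 = And(resource == "arn:aws:s3:::test-bucket", principal == "111122223333")

# P2: allow any principal on the bucket.
p2 = resource == "arn:aws:s3:::test-bucket"

def implies(a, b):
    # The implication a => b holds for every request exactly when its negation is unsatisfiable.
    s = Solver()
    s.add(Not(Implies(a, b)))
    return s.check() != sat

print(implies(p1, p2))  # True:  P2 is at least as permissive as P1
print(implies(p2, p1))  # False: some request is allowed by P2 but denied by P1

Because P1 ⇒ P2 holds and P2 ⇒ P1 does not, P1 is strictly less permissive than P2.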

How does this solve the “Public / Not public” problem? Zelkova has a special policy that allows anonymous, unauthorized users to access an S3 resource. It compares your policy against this policy. If your policy is more permissive, then Zelkova says your policy allows public access. If you restrict access—for example, based on source VPC endpoint (aws:SourceVpce) or source IP address (aws:SourceIp)—then your policy is not more permissive than the special policy, and Zelkova says your policy does not allow public access.

For all this to work, Zelkova uses SMT solvers. Using mathematical language, these tools take a formula and either prove it is true for all possible values of the variables, or they return a counterexample that makes the formula false.

To understand SMT solvers better, you can play with them yourself. For example, if asked to prove x+y > x, an SMT solver will quickly find a counterexample such as x=5, y=-1. To fix this, you could strengthen your formula to assume that y is positive:

(y > 0) ⇒ (x + y > x)

The SMT solver will now respond that your formula is true for all values of the variables x and y. It does this using the rules of algebra and logic. This same idea carries over into theories like strings. You can ask the SMT solver to prove the formula length(append(a,b)) > length(a) where a and b are string variables. It will find a counterexample such as a=”hello” and b=”” where b is the empty string. This time, you could fix your formula by changing from greater-than to greater-than-or-equal-to:

length(append(a, b)) ≥ length(a)

The SMT solver will now respond that the formula is true for all values of the variables a and b. Here, the solver has combined reasoning about strings (length, append) with reasoning about numbers (greater-than-or-equal-to). SMT solvers are designed for exactly this sort of theory composition.
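If you’d like to try these two examples yourself, the sketch below uses the open-source Z3 SMT solver’s Python bindings (an assumption on my part; Zelkova’s internal solvers aren’t publicly available). The prove() helper prints either “proved” or a counterexample.

from z3 import Ints, Strings, Implies, Length, Concat, prove

x, y = Ints("x y")

# Z3 finds a counterexample (for example, x = 5, y = -1) for the unguarded claim...
prove(x + y > x)

# ...and reports "proved" once y is assumed to be positive.
prove(Implies(y > 0, x + y > x))

a, b = Strings("a b")

# The strict inequality fails when b is the empty string...
prove(Length(Concat(a, b)) > Length(a))

# ...but the non-strict version holds for all strings a and b.
prove(Length(Concat(a, b)) >= Length(a))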

What about my original policy? Once I see that my bucket is public, I can fix my policy using an explicit “Deny”:


{
    "Effect": "Deny"
    "Principal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

With this policy statement attached, S3 correctly reports my bucket as “Not public”. Zelkova has translated this policy into a mathematical formula, compared it against a special policy, and proved that my policy is less permissive. Fancy math has proved that things are working (or in this case, not working) as expected.

Where else is Zelkova being used?

In addition to S3, several other AWS services use Zelkova to check the policies and resource configurations you create.

We have also engaged with a number of enterprise and regulated customers who have adopted Zelkova for their use cases:

“Bridgewater, like many other security-conscious AWS customers, needs to quickly reason about the security posture of our AWS infrastructure, and an integral part of that posture is IAM policies. These govern permissions on everything from individual users, to S3 buckets, KMS keys, and even VPC endpoints, among many others. Bridgewater uses Zelkova to verify and provide assurances that our policies do not allow data exfiltration, misconfigurations, and many other malicious and accidental undesirable behaviors. Zelkova allows our security experts to encode their understanding once and then mechanically apply it to any relevant policies, avoiding error-prone and slow human reviews, while at the same time providing us high confidence in the correctness and security of our IAM policies.”
Dan Peebles, Lead Cloud Security Architect at Bridgewater Associates

Summary

AWS services such as S3 use Zelkova to precisely represent policies and prove that they are secure—improving confidence in your security configurations. Zelkova can make broad statements about all resource requests because it’s based on mathematics and proofs instead of heuristics, pattern matching, or simulation. The ubiquity of policies in AWS means that the value of Zelkova and its benefits will continue to grow as it serves to protect more customers every day.

Want more AWS Security news? Follow us on Twitter.

Podcast: How AWS KMS could help customers meet encryption and deletion requirements, including GDPR

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/podcast-how-aws-kms-could-help-customers-meet-encryption-and-deletion-requirements-including-gdpr/

Encryption is a powerful tool to protect your data, but it can be difficult to get right because it demands understanding how encryption keys are created, distributed, used, and managed. To make encryption easier to use, we created AWS Key Management Service (KMS) to let you scale your use of the cloud without struggling to ensure encryption is used consistently across workloads.

Because AWS KMS makes it easy for you to create and control the encryption keys used to encrypt your data, the service can be used to meet both encryption and deletion requirements in a data lifecycle management policy. Cryptographic deletion is the idea that you can delete a relatively small number of keys to make a large amount of encrypted data irretrievable. This concept is being widely discussed as an option for organizations facing data deletion requirements, such as those in the EU’s General Data Protection Regulation (GDPR).
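As a rough illustration of cryptographic deletion (a minimal sketch of my own, not from the podcast; the key ID is a placeholder), you can encrypt data under a KMS key and later schedule that key for deletion, which makes every ciphertext produced under it unrecoverable once the key is gone:

import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder KMS key ID

# Encrypt a small payload directly under the KMS key.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"sensitive record")["CiphertextBlob"]

# Later, deleting the key "cryptographically deletes" everything encrypted under it.
# KMS enforces a waiting period of 7 to 30 days before the key is removed for good.
kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=7)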

Listen to the podcast and hear from Ken Beer, general manager of AWS KMS, about best practices related to encryption, key management, and cryptographic deletion. He also covers the advantages of KMS over on-premises systems and how the service has been designed so that even AWS operators can’t access customer keys.

How to create custom alerts with Amazon Macie

Post Syndicated from Jeremy Haynes original https://aws.amazon.com/blogs/security/how-to-create-custom-alerts-with-amazon-macie/

Amazon Macie is a security service that makes it easy for you to discover, classify, and protect sensitive data in Amazon Simple Storage Service (Amazon S3). Macie collects AWS CloudTrail events and Amazon S3 metadata such as permissions and content classification. In this post, I’ll show you how to use Amazon Macie to create custom alerts for those data sets to notify you of events and objects of interest. I’ll go through the various types of data you can find in Macie, and talk about how to identify fields that are relevant to a given security use case. What you’ll learn is a method of investigation and alerting that you’ll be able to apply to your own situation.

If you’re totally new to Macie, make sure you first head over to the AWS Blog launch post for instructions on how to get started.

Understand the types of data in Macie

There are three sources of data found in Macie: CloudTrail events, S3 Bucket metadata, and S3 Object metadata. Each data source is stored separately in its own index. Therefore, when writing queries it’s important to keep fields from different sources in separate queries.

Within each data source there are two types of fields: Extracted and Generated. Extracted fields come directly from pre-existing AWS fields associated with that data source. For example, Extracted fields from the CloudTrail data source come directly from the events recorded in the CloudTrail logs that correspond to the actions taken by an IAM user, role, or an AWS service. See these CloudTrail log file examples for more details.

Generated fields, on the other hand, are created by Macie. These fields provide additional security value and context for creating more powerful queries. For example, the Internet Service Provider (ISP) field is populated by taking the IP addresses from a CloudTrail event and matching them against a global service provider list. This enables searching for the ISP associated with an API call.

In the sections below, I’ll explore each of the three data sources in more detail. There are reference guides in the Macie documentation dedicated to each source that provide an extensive list of fields and descriptions. I’ll use these lists to discover what fields are appropriate for investigating a particular security use case.

Choose a security use case to instrument alerts

For the purposes of this post, I’ll choose one particular security use case to focus on. The process of discovering relevant data fields collected by Macie and turning those into custom alerts will be the same for any use case you wish to investigate. With that in mind, let’s choose the theme of sensitive or critical data stored in S3 to explore because it can affect many AWS customers.

When beginning to design alerts, the first step is to think about all of the resources, attributes, actions, and identities related to the subject. In this case, I’m looking at sensitive or critical data stored in S3, so the following are some potentially useful fields of data to consider:

  • S3 bucket and object resources
  • S3 configuration and security attributes
  • Read, write, and delete actions on S3
  • IAM users, roles, and access policies associated with the S3 resources

I’ll use this list to guide my search for relevant fields in the Macie reference documentation. First, I’ll dive into the CloudTrail data.

Explore CloudTrail data

The best way to build an understanding of what activities are happening in your AWS environment is by using the CloudTrail data source because it contains your AWS API calls and the identities that made them. If you’re unfamiliar with CloudTrail, head over to the AWS CloudTrail documentation for more information.

Ok, I have a critical S3 bucket and I want to be notified each time an object write is attempted to it outside of the typical access pattern used in my corporate environment. In this case, write actions to the bucket are normally controlled by my organization’s own identity system via federated access. So that means I want to write a query that searches for object write attempts by a non-federated principal. If you’re unfamiliar with what federation is, that’s ok, you can still follow along. If you’re curious about federation, see the IAM Identity Providers and Federation documentation.

I begin by opening the Macie CloudTrail Reference documentation and looking at the section titled “CloudTrail Data Fields Extracted by Macie.” Skimming over the list, I see that the objectsWritten.key description matches my criteria for investigating actions on an S3 resource: “A list of S3 objects’ ARNs that were part of a PutObject, CopyObject, or CompleteMultipartUpload API calls.”

Next, I take a look at the CloudTrail field names that are extracted from the userIdentity object in an event. The userIdentity.type field looks promising, but I’m unsure what values this field can accept. Using the CloudTrail userIdentity Element documentation as a reference, I look up the type field and see that FederatedUser is listed as one of the values.

Great! I have all of the information necessary to write my first custom query. I’ll search for all “user sessions” (a 5-minute aggregation of CloudTrail events corresponding to a user) that include both an attempted write to my S3 bucket and an API call with a userIdentity type that is not FederatedUser. It’s important to note that I’m searching groups of events rather than individual events, because I care about the combination of activity within a session rather than about any single API call. To match values that do not equal FederatedUser, I use a regular expression, placing FederatedUser inside parentheses with a tilde character (~) at the beginning. Here’s what I came up with, using the corresponding Macie field names and example search queries as a formatting guide:
 
objectsWritten.key:/arn:aws:s3:::my_sensitive_bucket.*/ AND userIdentityType.key:/(~FederatedUser)/
 
Now, it’s time to test this query on the Research page and turn it into a custom alert.

Create custom alerts

The first step to create a custom alert is to run the proposed query from the Research page. I do this so I can verify that the results match my expectations and to get an idea of how often the alert will occur. Once I’m happy with the query, setting up a custom alert is only a few clicks away.

  1. In the Macie navigation pane, select Research.
  2. Select the data source matching the query from the options in the drop-down menu. In this case, I select CloudTrail data.
     
    Figure 1: Select the data source on the Research page

  3. To run the search, I copy my custom query, paste it in the query box, and then select Enter.
     
    objectsWritten.key:/arn:aws:s3:::my_sensitive_bucket.*/ AND userIdentityType.key:/(~FederatedUser)/
     
  4. At this point, I verify that the results match my expectations and make any necessary modifications to the query. To learn more about how the features of the Research page can help with verifying and modifying queries, see Using the Macie Research Tab.
  5. Once I’m confident in the results, I select the Save query as alert icon.
     
    Figure 2: Select the “Save query as alert” icon

  6. Now, I fill in the remaining fields: Alert title, Description, Min number of matches, and Severity. For more information about each of these fields, see Macie Adding New Custom Basic Alerts.
     
    Figure 3: Fill in the remaining “Alert title,” “Description,” “Min number of matches,” and “Severity” fields

  7. In the Whitelisted users field, I add any users which appeared in my Research page results that I would like to exclude from the alert. For more details on this feature, see Whitelisting Users or Buckets for Basic Alerts.
  8. Finally, I save the alert.

That’s it! I now have a custom basic alert that will notify me every time there’s an attempt to write to my bucket in the same user session as an API call with a userIdentityType other than FederatedUser. Now, I’ll use the S3 object and S3 bucket data sources to look for some more useful fields.

Use S3 object data

The S3 object data source contains fields extracted from S3 API metadata, as well as fields populated by Macie relating to content classification. To expand my search and alerts for sensitive or critical data stored in S3, I’ll look at the S3 Object Data Reference documentation and look for fields that are promising candidates.

I see that S3 Server Side Encryption Settings metadata could be useful because I know that all of the objects in a certain bucket should be encrypted using AES256, and I’d like to be notified every time an object is uploaded that doesn’t match that attribute.

To create the query, I combine the server_encryption field and the bucket field to match on all S3 objects within the specified bucket. Note the forward slashes and “.*” that make this a regular expression search. This allows me to match all buckets that share my project name, even when the full bucket name is different.
 
filesystem_metadata.bucket:/.*my_sensitive_project.*/ AND NOT filesystem_metadata.server_encryption:"AES256"
 
Next, I have a different bucket that I know should never have any objects which contain personally identifiable information (PII). This includes such information as names, email addresses, and credit card numbers. For the full list of what Macie considers PII, see Classifying Data with Amazon Macie. I’d like to set up an alert that notifies me every time an object containing any type of PII is added to this bucket. Since this is a data field that’s provided by Macie, I look under the Generated Fields heading and find the field pii_impact. I’m looking for all levels of PII impact, so my query will search for any value which isn’t equal to none.

As before, I’ll combine this with the bucket field to include all S3 objects matching the bucket name.
 
filesystem_metadata.bucket:"my_logs_bucket" AND NOT pii_impact:"none"
 

Use S3 bucket data

The S3 bucket data source contains information extracted from S3 bucket APIs, as well as fields that Macie generates by processing bucket metadata and access control lists (ACLs). Following the same method as before, I head over to the S3 Bucket Data Reference documentation and look for fields that will help me create useful alerts.

There are plenty of fields which could be useful here, depending on how much information I know in advance about my bucket and which type of security vector I want to protect against. To narrow my search, I decide to add some protection against accidental or unauthorized data destruction.

In the S3 Versioning section, I see that the Multi-Factor Authentication Delete setting is one of the available fields. Since I’ve configured this bucket to require MFA for delete actions, I can create an alert to notify me any time the MFA Delete setting is disabled.
 
bucket:"my_critical_bucket" AND versioning.MFADelete:"disabled"
 
Another method of potential data destruction is through the bucket lifecycle expiration settings automatically removing data after a period of time. I’d like to know if someone changes this to a low number so I can make a modification and prevent losing any recent data.
 
bucket:"my_critical_bucket" AND lifecycle_configuration.Rules.Expiration.Days:<3
 
Now that I have gathered another set of potential alert queries, I can walk through the same steps I used for CloudTrail data to turn them into custom alerts. Once saved, I’ll begin to receive notifications in the Macie console whenever a match is found. I can view the alerts I’ve received by selecting Alerts from the Macie navigation pane.

Summary

I described the various types and sources of data available in Macie. After demonstrating how to take a security use case and discover relevant fields, I stepped through the process of creating queries and turning them into custom alerts. My goal has been to show you how to build alerts that are tailored to your specific environment and solve your own individual needs.

If you have comments about this post, submit them in the Comments section below. If you have questions about how to use Macie, or you’d like to request new fields and data sources, start a new thread on the Macie forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.