Tag Archives: Amazon Inspector

How to Patch, Inspect, and Protect Microsoft Windows Workloads on AWS—Part 2

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-inspect-and-protect-microsoft-windows-workloads-on-aws-part-2/

Yesterday in Part 1 of this blog post, I showed you how to:

  1. Launch an Amazon EC2 instance with an AWS Identity and Access Management (IAM) role, an Amazon Elastic Block Store (Amazon EBS) volume, and tags that Amazon EC2 Systems Manager (Systems Manager) and Amazon Inspector use.
  2. Configure Systems Manager to install the Amazon Inspector agent and patch your EC2 instances.

Today in Steps 3 and 4, I show you how to:

  3. Take Amazon EBS snapshots using Amazon EBS Snapshot Scheduler to automate snapshots based on instance tags.
  4. Use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

To catch up on Steps 1 and 2, see yesterday’s blog post.

Step 3: Take EBS snapshots using EBS Snapshot Scheduler

In this section, I show you how to use EBS Snapshot Scheduler to take snapshots of your instances at specific intervals. To do this, I will show you how to:

  • Determine a snapshot schedule for EBS Snapshot Scheduler, guided by best practices.
  • Deploy EBS Snapshot Scheduler by using AWS CloudFormation.
  • Tag your EC2 instances so that EBS Snapshot Scheduler backs up your instances when you want them backed up.

In addition to making sure your EC2 instances have all the available operating system patches applied on a regular schedule, you should take snapshots of the EBS storage volumes attached to your EC2 instances. Taking regular snapshots allows you to restore your data to a previous state quickly and cost effectively. With Amazon EBS snapshots, you pay only for the actual data you store, and snapshots save only the data that has changed since the previous snapshot, which minimizes your cost. You will use EBS Snapshot Scheduler to take regular snapshots of your EC2 instance's EBS volumes. EBS Snapshot Scheduler takes advantage of other AWS services including CloudFormation, Amazon DynamoDB, and AWS Lambda to make backing up your EBS volumes simple.

Determine the schedule

As a best practice, you should back up your data frequently during the hours when your data changes the most. This reduces the amount of data you lose if you have to restore from a snapshot. For the purposes of this blog post, the data for my instances changes the most between the business hours of 9:00 A.M. and 5:00 P.M. Pacific Time. During these hours, I will take snapshots hourly to minimize data loss.

In addition to backing up frequently, another best practice is to establish a retention strategy. The right retention period varies with how you use the snapshots: if you must be able to restore older data for compliance audits, your needs differ from those of someone who can detect data corruption within a few hours and only needs enough history to limit data loss. EBS Snapshot Scheduler enables you to specify the retention period for your snapshots. For this post, I only need to keep snapshots for recent business days. To account for weekends, I will set my retention period to three days, down from the default of 15 days when deploying EBS Snapshot Scheduler.

Deploy EBS Snapshot Scheduler

In Step 1 of Part 1 of this post, I showed how to configure an EC2 for Windows Server 2012 R2 instance with an EBS volume. You will use EBS Snapshot Scheduler to take eight snapshots each weekday of your EC2 instance’s EBS volumes:

  1. Navigate to the EBS Snapshot Scheduler deployment page and choose Launch Solution. This takes you to the CloudFormation console in your account. The Specify an Amazon S3 template URL option is already selected and prefilled. Choose Next on the Select Template page.
  2. On the Specify Details page, retain all default parameters except for AutoSnapshotDeletion. Set AutoSnapshotDeletion to Yes to ensure that old snapshots are periodically deleted. The default retention period is 15 days (you will specify a shorter value on your instance in the next subsection).
  3. Choose Next twice to move to the Review step, and start deployment by choosing the I acknowledge that AWS CloudFormation might create IAM resources check box and then choosing Create.

Tag your EC2 instances

EBS Snapshot Scheduler takes a few minutes to deploy. While waiting for its deployment, you can start to tag your instance to define its schedule. EBS Snapshot Scheduler reads tag values and looks for four custom parameters, separated by semicolons, in the following order:

  • <snapshot time> – Time in 24-hour format with no colon.
  • <retention days> – The number of days (a positive integer) to retain the snapshot before deletion, if set to automatically delete snapshots.
  • <time zone> – The time zone of the times specified in <snapshot time>.
  • <active day(s)> – all, weekdays, or mon, tue, wed, thu, fri, sat, and/or sun.

Because you want hourly backups on weekdays between 9:00 A.M. and 5:00 P.M., you need to configure eight tags, one for each snapshot hour. Note that the tag values in the following table specify utc in the time-zone field; if you want the schedule interpreted in Pacific Time instead, substitute your local time zone in that field. You will add the eight tags shown in the following table to your EC2 instance; a CLI sketch for adding them follows the table.

Tag                                  Value
scheduler:ebs-snapshot:0900          0900;3;utc;weekdays
scheduler:ebs-snapshot:1000          1000;3;utc;weekdays
scheduler:ebs-snapshot:1100          1100;3;utc;weekdays
scheduler:ebs-snapshot:1200          1200;3;utc;weekdays
scheduler:ebs-snapshot:1300          1300;3;utc;weekdays
scheduler:ebs-snapshot:1400          1400;3;utc;weekdays
scheduler:ebs-snapshot:1500          1500;3;utc;weekdays
scheduler:ebs-snapshot:1600          1600;3;utc;weekdays

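If you prefer to script the tagging, the following AWS CLI sketch adds the first two tags; the instance ID is hypothetical, and the remaining hours follow the same pattern.

# Add the 0900 and 1000 snapshot-schedule tags to one instance.
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags "Key=scheduler:ebs-snapshot:0900,Value=0900;3;utc;weekdays" \
           "Key=scheduler:ebs-snapshot:1000,Value=1000;3;utc;weekdays"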
Next, you will add these tags to your instance. If you want to tag multiple instances at once, you can use Tag Editor instead. To add the tags in the preceding table to your EC2 instance:

  1. Navigate to your EC2 instance in the EC2 console and choose Tags in the navigation pane.
  2. Choose Add/Edit Tags and then choose Create Tag to add all the tags specified in the preceding table.
  3. Confirm you have added the tags by choosing Save. After adding these tags, navigate to your EC2 instance in the EC2 console. Your EC2 instance should look similar to the following screenshot.
    Screenshot of how your EC2 instance should look in the console
  4. After waiting a couple of hours, you can see snapshots beginning to populate on the Snapshots page of the EC2 console.
    Screenshot of snapshots beginning to populate on the Snapshots page of the EC2 console
  5. To check if EBS Snapshot Scheduler is active, you can check the CloudWatch rule that runs the Lambda function. If the clock icon shown in the following screenshot is green, the scheduler is active. If the clock icon is gray, the rule is disabled and does not run. You can enable or disable the rule by selecting it, choosing Actions, and choosing Enable or Disable. This also allows you to temporarily disable EBS Snapshot Scheduler.
    Screenshot of checking to see if EBS Snapshot Scheduler is active
  6. You can also monitor when EBS Snapshot Scheduler has run by choosing the name of the CloudWatch rule as shown in the previous screenshot and choosing Show metrics for the rule.
    Screenshot of monitoring when EBS Snapshot Scheduler has run by choosing the name of the CloudWatch rule

If you want to restore and attach an EBS volume, see Restoring an Amazon EBS Volume from a Snapshot and Attaching an Amazon EBS Volume to an Instance.
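
If you script your restores, the equivalent AWS CLI calls look roughly like the following sketch; the snapshot, volume, and instance IDs and the Availability Zone are hypothetical placeholders.

# Create a new volume from a snapshot, then attach it to an instance.
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device xvdf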

Step 4: Use Amazon Inspector

In this section, I show you how to use Amazon Inspector to scan your EC2 instance for common vulnerabilities and exposures (CVEs) and how to set up Amazon SNS notifications. To do this, I will show you how to:

  • Install the Amazon Inspector agent by using EC2 Run Command.
  • Set up notifications using Amazon SNS to notify you of any findings.
  • Define an Amazon Inspector target and template to define what assessment to perform on your EC2 instance.
  • Schedule Amazon Inspector assessment runs to assess your EC2 instance on a regular interval.

Amazon Inspector can help you scan your EC2 instance using prebuilt rules packages, which are built and maintained by AWS. These prebuilt rules packages tell Amazon Inspector what to scan for on the EC2 instances you select. Amazon Inspector provides the following prebuilt packages for Microsoft Windows Server 2012 R2:

  • Common Vulnerabilities and Exposures
  • Center for Internet Security Benchmarks
  • Runtime Behavior Analysis

In this post, I’m focused on how to make sure you keep your EC2 instances patched, backed up, and inspected for common vulnerabilities and exposures (CVEs). As a result, I will focus on how to use the CVE rules package and use your instance tags to identify the instances on which to run the CVE rules. If your EC2 instance is fully patched using Systems Manager, as described earlier, you should not have any findings with the CVE rules package. Regardless, as a best practice I recommend that you use Amazon Inspector as an additional layer for identifying any unexpected failures. This involves using Amazon CloudWatch to set up weekly Amazon Inspector scans, and configuring Amazon Inspector to notify you of any findings through SNS topics. By acting on the notifications you receive, you can respond quickly to any CVEs on any of your EC2 instances to help ensure that malware using known CVEs does not affect your EC2 instances. In a previous blog post, Eric Fitzgerald showed how to remediate Amazon Inspector security findings automatically.

Install the Amazon Inspector agent

To install the Amazon Inspector agent, you will use EC2 Run Command, which allows you to run any command on any EC2 instance that runs the Systems Manager agent and has an attached IAM role granting access to Systems Manager. A CLI sketch of the same command follows the console steps below.

  1. Choose Run Command under Systems Manager Services in the navigation pane of the EC2 console. Then choose Run a command.
    Screenshot of choosing "Run a command"
  2. To install the Amazon Inspector agent, you will use an AWS managed and provided command document that downloads and installs the agent for you on the selected EC2 instance. Choose AmazonInspector-ManageAWSAgent. To choose the target EC2 instance where this command will be run, use the tag you previously assigned to your EC2 instance, Patch Group, with a value of Windows Servers. For this example, set the concurrent installations to 1 and tell Systems Manager to stop after 5 errors.
    Screenshot of installing the Amazon Inspector agent
  3. Retain the default values for all other settings on the Run a command page and choose Run. Back on the Run Command page, you can see if the command that installed the Amazon Inspector agent executed successfully on all selected EC2 instances.
    Screenshot showing that the command that installed the Amazon Inspector agent executed successfully on all selected EC2 instances
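
The same Run Command invocation can be scripted with the AWS CLI, as in the following sketch; it assumes the document accepts an Operation parameter with a value of Install.

aws ssm send-command \
    --document-name "AmazonInspector-ManageAWSAgent" \
    --targets "Key=tag:Patch Group,Values=Windows Servers" \
    --parameters "Operation=Install" \
    --max-concurrency 1 \
    --max-errors 5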

Set up notifications using Amazon SNS

Now that you have installed the Amazon Inspector agent, you will set up an SNS topic that will notify you of any findings after an Amazon Inspector run.

To set up an SNS topic:

  1. In the AWS Management Console, choose Simple Notification Service under Messaging in the Services menu.
  2. Choose Create topic, name your topic (only alphanumeric characters, hyphens, and underscores are allowed) and give it a display name to ensure you know what this topic does (I’ve named mine Inspector). Choose Create topic.
    "Create new topic" page
  3. To allow Amazon Inspector to publish messages to your new topic, choose Other topic actions and choose Edit topic policy.
  4. For Allow these users to publish messages to this topic and Allow these users to subscribe to this topic, choose Only these AWS users. Type arn:aws:iam::316112463485:root, which is the ARN of Amazon Inspector in the US East (N. Virginia) Region, where you are deploying the solution in this post. For the ARNs of Amazon Inspector in other AWS Regions, see Setting Up an SNS Topic for Amazon Inspector Notifications (Console). Amazon Resource Names (ARNs) uniquely identify AWS resources across all of AWS.
    Screenshot of editing the topic policy
  5. To receive notifications from Amazon Inspector, subscribe to your new topic by choosing Create subscription and adding your email address. After confirming your subscription by clicking the link in the email, the topic should display your email address as a subscriber. Later, you will configure the Amazon Inspector template to publish to this topic.
    Screenshot of subscribing to the new topic
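
You can also create the topic and subscription from the AWS CLI, as in the following sketch; the account ID and email address are hypothetical.

aws sns create-topic --name Inspector
# Use the TopicArn returned by the previous call.
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:111122223333:Inspector \
    --protocol email \
    --notification-endpoint admin@example.com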

Define an Amazon Inspector target and template

Now that you have set up the notification topic by which Amazon Inspector can notify you of findings, you can create an Amazon Inspector target and template. A target defines which EC2 instances are in scope for Amazon Inspector. A template defines which packages to run, for how long, and on which target.

To create an Amazon Inspector target:

  1. Navigate to the Amazon Inspector console and choose Get started. At the time of writing this blog post, Amazon Inspector is available in the US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.
  2. For Amazon Inspector to be able to collect the necessary data from your EC2 instance, you must create an IAM service role for Amazon Inspector. Amazon Inspector can create this role for you if you choose Choose or create role and confirm the role creation by choosing Allow.
    Screenshot of creating an IAM service role for Amazon Inspector
  3. Amazon Inspector also asks you to tag your EC2 instance and install the Amazon Inspector agent. You already performed these steps in Part 1 of this post, so you can proceed by choosing Next. To define the Amazon Inspector target, choose the previously used Patch Group tag with a Value of Windows Servers. This is the same tag that you used to define the targets for patching. Then choose Next.
    Screenshot of defining the Amazon Inspector target
  4. Now, define your Amazon Inspector template, and choose a name and the package you want to run. For this post, use the Common Vulnerabilities and Exposures package and choose the default duration of 1 hour. As you can see, the package has a version number, so always select the latest version of the rules package if multiple versions are available.
    Screenshot of defining an assessment template
  5. Configure Amazon Inspector to publish to your SNS topic when findings are reported. You can also choose to receive a notification of a started run, a finished run, or changes in the state of a run. For this blog post, you want to receive notifications if there are any findings. To start, choose Assessment Templates from the Amazon Inspector console and choose your newly created Amazon Inspector assessment template. Choose the icon below SNS topics (see the following screenshot).
    Screenshot of choosing an assessment template
  6. A pop-up appears in which you can choose the previously created topic and the events about which you want SNS to notify you (choose Finding reported).
    Screenshot of choosing the previously created topic and the events about which you want SNS to notify you
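
The same target and template can be created from the AWS CLI. The following sketch uses hypothetical names and leaves ARNs as placeholders, because the CVE rules package ARN differs per Region; each call returns the ARN that the next call consumes.

aws inspector create-resource-group \
    --resource-group-tags "key=Patch Group,value=Windows Servers"
aws inspector create-assessment-target \
    --assessment-target-name WindowsServers \
    --resource-group-arn <resource-group-arn>
aws inspector create-assessment-template \
    --assessment-template-name WindowsCVE \
    --assessment-target-arn <assessment-target-arn> \
    --duration-in-seconds 3600 \
    --rules-package-arns <cve-rules-package-arn>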

Schedule Amazon Inspector assessment runs

The last step in using Amazon Inspector to assess for CVEs is to schedule the Amazon Inspector template to run using Amazon CloudWatch Events. This will make sure that Amazon Inspector assesses your EC2 instance on a regular basis. To do this, you need the Amazon Inspector template ARN, which you can find under Assessment templates in the Amazon Inspector console. CloudWatch Events can run your Amazon Inspector assessment at an interval you define using a Cron-based schedule. Cron is a well-known scheduling agent that is widely used on UNIX-like operating systems and uses the following syntax for CloudWatch Events.

The CloudWatch Events cron syntax has six required fields:

cron(Minutes Hours Day-of-month Month Day-of-week Year)

All scheduled events use a UTC time zone, and the minimum precision for schedules is one minute. For more information about scheduling CloudWatch Events, see Schedule Expressions for Rules.

To create the CloudWatch Events rule:

  1. Navigate to the CloudWatch console, choose Events, and choose Create rule.
    Screenshot of starting to create a rule in the CloudWatch Events console
  2. On the next page, specify if you want to invoke your rule based on an event pattern or a schedule. For this blog post, you will select a schedule based on a Cron expression.
  3. You can schedule the Amazon Inspector assessment any time you want using the Cron expression, or you can use the Cron expression I used in the following screenshot, which will run the Amazon Inspector assessment every Sunday at 10:00 P.M. GMT.
    Screenshot of scheduling an Amazon Inspector assessment with a Cron expression
  4. Choose Add target and choose Inspector assessment template from the drop-down menu. Paste the ARN of the Amazon Inspector template you previously created in the Amazon Inspector console in the Assessment template box and choose Create a new role for this specific resource. This new role is necessary so that CloudWatch Events has the necessary permissions to start the Amazon Inspector assessment. CloudWatch Events will automatically create the new role and grant the minimum set of permissions needed to run the Amazon Inspector assessment. To proceed, choose Configure details.
    Screenshot of adding a target
  5. Next, give your rule a name and a description. I suggest using a name that describes what the rule does, as shown in the following screenshot.
  6. Finish the wizard by choosing Create rule. The rule should appear in the Events – Rules section of the CloudWatch console.
    Screenshot of completing the creation of the rule
  7. To confirm your CloudWatch Events rule works, wait for the next time your CloudWatch Events rule is scheduled to run. For testing purposes, you can choose your CloudWatch Events rule and choose Edit to change the schedule to run it sooner than scheduled.
    Screenshot of confirming the CloudWatch Events rule works
  8. Now navigate to the Amazon Inspector console to confirm the launch of your first assessment run. The Start time column shows you the time each assessment started, and the Status column shows the status of your assessment. In the following screenshot, you can see Amazon Inspector is busy Collecting data from the selected assessment targets.
    Screenshot of confirming the launch of the first assessment run
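
The same rule can be sketched with the AWS CLI; the rule name is hypothetical, and the template and role ARNs are placeholders for the values from your account.

# Every Sunday at 10:00 P.M. UTC.
aws events put-rule \
    --name WeeklyInspectorAssessment \
    --schedule-expression "cron(0 22 ? * SUN *)"
aws events put-targets \
    --rule WeeklyInspectorAssessment \
    --targets "Id=1,Arn=<assessment-template-arn>,RoleArn=<events-role-arn>"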

You have concluded the last step of this blog post by setting up a regular scan of your EC2 instance with Amazon Inspector and a notification that will let you know if your EC2 instance is vulnerable to any known CVEs. In a previous Security Blog post, Eric Fitzgerald explained How to Remediate Amazon Inspector Security Findings Automatically. Although that blog post is for Linux-based EC2 instances, the post shows that you can learn about Amazon Inspector findings in other ways than email alerts.

Conclusion

In this two-part blog post, I showed how to make sure you keep your EC2 instances up to date with patching, how to back up your instances with snapshots, and how to monitor your instances for CVEs. Collectively these measures help to protect your instances against common attack vectors that attempt to exploit known vulnerabilities. In Part 1, I showed how to configure your EC2 instances to make it easy to use Systems Manager, EBS Snapshot Scheduler, and Amazon Inspector. I also showed how to use Systems Manager to schedule automatic patches to keep your instances current in a timely fashion. In Part 2, I showed you how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

If you have comments about today’s or yesterday’s post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or the Amazon Inspector forum, or contact AWS Support.

– Koen

How to Patch, Inspect, and Protect Microsoft Windows Workloads on AWS—Part 1

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-inspect-and-protect-microsoft-windows-workloads-on-aws-part-1/

Most malware tries to compromise your systems by using a known vulnerability that the maker of the operating system has already patched. To help prevent malware from affecting your systems, two security best practices are to apply all operating system patches to your systems and actively monitor your systems for missing patches. In case you do need to recover from a malware attack, you should make regular backups of your data.

In today’s blog post (Part 1 of a two-part post), I show how to keep your Amazon EC2 instances that run Microsoft Windows up to date with the latest security patches by using Amazon EC2 Systems Manager. Tomorrow in Part 2, I show how to take regular snapshots of your data by using Amazon EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

What you should know first

To follow along with the solution in this post, you need one or more EC2 instances. You may use existing instances or create new instances. For this blog post, I assume an EC2 for Microsoft Windows Server 2012 R2 instance launched from one of the AWS-provided Amazon Machine Images (AMIs). If you are not familiar with how to launch an EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. A private subnet is not directly accessible via the internet, and access to it requires either a VPN connection to your on-premises network or a jump host in a public subnet (a subnet with access to the internet). You must make sure that the EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager and Amazon Inspector. The following diagram shows how you should structure your Amazon Virtual Private Cloud (VPC). You should also be familiar with Restoring an Amazon EBS Volume from a Snapshot and Attaching an Amazon EBS Volume to an Instance.

Later on, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the AWS Identity and Access Management (IAM) user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user to pass an IAM role to an AWS service when assigning tasks; in this example, when you assign a task to a maintenance window, IAM passes the role to Systems Manager. This safeguard ensures that users cannot use task creation to elevate their IAM privileges, because their own IAM privileges limit which tasks they can run against an EC2 instance. You should also authorize your IAM user to use EC2, Amazon Inspector, Amazon CloudWatch, and Systems Manager. You can achieve this by attaching the following AWS managed policies to the IAM user you are using for this example: AmazonInspectorFullAccess, AmazonEC2FullAccess, and AmazonSSMFullAccess.

Architectural overview

The following diagram illustrates the components of this solution’s architecture.

Diagram showing the components of this solution's architecture

For this blog post, Microsoft Windows EC2 refers to Amazon EC2 for Microsoft Windows Server 2012 R2 instances with attached Amazon Elastic Block Store (Amazon EBS) volumes, running in your VPC. These instances may be standalone Windows instances running your Windows workloads, or you may have joined them to an Active Directory domain controller. For instances joined to a domain, you can be using Active Directory running on an EC2 for Windows instance, or you can use AWS Directory Service for Microsoft Active Directory.

Amazon EC2 Systems Manager is a scalable tool for remote management of your EC2 instances. You will use the Systems Manager Run Command to install the Amazon Inspector agent. The agent enables EC2 instances to communicate with the Amazon Inspector service and run assessments, which I explain in detail later in this blog post. You also will create a Systems Manager association to keep your EC2 instances up to date with the latest security patches.

You can use the EBS Snapshot Scheduler to schedule automated snapshots at regular intervals. You will use it to set up regular snapshots of your Amazon EBS volumes. EBS Snapshot Scheduler is a prebuilt solution by AWS that you will deploy in your AWS account. With Amazon EBS snapshots, you pay only for the actual data you store. Snapshots save only the data that has changed since the previous snapshot, which minimizes your cost.

You will use Amazon Inspector to run security assessments on your EC2 for Windows Server instance. In this post, I show how to assess if your EC2 for Windows Server instance is vulnerable to any of the more than 50,000 CVEs registered with Amazon Inspector.

In today’s and tomorrow’s posts, I show you how to:

  1. Launch an EC2 instance with an IAM role, Amazon EBS volume, and tags that Systems Manager and Amazon Inspector will use.
  2. Configure Systems Manager to install the Amazon Inspector agent and patch your EC2 instances.
  3. Take EBS snapshots by using EBS Snapshot Scheduler to automate snapshots based on instance tags.
  4. Use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

Step 1: Launch an EC2 instance

In this section, I show you how to launch your EC2 instances so that you can use Systems Manager with the instances and use instance tags with EBS Snapshot Scheduler to automate snapshots. This requires three things:

  • Create an IAM role for Systems Manager before launching your EC2 instance.
  • Launch your EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  • Add tags to instances so that you can automate policies for which instances you take snapshots of and when.

Create an IAM role for Systems Manager

Before launching your EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the EC2 instance you will launch. AWS already provides a preconfigured policy that you can use for your new role, and it is called AmazonEC2RoleforSSM.

  1. Sign in to the IAM console and choose Roles in the navigation pane. Choose Create new role.
    Screenshot of choosing "Create role"
  2. In the role-creation workflow, choose AWS service > EC2 > EC2 to create a role for an EC2 instance.
    Screenshot of creating a role for an EC2 instance
  3. Choose the AmazonEC2RoleforSSM policy to attach it to the new role you are creating.
    Screenshot of attaching the AmazonEC2RoleforSSM policy to the new role you are creating
  4. Give the role a meaningful name (I chose EC2SSM) and description, and choose Create role.
    Screenshot of giving the role a name and description

Launch your EC2 instance

To follow along, you need an EC2 instance that is running Microsoft Windows Server 2012 R2 and that has an Amazon EBS volume attached. You can use any existing instance you may have or create a new instance.

When launching your new EC2 instance, be sure that:

  • The operating system is Microsoft Windows Server 2012 R2.
  • You attach at least one Amazon EBS volume to the EC2 instance.
  • You attach the newly created IAM role (EC2SSM).
  • The EC2 instance can connect to the internet through a network address translation (NAT) gateway or a NAT instance.
  • You create the tags shown in the following screenshot (you will use them later).

If you are using an already launched EC2 instance, you can attach the newly created role as described in Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console.

Add tags

The final step of configuring your EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this blog post and to configure Amazon Inspector in Part 2. For this example, I add a tag key, Patch Group, and set the value to Windows Servers. I could have other groups of EC2 instances that I treat differently by having the same tag key but a different tag value. For example, I might have a collection of other servers with the Patch Group tag key with a value of IAS Servers.

Screenshot of adding tags

Note: You must wait a few minutes until the EC2 instance becomes available before you can proceed to the next section.

At this point, you now have at least one EC2 instance you can use to configure Systems Manager, use EBS Snapshot Scheduler, and use Amazon Inspector.

Note: If you have a large number of EC2 instances to tag, you may want to use the EC2 CreateTags API rather than manually apply tags to each instance.
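
For example, the following hypothetical AWS CLI call applies the Patch Group tag to two instances at once.

aws ec2 create-tags \
    --resources i-0123456789abcdef0 i-0fedcba9876543210 \
    --tags "Key=Patch Group,Value=Windows Servers"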

Step 2: Configure Systems Manager

In this section, I show you how to use Systems Manager to apply operating system patches to your EC2 instances, and how to manage patch compliance.

To start, I will provide some background information about Systems Manager. Then, I will cover how to:

  • Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  • Associate a Systems Manager patch baseline with your instance to define which patches Systems Manager should apply.
  • Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  • Monitor patch compliance to verify the patch state of your instances.

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is Amazon EC2 Systems Manager?

Patch management is an important measure to prevent malware from infecting your systems. Most malware attacks look for vulnerabilities that are publicly known and in most cases already patched by the maker of the operating system. These publicly known vulnerabilities are well documented and therefore easier for an attacker to exploit than an undiscovered vulnerability.

Patches for these new vulnerabilities are available through Systems Manager within hours after Microsoft releases them. There are two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your EC2 instance. Second, you must install the Systems Manager agent on your EC2 instance. If you have used a recent Microsoft Windows Server 2012 R2 AMI published by AWS, Amazon has already installed the Systems Manager agent on your EC2 instance. You can confirm this by logging in to an EC2 instance and looking for Amazon SSM Agent under Programs and Features in Windows. To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see the documentation about installing the Systems Manager agent. If you forgot to attach the newly created role when launching your EC2 instance or if you want to attach the role to already running EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

To make sure your EC2 instance receives operating system patches from Systems Manager, you will use the default patch baseline provided and maintained by AWS, and you will define a maintenance window so that you control when your EC2 instances should receive patches. For the maintenance window to be able to run any tasks, you also must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: it will be assumed by the Systems Manager service rather than by EC2. Earlier you created the EC2SSM role with the AmazonEC2RoleforSSM policy, which allows the Systems Manager agent on your instance to communicate with the Systems Manager service. Here you need a new role with the AmazonSSMMaintenanceWindowRole policy so that the Systems Manager service is able to execute commands on your instance.

Create the Systems Manager IAM role

To create the new IAM role for Systems Manager, follow the same procedure as in the previous section, but in Step 3, choose the AmazonSSMMaintenanceWindowRole policy instead of the previously selected AmazonEC2RoleforSSM policy.

Screenshot of creating the new IAM role for Systems Manager

Finish the wizard and give your new role a recognizable name. For example, I named my role MaintenanceWindowRole.

Screenshot of finishing the wizard and giving your new role a recognizable name

By default, only EC2 instances can assume this new role. You must update the trust policy to enable Systems Manager to assume this role.

To update the trust policy associated with this new role:

  1. Navigate to the IAM console and choose Roles in the navigation pane.
  2. Choose MaintenanceWindowRole and choose the Trust relationships tab. Then choose Edit trust relationship.
  3. Update the policy document by copying the following policy and pasting it in the Policy Document box. As you can see, I have added the ssm.amazonaws.com service to the list of allowed Principals that can assume this role. Choose Update Trust Policy.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }
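
If you prefer to apply the policy from the command line, save the document above as trust-policy.json (the file name is just an example) and run the following.

aws iam update-assume-role-policy \
    --role-name MaintenanceWindowRole \
    --policy-document file://trust-policy.json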

Associate a Systems Manager patch baseline with your instance

Next, you are going to associate a Systems Manager patch baseline with your EC2 instance. A patch baseline defines which patches Systems Manager should apply. You will use the default patch baseline that AWS manages and maintains. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your EC2 instance.

Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Managed Instances. Your new EC2 instance should be available there. If your instance is missing from the list, verify the following:

  1. Go to the EC2 console and verify your instance is running.
  2. Select your instance and confirm you attached the Systems Manager IAM role, EC2SSM.
  3. Make sure that you deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram at the start of this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. Check the Systems Manager Agent logs for any errors.

Now that you have confirmed that Systems Manager can manage your EC2 instance, it is time to associate the AWS maintained patch baseline with your EC2 instance:

  1. Choose Patch Baselines under Systems Manager Services in the navigation pane of the EC2 console.
  2. Choose the default patch baseline as highlighted in the following screenshot, and choose Modify Patch Groups in the Actions drop-down.
    Screenshot of choosing Modify Patch Groups in the Actions drop-down
  3. In the Patch group box, enter the same value you entered under the Patch Group tag of your EC2 instance in "Step 1: Launch an EC2 instance." In this example, the value I enter is Windows Servers. Choose the check mark icon next to the patch group and choose Close.
    Screenshot of modifying the patch group

Define a maintenance window

Now that you have successfully set up a role and have associated a patch baseline with your EC2 instance, you will define a maintenance window so that you can control when your EC2 instances should receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your EC2 instances do not all reboot at the same time. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs.

To define a maintenance window:

  1. Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Maintenance Windows. Choose Create a Maintenance Window.
    Screenshot of starting to create a maintenance window in the Systems Manager console
  2. Select the Cron schedule builder to define the schedule for the maintenance window. In the example in the following screenshot, the maintenance window will start every Saturday at 10:00 P.M. UTC.
  3. To specify when your maintenance window will end, specify the duration. In this example, the four-hour maintenance window will end on the following Sunday morning at 2:00 A.M. UTC (in other words, four hours after it started).
  4. Systems Manager completes any task that is already in process even if the maintenance window ends. In my example, I am choosing to prevent new tasks from starting within one hour of the end of my maintenance window because I estimated my patch operations might take longer than one hour to complete. Confirm the creation of the maintenance window by choosing Create maintenance window.
    Screenshot of completing all boxes in the maintenance window creation process
  5. After creating the maintenance window, you must register the EC2 instance to the maintenance window so that Systems Manager knows which EC2 instance it should patch in this maintenance window. To do so, choose Register new targets on the Targets tab of your newly created maintenance window. You can register your targets by using the same Patch Group tag you used before to associate the EC2 instance with the AWS-provided patch baseline.
    Screenshot of registering new targets
  6. Assign a task to the maintenance window that will install the operating system patches on your EC2 instance:
    1. Open Maintenance Windows in the EC2 console, select your previously created maintenance window, choose the Tasks tab, and choose Register run command task from the Register new task drop-down.
    2. Choose the AWS-RunPatchBaseline document from the list of available documents.
    3. For Parameters:
      1. For Role, choose the role you created previously (called MaintenanceWindowRole).
      2. For Execute on, specify how many EC2 instances Systems Manager should patch at the same time. If you have a large number of EC2 instances and want to patch all EC2 instances within the defined time, make sure this number is not too low. For example, if you have 1,000 EC2 instances, a maintenance window of 4 hours, and 2 hours’ time for patching, make this number at least 500.
      3. For Stop after, specify after how many errors Systems Manager should stop.
      4. For Operation, choose Install to make sure to install the patches.
        Screenshot of stipulating maintenance window parameters
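
The same window, target, and task can be sketched with the AWS CLI; the window name is hypothetical, and the IDs in angle brackets come from the preceding calls.

# Saturdays at 10:00 P.M. UTC, 4 hours long, no new tasks in the final hour.
aws ssm create-maintenance-window \
    --name PatchWindowsServers \
    --schedule "cron(0 22 ? * SAT *)" \
    --duration 4 --cutoff 1 \
    --allow-unassociated-targets
aws ssm register-target-with-maintenance-window \
    --window-id <window-id> \
    --resource-type INSTANCE \
    --targets "Key=tag:Patch Group,Values=Windows Servers"
aws ssm register-task-with-maintenance-window \
    --window-id <window-id> \
    --targets "Key=WindowTargetIds,Values=<window-target-id>" \
    --task-arn AWS-RunPatchBaseline \
    --task-type RUN_COMMAND \
    --service-role-arn <maintenancewindowrole-arn> \
    --max-concurrency 1 --max-errors 5 \
    --task-parameters '{"Operation":{"Values":["Install"]}}'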

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. Note that if you don’t want to wait, you can adjust the schedule to run sooner by choosing Edit maintenance window on the Maintenance Windows page of Systems Manager. If your maintenance window has expired, you can check the status of any maintenance tasks Systems Manager has performed on the Maintenance Windows page of Systems Manager and select your maintenance window.

Screenshot of the maintenance window successfully created

Monitor patch compliance

You also can see the overall patch compliance of all EC2 instances that are part of defined patch groups by choosing Patch Compliance under Systems Manager Services in the navigation pane of the EC2 console. You can filter by Patch Group to see how many EC2 instances within the selected patch group are up to date, how many EC2 instances are missing updates, and how many EC2 instances are in an error state.

Screenshot of monitoring patch compliance
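
You can pull the same compliance details per instance from the AWS CLI; the instance ID here is hypothetical.

aws ssm describe-instance-patch-states \
    --instance-ids i-0123456789abcdef0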

In this section, you have set everything up for patch management on your instance. Now you know how to patch your EC2 instance in a controlled manner and how to check if your EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all EC2 instances you manage.

Summary

In Part 1 of this blog post, I have shown how to configure EC2 instances for use with Systems Manager, EBS Snapshot Scheduler, and Amazon Inspector. I also have shown how to use Systems Manager to keep your Microsoft Windows–based EC2 instances up to date. In Part 2 of this blog post tomorrow, I will show how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any CVEs.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the EC2 forum or the Amazon Inspector forum, or contact AWS Support.

– Koen

Updated AWS SOC Reports Are Now Available with 19 Additional Services in Scope

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/updated-aws-soc-reports-are-now-available-with-19-additional-services-in-scope/

AICPA SOC logo

Newly updated reports are available for AWS System and Organization Control Report 1 (SOC 1), formerly called AWS Service Organization Control Report 1, and AWS SOC 2: Security, Availability, & Confidentiality Report. You can download both reports for free and on demand in the AWS Management Console through AWS Artifact. The updated AWS SOC 3: Security, Availability, & Confidentiality Report also was just released. All three reports cover April 1, 2017, through September 30, 2017.

With the addition of the following 19 services, AWS now supports 51 SOC-compliant AWS services and is committed to increasing the number:

  • Amazon API Gateway
  • Amazon Cloud Directory
  • Amazon CloudFront
  • Amazon Cognito
  • Amazon Connect
  • AWS Directory Service for Microsoft Active Directory
  • Amazon EC2 Container Registry
  • Amazon EC2 Container Service
  • Amazon EC2 Systems Manager
  • Amazon Inspector
  • AWS IoT Platform
  • Amazon Kinesis Streams
  • AWS Lambda
  • AWS Lambda@Edge
  • AWS Managed Services
  • Amazon S3 Transfer Acceleration
  • AWS Shield
  • AWS Step Functions
  • AWS WAF

With this release, we also are introducing a separate spreadsheet, eliminating the need to extract the information from multiple PDFs.

If you are not yet an AWS customer, contact AWS Compliance to access the SOC Reports.

– Chad

Automating Blue/Green Deployments of Infrastructure and Application Code using AMIs, AWS Developer Tools, & Amazon EC2 Systems Manager

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/devops/bluegreen-infrastructure-application-deployment-blog/

Previous DevOps blog posts have covered the following use cases for infrastructure and application deployment automation:

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You can use one AMI to launch as many instances as you need. It is a security best practice to customize and harden your base AMI with required operating system updates and, if you are using AWS native services for continuous security monitoring and operations, to bake agents such as those for Amazon EC2 Systems Manager (SSM), Amazon Inspector, CodeDeploy, and CloudWatch Logs into the base AMI. A customized and hardened AMI is often referred to as a "golden AMI." The use of golden AMIs to create EC2 instances in your AWS environment allows for fast and stable application deployment and scaling, secure application stack upgrades, and versioning.

In this post, I use the DevOps automation capabilities of Systems Manager and the AWS developer tools (CodePipeline, CodeDeploy, CodeCommit, and CodeBuild) to show you how to orchestrate end-to-end blue/green deployments of a golden AMI and application code with AWS CodePipeline. Systems Manager Automation is a powerful security feature for enterprises that want to mature their DevSecOps practices.

Here are the high-level phases and primary services covered in this use case:


You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/Bluegreen-AMI-Application-Deployment-blog.

This sample will create a pipeline in AWS CodePipeline with the building blocks to support the blue/green deployments of infrastructure and application. The sample includes a custom Lambda step in the pipeline to execute Systems Manager Automation to build a golden AMI and update the Auto Scaling group with the golden AMI ID for every rollout of new application code. This guarantees that every new application deployment is on a fully patched and customized AMI in a continuous integration and deployment model. This enables the automation of hardened AMI deployment with every new version of application deployment.


We will build and run this sample in three parts.

Part 1: Setting up the AWS developer tools and deploying a base web application

Part 1 of the AWS CloudFormation template creates the initial Java-based web application environment in a VPC. It also creates all the required components of Systems Manager Automation, CodeCommit, CodeBuild, and CodeDeploy to support the blue/green deployments of the infrastructure and application resulting from ongoing code releases.

Part 1 of the AWS CloudFormation stack creates these resources:

After Part 1 of the AWS CloudFormation stack creation is complete, go to the Outputs tab and click the Elastic Load Balancing link. You will see the following home page for the base web application:

Make sure you have all the outputs from the Part 1 stack handy. You need to supply them as parameters in Part 3 of the stack.

Part 2: Setting up your CodeCommit repository

In this part, you will commit and push your sample application code into the CodeCommit repository created in Part 1. To see the initial git commands for cloning the empty repository to your local machine, click Connect in the AWS CodeCommit console. Make sure you have the IAM permissions required to access AWS CodeCommit from the command line interface (CLI).

After you’ve cloned the repository locally, download the sample application files from the part2 folder of the Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following commands to commit and push the files to the CodeCommit repository:

git add .
git commit -a -m "add all files from the AWS Java Tomcat CodeDeploy application"
git push

After all the files are pushed successfully, the repository should look like this:


Part 3: Setting up CodePipeline to enable blue/green deployments     

Part 3 of the AWS CloudFormation template creates the pipeline in AWS CodePipeline and all the required components.

a) Source: The pipeline is triggered by any change to the CodeCommit repository.

b) BuildGoldenAMI: This Lambda step executes the Systems Manager Automation document to build the golden AMI. After the golden AMI is successfully created, a new launch configuration that references the new AMI is applied to the Auto Scaling group of the application deployment group. You can watch the progress of the automation in the EC2 console under Systems Manager > Automations.

c) Build: This step uses the application build spec file to build the application build artifact. Here are the CodeBuild execution steps and their status:

d) Deploy: This step clones the Auto Scaling group, launches the new instances with the new AMI, deploys the application changes, reroutes the traffic from the elastic load balancer to the new instances and terminates the old Auto Scaling group. You can see the execution steps and their status in the CodeDeploy console.

After the CodePipeline execution is complete, you can access the application by clicking the Elastic Load Balancing link. You can find it in the output of Part 1 of the AWS CloudFormation template. Any consecutive commits to the application code in the CodeCommit repository trigger the pipelines and deploy the infrastructure and code with an updated AMI and code.
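
To give a feel for what the BuildGoldenAMI step does, the following hypothetical AWS CLI call starts a Systems Manager Automation execution; in the sample, a Lambda function makes the equivalent API call, and the actual document name and parameters come from the Part 1 stack.

aws ssm start-automation-execution \
    --document-name <golden-ami-automation-document> \
    --parameters "sourceAMIid=<base-ami-id>"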


If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Developer Tools forum.


About the author


Ramesh Adabala is a Solutions Architect in Southeast Enterprise Solution Architecture team at Amazon Web Services.

Amazon Inspector Update – Assessment Reporting, Proxy Support, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-inspector-update-assessment-reporting-proxy-support-and-more/

Amazon Inspector is our automated security assessment service. It analyzes the behavior of the applications that you run in AWS and helps you to identify potential security issues. In late 2015 I introduced you to Inspector and showed you how to use it (Amazon Inspector – Automated Security Assessment Service). You start by using tags to define the collection of AWS resources that make up your application (also known as the assessment target). Then you create a security assessment template and specify the set of rules that you would like to run as part of the assessment:

After you create the assessment target and the security assessment template, you can run it against the target resources with a click. The assessment makes use of an agent that runs on your Linux and Windows-based EC2 instances (read about AWS Agents to learn more). You can process the assessments manually or you can forward the findings to your existing ticketing system using AWS Lambda (read Scale Your Security Vulnerability Testing with Amazon Inspector to see how to do this).

Whether you run one instance or thousands, we recommend that you run assessments on a regular and frequent basis. You can run them on your development and integration instances as part of your DevOps pipeline; this will give you confidence that the code and the systems that you deploy to production meet the conditions specified by the rule packages that you selected when you created the security assessment template. You should also run frequent assessments against production systems in order to guard against possible configuration drift.

We have recently added some powerful new features to Amazon Inspector:

  • Assessment Reports – The new assessment reports provide a comprehensive summary of the assessment, beginning with an executive summary. The reports are designed to be shared with teams and with leadership, while also serving as documentation for compliance audits.
  • Proxy Support – You can now configure the agent to run within proxy environments (many of our customers have been asking for this).
  • CloudWatch Metrics – Inspector now publishes metrics to Amazon CloudWatch so that you can track and observe changes over time.
  • Amazon Linux 2017.03 Support – This new version of the Amazon Linux AMI is launching today and Inspector supports it now.

Assessment Reports
After an assessment run completes, you can download a detailed assessment report in HTML or PDF form:

The report begins with a cover page and executive summary:

Then it summarizes the assessment rules and the targets that were tested:

Then it summarizes the findings for each rules package:

Because the report is intended to serve as documentation for compliance audits, it includes detailed information about each finding, along with recommendations for remediation:

The full report also indicates which rules were checked and passed for all target instances:

Proxy Support
The Inspector agent can now communicate with Inspector through an HTTPS proxy. For Linux instances, we support HTTPS Proxy, and for Windows instances, we support WinHTTP proxy. See the Amazon Inspector User Guide for instructions to configure Proxy support for the AWS Agent.

CloudWatch Metrics
Amazon Inspector now publishes metrics to Amazon CloudWatch after each run. The metrics are categorized by target and by template. An aggregate metric, which indicates how many assessment runs have been performed in the AWS account, is also available. You can find the metrics in the CloudWatch console, as usual:

Here are the metrics that are published on a per-target basis:

And here are the per-template metrics:
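
To explore these metrics from the command line, you can list them with the AWS CLI; this sketch assumes they are published under the AWS/Inspector namespace.

aws cloudwatch list-metrics --namespace AWS/Inspector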

Amazon Linux 2017.03 Support
Many AWS customers use the Amazon Linux AMI and automatically upgrade as new versions become available. In order to provide these customers with continuous coverage from Amazon Inspector, we are now making sure that this and future versions of the AMI are supported by Amazon Inspector on launch day.

Available Now
All of these features are available now and you can start using them today!

Pricing is based on a per-agent, per-assessment basis and starts at $0.30 per assessment, declining to as low as $0.05 per assessment when you run 45,000 or more assessments per month (see the Amazon Inspector Pricing page for more information).

Jeff;

New – AWS Resource Tagging API

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/

AWS customers frequently use tags to organize their Amazon EC2 instances, Amazon EBS volumes, Amazon S3 buckets, and other resources. Over the past couple of years we have been working to make tagging more useful and more powerful. For example, we have added support for tagging during Auto Scaling, the ability to use up to 50 tags per resource, console-based support for the creation of resources that share a common tag (also known as resource groups), and the option to use Config Rules to enforce the use of tags.

As customers grow to the point where they are managing thousands of resources, each with up to 50 tags, they have been looking to us for additional tooling and options to simplify their work. Today I am happy to announce that our new Resource Tagging API is now available. You can use these APIs from the AWS SDKs or via the AWS Command Line Interface (CLI). You now have programmatic access to the same resource group operations that had been accessible only from the AWS Management Console.

Recap: Console-Based Resource Group Operations
Before I get in to the specifics of the new API functions, I thought you would appreciate a fresh look at the console-based grouping and tagging model. I already have the ability to find and then tag AWS resources using a search that spans one or more regions. For example, I can select a long list of regions and then search them for my EC2 instances like this:

After I locate and select all of the desired resources, I can add a new tag key by clicking Create a new tag key and entering the desired tag key:

Then I enter a value for each instance (the new ProjectCode column):

Then I can create a resource group that contains all of the resources that are tagged with P100:

After I have created the resource group, I can locate all of the resources by clicking on the Resource Groups menu:

To learn more about this feature, read Resource Groups and Tagging for AWS.

New API for Resource Tagging
The API that we are announcing today gives you the power to tag, untag, and locate resources using tags, all from your own code. With these new API functions, you can operate on multiple resource types with a single set of functions.

Here are the new functions:

TagResources – Add tags to up to 20 resources at a time.

UntagResources – Remove tags from up to 20 resources at a time.

GetResources – Get a list of resources, with optional filtering by tags and/or resource types.

GetTagKeys – Get a list of all of the unique tag keys used in your account.

GetTagValues – Get all tag values for a specified tag key.
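
As a quick sketch of these functions from the AWS CLI, the following calls tag two resources and then look them up by tag; the ARNs are placeholders, and the ProjectCode/P100 values echo the console example above.

aws resourcegroupstaggingapi tag-resources \
    --resource-arn-list <resource-arn-1> <resource-arn-2> \
    --tags ProjectCode=P100
aws resourcegroupstaggingapi get-resources \
    --tag-filters "Key=ProjectCode,Values=P100"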

These functions support the following AWS services and resource types:

  • Amazon CloudFront: Distribution
  • Amazon EC2: AMI, Customer Gateway, DHCP Option, EBS Volume, Instance, Internet Gateway, Network ACL, Network Interface, Reserved Instance, Reserved Instance Listing, Route Table, Security Group – EC2 Classic, Security Group – VPC, Snapshot, Spot Batch, Spot Instance Request, Spot Instance, Subnet, Virtual Private Gateway, VPC, VPN Connection
  • Amazon ElastiCache: Cluster, Snapshot
  • Amazon Elastic File System: File system
  • Amazon Elasticsearch Service: Domain
  • Amazon EMR: Cluster
  • Amazon Glacier: Vault
  • Amazon Inspector: Assessment
  • Amazon Kinesis: Stream
  • Amazon Machine Learning: Batch Prediction, Data Source, Evaluation, ML Model
  • Amazon Redshift: Cluster
  • Amazon Relational Database Service: DB Instance, DB Option Group, DB Parameter Group, DB Security Group, DB Snapshot, DB Subnet Group, Event Subscription, Read Replica, Reserved DB Instance
  • Amazon Route 53: Domain, Health Check, Hosted Zone
  • Amazon S3: Bucket
  • Amazon WorkSpaces: WorkSpace
  • AWS Certificate Manager: Certificate
  • AWS CloudHSM: HSM
  • AWS Directory Service: Directory
  • AWS Storage Gateway: Gateway, Virtual Tape, Volume
  • Elastic Load Balancing: Load Balancer, Target Group

Things to Know
Here are a few things to keep in mind when you build code or write scripts that use the new API functions or the CLI equivalents:

Compatibility – The older, service-specific functions remain available and you can continue to use them.

Write Permission – The new tagging API adds another layer of permission on top of existing policies that are specific to a single AWS service. For example, you will need access to tag:TagResources and ec2:CreateTags in order to add a tag to an EC2 instance.

Read Permission – You will need to have access to tag:GetResources, tag:GetTagKeys, and tag:GetTagValues in order to call functions that access tags and tag values.

Pricing – There is no charge for the use of these functions or for tags.

Available Now
The new functions are supported by the latest versions of the AWS SDKs. You can use them to tag and access resources in all commercial AWS regions.

Jeff;

 

In Case You Missed These: AWS Security Blog Posts from January, February, and March

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-january-february-and-march/

In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.

March

March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.
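
For flavor, here is a minimal sketch of the SDK's 1.x-era Python interface, assuming a KMS master key (the key ARN is a placeholder; the API has evolved since, so consult the SDK documentation for the current interface):

import aws_encryption_sdk

# A master key provider backed by a KMS customer master key (placeholder ARN)
key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
    'arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555'
])

# Encrypt and decrypt under that master key; the SDK manages the data keys
ciphertext, encrypt_header = aws_encryption_sdk.encrypt(
    source=b'my secret data', key_provider=key_provider
)
plaintext, decrypt_header = aws_encryption_sdk.decrypt(
    source=ciphertext, key_provider=key_provider
)
assert plaintext == b'my secret data'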

March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).

March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.
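
In boto3, the new API surfaces as list_object_parent_paths. The following is a hedged sketch of the call shape only; the directory ARN and object selector are placeholders:

import boto3

clouddirectory = boto3.client('clouddirectory')

# Retrieve all parent paths for one object; paginate with NextToken if needed
response = clouddirectory.list_object_parent_paths(
    DirectoryArn='arn:aws:clouddirectory:us-east-1:111122223333:directory/example',  # placeholder
    ObjectReference={'Selector': '/org/engineering/alice'},  # placeholder path
)
for item in response['PathToObjectIdentifiersList']:
    print(item['Path'])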

March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.

February

February 27: Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts
Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization, and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API. To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and we used multiple means of error checking to ensure a smooth transition, including client integration tests that caught potential interoperability conflicts and fuzz testing that identified memory leaks.

February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.

February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”

February 13: AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud
I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight who sign in with their on-premises credentials by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services, unless users supply a valid MFA code.

February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.

February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.

February 8: How to Remediate Amazon Inspector Security Findings Automatically
The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings but also to automate addressing those findings. Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings and then implements the appropriate remediation based on the type of issue.

February 6: How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector
In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda. This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to automatically update EC2 instances that run Linux when Amazon Inspector discovers a missing security patch.

January

January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.

January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call the PutObject with the optional Acl parameter set to public-read, therefore uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.
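
The post walks through the complete solution; purely as a rough sketch of the remediation core, a CloudWatch Events-triggered Lambda function might reset an object ACL as follows. The event field names assume the CloudTrail-based event format, and the decision logic is reduced to a single forced-private call:

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # CloudWatch Events delivers the CloudTrail record for PutObject/PutObjectAcl
    params = event['detail']['requestParameters']
    bucket = params['bucketName']
    key = params['key']

    # Simplified remediation: force the object back to a private ACL
    s3.put_object_acl(Bucket=bucket, Key=key, ACL='private')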

January 26: Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data
Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.

January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and limit the player experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.

January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.

January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which gives US government agencies and their service providers the ability to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.

– Craig

How to Remediate Amazon Inspector Security Findings Automatically

Post Syndicated from Eric Fitzgerald original https://aws.amazon.com/blogs/security/how-to-remediate-amazon-inspector-security-findings-automatically/

The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings, but also to automate addressing those findings.

Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings, and then implements the appropriate remediation based on the type of issue.

Use case

In this post’s example, I find a common vulnerability and exposure (CVE) for a missing update and use Lambda to call the Amazon EC2 Systems Manager to update the instance. However, this is just one use case and the underlying logic can be used for multiple cases such as software and application patching, kernel version updates, security permissions and roles changes, and configuration changes.

The solution

Overview

The solution in this blog post does the following:

  1. Launches a new Amazon EC2 instance, deploying the EC2 Simple Systems Manager (SSM) agent and its role to the instance.
  2. Deploys the Amazon Inspector agent to the instance by using EC2 Systems Manager.
  3. Creates an SNS topic to which Amazon Inspector will publish messages.
  4. Configures an Amazon Inspector assessment template to post finding notifications to the SNS topic.
  5. Creates the Lambda function that is triggered by notifications to the SNS topic and uses EC2 Systems Manager from within the Lambda function to perform automatic remediation on the instance.

1.  Launch an EC2 instance with EC2 Systems Manager enabled

In my previous Security Blog post, I discussed the use of EC2 user data to deploy the EC2 SSM agent to a Linux instance. To enable the type of autoremediation we are talking about, the EC2 SSM agent must be installed on your instances. If you already have the EC2 SSM agent installed on your instances, you can move on to Step 2. Otherwise, let’s take a minute to review how the process works:

  1. Create an AWS Identity and Access Management (IAM) role so that the on-instance EC2 SSM agent can communicate with EC2 Systems Manager. You can learn more about the process of creating a role while launching an instance.
  2. While launching the instance with the EC2 launch wizard, associate the role you just created with the new instance and provide the appropriate script as user data for your operating system and architecture to install the EC2 Systems Manager agent as the instance is launched. See the process and scripts.

Screenshot of configuring instance details

Note: You must change the scripts slightly when copying them from the instructions to the EC2 user data. The word region in the curl command must be replaced with the AWS region code (for example, us-east-1).

2.  Deploy the Amazon Inspector agent to the instance by using EC2 Systems Manager

You can deploy the Amazon Inspector agent with EC2 Systems Manager, with EC2 instance user data, or by connecting to an EC2 instance via SSH and running the installation steps manually. Because you just installed the EC2 SSM agent, you will use that method.

To deploy the Amazon Inspector agent:

  1. Navigate to the EC2 console in the desired region. In the navigation pane, choose Command History under Commands near the bottom of the list.
  2. Choose Run a command.
  3. Choose the AWS-RunShellScript command document, and then choose Select instances to specify the instance that you created previously. Note: If you do not see the instance in that list, you probably did not successfully install the EC2 SSM agent. This means you have to start over with the previous section. Common mistakes include failing to associate a role with the instance, failing to associate the correct policy with the role, or providing an incorrect user data script.
  4. Paste the following script in the Commands box.
    #!/bin/bash
    # Download the Amazon Inspector agent installer and run it
    cd /tmp
    curl -O https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install
    chmod a+x /tmp/install
    bash /tmp/install

  5. Choose Run to execute the script on the instance.

Screenshot of deploying the Amazon Inspector agent

3.  Create an SNS topic to which Amazon Inspector will publish messages

Amazon SNS uses topics, which are communication channels for sending messages and subscribing to notifications. You will create an SNS topic for this solution to which Amazon Inspector publishes messages whenever there is a security finding. Later, you will create a Lambda function that subscribes to this topic and receives a notification whenever a new security finding is generated.

To create an SNS topic:

  1. In the AWS Management Console, navigate to the SNS console.
  2. Choose Create topic. Type a topic name and a display name, and choose Create topic.
  3. From the list of displayed topics, choose the topic that you just created by selecting the check box to the left of the topic name, and then choose Edit topic policy from the Other topic actions drop-down list.
  4. In the Advanced view tab, find the Principal section of the policy document. In that section, replace the line that says "AWS": "*" with the following text: "Service": "inspector.amazonaws.com" (see the following screenshot).
  5. Choose Update policy to save the changes.
  6. Choose Edit topic policy. On the Basic view tab, set the topic policy to allow Only me (topic owner) to subscribe to the topic, and choose Update policy to save the changes.

Screenshot of editing the topic policy

4.  Configure an Amazon Inspector assessment template to post finding notifications to the SNS topic

An assessment template is a configuration that tells Amazon Inspector how to construct a specific security evaluation. For example, an assessment template can tell Amazon Inspector which EC2 instances to target and which rules packages to evaluate. You can configure a template to tell Amazon Inspector to generate SNS notifications when findings are identified. To enable automatic remediation, either create a new template or modify an existing template so that it sends SNS notifications to the SNS topic that you just created.
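
If you prefer to script this setting instead of clicking through the console, the Amazon Inspector API exposes it directly. A minimal boto3 sketch, with placeholder ARNs:

import boto3

inspector = boto3.client('inspector')

# Publish every reported finding for this assessment template to an SNS topic
inspector.subscribe_to_event(
    resourceArn='arn:aws:inspector:us-east-1:111122223333:target/0-example/template/0-example',  # placeholder
    event='FINDING_REPORTED',
    topicArn='arn:aws:sns:us-east-1:111122223333:inspector-findings',  # placeholder
)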

To enable automatic remediation:

  1. Sign in to the AWS Management Console and navigate to the Amazon Inspector console.
  2. Choose Assessment templates in the navigation pane.
  3. Choose one of your existing Amazon Inspector assessment templates. If you need to create a new Amazon Inspector template, type a name for the template and choose the Common Vulnerabilities and Exposures rules package. Then go back to the list and select the template.
  4. Expand the template so that you can see all the settings by choosing the right-pointing arrowhead in the row for that template.
  5. Choose the pencil icon next to the SNS topics.
  6. Add the SNS topic that you created in the previous section by choosing it from the Select a new topic to notify of events drop-down list (see the following screenshot).
  7. Choose Save to save your changes.

Screenshot of configuring the SNS topic

5.  Create the Lambda autoremediation function

Now, create a Lambda function that listens for Amazon Inspector to notify it of new security findings, and then tells the EC2 SSM agent to run the appropriate system update command (apt-get update or yum update) if the finding is for an unpatched CVE vulnerability.

Step 1: Create an IAM role for the Lambda function to send EC2 Systems Manager commands

A Lambda function needs specific permissions to interact with your AWS resources. You provide these permissions in the form of an IAM role, and the role has a policy attached that permits the Lambda function to receive SNS notifications and to send commands to the EC2 SSM agent on the instance via EC2 Systems Manager.

To create the IAM role:

  1. Sign in to the AWS Management Console, and navigate to the IAM console.
  2. Choose Roles in the navigation pane, and then choose Create new role.
  3. Type a name for the role. You should (but are not required to) use a descriptive name such as Inspector-agent-autodeploy-lambda. Regardless of the name you choose, remember the name because you will need it in the next section.
  4. Choose the AWS Lambda role type.
  5. Attach the policies AWSLambdaBasicExecutionRole and AmazonSSMFullAccess.
  6. Choose Create the role.

Step 2: Create the Lambda function that will update the host by sending the appropriate commands through EC2 Systems Manager

Now, create the Lambda function. You can download the source code for this function from the .zip file link in the following procedure. Some things to note about the function are listed next, followed by an illustrative sketch:

  • The function listens for notifications on the configured SNS topic, but it acts only on notifications that come from Amazon Inspector and report a CVE finding.
  • The function checks to ensure that the EC2 SSM agent is installed, running, and healthy on the EC2 instance for which the finding was reported.
  • The function checks the operating system of the EC2 instance and determines if it is a supported Linux distribution (Ubuntu or Amazon Linux).
  • The function sends the distribution-appropriate package update command (apt-get update or yum update) to the EC2 instance via EC2 Systems Manager.
  • The function does not reboot the instance. You either have to add that functionality yourself or reboot the instance manually.
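
The downloadable .zip file below is the authoritative source for this function. Purely as an illustration of the flow described above, a trimmed-down handler might look like the following sketch; it assumes the SNS message carries the finding ARN under the finding key, and it omits the SSM agent health checks for brevity:

import json
import boto3

inspector = boto3.client('inspector')
ssm = boto3.client('ssm')

def lambda_handler(event, context):
    # SNS delivers the Amazon Inspector notification as a JSON string
    message = json.loads(event['Records'][0]['Sns']['Message'])
    if message.get('event') != 'FINDING_REPORTED':
        return

    # Look up the finding to get its rule identifier and the affected instance
    finding = inspector.describe_findings(
        findingArns=[message['finding']]
    )['findings'][0]
    if 'CVE' not in finding.get('id', ''):
        return  # remediate only CVE findings

    instance_id = finding['assetAttributes']['agentId']  # the EC2 instance ID

    # Patch the instance via EC2 Systems Manager: yum on Amazon Linux, apt-get
    # on Ubuntu (update refreshes the package index, upgrade applies patches)
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',
        Comment='Amazon Inspector auto-remediation',
        Parameters={'commands': [
            'which yum && yum -y update || (apt-get update && apt-get -y upgrade)'
        ]},
    )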

To create the Lambda function:

  1. Sign in to the AWS Management Console in the region that you intend to use, and navigate to the Lambda console.
  2. Choose Create a Lambda function.
  3. On the Select a blueprint page, choose the Hello World Python blueprint and choose Next.
  4. On the Configure triggers page, choose SNS as the trigger, and choose the SNS topic that you created in the last section. Choose the Enable trigger check box and choose Next.
  5. Type a name and description for the function. Choose Python 2.7 runtime.
  6. Download and save this .zip file.
  7. Unzip the .zip file, and copy the entire contents of lambda-auto-remediate.py to your clipboard.
  8. Choose Edit code inline under Code entry type in the Lambda function, and replace all the existing text with the text that you just copied from lambda-auto-remediate.py.
  9. Select Choose an existing role from the Role drop-down list, and then in the Existing role box, choose the IAM role that you created in Step 1 of this section.
  10. Choose Next and then Create function to complete the creation of the function.

You now have a working system that monitors Amazon Inspector for CVE findings and will patch affected Ubuntu or Amazon Linux instances automatically. You can view or modify the source code for the function in the Lambda console. Additionally, Lambda and EC2 Systems Manager will generate logs whenever the function causes an instance to patch itself.

Note: If you have multiple CVE findings for an instance, the remediation commands might be executed more than once, but the package managers for Linux handle this efficiently. You still have to reboot the instances yourself, but EC2 Systems Manager includes a feature to do that as well.

Summary

Using Amazon Inspector with Lambda allows you to automate certain security tasks. Because Lambda supports Python and JavaScript, development of such automation is similar to automating any other kind of administrative task via scripting. Even better, you can take actions on EC2 instances in response to Amazon Inspector findings by using Lambda to invoke EC2 Systems Manager. This enables you to take instance-specific actions based on issues that Amazon Inspector finds. Combining these capabilities allows you to build event-driven security automation to help better secure your AWS environment in near real time.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing the solution in this post, start a new thread on the Amazon Inspector forum.

– Eric

How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector

Post Syndicated from Eric Fitzgerald original https://aws.amazon.com/blogs/security/how-to-simplify-security-assessment-setup-using-ec2-systems-manager-and-amazon-inspector/

In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda.

This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to automatically update EC2 instances that run Linux when Amazon Inspector discovers a missing security patch.

An overview of EC2 Systems Manager and EC2 Simple Systems Manager (SSM)

Amazon EC2 Systems Manager is a set of services that makes it easy to manage your Windows or Linux hosts running on EC2 instances. EC2 Systems Manager does this through an agent called EC2 Simple Systems Manager (SSM), which is installed on your instances. With SSM on your EC2 instances, you can save yourself an SSH or RDP session to the instance to perform management tasks.

With EC2 Systems Manager, you can perform various tasks at scale through a simple API, CLI, or EC2 Run Command. The EC2 Run Command can execute a Unix shell script on Linux instances or a Windows PowerShell script on Windows instances. When you use EC2 Systems Manager to run a script on an EC2 instance, the output is piped to a text file in Amazon S3 for you automatically. Therefore, you can examine the output without visiting the system or inventing your own mechanism for capturing console output.
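
For example, here is a minimal boto3 sketch of Run Command; the instance ID and bucket name are placeholders:

import boto3

ssm = boto3.client('ssm')

# Run a shell command on a managed instance; console output lands in S3
response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],  # placeholder instance ID
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['uptime']},
    OutputS3BucketName='my-ssm-output-bucket',  # placeholder bucket
)
print(response['Command']['CommandId'])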

The solution

Step 1: Enable EC2 Systems Manager and install the EC2 SSM agent

Setting up EC2 Systems Manager is relatively straightforward, but you must set up EC2 Systems Manager at the time you launch the instance. This is because the SSM agent will use an instance role to communicate with the EC2 Systems Manager securely. When launched with the appropriately configured IAM role, the EC2 instance is provided with a set of credentials that allows the SSM agent to perform actions on behalf of the account owner. The policy on the IAM role determines the permissions associated with these credentials.

The easiest way I have found to do this is to create the role, and then each time you launch an instance, associate the role with the instance and provide the SSM agent installation script in the instance’s user data in the launch wizard or API. Here’s how:

  1. Create an instance role so that the on-instance SSM agent can communicate with EC2 Systems Manager. If you already have an instance role for some other purpose, use the IAM console to attach the AmazonEC2RoleforSSM managed policy to your existing role.
  2. When launching the instance with the EC2 launch wizard, associate the role you just created with the new instance.
  3. When launching the instance with the EC2 launch wizard, provide the appropriate script as user data for your operating system and architecture to install the SSM agent as the instance is launched. To see this process and scripts in full, see Installing the SSM Agent.

Note: You must change the scripts slightly when copying them from the instructions to the EC2 user data: the word region in the curl command must be replaced with the AWS region code (for example, us-east-1).

When your instance starts, the SSM agent is installed. Having the SSM agent on the instance is the key component to the automated installation of the Amazon Inspector agent on the instance.

Step 2: Automatically install the Amazon Inspector agent when new EC2 instances are launched

Let’s assume that you will install the SSM agent when you first launch your instances. With that assumption in mind, you have two methods for installing the Amazon Inspector agent.

Method 1: Install the Amazon Inspector agent with user data

Just as we did above with the SSM agent, we can use the user data feature of EC2 to execute the Amazon Inspector agent installation script during instance launch. This is useful if you have decided not to install the SSM agent, but it is more work than necessary if you are in the habit of deploying the SSM agent at the launch of an instance.

To install the Amazon Inspector agent with user data on Linux systems, simply add the following commands to the User data box in the instance launch wizard (as shown in the following screenshot). This script works without modification on any Linux distribution that Amazon Inspector supports.

#!/bin/bash
# Download the Amazon Inspector agent installer and run it
cd /tmp
curl -O https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install
chmod +x /tmp/install
/tmp/install

Note: If you are adding these commands to existing user data, be sure that only the first line of user data is #!/bin/bash. You should not have multiple copies of this line.

Finish launching the EC2 instance and the Amazon Inspector agent is installed as the instance is starting for the first time. To read more about this process, see Working with AWS Agents on Linux-based Operating Systems.

Method 2: Install the Amazon Inspector agent whenever a new EC2 instance starts

In environments that launch new instances continually, installing the Amazon Inspector agent automatically when an instance starts saves additional work later. As we discussed in the previous method, you need to modify your instance launch process to include the EC2 SSM agent. This means you need to configure your instances with an EC2 Systems Manager role, as well as run the EC2 SSM agent.

First, create an IAM role that gives your Lambda function the permissions it needs to deploy the Amazon Inspector agent. Then, create the Lambda job that uses the SSM RunShellScript to install the Amazon Inspector agent. Finally, set up Amazon CloudWatch Events to run the Lambda job whenever a new instance enters the Running state.

Here are the details of the three-step process:

Step 1 – Create an IAM role for the Lambda function to use to send commands to EC2 Systems Manager:

  1. Sign in to the AWS Management Console and navigate to the IAM console.
  2. Choose Roles in the navigation pane. Choose Create new role.
  3. Type a name for the role. You should (but are not required to) use a descriptive name such as Inspector-agent-autodeploy-Lambda. Remember the name you choose because you will need it in Step 2.
  4. Choose the AWS Lambda role type.
  5. Attach the policies AWSLambdaBasicExecutionRole and AmazonSSMFullAccess.
  6. Choose Create the role to finish.

Step 2 – Create the Lambda function that will run EC2 Systems Manager commands to install the Amazon Inspector agent (an illustrative sketch follows these steps):

  1. Sign in to the AWS Management Console in your chosen region and navigate to the Lambda console.
  2. Choose Create a Lambda function.
  3. Skip Select blueprint.
  4. On the Configure triggers page, choose Next. Type a Name and Description for the function. Choose Python 2.7 for Runtime.
  5. Download and save autodeploy.py. Unzip the file, and copy the entire contents of autodeploy.py.
  6. From the Code entry type drop-down list, choose Edit code inline, and replace all the existing text with the text that you just copied from autodeploy.py.
  7. From the Role drop-down list, choose Choose an existing role, and then from the Existing role drop-down list, choose the role that you created in Step 1.
  8. Choose Next and then Create function to finish creating the function.
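
The autodeploy.py you download is the authoritative source. As an illustration only, a trimmed-down version of the handler might look like the following sketch, which reuses the installer commands shown earlier and assumes the standard EC2 state-change event shape:

import boto3

ssm = boto3.client('ssm')

INSTALL_COMMANDS = [
    'cd /tmp',
    'curl -O https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install',
    'chmod +x /tmp/install',
    '/tmp/install',
]

def lambda_handler(event, context):
    # CloudWatch Events delivers an EC2 Instance State-change Notification
    instance_id = event['detail']['instance-id']

    # Install the Amazon Inspector agent through EC2 Systems Manager. This
    # succeeds only if the instance is SSM-managed (instance role + SSM agent).
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': INSTALL_COMMANDS},
    )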

Step 3 – Set up CloudWatch Events to trigger the function:

  1. In the AWS Management Console in the same region as you used in Step 2, navigate to the CloudWatch console and then choose Events in the navigation pane.
  2. Choose Create rule. From the Select event source drop-down list, choose Amazon EC2.
  3. Choose Specific state(s) and Running. This tells CloudWatch to generate an event when an instance enters the Running state.
  4. Under Targets, choose Add target and then Lambda function.
  5. Choose the function that you created in Step 2.
  6. Choose Configure details. Type a name and description for the rule, and choose Create rule. (A scripted equivalent of these console steps follows.)
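
A scripted equivalent of these console steps, sketched with boto3 (the rule name and function ARN are placeholders; note that the console also grants CloudWatch Events permission to invoke the function for you, which in code you would add through the Lambda AddPermission API):

import json
import boto3

events = boto3.client('events')

# Fire whenever any EC2 instance enters the running state
events.put_rule(
    Name='install-inspector-agent-on-launch',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['running']},
    }),
    State='ENABLED',
)

# Point the rule at the Lambda function created in Step 2 (placeholder ARN)
events.put_targets(
    Rule='install-inspector-agent-on-launch',
    Targets=[{
        'Id': 'inspector-agent-autodeploy',
        'Arn': 'arn:aws:lambda:us-east-1:111122223333:function:autodeploy',
    }],
)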

Summary

You have completed the setup! Now, whenever an EC2 instance enters the Running state (either on initial creation or on reboot), CloudWatch Events triggers an event that invokes the Lambda function that you created. The Lambda function then uses EC2 Systems Manager to install the Amazon Inspector agent on the instance.

In a subsequent AWS Security Blog post, I will show you how to take your security assessment automation a step further by automatically performing remediations for Amazon Inspector findings by using EC2 Systems Manager and Lambda.

If you have comments about this blog post, submit them in the “Comments” section below. If you have implementation questions, start a new thread on the Amazon Inspector forum.

– Eric

AWS Week in Review – September 27, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-27-2016/

Fourteen (14) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

Monday, September 26
Tuesday, September 27
Wednesday, September 28
Thursday, September 29
Friday, September 30
Saturday, October 1
Sunday, October 2

New & Notable Open Source

  • dynamodb-continuous-backup sets up continuous backup automation for DynamoDB.
  • lambda-billing uses NodeJS to automate billing to AWS tagged projects, producing PDF invoices.
  • vyos-based-vpc-wan is a complete Packer + CloudFormation + Troposphere powered setup of AMIs to run VyOS IPSec tunnels across multiple AWS VPCs, using BGP-4 for dynamic routing.
  • s3encrypt is a utility that encrypts and decrypts files in S3 with KMS keys.
  • lambda-uploader helps to package and upload Lambda functions to AWS.
  • AWS-Architect helps to deploy microservices to Lambda and API Gateway.
  • awsgi is a WSGI gateway for API Gateway and Lambda proxy integration.
  • rusoto is an AWS SDK for Rust.
  • EBS_Scripts contains some EBS tricks and triads.
  • landsat-on-aws is a web application that uses Amazon S3, Amazon API Gateway, and AWS Lambda to create an infinitely scalable interface to navigate Landsat satellite data.

New SlideShare Presentations

New AWS Marketplace Listings

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

AWS Week in Review – September 19, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-19-2016/

Eighteen (18) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

Monday, September 19
Tuesday, September 20
Wednesday, September 21
Thursday, September 22
Friday, September 23
Saturday, September 24
Sunday, September 25

New & Notable Open Source

  • ecs-refarch-cloudformation is a reference architecture for deploying microservices with Amazon ECS, AWS CloudFormation (YAML), and an Application Load Balancer.
  • rclone syncs files and directories to and from S3 and many other cloud storage providers.
  • Syncany is an open source cloud storage and filesharing application.
  • chalice-transmogrify is an AWS Lambda Python Microservice that transforms arbitrary XML/RSS to JSON.
  • amp-validator is a serverless AMP HTML Validator Microservice for AWS Lambda.
  • ecs-pilot is a simple tool for managing AWS ECS.
  • vman is an object version manager for AWS S3 buckets.
  • aws-codedeploy-linux is a demo of how to use CodeDeploy and CodePipeline with AWS.
  • autospotting is a tool for automatically replacing EC2 instances in AWS AutoScaling groups with compatible instances requested on the EC2 Spot Market.
  • shep is a framework for building APIs using AWS API Gateway and Lambda.

New SlideShare Presentations

New Customer Success Stories

  • NetSeer significantly reduces costs, improves the reliability of its real-time ad-bidding cluster, and delivers 100-millisecond response times using AWS. The company offers online solutions that help advertisers and publishers match search queries and web content to relevant ads. NetSeer runs its bidding cluster on AWS, taking advantage of Amazon EC2 Spot Fleet Instances.
  • New York Public Library revamped its fractured IT environment, which had older technology and legacy computing, to a modernized platform on AWS. The New York Public Library has been a provider of free books, information, ideas, and education for more than 17 million patrons a year. Using Amazon EC2, Elastic Load Balancing, Amazon RDS, and Auto Scaling, NYPL is able to build scalable, repeatable systems quickly at a fraction of the cost.
  • MakerBot uses AWS to understand what its customers need, and to go to market faster with new and innovative products. MakerBot is a desktop 3-D printing company with more than 100 thousand customers using its 3-D printers. MakerBot uses Matillion ETL for Amazon Redshift to process data from a variety of sources in a fast and cost-effective way.
  • University of Maryland, College Park uses the AWS Cloud to create a stable, secure, and modern technical environment for its students and staff while ensuring compliance. The University of Maryland is a public research university located in the city of College Park, Maryland, and is the flagship institution of the University System of Maryland. The university used AWS to migrate all of its data centers to the cloud, and it uses Amazon WorkSpaces to give students access to software anytime, anywhere, and on any device.

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Register for and Attend This September 28 Webinar—Addressing Amazon Inspector Assessment Findings

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/register-for-and-attend-this-september-28-webinar-addressing-amazon-inspector-assessment-findings/

As part of the AWS Webinar Series, AWS will present Addressing Amazon Inspector Assessment Findings on Wednesday, September 28. This webinar will start at 9:00 A.M. and end at 10:00 A.M. Pacific Time.

AWS Principal Security Engineer Eric Fitzgerald will review Amazon Inspector security assessment findings, and show how best to interpret and take action on them as a seamless part of your DevOps lifecycle.

You will learn how to:

  • Interpret Amazon Inspector security assessment findings.
  • Use AWS services to automate ticketing and change management submissions for findings.
  • Automate remediation based on assessment findings.

The webinar is free, but space is limited and registration is required. Register today.

– Craig

32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/32-security-and-compliance-sessions-now-live-in-the-reinvent-2016-session-catalog/

AWS re:Invent 2016 begins November 28, and the live session catalog now includes 32 security and compliance sessions. Nineteen of these sessions are in the Security & Compliance track, and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

Security & Compliance Track sessions

As in past years, the sessions in the Security & Compliance track will take place in The Venetian | Palazzo in Las Vegas. Here’s what you have to look forward to!

SAC201 – Lessons from a Chief Security Officer: Achieving Continuous Compliance in Elastic Environments

Does meeting stringent compliance requirements keep you up at night? Do you worry about having the right audit trails in place as proof?
Cengage Learning’s Chief Security Officer, Robert Hotaling, shares his organization’s journey to AWS, and how they enabled continuous compliance for their dynamic environment with automation. When Cengage shifted from publishing to digital education and online learning, they needed a secure elastic infrastructure for their data intensive and cyclical business, and workload layer security tools that would help them meet compliance requirements (e.g., PCI).
In this session, you will learn why building security in from the beginning saves you time (and painful retrofits) later, how to gather and retain audit evidence for instances that are only up for minutes or hours, and how Cengage used Trend Micro Deep Security to meet many compliance requirements and ensured instances were instantly protected as they came online in a hybrid cloud architecture. Session sponsored by Trend Micro, Inc.


SAC302 – Automating Security Event Response, from Idea to Code to Execution

With security-relevant services such as AWS Config, VPC Flow Logs, Amazon CloudWatch Events, and AWS Lambda, you now have the ability to programmatically wrangle security events that may occur within your AWS environment, including prevention, detection, response, and remediation. This session covers the process of automating security event response with various AWS building blocks, taking several ideas from drawing board to code, and gaining confidence in your coverage by proactively testing security monitoring and response effectiveness before anyone else does.


SAC303 – Become an AWS IAM Policy Ninja in 60 Minutes or Less

Are you interested in learning how to control access to your AWS resources? Have you ever wondered how to best scope down permissions to achieve least privilege permissions access control? If your answer to these questions is “yes,” this session is for you. We take an in-depth look at the AWS Identity and Access Management (IAM) policy language. We start with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. As we dive deeper, we explore policy variables, conditions, and other tools to help you author least privilege policies. Throughout the session, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type.


SAC304 – Predictive Security: Using Big Data to Fortify Your Defenses

In a rapidly changing IT environment, detecting and responding to new threats is more important than ever. This session shows you how to build a predictive analytics stack on AWS, which harnesses the power of Amazon Machine Learning in conjunction with Amazon Elasticsearch Service, AWS CloudTrail, and VPC Flow Logs to perform tasks such as anomaly detection and log analysis. We also demonstrate how you can use AWS Lambda to act on this information in an automated fashion, such as performing updates to AWS WAF and security groups, leading to an improved security posture and alleviating operational burden on your security teams.


SAC305 – Auditing a Cloud Environment in 2016: What Tools Can Internal and External Auditors Leverage to Maintain Compliance?

With the rapid increase of complexity in managing security for distributed IT and cloud computing, security and compliance managers can innovate to ensure a high level of security when managing AWS resources. In this session, Chad Woolf, director of compliance for AWS, discusses which AWS service features to leverage to achieve a high level of security assurance over AWS resources, giving you more control of the security of your data and preparing you for a wide range of audits. You can now implement point-in-time audits and continuous monitoring in system architecture. Internal and external auditors can learn about emerging tools for monitoring environments in real time. Follow use case examples and demonstrations of services like Amazon Inspector, Amazon CloudWatch Logs, AWS CloudTrail, and AWS Config. Learn firsthand what some AWS customers have accomplished by leveraging AWS features to meet specific industry compliance requirements.


SAC306 – Encryption: It Was the Best of Controls, It Was the Worst of Controls

Encryption is a favorite of security and compliance professionals everywhere. Many compliance frameworks actually mandate encryption. Though encryption is important, it is also treacherous. Cryptographic protocols are subtle, and researchers are constantly finding new and creative flaws in them. Using encryption correctly, especially over time, also is expensive because you have to stay up to date.
AWS wants to encrypt data. And our customers, including Amazon, want to encrypt data. In this talk, we look at some of the challenges with using encryption, how AWS thinks internally about encryption, and how that thinking has informed the services we have built, the features we have vended, and our own usage of AWS.


SAC307 – The Psychology of Security Automation

Historically, relationships between developers and security teams have been challenging. Security teams sometimes see developers as careless and ignorant of risk, while developers might see security teams as dogmatic barriers to productivity. Can technologies and approaches such as the cloud, APIs, and automation lead to happier developers and more secure systems? Netflix has had success pursuing this approach, by leaning into the fundamental cloud concept of self-service, the Netflix cultural value of transparency in decision making, and the engineering efficiency principle of facilitating a “paved road.” This session explores how security teams can use thoughtful tools and automation to improve relationships with development teams while creating a more secure and manageable environment. Topics include Netflix’s approach to IAM entity management, Elastic Load Balancing and certificate management, and general security configuration monitoring.


SAC308 – Hackproof Your Cloud: Responding to 2016 Threats

In this session, CloudCheckr CTO Aaron Newman highlights effective strategies and tools that AWS users can employ to improve their security posture. Specific emphasis is placed upon leveraging native AWS services. He includes concrete steps that users can begin employing immediately. Session sponsored by CloudCheckr.


SAC309 – You Can’t Protect What You Can’t See: AWS Security Monitoring & Compliance Validation from Adobe

Ensuring security and compliance across a globally distributed, large-scale AWS deployment requires a scalable process and a comprehensive set of technologies. In this session, Adobe will deep-dive into the AWS native monitoring and security services and some Splunk technologies leveraged globally to perform security monitoring across a large number of AWS accounts. You will learn about Adobe’s collection plumbing, including components of S3, Kinesis, CloudWatch, SNS, DynamoDB, and Lambda, as well as the tooling and processes used at Adobe to deliver scalable monitoring without managing an unwieldy number of API keys and input stanzas. Session sponsored by Splunk.


SAC310 – Securing Serverless Architectures, and API Filtering at Layer 7

AWS serverless architecture components such as Amazon S3, Amazon SQS, Amazon SNS, CloudWatch Logs, DynamoDB, Amazon Kinesis, and Lambda can be tightly constrained in their operation. However, it may still be possible to use some of them to propagate payloads that could be used to exploit vulnerabilities in some consuming endpoints or user-generated code. This session explores techniques for enhancing the security of these services, from assessing and tightening permissions in IAM to integrating tools and mechanisms for inline and out-of-band payload analysis that are more typically applied to traditional server-based architectures.


SAC311 – Evolving an Enterprise-level Compliance Framework with Amazon CloudWatch Events and AWS Lambda

Johnson & Johnson is in the process of doing a proof of concept to rewrite the compliance framework that they presented at re:Invent 2014. This framework leverages the newest AWS services and abandons the need for continual describes and master rules servers. Instead, Johnson & Johnson plans to use a distributed, event-based architecture that not only reduces costs but also assigns costs to the appropriate projects rather than central IT.


SAC312 – Architecting for End-to-End Security in the Enterprise

This session tells how our most mature, security-minded Fortune 500 customers adopt AWS while improving end-to-end protection of their sensitive data. Learn about the enterprise security architecture decisions made during actual sensitive workload deployments as told by the AWS professional services and the solution architecture team members who lived them. In this very prescriptive, technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, security configuration decisions, and the creation of AWS security operations playbooks to support customer architectures.


SAC313 – Enterprise Patterns for Payment Card Industry Data Security Standard (PCI DSS)

Professional services has completed five deep PCI engagements with enterprise customers over the last year. Common patterns were identified and codified in various artifacts. This session introduces the patterns that help customers address PCI requirements in a standard manner that also meets AWS best practices. Hear customers speak about their side of the journey and the solutions that they used to deploy a PCI compliance workload.


SAC314 – GxP Compliance in the Cloud

GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products such as drugs, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the integrity of data used to make product-related safety decisions.


The term GxP encompasses a broad range of compliance-related activities such as Good Laboratory Practices (GLP), Good Clinical Practices (GCP), Good Manufacturing Practices (GMP), and others, each of which has product-specific requirements that life sciences organizations must implement based on the 1) type of products they make and 2) country in which their products are sold. When life sciences organizations use computerized systems to perform certain GxP activities, they must ensure that the computerized GxP system is developed, validated, and operated appropriately for the intended use of the system.


For this session, co-presented with Merck, services such as Amazon EC2, Amazon CloudWatch Logs, AWS CloudTrail, AWS CodeCommit, Amazon Simple Storage Service (S3), and AWS CodePipeline will be discussed with an emphasis on implementing GxP-compliant systems in the AWS Cloud.


SAC315 – Scaling Security Operations: Using AWS Services to Automate Governance of Security Controls and Remediate Violations

This session enables security operators to use data provided by AWS services such as AWS CloudTrail, AWS Config, Amazon CloudWatch Events, and VPC Flow Logs to reduce vulnerabilities, and when required, execute timely security actions that fix the violation or gather more information about the vulnerability and attacker. We look at security practices for compliance with PCI, CIS Security Controls, and HIPAA. We dive deep into an example from an AWS customer, Siemens AG, which has automated governance and implemented automated remediation using CloudTrail, AWS Config Rules, and AWS Lambda. A prerequisite for this session is knowledge of software development with Java, Python, or Node.js.


SAC316 – Security Automation: Spend Less Time Securing Your Applications

As attackers become more sophisticated, web application developers need to constantly update their security configurations. Static firewall rules are no longer good enough. Developers need a way to deploy automated security that can learn from the application behavior and identify bad traffic patterns to detect bad bots or bad actors on the Internet. This session showcases some of the real-world customer use cases that use machine learning and AWS WAF (a web application firewall) to automatically identify bad actors affecting multiplayer gaming applications. We also present tutorials and code samples that show how customers can analyze traffic patterns and deploy new AWS WAF rules on the fly.


SAC317 – IAM Best Practices to Live By

This session covers AWS Identity and Access Management (IAM) best practices that can help improve your security posture. We cover how to manage users and their security credentials. We also explain why you should delete your root access keys—or at the very least, rotate them regularly. Using common use cases, we demonstrate when to choose between using IAM users and IAM roles. Finally, we explore how to set permissions to grant least privilege access control in one or more of your AWS accounts.


SAC318 – Life Without SSH: Immutable Infrastructure in Production

This session covers what a real-world production deployment of a fully automated deployment pipeline looks like with instances that are deployed without SSH keys. By leveraging AWS CodeDeploy and Docker, we will show how we achieved semi-immutable and fully immutable infrastructures, and what the challenges and remediations were.


SAC401 – 5 Security Automation Improvements You Can Make by Using Amazon CloudWatch Events and AWS Config Rules

This session demonstrates five different security and compliance validation actions that you can perform using Amazon CloudWatch Events and AWS Config rules, focusing on the actual code for the various controls, actions, and remediation features, and on how to use various AWS services and features to build them. The demos include CIS Amazon Web Services Foundations validation; host-based AWS Config rules validation using AWS Lambda, SSH, and VPC-E; automatic creation and assigning of MFA tokens when new users are created; and automatic instance isolation based on SSH logons or VPC Flow Logs deny logs. Expect live demos of working code throughout.


re:Source Mini Con for Security Services sessions

The re:Source Mini Con for Security Services offers you an opportunity to dive even deeper into security and compliance topics. Think of it as a one-day, fully immersive mini-conference. The Mini Con will take place in The Mirage in Las Vegas.

SEC301 – Audit Your AWS Account Against Industry Best Practices: The CIS AWS Benchmarks

Audit teams can consistently evaluate the security of an AWS account. Best practices greatly reduce complexity when managing risk and auditing the use of AWS for critical, audited, and regulated systems. You can integrate these security checks into your security and audit ecosystem. Center for Internet Security (CIS) benchmarks are incorporated into products developed by 20 security vendors, are referenced by PCI 3.1 and FedRAMP, and are included in the National Vulnerability Database (NVD) National Checklist Program (NCP). This session shows you how to implement foundational security measures in your AWS account. The prescribed best practices help make implementation of core AWS security measures more straightforward for security teams and AWS account owners.


SEC302 – WORKSHOP: Working with AWS Identity and Access Management (IAM) Policies and Configuring Network Security Using VPCs and Security Groups

In this 2.5-hour workshop, we will show you how to manage permissions by drafting AWS IAM policies that adhere to the principle of least privilege, granting only the permissions required to achieve a task. You will learn all the ins and outs of drafting and applying IAM policies appropriately to help secure your AWS resources. In addition, we will show you how to configure network security using VPCs and security groups.


SEC303 – Get the Most from AWS KMS: Architecting Applications for High Security

AWS Key Management Service provides an easy and cost-effective way to secure your data in AWS. In this session, you learn about leveraging the latest features of the service to minimize risk for your data. We also review the recently released Import Key feature that gives you more control over the encryption process by letting you bring your own keys to AWS.


SEC304 – Reduce Your Blast Radius by Using Multiple AWS Accounts Per Region and Service

This session shows you how to reduce your blast radius by using multiple AWS accounts per region and service, which helps limit the impact of a critical event such as a security breach. Using multiple accounts helps you define boundaries and provides blast-radius isolation.


SEC305 – Scaling Security Resources for Your First 10 Million Customers

Cloud computing offers many advantages, such as the ability to scale your web applications or website on demand. But how do you scale your security and compliance infrastructure along with the business? Join this session to understand best practices for scaling your security resources as you grow from zero to millions of users. Specifically, you learn the following:
  • How to scale your security and compliance infrastructure to keep up with a rapidly expanding threat base.
  • The security implications of scaling for numbers of users and numbers of applications, and how to satisfy both needs.
  • How agile development with integrated security testing and validation leads to a secure environment.
  • Best practices and design patterns of a continuous delivery pipeline and the appropriate security-focused testing for each.
  • The necessity of treating your security as code, just as you would do with infrastructure.
The services covered in this session include AWS IAM, Auto Scaling, Amazon Inspector, AWS WAF, and Amazon Cognito.


SEC306 – WORKSHOP: How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0

AWS supports identity federation using SAML (Security Assertion Markup Language) 2.0. Using SAML, you can configure your AWS accounts to integrate with your identity provider (IdP). Once configured, your federated users are authenticated and authorized by your organization’s IdP, and then can use single sign-on (SSO) to sign in to the AWS Management Console. This not only obviates the need for your users to remember yet another user name and password, but it also streamlines identity management for your administrators. This is great if your federated users want to access the AWS Management Console, but what if they want to use the AWS CLI or programmatically call AWS APIs?
In this 2.5-hour workshop, we will show you how you can implement federated API and CLI access for your users. The examples provided use the AWS Python SDK and some additional client-side integration code. If you have federated users that require this type of access, implementing this solution should earn you more than one high five on your next trip to the water cooler.


SEC307 – Microservices, Macro Security Needs: How Nike Uses a Multi-Layer, End-to-End Security Approach to Protect Microservice-Based Solutions at Scale

Microservice architectures provide numerous benefits but also have significant security challenges. This session presents how Nike uses layers of security to protect consumers and business. We show how network topology, network security primitives, identity and access management, traffic routing, secure network traffic, secrets management, and host-level security (antivirus, intrusion prevention system, intrusion detection system, file integrity monitoring) all combine to create a multilayer, end-to-end security solution for our microservice-based premium consumer experiences. Technologies to be covered include Amazon Virtual Private Cloud, access control lists, security groups, IAM roles and profiles, AWS KMS, NAT gateways, ELB load balancers, and Cerberus (our cloud-native secrets management solution).


SEC308 – Securing Enterprise Big Data Workloads on AWS

Security of big data workloads in a hybrid IT environment often comes as an afterthought. This session discusses how enterprises can architect security for big data workloads on AWS. We cover the application of authentication, authorization, encryption, and additional security principles and mechanisms to workloads leveraging Amazon Elastic MapReduce and Amazon Redshift.


SEC309 – Proactive Security Testing in AWS: From Early Implementation to Deployment Security Testing

Attend this session to learn about security testing your applications in AWS. Effective security testing is challenging, but multiple features and services within AWS make security testing easier. This session covers common approaches to testing, including how we think about testing within AWS, how to apply AWS services to your test setup, remediating findings, and automation.


SEC310 – Mitigating DDoS Attacks on AWS: Five Vectors and Four Use Cases

Distributed denial of service (DDoS) attack mitigation has traditionally been a challenge for those hosting on fixed infrastructure. In the cloud, users can build applications on elastic infrastructure that is capable of mitigating and absorbing DDoS attacks. What once required overprovisioning, additional infrastructure, or third-party services is now an inherent capability of many cloud-based applications. This session explains common DDoS attack vectors and how AWS customers with different use cases are addressing these challenges. As part of the session, we show you how to build applications that are resilient to DDoS and demonstrate how they work in practice.


SEC311 – How to Automate Policy Validation

Managing permissions across a growing number of identities and resources can be time consuming and complex. Testing, validating, and understanding permissions before and after policy changes are deployed is critical to ensuring that your users and systems have the appropriate level of access. This session walks through the tools that are available to test, validate, and understand the permissions in your account. We demonstrate how to use these tools and how to automate them to continually validate the permissions in your accounts. The tools demonstrated in this session help you answer common questions such as:
  • How does a policy change affect the overall permissions for a user, group, or role?
  • Who has access to perform powerful actions?
  • Which services can this role access?
  • Can a user access a specific Amazon S3 bucket?


SEC312 – State of the Union for re:Source Mini Con for Security Services

AWS CISO Steve Schmidt presents the state of the union for re:Source Mini Con for Security Services. He addresses the state of the security and compliance ecosystem; large enterprise customer additions in key industries; the vertical view: maturing spaces for AWS security assurance (GxP, IoT, CIS foundations); and the international view: data privacy protections and data sovereignty. The state of the union also addresses a number of new identity, directory, and access services, and closes by looking at what’s on the horizon.


SEC401 – Automated Formal Reasoning About AWS Systems

Automatic and semiautomatic mechanical theorem provers are now being used within AWS to find proofs in mathematical logic that establish desired properties of key AWS components. In this session, we outline these efforts and discuss how mechanical theorem provers are used to replay found proofs of desired properties when software artifacts or networks are modified, thus helping provide security throughout the lifetime of the AWS system. We consider these use cases:
  • Using constraint solving to show that VPCs have desired safety properties, and maintaining this continuously at each change to the VPC.
  • Using automatic mechanical theorem provers to prove that s2n’s HMAC is correct and maintaining this continuously at each change to the s2n source code.
  • Using semiautomatic mechanical theorem provers to prove desired safety properties of Sassy protocol.
– Craig

AWS Webinars – September 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-webinars-september-2016/

At the beginning of the month I blogged about the value of continuing education and shared an infographic that illustrated the link between continued education and increased pay, higher effectiveness, and decreased proclivity to seek other employment. The pace of AWS innovation means that there’s always something new to learn. One way to do this is to attend some of our webinars. We design these webinars with a focus on training and education, and strongly believe that you can walk away from them ready, willing, and able to use a new AWS service or to try a new aspect of an existing one.

To that end, we have another great selection of webinars on the schedule for September. As always, they are free, but they do fill up, and I strongly suggest that you register ahead of time. All times are PT, and each webinar runs for one hour:

  • September 20
  • September 21
  • September 22
  • September 26
  • September 27
  • September 28
  • September 29


Jeff;


PS – Check out the AWS Webinar Archive for more great content!


In Case You Missed These: AWS Security Blog Posts from June, July, and August

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx3KVD6T490MM47/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-June-July-and-August

In case you missed any AWS Security Blog posts from June, July, and August, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from a tagging limit increase to recording SSH sessions established through a bastion host.

August

August 16: Updated Whitepaper Available: AWS Best Practices for DDoS Resiliency
We recently released the 2016 version of the AWS Best Practices for DDoS Resiliency Whitepaper, which can be helpful if you have public-facing endpoints that might attract unwanted distributed denial of service (DDoS) activity.

August 15: Now Organize Your AWS Resources by Using up to 50 Tags per Resource
Tagging AWS resources simplifies the way you organize and discover resources, allocate costs, and control resource access across services. Many of you have told us that as the number of applications, teams, and projects running on AWS increases, you need more than 10 tags per resource. Based on this feedback, we now support up to 50 tags per resource. You do not need to take additional action—you can begin applying as many as 50 tags per resource today.

August 11: New! Import Your Own Keys into AWS Key Management Service
Today, we are happy to announce the launch of the new import key feature that enables you to import keys from your own key management infrastructure (KMI) into AWS Key Management Service (KMS). After you have exported keys from your existing systems and imported them into KMS, you can use them in all KMS-integrated AWS services and custom applications.

August 2: Customer Update: Amazon Web Services and the EU-US Privacy Shield
Recently, the European Commission and the US Government agreed on a new framework called the EU-US Privacy Shield, and on July 12, the European Commission formally adopted it. AWS welcomes this new framework for transatlantic data flow. As the EU-US Privacy Shield replaces Safe Harbor, we understand many of our customers have questions about what this means for them. The security of our customers’ data is our number one priority, so I wanted to take a few moments to explain what this all means.

August 2: How to Remove Single Points of Failure by Using a High-Availability Partition Group in Your AWS CloudHSM Environment
In this post, I will walk you through steps to remove single points of failure in your AWS CloudHSM environment by setting up a high-availability (HA) partition group. Single points of failure occur when a single CloudHSM device fails in a non-HA configuration, which can result in the permanent loss of keys and data. The HA partition group, however, allows for one or more CloudHSM devices to fail, while still keeping your environment operational.

July

July 28: Enable Your Federated Users to Work in the AWS Management Console for up to 12 Hours
AWS Identity and Access Management (IAM) supports identity federation, which enables external identities, such as users in your corporate directory, to sign in to the AWS Management Console via single sign-on (SSO). Now with a small configuration change, your AWS administrators can allow your federated users to work in the AWS Management Console for up to 12 hours, instead of having to reauthenticate every 60 minutes. In addition, administrators can now revoke active federated user sessions. In this blog post, I will show how to configure the console session duration for two common federation use cases: using Security Assertion Markup Language (SAML) 2.0 and using a custom federation broker that leverages the sts:AssumeRole* APIs (see this downloadable sample of a federation proxy). I will wrap up this post with a walkthrough of the new session revocation process.

July 28: Amazon Cognito Your User Pools is Now Generally Available
Amazon Cognito makes it easy for developers to add sign-up, sign-in, and enhanced security functionality to mobile and web apps. With Amazon Cognito Your User Pools, you get a simple, fully managed service for creating and maintaining your own user directory that can scale to hundreds of millions of users.

July 27: How to Audit Cross-Account Roles Using AWS CloudTrail and Amazon CloudWatch Events
In this blog post, I will walk through the process of auditing access across AWS accounts by a cross-account role. This process links API calls that assume a role in one account to resource-related API calls in a different account. To develop this process, I will use AWS CloudTrail, Amazon CloudWatch Events, and AWS Lambda functions. When complete, the process will provide a full audit chain from end user to resource access across separate AWS accounts.

July 25: AWS Becomes First Cloud Service Provider to Adopt New PCI DSS 3.2
We are happy to announce the availability of the Amazon Web Services PCI DSS 3.2 Compliance Package for the 2016/2017 cycle. AWS is the first cloud service provider (CSP) to successfully complete the assessment against the newly released PCI Data Security Standard (PCI DSS) version 3.2, 18 months in advance of the mandatory February 1, 2018, deadline. The AWS Attestation of Compliance (AOC), available upon request, now features 26 PCI DSS certified services, including the latest additions of Amazon EC2 Container Service (ECS), AWS Config, and AWS WAF (a web application firewall). We at AWS are committed to this international information security and compliance program, and adopting the new standard as early as possible once again demonstrates our commitment to information security as our highest priority. Our customers (and customers of our customers) can operate confidently as they store and process credit card information (and any other sensitive data) in the cloud knowing that AWS products and services are tested against the latest and most mature set of PCI compliance requirements.

July 20: New AWS Compute Blog Post: Help Secure Container-Enabled Applications with IAM Roles for ECS Tasks
Amazon EC2 Container Service (ECS) now allows you to specify an IAM role that can be used by the containers in an ECS task, as a new AWS Compute Blog post explains. 

July 14: New Whitepaper Now Available: The Security Perspective of the AWS Cloud Adoption Framework
Today, AWS released the Security Perspective of the AWS Cloud Adoption Framework (AWS CAF). The AWS CAF provides a framework to help you structure and plan your cloud adoption journey, and build a comprehensive approach to cloud computing throughout the IT lifecycle. The framework provides seven specific areas of focus or Perspectives: business, platform, maturity, people, process, operations, and security.

July 14: New Amazon Inspector Blog Post on the AWS Blog
On the AWS Blog yesterday, Jeff Barr published a new security-related blog post written by AWS Principal Security Engineer Eric Fitzgerald, entitled Scale Your Security Vulnerability Testing with Amazon Inspector.

July 12: How to Use AWS CloudFormation to Automate Your AWS WAF Configuration with Example Rules and Match Conditions
We recently announced AWS CloudFormation support for all current features of AWS WAF. This enables you to leverage CloudFormation templates to configure, customize, and test AWS WAF settings across all your web applications. Using CloudFormation templates can help you reduce the time required to configure AWS WAF. In this blog post, I will show you how to use CloudFormation to automate your AWS WAF configuration with example rules and match conditions.

July 11: How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
In this blog post, I show how you can restrict S3 bucket access to a specific IAM role or user within an account by using Conditions rather than the NotPrincipal element. Even if another user in the same account has an Admin policy or a policy with s3:*, that user will be denied unless explicitly listed. You can use this approach, for example, to configure a bucket for access by instances within an Auto Scaling group, or to limit access to a bucket with elevated security requirements.
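
A minimal Python (boto3) sketch of that Conditions-based approach follows; the bucket name, role unique ID, and account ID are placeholders (you can retrieve a role's AROA-prefixed unique ID with `aws iam get-role`):

```python
# Sketch: deny all S3 actions on the bucket unless the caller's
# aws:userId matches the role's unique ID or the account root.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-restricted-bucket",
            "arn:aws:s3:::example-restricted-bucket/*",
        ],
        "Condition": {
            "StringNotLike": {
                "aws:userId": [
                    "AROAEXAMPLEROLEID:*",  # role unique ID + any session name
                    "111122223333",         # account root, so access can be repaired
                ]
            }
        },
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-restricted-bucket",
    Policy=json.dumps(policy),
)
```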

July 7: How to Use SAML to Automatically Direct Federated Users to a Specific AWS Management Console Page
In this blog post, I will show you how to create a deep link for federated users via the SAML 2.0 RelayState parameter in Active Directory Federation Services (AD FS). By using a deep link, your users will go directly to the specified console page without additional navigation.
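
For illustration, here is a hypothetical Python helper that builds an AD FS IdP-initiated deep link of the form the post describes: the RPID and the target console URL are each URL-encoded, combined, and then URL-encoded again. The hostname is a placeholder, and AD FS must have RelayState support enabled:

```python
# Sketch: build a double-URL-encoded AD FS RelayState deep link.
from urllib.parse import quote

def adfs_deep_link(adfs_host: str, relying_party_id: str, target_url: str) -> str:
    # Inner RelayState: the relying party ID plus the console deep link,
    # each URL-encoded individually.
    inner = ("RPID=" + quote(relying_party_id, safe="")
             + "&RelayState=" + quote(target_url, safe=""))
    # Outer RelayState: the inner string, URL-encoded once more.
    return ("https://" + adfs_host
            + "/adfs/ls/idpinitiatedsignon.aspx?RelayState="
            + quote(inner, safe=""))

print(adfs_deep_link(
    "adfs.example.com",                # placeholder AD FS host
    "urn:amazon:webservices",          # default AWS relying party ID
    "https://console.aws.amazon.com/sqs/home?region=us-east-1",
))
```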

July 6: How to Prevent Uploads of Unencrypted Objects to Amazon S3
In this blog post, I will show you how to create an S3 bucket policy that prevents users from uploading unencrypted objects, unless they are using server-side encryption with S3–managed encryption keys (SSE-S3) or server-side encryption with AWS KMS–managed keys (SSE-KMS).
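
A minimal sketch of such a policy in Python (boto3) follows; the bucket name is a placeholder. It denies s3:PutObject when the request either omits the server-side encryption header or sets it to anything other than AES256 (SSE-S3) or aws:kms (SSE-KMS):

```python
# Sketch: reject uploads that are not server-side encrypted.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyWrongEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-encrypted-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": ["AES256", "aws:kms"]
                }
            },
        },
        {
            "Sid": "DenyMissingEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-encrypted-bucket/*",
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption": "true"}
            },
        },
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-encrypted-bucket",
    Policy=json.dumps(policy),
)
```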

June

June 30: The Top 20 AWS IAM Documentation Pages so Far This Year
The following 20 pages have been the most viewed AWS Identity and Access Management (IAM) documentation pages so far this year. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research. 

June 29: The Most Viewed AWS Security Blog Posts so Far in 2016
The following 10 posts are the most viewed AWS Security Blog posts that we published during the first six months of this year. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

June 25: AWS Earns Department of Defense Impact Level 4 Provisional Authorization
I am pleased to share that, for our AWS GovCloud (US) Region, AWS has received a Defense Information Systems Agency (DISA) Provisional Authorization (PA) at Impact Level 4 (IL4). This will allow Department of Defense (DoD) agencies to use the AWS Cloud for production workloads with export-controlled data, privacy information, and protected health information as well as other controlled unclassified information. This new authorization continues to demonstrate our advanced work in the public sector space; you might recall AWS was the first cloud service provider to obtain an Impact Level 4 PA in August 2014, paving the way for DoD pilot workloads and applications in the cloud. Additionally, we recently achieved a FedRAMP High provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB), also for AWS GovCloud (US), and today’s announcement allows DoD mission owners to continue to leverage AWS for critical production applications.

June 23: AWS re:Invent 2016 Registration Is Now Open
Register now for the fifth annual AWS re:Invent, the largest gathering of the global cloud computing community. Join us in Las Vegas for opportunities to connect, collaborate, and learn about AWS solutions. This year we are offering all-new technical deep-dives on topics such as security, IoT, serverless computing, and containers. We are also delivering more than 400 sessions, more hands-on labs, bootcamps, and opportunities for one-on-one engagements with AWS experts.

June 23: AWS Achieves FedRAMP High JAB Provisional Authorization
We are pleased to announce that AWS has received a FedRAMP High JAB Provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB) for the AWS GovCloud (US) Region. The new Federal Risk and Authorization Management Program (FedRAMP) High JAB Provisional Authorization is mapped to more than 400 National Institute of Standards and Technology (NIST) security controls. This P-ATO recognizes AWS GovCloud (US) as a secure environment on which to run highly sensitive government workloads, including Personally Identifiable Information (PII), sensitive patient records, financial data, law enforcement data, and other Controlled Unclassified Information (CUI).

June 22: AWS IAM Service Last Accessed Data Now Available for South America (Sao Paulo) and Asia Pacific (Seoul) Regions
In December, AWS IAM released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support two additional regions: South America (Sao Paulo) and Asia Pacific (Seoul). With this release, you can now view the date when an IAM entity last accessed an AWS service in these two regions. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

June 20: New Twitter Handle Now Live: @AWSSecurityInfo
Today, we launched a new Twitter handle: @AWSSecurityInfo. The purpose of this new handle is to share security bulletins, security whitepapers, compliance news and information, and other AWS security-related and compliance-related information. The scope of this handle is broader than that of @AWSIdentity, which focuses primarily on Security Blog posts. However, feel free to follow both handles!

June 15: Announcing Two New AWS Quick Start Reference Deployments for Compliance
As part of the Professional Services Enterprise Accelerator – Compliance program, AWS has published two new Quick Start reference deployments to assist federal government customers and others who need to meet National Institute of Standards and Technology (NIST) SP 800-53 (Revision 4) security control requirements, including those at the high-impact level. The new Quick Starts are AWS Enterprise Accelerator – Compliance: NIST-based Assurance Frameworks and AWS Enterprise Accelerator – Compliance: Standardized Architecture for NIST High-Impact Controls Featuring Trend Micro Deep Security. These Quick Starts address many of the NIST controls at the infrastructure layer. Furthermore, for systems categorized as high impact, AWS has worked with Trend Micro to incorporate its Deep Security product into a Quick Start deployment in order to address many additional high-impact controls at the workload layer (app, data, and operating system). In addition, we have worked with Telos Corporation to populate security control implementation details for each of these Quick Starts into the Xacta product suite for customers who rely upon that suite for governance, risk, and compliance workflows.

June 14: Now Available: Get Even More Details from Service Last Accessed Data
In December, AWS IAM released service last accessed data, which shows the time when an IAM entity (a user, group, or role) last accessed an AWS service. This provided a powerful tool to help you grant least privilege permissions. Starting today, it’s easier to identify where you can reduce permissions based on additional service last accessed data.

June 14: How to Record SSH Sessions Established Through a Bastion Host
A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet. Because of its exposure to potential attack, a bastion host must minimize the chances of penetration. For example, you can use a bastion host to mitigate the risk of allowing SSH connections from an external network to the Linux instances launched in a private subnet of your Amazon Virtual Private Cloud (VPC). In this blog post, I will show you how to leverage a bastion host to record all SSH sessions established with Linux instances. Recording SSH sessions enables auditing and can help in your efforts to comply with regulatory requirements.

June 14: AWS Granted Authority to Operate for Department of Commerce and NOAA
AWS already has a number of federal agencies onboarded to the cloud, including the Department of Energy, the Department of the Interior, and NASA. Today we are pleased to announce the addition of two more ATOs (authority to operate) for the Department of Commerce (DOC) and the National Oceanic and Atmospheric Administration (NOAA). Specifically, the DOC will be utilizing AWS for their Commerce Data Service, and NOAA will be leveraging the cloud for their “Big Data Project.” According to NOAA, the goal of the Big Data Project is to “create a sustainable, market-driven ecosystem that lowers the cost barrier to data publication. This project will create a new economic space for growth and job creation while providing the public far greater access to the data created with its tax dollars.”

June 2: How to Set Up DNS Resolution Between On-Premises Networks and AWS by Using Unbound
In previous AWS Security Blog posts, Drew Dennis covered two options for establishing DNS connectivity between your on-premises networks and your Amazon Virtual Private Cloud (Amazon VPC) environments. His first post explained how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. His second post showed how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities. In this post, I will explain how you can set up DNS resolution between your on-premises DNS and Amazon VPC by using Unbound, an open-source, recursive DNS resolver. This solution is not a managed solution like Microsoft AD and Simple AD, but it does provide the ability to route DNS requests between on-premises environments and an Amazon VPC–provided DNS.
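
As a rough illustration of the hybrid setup the post describes (zone names and addresses are placeholders, not values from the post), an Unbound configuration fragment might forward the VPC-internal zone to the Amazon-provided VPC DNS server, which sits at the VPC CIDR base address plus two:

```
# Hypothetical unbound.conf fragment: forward queries for the
# VPC-internal zone to the Amazon VPC DNS server (10.0.0.2 for a
# 10.0.0.0/16 VPC) and resolve everything else recursively.
server:
    interface: 0.0.0.0
    access-control: 10.0.0.0/8 allow
    domain-insecure: "internal.example.com"

forward-zone:
    name: "internal.example.com."
    forward-addr: 10.0.0.2
```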

June 1: How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker
In this blog post, I will show you how to store secrets on Amazon S3, and use AWS IAM roles to grant access to those stored secrets using an example WordPress application deployed as a Docker image using ECS. Using IAM roles means that developers and operations staff do not have the credentials to access secrets. Only the application and staff who are responsible for managing the secrets can access them. The deployment model for ECS ensures that tasks are run on dedicated EC2 instances for the same AWS account and are not shared between customers, which gives sufficient isolation between different container environments.
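
To sketch the pattern in Python (boto3), with a placeholder bucket and key: at container start, the application uses the IAM role credentials that ECS provides to pull the secret from S3, so nothing sensitive is baked into the image:

```python
# Sketch: fetch a secret from S3 at container start using the
# instance/task role credentials that boto3 picks up automatically.
import boto3

def load_secret(bucket: str = "example-secrets-bucket",
                key: str = "wordpress/db-password") -> str:
    # No credentials are stored in the image or build-time environment;
    # access is governed entirely by the IAM role's S3 permissions.
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
    return obj["Body"].read().decode("utf-8").strip()

db_password = load_secret()
```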

If you have comments about any of these posts, please add them in the “Comments” section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig
