Tag Archives: Amazon SNS

Automate Your IT Operations Using AWS Step Functions and Amazon CloudWatch Events

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/automate-your-it-operations-using-aws-step-functions-and-amazon-cloudwatch-events/


Rob Percival, Associate Solutions Architect

Are you interested in reducing the operational overhead of your AWS Cloud infrastructure? One way to achieve this is to automate the response to operational events for resources in your AWS account.

Amazon CloudWatch Events provides a near real-time stream of system events that describe the changes and notifications for your AWS resources. From this stream, you can create rules to route specific events to AWS Step Functions, AWS Lambda, and other AWS services for further processing and automated actions.

In this post, learn how you can use Step Functions to orchestrate serverless IT automation workflows in response to CloudWatch events sourced from AWS Health, a service that monitors and generates events for your AWS resources. As a real-world example, I show automating the response to a scenario where an IAM user access key has been exposed.

Serverless workflows with Step Functions and Lambda

Step Functions makes it easy to develop and orchestrate components of operational response automation using visual workflows. Building automation workflows from individual Lambda functions that perform discrete tasks lets you develop, test, and modify the components of your workflow quickly and seamlessly. As serverless services, Step Functions and Lambda also provide the benefits of more productive development, reduced operational overhead, and no costs incurred outside of when the workflows are actively executing.

Example workflow

As an example, this post focuses on automating the response to an event generated by AWS Health when an IAM access key has been publicly exposed on GitHub. This is a diagram of the automation workflow:

AWS proactively monitors popular code repository sites for IAM access keys that have been publicly exposed. Upon detection of an exposed IAM access key, AWS Health generates an AWS_RISK_CREDENTIALS_EXPOSED event in the AWS account related to the exposed key. A configured CloudWatch Events rule detects this event and invokes a Step Functions state machine. The state machine then orchestrates the automated workflow that deletes the exposed IAM access key, summarizes the recent API activity for the exposed key, and sends the summary message to an Amazon SNS topic to notify the subscribers, in that order.
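
A CloudWatch Events rule that matches this event uses an event pattern along the following lines (a sketch based on the sample event shown later in this post):

{
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["RISK"],
        "eventTypeCode": ["AWS_RISK_CREDENTIALS_EXPOSED"]
    }
}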

The corresponding Step Functions state machine diagram of this automation workflow can be seen below:

While this particular example focuses on IT automation workflows in response to the AWS_RISK_CREDENTIALS_EXPOSED event sourced from AWS Health, the pattern can be generalized to other AWS Health events, to events from other event-generating AWS services, and even to workflows that run on a time-based schedule, as sketched below.
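
For example, a hedged AWS CLI sketch of a scheduled rule that invokes a state machine (the rule name, ARNs, and schedule are illustrative):

# Create a rule that fires nightly at 02:00 UTC
aws events put-rule --name NightlyAutomation \
    --schedule-expression "cron(0 2 * * ? *)"

# Point the rule at a Step Functions state machine
aws events put-targets --rule NightlyAutomation \
    --targets 'Id=1,Arn=arn:aws:states:us-east-1:123456789012:stateMachine:MyAutomation,RoleArn=arn:aws:iam::123456789012:role/EventsInvokeStepFunctions'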

Walkthrough

To follow along, use the code and resources found in the aws-health-tools GitHub repo. The code and resources include an AWS CloudFormation template, in addition to instructions on how to use it.

The Step Functions state machine execution starts with the exposed keys event details in JSON, a sanitized example of which is provided below:

{
    "version": "0",
    "id": "121345678-1234-1234-1234-123456789012",
    "detail-type": "AWS Health Event",
    "source": "aws.health",
    "account": "123456789012",
    "time": "2016-06-05T06:27:57Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "eventArn": "arn:aws:health:us-east-1::event/AWS_RISK_CREDENTIALS_EXPOSED_XXXXXXXXXXXXXXXXX",
        "service": "RISK",
        "eventTypeCode": "AWS_RISK_CREDENTIALS_EXPOSED",
        "eventTypeCategory": "issue",
        "startTime": "Sat, 05 Jun 2016 15:10:09 GMT",
        "eventDescription": [
            {
                "language": "en_US",
                "latestDescription": "A description of the event is provided here"
            }
        ],
        "affectedEntities": [
            {
                "entityValue": "ACCESS_KEY_ID_HERE"
            }
        ]
    }
}

After it’s invoked, the state machine execution proceeds as follows.

Step 1: Delete the exposed IAM access key pair

The first thing you want to do when you determine that an IAM access key has been exposed is to delete the key pair so that it can no longer be used to make API calls. This Step Functions task state deletes the exposed access key pair detailed in the incoming event, and retrieves the IAM user associated with the key to look up API activity for the user in the next step. The user name, access key, and other details about the event are passed to the next step as JSON.
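
For illustration, once the owning IAM user is known, deleting the exposed key amounts to a single IAM call; a hedged CLI equivalent (the user name and key ID are placeholders):

aws iam delete-access-key --user-name exposed-user \
    --access-key-id AKIAIOSFODNN7EXAMPLE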

This state contains a powerful error-handling feature offered by Step Functions task states called a catch configuration. Catch configurations allow you to reroute and continue state machine execution at a new state, depending on the errors that occur in your task function. In this case, the catch configuration skips ahead to Step 3, immediately notifying your security team that an error was raised in the task function of this step (Step 1) while attempting to look up the corresponding IAM user for a key or delete the user's access key.

Note: Step Functions also offers a retry configuration for when you would rather retry a task function that failed due to error, with the option to specify an increasing time interval between attempts and a maximum number of attempts.
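
In Amazon States Language, catch and retry configurations on a task state look roughly like this (the state names and function ARN are illustrative):

"DeleteAccessKeyPair": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:DeleteAccessKeyPair",
    "Retry": [{
        "ErrorEquals": ["States.TaskFailed"],
        "IntervalSeconds": 2,
        "MaxAttempts": 3,
        "BackoffRate": 2.0
    }],
    "Catch": [{
        "ErrorEquals": ["States.ALL"],
        "Next": "NotifySecurity"
    }],
    "Next": "SummarizeApiActivity"
}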

Step 2: Summarize recent API activity for key

After you have deleted the access key pair, you'll want some immediate insight into whether it was used for malicious activity in your account. Implemented as another task state, this step uses AWS CloudTrail to look up and summarize the most recent API activity for the IAM user associated with the exposed key. The summary takes the form of counts of each API call made, along with the resource types and names affected. This summary information is then passed to the next step as JSON. This step requires information obtained in Step 1; Step Functions ensures the successful completion of Step 1 before moving to Step 2.
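
A hedged CLI equivalent of the CloudTrail lookup this function performs (the user name is a placeholder):

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=Username,AttributeValue=exposed-user \
    --max-results 50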

Step 3: Notify security

The summary information gathered in the last step can provide immediate insight into any malicious activity on your account made by the exposed key. To determine this and further secure your account if necessary, you must notify your security team with the gathered summary information.

This final task state generates an email message providing in-depth detail about the event using the API activity summary, and publishes the message to an SNS topic subscribed to by the members of your security team.

If the catch configuration of the task state in Step 1 was triggered, then the security notification email instead directs your security team to log in to the console and navigate to the Personal Health Dashboard to view more details on the incident.

Lessons learned

When implementing this use case with Step Functions and Lambda, consider the following:

  • One of the most important parts of implementing automation in response to operational events is retaining visibility into the response and resolution actions. Step Functions and Lambda enable you to orchestrate granular response and resolution actions while providing direct visibility into the state of the automation workflow.
  • This basic workflow currently executes these steps serially with a catch configuration for error handling. More sophisticated workflows can leverage the parallel execution, branching logic, and time delay functionality provided by Step Functions.
  • Catch and retry configurations for task states allow for orchestrating reliable workflows while maintaining the granularity of each Lambda function. Without leveraging a catch configuration in Step 1, you would have had to duplicate code from the function in Step 3 to ensure that your security team was notified on failure to delete the access key.
  • Step Functions and Lambda are serverless services, so there is no cost for these services when they are not running. Because this IT automation workflow only runs when an IAM access key is exposed for this account (which is hopefully rare!), the total monthly cost for this workflow is essentially $0.

Conclusion

Automating the response to operational events for resources in your AWS account can free up the valuable time of your engineers. Step Functions and Lambda enable granular IT automation workflows to achieve this result while gaining direct visibility into the orchestration and state of the automation.

For more examples of how to use Step Functions to automate the operations of your AWS resources, or if you’d like to see how Step Functions can be used to build and orchestrate serverless applications, visit Getting Started on the Step Functions website.

Open and Click Tracking Have Arrived

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/open-and-click-tracking-have-arrived/

We’re pleased to announce the addition of open and click tracking metrics to Amazon SES. These metrics will help you measure the effectiveness of the email campaigns you send using Amazon SES.

We’re also adding the ability to publish email sending metrics to Amazon Simple Notification Service (Amazon SNS) using event publishing. This feature gives you greater control over the sending notifications you receive through Amazon SNS.

What’s new in this release?

When you send an email using Amazon SES, we now collect metrics related to opens and clicks. Opens, in this sense, refers to the number of users who successfully received your email and opened it in their email clients; clicks refers to the number of users who received an email and clicked one or more links in it.

Additionally, you can now use event publishing to push email sending notifications—including open and click notifications—using Amazon SNS. Previously, you could send account-level notifications through Amazon SNS. These notifications were pretty limited: you could only receive notifications about bounces, complaints, and deliveries, and you would receive notifications about all of these events across your entire Amazon SES account. Now you can use event publishing to send notifications about deliveries, opens, clicks, bounces, and complaints. Furthermore, you can set up event publishing so that you only receive notifications about emails sent using the configuration sets you specify in those emails.

Why should I use open and click tracking?

Whether you are sending marketing emails, transactional emails, or notifications, you need to know how effective your communications are. The email sending metrics feature of Amazon SES gives you data about the entire email response funnel: the total number of emails that were sent, bounced, viewed, and clicked. You can then transform those insights into action.

For example, the open and click tracking feature can help you identify the customers who are most interested in receiving the messages you send. By narrowing down your list of recipients and focusing on your most engaged customers, you can save money (by sending fewer messages), improve the response rates of your marketing campaigns (by targeting only the customers who are most interested in what you have to say), and protect your sender reputation (by reducing the number of bounces and complaints against your sending domain).

How do I enable open and click tracking?

If you’ve set up Sending Metrics in the past, then you can easily add open and click tracking to your existing configuration sets. On the Configuration Sets page, choose the configuration set that contains your sending event destination; edit the event destination, check the boxes for Open and Click (as shown in the image below), and then choose Save.

How does open and click tracking work?

Amazon SES makes very minor changes to your emails in order to make open and click tracking work. At the bottom of each message, we insert a 1 pixel by 1 pixel transparent GIF image. Each email includes a unique link to this image file; when the image is downloaded by the recipient's email client, we can tell exactly which message was opened and by whom.

To track clicks, we set up a redirect for each link in the message. When a recipient clicks a link, they are sent to an Amazon SES server, and are immediately forwarded to the destination address. As with open tracking, each of these redirect links is unique, allowing us to easily determine which recipient clicked the link, when they clicked it, and the email from which they arrived at the link.

Can I disable click tracking?

You can disable click tracking by adding a special tag to the anchor tags in your HTML. For example, if you were linking to the AWS home page, a normal anchor link would look something like this:

<a href="https://aws.amazon.com/">Amazon Web Services</a>

To disable click tracking for that same link, you would modify it to look like this:

<a ses:no-track href="https://aws.amazon.com/">Amazon Web Services</a>

Because the ses:no-track attribute is non-standard HTML, we automatically remove it from the version of the email that arrives in your recipients’ inboxes.

How do I use event publishing with Amazon SNS?

If you’ve set up event destinations in the past, then the process of setting up an Amazon SNS event destination will be very familiar. You can add an Amazon SNS destination to an existing configuration set, or create a new configuration set that uses Amazon SNS as its event destination. To learn more, see “Set Up an Amazon SNS Event Destination for Amazon SES Event Publishing” in our Developer Guide.
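
If you prefer the CLI, an Amazon SNS event destination can be added to an existing configuration set along these lines (the configuration set, destination, and topic names are illustrative):

aws ses create-configuration-set-event-destination \
    --configuration-set-name my-config-set \
    --event-destination '{
        "Name": "my-sns-destination",
        "Enabled": true,
        "MatchingEventTypes": ["send", "delivery", "open", "click", "bounce", "complaint"],
        "SNSDestination": { "TopicARN": "arn:aws:sns:us-east-1:111122223333:ses-events" }
    }'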

We’re excited about this release. Let us know what you think of these new features in the SES Forum, or in the comments for this post.

New – Cross-Account Delivery of CloudWatch Events

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cross-account-delivery-of-cloudwatch-events/

CloudWatch Events allow you to track and respond to changes in your AWS resources. You get a near real-time stream of events that you can route to one or more targets (AWS Lambda functions, Amazon Kinesis streams, Amazon SNS topics, and more) using rules. The events that are generated depend on the particular AWS service. For example, here are the events generated for EC2 instances:

Or for S3 (CloudTrail must be enabled in order to create rules that use these events):

See the CloudWatch Event Types list to see which services and events are available.

New Cross-Account Event Delivery
Our customers have asked us to extend CloudWatch Events to handle some interesting & powerful use cases that span multiple AWS accounts, and we are happy to oblige. Today we are adding support for controlled, cross-account delivery of CloudWatch Events. As you will see, you can now arrange to route events from one AWS account to another. As is the case with the existing event delivery model, you can use CloudWatch Events rules to specify which events you would like to send to another account.

Here are some of the use cases that have been shared with us:

Separation of Concerns – Customers would like to handle and respond to events in a separate account in order to implement advanced security schemes.

Rollup – Customers are using AWS Organizations and would like to track certain types of events across the entire organization, across a multitude of AWS accounts.

Each AWS account has a resource called an event bus that is used to distribute events. This object dates back to the introduction of CloudWatch Events, but has never been formally called out as such. AWS services, the PutEvents function, and other accounts can publish events to it.

The event bus (currently one per account, with plans to allow more in the future) now has an associated access policy. This policy specifies the set of AWS accounts that are allowed to send events to the bus. You can add one or more accounts, or you can specify that any account is allowed to send events.
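
A hedged CLI sketch of granting another account permission to publish to your event bus (the account ID and statement ID are placeholders):

aws events put-permission \
    --action events:PutEvents \
    --principal 111122223333 \
    --statement-id AllowWorkAccount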

You can create event distribution topologies that work on a fan-in or a fan-out basis. A fan-in model allows you to handle events from multiple accounts in one place. A fan-out model allows you to route different types of events to distinct locations and accounts.

In order to avoid the possibility of creating a loop, events that are sent from one account to another will not be forwarded to a third account. You should take this into account when you are planning your cross-account implementation.

Using Cross-Account Event Delivery
In order to test this new feature, I used my work and my personal AWS accounts. I logged in to my personal account, went to the CloudWatch Console, selected Event Buses, clicked Add Permission, and entered the Account ID of my work account:

I can see all of my buses (just one is allowed right now) and permissions in one place:

Next, I logged in to my work account and created a rule that sends events to the event bus in my personal account. In this case, my personal account is interested in changes of state for EC2 instances running in my work account:
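
A hedged CLI equivalent of this sender-side setup (the rule name, Region, and receiving account ID are placeholders):

# Match EC2 instance state changes in the sender account
aws events put-rule --name SendEC2EventsToPersonal \
    --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"]}'

# Target the receiving account's default event bus
aws events put-targets --rule SendEC2EventsToPersonal \
    --targets 'Id=1,Arn=arn:aws:events:us-east-1:999999999999:event-bus/default'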

Back in my personal account, I created a rule that fires on any EC2 event, targeting it at an SNS topic that is configured to send email:

After testing this rule with an EC2 instance launched in my personal account, I launched an instance in my work account and waited for the email message:

The account and resources fields in the message are from the source (work) account.

Things to Know
This functionality is available in all AWS Regions where CloudWatch Events is available and you can start using it today. It is also accessible from the CloudWatch Events APIs and the AWS Command Line Interface (CLI).

Events forwarded from one account to another are considered custom events. The sending account is charged $1 for every million events (see the CloudWatch Pricing page for more info).

Jeff;

PS – AWS CloudFormation support is in the works and coming soon!

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

The components and their purposes are as follows:

  • AWS CodeCommit repository: Git repository where the AMI builder code is stored.
  • S3 bucket: Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project: Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline: Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic: Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule: Defines how the AMI builder should send a custom event to notify an SNS topic.

The AMI Builder launch template is available in N. Virginia (us-east-1) and Ireland (eu-west-1).

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed state at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Outputs.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t return a non-zero exit status when something goes wrong during the AMI creation process, so we have to tell AWS CodeBuild explicitly that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts section instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder beyond sending email after the AMI has been created: you can hook up any of the supported targets to react to the AMI builder event. Publishing an event in this way decouples whatever actions you take after AMI completion from Packer itself, so you can plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new tag properties (*_tags) and a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the input in the project deviates from the expected characters (for example, includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles that make our target machine conform to our standards.

Finally, Packer uses the shell provisioner to remove temporary SSH keys before it creates an AMI from the target, a temporary EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs agent and CIS Amazon Linux from the Ansible Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update an EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.

CloudWatch Events allows the AMI builder to decouple AMI configuration and creation so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to add events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology level or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on from the recent post, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order has been persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then, the order is lost with no recourse to restore the order.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your database transaction further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ) for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
--statistic Average --period 300 --threshold 3 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
--evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
 --statistic Average --period 300 --threshold 1 --comparison-operator LessThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
 --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

The above example uses the ApproximateNumberOfMessagesVisible metric to measure the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, for when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.
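
For example, a hedged alarm on message age might look like this (the threshold and alarm action are placeholders):

aws cloudwatch put-metric-alarm --alarm-name CustomerOrdersAgeAlarm --metric-name ApproximateAgeOfOldestMessage --namespace "AWS/SQS" \
 --statistic Maximum --period 300 --threshold 600 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
 --evaluation-periods 1 --alarm-actions <arn of the notification or scaling action>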

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);

            // Wait here so the response is complete before the client is disposed
            receiveMessageResponse.Wait();
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement the business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes, so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.
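
If a particular order needs more time, the consumer can extend the timeout for just that message. Here is a minimal sketch using the SDK’s ChangeMessageVisibility operation (the timeout value is illustrative):

// Extend the visibility timeout for a single in-flight message
var extendRequest = new ChangeMessageVisibilityRequest
{
    QueueUrl = _queueUrl,
    ReceiptHandle = message.ReceiptHandle,
    VisibilityTimeout = 120 // seconds; choose a value that covers your processing time
};
sqs.ChangeMessageVisibilityAsync(extendRequest).Wait();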

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    // Group ID scopes FIFO ordering; a random value gives each message
    // its own group, allowing parallel processing
    MessageGroupId = Guid.NewGuid().ToString("N"),
    // Use a value that is stable across retries of the same order (for example,
    // an order ID, shown here as an illustrative property) so duplicates are detected
    MessageDeduplicationId = order.OrderId.ToString()
};
  • If using SQS FIFO queues is not an option, keep a log of the attributes of all messages processed for a specified period of time, as an alternative to message deduplication on the receiving end. Verify the existence of a message in the log before processing it; this adds computational overhead, which can be minimized through low-latency persistence solutions such as Amazon DynamoDB. Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.

Handling exceptions

Because of the distributed nature of SQS queues, the service does not automatically delete a message once it has been received. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The relative rates at which orders are processed can be tuned by modifying the poll rates and scalability settings that I have already discussed. A minimal dispatcher sketch follows.
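
A minimal sketch of the dispatcher logic, assuming an order object with a Total property and two pre-created queues (all names are illustrative):

// Route the order to the priority or standard queue based on its value
var targetQueueUrl = order.Total > 5000m
    ? _priorityOrderQueueUrl
    : _standardOrderQueueUrl;

var dispatchRequest = new SendMessageRequest
{
    QueueUrl = targetQueueUrl,
    MessageBody = JsonConvert.SerializeObject(order)
};
sqs.SendMessageAsync(dispatchRequest).Wait();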

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols: Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, and AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
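
A minimal sketch of publishing the order to an SNS topic instead of a queue, assuming the AWSSDK.SimpleNotificationService package (the topic ARN is illustrative):

// Publish the order to an SNS topic; subscribed SQS queues and
// Lambda functions each receive a copy of the message
using (var sns = new AmazonSimpleNotificationServiceClient())
{
    var publishRequest = new PublishRequest
    {
        TopicArn = "arn:aws:sns:us-east-1:123456789012:customer-orders",
        Message = JsonConvert.SerializeObject(order)
    };
    sns.PublishAsync(publishRequest).Wait();
}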

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, visit the AWS Management Console or use the SDK of your choice.

Happy messaging!

How to Control TLS Ciphers in Your AWS Elastic Beanstalk Application by Using AWS CloudFormation

Post Syndicated from Paco Hope original https://aws.amazon.com/blogs/security/how-to-control-tls-ciphers-in-your-aws-elastic-beanstalk-application-by-using-aws-cloudformation/

Securing data in transit is critical to the integrity of transactions on the Internet. Whether you log in to an account with your user name and password or give your credit card details to a retailer, you want your data protected as it travels across the Internet from place to place. One of the protocols in widespread use to protect data in transit is Transport Layer Security (TLS). Every time you access a URL that begins with “https” instead of just “http”, you are using a TLS-secured connection to a website.

To demonstrate that your application has a strong TLS configuration, you can use services like the one provided by SSL Labs. There are also open source, command-line-oriented TLS testing programs such as testssl.sh (which I do not cover in this post) and sslscan (which I cover later in this post). The goal of testing your TLS configuration is to provide evidence that weak cryptographic ciphers are disabled in your TLS configuration and only strong ciphers are enabled. In this blog post, I show you how to control the TLS security options for your secure load balancer in AWS CloudFormation, pass the TLS certificate and host name for your secure AWS Elastic Beanstalk application to the CloudFormation script as parameters, and then confirm that only strong TLS ciphers are enabled on the launched application by testing it with SSL Labs.

Background

In some situations, it’s not enough to simply turn on TLS with its default settings and call it done. Over the years, a number of vulnerabilities have been discovered in the TLS protocol itself with codenames such as CRIME, POODLE, and Logjam. Though some vulnerabilities were in specific implementations, such as OpenSSL, others were vulnerabilities in the Secure Sockets Layer (SSL) or TLS protocol itself.

The only way to avoid some TLS vulnerabilities is to ensure your web server uses only the latest version of TLS. Some organizations want to limit their TLS configuration to the highest possible security levels to satisfy company policies, regulatory requirements, or other information security requirements. In practice, such limitations usually mean using TLS version 1.2 (at the time of this writing, TLS 1.3 is in the works) and using only strong cryptographic ciphers. Note that forcing a high-security TLS connection in this manner limits which types of devices can connect to your web server. I address this point at the end of this post.

The default TLS configuration in most web servers is compatible with the broadest set of clients (such as web browsers, mobile devices, and point-of-sale systems). As a result, older ciphers and protocol versions are usually enabled. This is true for the Elastic Load Balancing load balancer that is created in your Elastic Beanstalk application as well as for web server software such as Apache and nginx.  For example, TLS versions 1.0 and 1.1 are enabled in addition to 1.2. The RC4 cipher is permitted, even though that cipher is too weak for the most demanding security requirements. If your application needs to prioritize the security of connections over compatibility with legacy devices, you must adjust the TLS encryption settings on your application. The solution in this post helps you make those adjustments.

Prerequisites for the solution

Before you implement this solution, you must have a few prerequisites in place:

  1. You must have a hosted zone in Amazon Route 53 where the name of the secure application will be created. I use example.com as my domain name in this post and assume that I host example.com publicly in Route 53. To learn more about creating and hosting a zone publicly in Route 53, see Working with Public Hosted Zones.
  2. You must choose a name to be associated with the secure app. In this case, I use secure.example.com as the DNS name to be associated with the secure app. This means that I’m trying to create an Elastic Beanstalk application whose URL will be https://secure.example.com/.
  3. You must have a TLS certificate hosted in AWS Certificate Manager (ACM). This certificate must be issued with the name you decided in Step 2. If you are new to ACM, see Getting Started. If you are already familiar with ACM, request a certificate and get its Amazon Resource Name (ARN). Look up the ARN for the certificate that you created by opening the ACM console. The ARN looks something like: arn:aws:acm:eu-west-1:111122223333:certificate/12345678-abcd-1234-abcd-1234abcd1234.

Implementing the solution

You can use two approaches to control the TLS ciphers used by your load balancer: one is to use a predefined protocol policy from AWS, and the other is to write your own protocol policy that lists exactly which ciphers should be enabled. There are many ciphers and options that can be set, so the appropriate AWS predefined policy is often the simplest policy to use. If you have to comply with an information security policy that requires enabling or disabling specific ciphers, you will probably find it easiest to write a custom policy listing only the ciphers that are acceptable to your requirements.

AWS released two predefined TLS policies on March 10, 2017: ELBSecurityPolicy-TLS-1-1-2017-01 and ELBSecurityPolicy-TLS-1-2-2017-01. These policies restrict TLS negotiations to TLS 1.1 and 1.2, respectively. You can find a good comparison of the ciphers that these policies enable and disable in the HTTPS listener documentation for Elastic Load Balancing. If your requirements are simply “support TLS 1.1 and later” or “support TLS 1.2 and later,” those AWS predefined cipher policies are the best place to start. If you need to control your cipher choice with a custom policy, I show you in this post which lines of the CloudFormation template to change.

Download the predefined policy CloudFormation template

Many AWS customers rely on CloudFormation to launch their AWS resources, including their Elastic Beanstalk applications. To change the ciphers and protocol versions supported on your load balancer, you must put those options in a CloudFormation template. You can store your site’s TLS certificate in ACM and create the corresponding DNS alias record in the correct zone in Route 53.

To start, download the CloudFormation template that I have provided for this blog post, or deploy the template directly in your environment. This template creates a CloudFormation stack in your default VPC that contains two resources: an Elastic Beanstalk application that deploys a standard sample PHP application, and a Route 53 record in a hosted zone. This CloudFormation template selects the AWS predefined policy called ELBSecurityPolicy-TLS-1-2-2017-01 and deploys it.

Launching the sample application from the CloudFormation console

In the CloudFormation console, choose Create Stack. You can either upload the template through your browser, or load the template into an Amazon S3 bucket and type the S3 URL in the Specify an Amazon S3 template URL box.

After you click Next, you will see that there are three parameters defined: CertificateARN, ELBHostName, and HostedDomainName. Set the CertificateARN parameter to the ARN of the certificate you want to use for your application. Set the ELBHostName parameter to the hostname part of the URL. For example, if your URL were https://secure.example.com/, the HostedDomainName parameter would be example.com and the ELBHostName parameter would be secure.

For the sample application, choose Next and then choose Create, and the CloudFormation stack will be created. For your own applications, you might need to set other options such as a database, VPC options, or Amazon SNS notifications. For more details, see AWS Elastic Beanstalk Environment Configuration. To deploy an application other than our sample PHP application, create your own application source bundle.

Launching the sample application from the command line

In addition to launching the sample application from the console, you can specify the parameters from the command line. Because the template uses parameters, you can launch multiple copies of the application, specifying different parameters for each copy. To launch the application from a Linux command line with the AWS CLI, insert the correct values for your application, as shown in the following command.

aws cloudformation create-stack --stack-name "SecureSampleApplication" \
--template-url https://<URL of your CloudFormation template in S3> \
--parameters ParameterKey=CertificateARN,ParameterValue=<Your ARN> \
ParameterKey=ELBHostName,ParameterValue=<Your Host Name> \
ParameterKey=HostedDomainName,ParameterValue=<Your Domain Name>

When that command exits, it prints the StackID of the stack it created. Save that StackID for later so that you can fetch the stack’s outputs from the command line.

Using a custom cipher specification

If you want to specify your own cipher choices, you can use the same CloudFormation template and change two lines. Let’s assume your information security policies require you to disable any ciphers that use Cipher Block Chaining (CBC) mode encryption. These ciphers are enabled in the ELBSecurityPolicy-TLS-1-2-2017-01 managed policy, so to satisfy that security requirement, you have to modify the CloudFormation template to use your own protocol policy.

In the template, locate the three lines that define the TLSHighPolicy.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value:      ELBSecurityPolicy-TLS-1-2-2017-01

Change the OptionName and Value for the TLSHighPolicy. Instead of referring to the AWS predefined policy by name, explicitly list all the ciphers you want to use. Change those three lines so they look like the following.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLProtocols
  Value:      Protocol-TLSv1.2,Server-Defined-Cipher-Order,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-AES128-GCM-SHA256

This protocol policy stipulates that the load balancer should:

  • Negotiate connections using only TLS 1.2.
  • Ignore any attempts by the client (for example, the web browser or mobile device) to negotiate a weaker cipher.
  • Accept four specific, strong combinations of cipher and key exchange—and nothing else.

The protocol policy enables only TLS 1.2, strong ciphers that do not use CBC mode encryption, and strong key exchange.

Connect to the secure application

When your CloudFormation stack is in the CREATE_COMPLETE state, you will find three outputs:

  1. The public DNS name of the load balancer
  2. The secure URL that was created
  3. TestOnSSLLabs output that contains a direct link for testing your configuration

You can either enter the secure URL in a web browser (for example, https://secure.example.com/), or click the link in the Outputs to open your sample application and see the demo page. Note that you must use HTTPS—this template has disabled HTTP on port 80 and only listens with HTTPS on port 443.

If you launched your application through the command line, you can view the CloudFormation outputs using the command line as well. You need to know the StackId of the stack you launched and insert it in the following stack-name parameter.

aws cloudformation describe-stacks --stack-name "<ARN of Your Stack>" \
--query 'Stacks[0].Outputs'

Test your application over the Internet with SSLLabs

The easiest way to confirm that the load balancer is using the secure ciphers that we chose is to enter the URL of the load balancer in the form on SSL Labs’ SSL Server Test page. If you do not want the name of your load balancer to be shared publicly on SSLLabs.com, select the Do not show the results on the boards check box. After a minute or two of testing, SSLLabs gives you a detailed report of every cipher it tried and how your load balancer responded. This test simulates many devices that might connect to your website, including mobile phones, desktop web browsers, and software libraries such as Java and OpenSSL. The report tells you whether these clients would be able to connect to your application successfully.

Assuming all went well, you should receive an A grade for the sample application. The biggest contributors to the A grade are:

  • Supporting only TLS 1.2, and not TLS 1.1, TLS 1.0, or SSL 3.0
  • Supporting only strong ciphers such as AES, and not weaker ciphers such as RC4
  • Having an X.509 public key certificate issued correctly by ACM

How to test your application privately with sslscan

You might not be able to reach your Elastic Beanstalk application from the Internet because it might be in a private subnet that is only accessible internally. If you want to test the security of your load balancer’s configuration privately, you can use one of the open source command-line tools such as sslscan. You can install and run the sslscan command on any Amazon EC2 Linux instance or even from your own laptop. Be sure that the Elastic Beanstalk application you want to test will accept an HTTPS connection from your Amazon Linux EC2 instance or from your laptop.

The easiest way to get sslscan on an Amazon Linux EC2 instance is to:

  1. Enable the Extra Packages for Enterprise Linux (EPEL) repository.
  2. Run sudo yum install sslscan.
  3. After the command runs successfully, run sslscan secure.example.com to scan your application for supported ciphers.
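
For example, on an Amazon Linux EC2 instance, the three steps look like the following. This is a sketch: the EPEL repository ships preconfigured (but disabled) on Amazon Linux, and the sslscan package name is current as of this writing.

# Enable the preconfigured EPEL repository, then install and run sslscan
sudo yum-config-manager --enable epel
sudo yum install -y sslscan
sslscan secure.example.com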

The results are similar to Qualys’ results at SSLLabs.com, but the sslscan tool does not summarize and evaluate the results to assign a grade. It just reports whether your application accepted a connection using the cipher that it tried. You must decide for yourself whether that set of accepted connections represents the right level of security for your application. If you have been asked to build a secure load balancer that meets specific security requirements, the output from sslscan helps to show how the security of your application is configured.

The following sample output shows a small subset of the total output of the sslscan tool.

Accepted TLS12 256 bits AES256-GCM-SHA384
Accepted TLS12 256 bits AES256-SHA256
Accepted TLS12 256 bits AES256-SHA
Rejected TLS12 256 bits CAMELLIA256-SHA
Failed TLS12 256 bits PSK-AES256-CBC-SHA
Rejected TLS12 128 bits ECDHE-RSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-ECDSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-RSA-AES128-SHA256

An Accepted connection is one that succeeded: the load balancer and the client were both able to use the indicated cipher. Failed and Rejected connections are ones in which the load balancer would not accept the level of security that the client was requesting. As a result, the load balancer closed the connection instead of communicating insecurely. The difference between Failed and Rejected is based on whether the TLS connection was closed cleanly.

Comparing the two policies

The main difference between our custom policy and the AWS predefined policy is whether or not CBC ciphers are accepted. The test results with both policies are identical except for the results shown in the following table. The only change in the policy, and therefore the only change in the results, is that the cipher suites using CBC ciphers have been disabled.

Cipher Suite Name             Encryption Algorithm   Key Size (bits)   ELBSecurityPolicy-TLS-1-2-2017-01   Custom Policy
ECDHE-RSA-AES256-GCM-SHA384   AESGCM                 256               Enabled                             Enabled
ECDHE-RSA-AES256-SHA384       AES                    256               Enabled                             Disabled
AES256-GCM-SHA384             AESGCM                 256               Enabled                             Disabled
AES256-SHA256                 AES                    256               Enabled                             Disabled
ECDHE-RSA-AES128-GCM-SHA256   AESGCM                 128               Enabled                             Enabled
ECDHE-RSA-AES128-SHA256       AES                    128               Enabled                             Disabled
AES128-GCM-SHA256             AESGCM                 128               Enabled                             Disabled
AES128-SHA256                 AES                    128               Enabled                             Disabled

Strong ciphers and compatibility

The custom policy described in the previous section prevents legacy devices and older versions of software and web browsers from connecting. The output at SSLLabs provides a list of devices and applications (such as Internet Explorer 10 on Windows 7) that cannot connect to an application that uses the TLS policy. By design, the load balancer will refuse to connect to a device that is unable to negotiate a connection at the required levels of security. Users who use legacy software and devices will see different errors, depending on which device or software they use (for example, Internet Explorer on Windows, Chrome on Android, or a legacy mobile application). The error messages will be some variation of “connection failed” because the Elastic Load Balancer closes the connection without responding to the user’s request. This behavior can be problematic for websites that must be accessible to older desktop operating systems or older mobile devices.

If you need to support legacy devices, adjust the TLSHighPolicy in the CloudFormation template. For example, if you need to support web browsers on Windows 7 systems (and you cannot enable TLS 1.2 support on those systems), you can change the policy to enable TLS 1.1. To do this, change the value of SSLReferencePolicy to ELBSecurityPolicy-TLS-1-1-2017-01.
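
Using the same option format shown earlier, that change is a single line in the template:

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value:      ELBSecurityPolicy-TLS-1-1-2017-01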

Enabling legacy protocol versions such as TLS version 1.1 will allow older devices to connect, but then the application may not be compliant with the information security policies or business requirements that require strong ciphers.

Conclusion

Using Elastic Beanstalk, Route 53, and ACM can help you launch secure applications that are designed to not only protect data but also meet regulatory compliance requirements and your information security policies. The TLS policy, either custom or predefined, allows you to control exactly which cryptographic ciphers are enabled on your Elastic Load Balancer. The TLS test results provide you with clear evidence you can use to demonstrate compliance with security policies or requirements. The parameters in this post’s CloudFormation template also make it adaptable and reusable for multiple applications. You can use the same template to launch different applications on different secure URLs by simply changing the parameters that you pass to the template.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the CloudFormation forum.

– Paco

Updated AWS SOC Reports Include Three New Regions and Three Additional Services

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/updated-aws-soc-reports-include-three-new-regions-and-three-additional-services/

 


The updated AWS Service Organization Control (SOC) 1 and SOC 2 Security, Availability, and Confidentiality Reports covering the period of October 1, 2016, through March 31, 2017, are now available. Because we are always looking for ways to improve the customer experience, the current AWS SOC 2 Confidentiality Report has been combined with the AWS SOC 2 Security & Availability Report, making for a seamless read. The updated AWS SOC 3 Security & Availability Report is also publicly available for download.

Additionally, the following three AWS services have been added to the scope of our SOC Reports:

The AWS SOC Reports now also include our three newest regions: US East (Ohio), Canada (Central), and EU (London). SOC Reports now cover 15 regions and supporting edge locations across the globe. See AWS Global Infrastructure for additional geographic information related to AWS SOC.

The updated SOC Reports are available now through AWS Artifact in the AWS Management Console. To request a report:

  1. Sign in to your AWS account.
  2. In the list of services under Security, Identity and Compliance, choose Compliance Reports. On the next page, choose the report you would like to review. Note that you might need to request approval from Amazon for some reports. Requests are reviewed and approved by Amazon within 24 hours.

For further information, see frequently asked questions about the AWS SOC program.  

– Chad

Build a Healthcare Data Warehouse Using Amazon EMR, Amazon Redshift, AWS Lambda, and OMOP

Post Syndicated from Ryan Hood original https://aws.amazon.com/blogs/big-data/build-a-healthcare-data-warehouse-using-amazon-emr-amazon-redshift-aws-lambda-and-omop/

In the healthcare field, data comes in all shapes and sizes. Despite efforts to standardize terminology, some concepts (e.g., blood glucose) are still often depicted in different ways. This post demonstrates how to convert an openly available dataset called MIMIC-III, which consists of de-identified medical data for about 40,000 patients, into an open source data model known as the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). It describes the architecture and steps for analyzing data across various disconnected sources of health datasets so you can start applying Big Data methods to health research.

Note: If you arrived at this page looking for more info on the movie Mimic 3: Sentinel, you might not enjoy this post.

OMOP overview

The OMOP CDM helps standardize healthcare data and makes it easier to analyze outcomes at a large scale. The CDM is gaining a lot of traction in the health research community, which is deeply involved in developing and adopting a common data model. Community resources are available for converting datasets, and there are software tools to help unlock your data after it’s in the OMOP format. The great advantage of converting data sources into a standard data model like OMOP is that it allows for streamlined, comprehensive analytics and helps remove the variability associated with analyzing health records from different sources.

OMOP ETL with Apache Spark

Observational Health Data Sciences and Informatics (OHDSI) provides the OMOP CDM in a variety of formats, including Apache Impala, Oracle, PostgreSQL, and SQL Server. (See the OHDSI Common Data Model repo in GitHub.) In this scenario, the data is moved to AWS to take advantage of the unbounded scale of Amazon EMR and serverless technologies, and the variety of AWS services that can help make sense of the data in a cost-effective way—including Amazon Machine Learning, Amazon QuickSight, and Amazon Redshift.

This example demonstrates an architecture that can be used to run SQL-based extract, transform, load (ETL) jobs to map any data source to the OMOP CDM. It uses MIMIC ETL code provided by Md. Shamsuzzoha Bayzid. The code was modified to run in Amazon Redshift.

Getting access to the MIMIC-III data

Before you can retrieve the MIMIC-III data, you must request access on the PhysioNet website, which is hosted on Amazon S3 as part of the Amazon Web Services (AWS) Public Dataset Program. However, you don’t need access to the MIMIC-III data to follow along with this post.

Solution architecture and loading process

The following diagram shows the architecture that is used to convert the MIMIC-III dataset to the OMOP CDM.

The data conversion process includes the following steps:

  1. The entire infrastructure is spun up using an AWS CloudFormation template. This includes the Amazon EMR cluster, Amazon SNS topics/subscriptions, an AWS Lambda function and trigger, and AWS Identity and Access Management (IAM) roles.
  2. The MIMIC-III data is read in via an Apache Spark program that is running on Amazon EMR. The files are registered as tables in Spark so that they can be queried by Spark SQL.
  3. The transformation queries are located in a separate Amazon S3 location, which is read in by Spark and executed on the newly registered tables to convert the data into OMOP form.
  4. The data is then written to a staging S3 location, where it is ready to be copied into Amazon Redshift.
  5. As each file is loaded in OMOP form into S3, the Spark program sends a message to an SNS topic that signifies that the load completed successfully.
  6. After that message is pushed, it triggers a Lambda function that consumes the message and executes a COPY command from S3 into Amazon Redshift for the appropriate table.

This architecture provides a scalable way to use various healthcare sources and convert them to OMOP format, where the only changes needed are in the SQL transformation files. The transformation logic is stored in an S3 bucket and is completely de-coupled from the Apache Spark program that runs on EMR and converts the data into OMOP form. This makes the transformation code portable and allows the Spark jar to be reused if other data sources are added—for example, electronic health records (EHR), billing systems, and other research datasets.

Note: For larger files, you might experience the five-minute timeout limitation in Lambda. In that scenario you can use AWS Step Functions to split the file and load it one piece at a time.
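
To make Steps 5 and 6 concrete, here is a minimal sketch of what the SNS-triggered loader could look like. The message fields, environment variables, and the pg8000 driver are illustrative assumptions, not the repo's actual code:

import json
import os

import pg8000  # pure-Python PostgreSQL driver, bundled with the deployment package

def lambda_handler(event, context):
    # The Spark job publishes a message naming the OMOP table and its staged S3 prefix
    message = json.loads(event['Records'][0]['Sns']['Message'])
    table = message['table']          # hypothetical field, e.g., "drug_exposure"
    s3_prefix = message['s3_prefix']  # hypothetical field, e.g., "s3://omop-staging/drug_exposure/"

    conn = pg8000.connect(
        host=os.environ['REDSHIFT_HOST'],
        port=5439,
        database=os.environ['REDSHIFT_DB'],
        user=os.environ['REDSHIFT_USER'],
        password=os.environ['REDSHIFT_PASSWORD'],
    )
    try:
        cursor = conn.cursor()
        # COPY the staged files into the matching table; string formatting is acceptable
        # here only because the message comes from our own trusted Spark job
        cursor.execute(
            "COPY {table} FROM '{prefix}' IAM_ROLE '{role}' FORMAT AS CSV".format(
                table=table,
                prefix=s3_prefix,
                role=os.environ['COPY_ROLE_ARN'],
            )
        )
        conn.commit()
    finally:
        conn.close()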

Scaling the solution

The transformation code runs in a Spark container that can scale out based on how you define your EMR cluster. There are no single points of failure. As your data grows, your infrastructure can grow without requiring any changes to the underlying architecture.

If you add more data sources, such as EHRs and other research data, the high-level view of the ETL would look like the following:

In this case, the loads of the different systems are completely independent. If the EHR load is four times the size that you expected and uses all the resources, it has no impact on the Research Data or HR System loads because they are in separate containers.

You can scale your EMR cluster based on the size of the data that you anticipate. For example, you can have a 50-node cluster in your container for loading EHR data and a 2-node cluster for loading the HR System. This design helps you scale the resources based on what you consume, as opposed to expensive infrastructure sitting idle.

The only code that is unique to each execution is any diffs between the CloudFormation templates (e.g., cluster size and SQL file locations) and the transformation SQL that resides in S3 buckets. The Spark jar that is executed as an EMR step is reused across all three executions.
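
For example, rerunning the shared jar against a different set of transformation SQL is just another EMR step; the cluster ID, bucket, and class names below are hypothetical:

aws emr add-steps --cluster-id j-2AXXXXXXGAPLF --steps \
Type=Spark,Name=MimicToOmopEtl,ActionOnFailure=CONTINUE,\
Args=[--class,com.example.OmopEtl,s3://my-etl-bucket/omop-etl.jar,s3://my-etl-bucket/transform-sql/]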

Upgrading versions

In this architecture, upgrading the versions of Amazon EMR, Apache Hadoop, or Spark requires a one-time change to one line of code in the CloudFormation template:

"EMRC2SparkBatch": {
  "Type": "AWS::EMR::Cluster",
  "Properties": {
    "Applications": [
      { "Name": "Hadoop" },
      { "Name": "Spark" }
    ],
    "Instances": {
      "MasterInstanceGroup": {
        "InstanceCount": 1,
        "InstanceType": "m3.xlarge",
        "Market": "ON_DEMAND",
        "Name": "Master"
      },
      "CoreInstanceGroup": {
        "InstanceCount": 1,
        "InstanceType": "m3.xlarge",
        "Market": "ON_DEMAND",
        "Name": "Core"
      },
      "TerminationProtected": false
    },
    "Name": "EMRC2SparkBatch",
    "JobFlowRole": { "Ref": "EMREC2InstanceProfile" },
    "ServiceRole": { "Ref": "EMRRole" },
    "ReleaseLabel": "emr-5.0.0",
    "VisibleToAllUsers": true
  }
}

Note that this example uses a slightly older EMR release so that it runs Spark 2.0.0 rather than Spark 2.1.0, because Spark 2.1.0 does not support null values in CSV files.

You can also select the version in the Release list in the General Configuration section of the EMR console:

The data sources all have different CloudFormation templates, so you can upgrade one data source at a time or upgrade them all together. As long as the reusable Spark jar is compatible with the new version, none of the transformation code has to change.

Executing queries on the data

After all the data is loaded, it’s easy to tear down the CloudFormation stack so you don’t pay for resources that aren’t being used:

CloudFormationManager cf = new CloudFormationManager(); 
cf.terminateStack(stack);    

This includes the EMR cluster, Lambda function, SNS topics and subscriptions, and temporary IAM roles that were created to push the data to Amazon Redshift. The S3 buckets that contain the raw MIMIC-III data and the data in OMOP form remain because they existed outside the CloudFormation stack.

You can now connect to the Amazon Redshift cluster and start executing queries on the ten OMOP tables that were created, as shown in the following example:

select *
from drug_exposure
limit 100;

OMOP analytics tools

For information about open source analytics tools that are built on top of the OMOP model, visit the OHDSI Software page.

The following are examples of data visualizations provided by Achilles, an open source visualization tool for OMOP.

Conclusion

This post demonstrated how to convert MIMIC-III data into OMOP form using data tools that are built for scale and flexibility. It compared the architecture against a traditional data warehouse and showed how this design scales by mixing a scale-out technology with EMR and a serverless technology with Lambda. It also showed how you can lower your costs by using CloudFormation to create your data pipeline infrastructure. And by tearing down the stack after the data is loaded, you don’t pay for idle servers.

You can find all the code in the AWS Labs GitHub repo with detailed, step-by-step instructions on how to load the data from MIMIC-III to OMOP using this design.

If you have any questions or suggestions, please add them below.


About the Author

Ryan Hood is a Data Engineer for AWS. He works on big data projects leveraging the newest AWS offerings. In his spare time, he enjoys watching the Cubs win the World Series and attempting to Sous-vide anything he can find in his refrigerator.

Related

Create a Healthcare Data Hub with AWS and Mirth Connect

Amazon CloudWatch launches Alarms on Dashboards

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/amazon-cloudwatch-launches-alarms-on-dashboards/

Amazon CloudWatch is a service that gives customers the ability to monitor their applications, systems, and solutions running on Amazon Web Services by providing and collecting metrics, logs, and events about AWS resources in real time. CloudWatch automatically provides key resource measurements such as latency, error rates, and CPU usage, while also enabling monitoring of custom metrics via customer-supplied logs and system data.

Last November, Amazon CloudWatch added new Dashboard Widgets to provide additional data visualization options for all available metrics. In order to provide customers with even more insight into their solutions and resources running on AWS, CloudWatch has launched Alarms on Dashboards. With this alarms enhancement, customers can view alarms and metrics in the same dashboard widget enabling them to perform data-driven troubleshooting and analysis.

CloudWatch dashboards are designed with a goal of providing better visibility when monitoring AWS resources across regions in a consolidated view. Since CloudWatch dashboards are highly customizable, users can create their own custom dashboards to graphically represent data for varying metrics such as utilization, performance, estimated billing, and now alarm conditions. An alarm tracks a single metric over time based on the value of the metric in relation to a specified threshold. When the alarm state changes, an action such as an Auto Scaling policy is executed or a notification is sent to Amazon SNS, among other options.

With the ability to add alarms to dashboards, CloudWatch users have another mechanism to proactively monitor and receive alerts about their AWS resources and applications across multiple regions. In addition, the metric data associated with an alarm that has been added to a dashboard can be charted and reviewed. Alarms have three possible states:

  • OK: The value of the alarm metric does not meet the threshold
  • INSUFFICIENT DATA: Initial triggering of alarm metric or alarm metric data does not have enough data to determine whether it’s in the OK state or the ALARM state
  • ALARM: The value of the alarm metric meets the threshold

When added to a dashboard, alarms are displayed in red when in the Alarm state, gray when in the Insufficient data state and shown with no color fill when the alarm is in the OK state. Alarms added to a dashboard are supported with the following widgets: Line, Number, and Stacked Graph widgets.

  • Number widget: provides a quick and efficient view of the latest value of any desired metric. Using the widget with alarms, the view of the state of the alarm is shown with different background colors for the latest metric data.
  • Line widget: allows the visualization of the actual value of any collection of chosen metrics. Provides a view on the dashboard of the state of the alarm, which displays the alarm threshold and condition as a horizontal line. The threshold line can act as a good indicator to view the degree of the alarm.
  • Stacked graph widget: allows customers to visualize the net total effect of any collection of chosen metrics. The stacked graph widget loads one metric over another in order to illustrate the distribution and contribution of a metric and has the option to display the contribution of metrics in percentages. With alarms, it also provides a view of the state of the alarm, which displays the alarm threshold and condition as a horizontal line.

Currently, adding multiple metrics onto the same widget for an alarm is in the works and this feature is evolving based on customer feedback.
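
If you manage dashboards as code rather than through the console, an alarm can also be placed on a dashboard with the PutDashboard API. The following is a sketch with a hypothetical alarm ARN; a metric widget references the alarm through its annotations:

aws cloudwatch put-dashboard --dashboard-name CloudWatchBlog --dashboard-body \
'{"widgets":[{"type":"metric","x":0,"y":0,"width":12,"height":6,
  "properties":{"title":"ES Alarm","region":"us-east-2",
    "annotations":{"alarms":["arn:aws:cloudwatch:us-east-2:111122223333:alarm:ES-Alarm"]}}}]}'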

Adding Alarms on Dashboards

Let’s take a quick look at utilizing alarms on a CloudWatch dashboard. In the AWS Console, I go to the CloudWatch service. Once in the CloudWatch console, I select Dashboards, click the Create dashboard button, and create the CloudWatchBlog dashboard.

 

Upon creation of my CloudWatchBlog dashboard, a dialog box will open to allow me to add widgets to the dashboard. I will forego adding widgets for now since I want to focus on adding alarms on my dashboard. Therefore, I will hit the Cancel button here and go to the Alarms section of the CloudWatch console.

Once in the Alarms section of the CloudWatch console, you will see all of your alarms and the state of each of the alarms for the current region displayed.

As mentioned earlier, there are three alarm states, and the console displays the current state of each alarm. If desired, you can adjust the console filter to display only alarms in a particular state.

As an example, I am only interested in viewing the alarms with an alarm state of ALARM. Therefore, I will adjust the filter to show only the alarms in the current region with an alarm state as ALARM.

Now only the two alarms that have a current alarm state of ALARM are displayed. One of these alarms is for monitoring the provisioned write capacity units of an Amazon DynamoDB table, and the other is to monitor the CPU utilization of my active Amazon Elasticsearch instance.
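
The same filtered view is available from the AWS CLI, which is handy when deciding which alarms to bring onto a dashboard:

# List only the alarms currently in the ALARM state in this region
aws cloudwatch describe-alarms --state-value ALARM \
  --query 'MetricAlarms[].{Name:AlarmName,State:StateValue}'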

Let’s examine the scenario in which I leverage my CloudWatchBlog dashboard as my troubleshooting mechanism for identifying and diagnosing issues with my Elasticsearch solution and its instances. I will first add the Amazon Elasticsearch CPU utilization alarm, ES Alarm, to my CloudWatchBlog dashboard. To add the alarm, I simply select the checkbox by the desired alarm, which in this case is ES Alarm. Then with the alarm selected, I click the Add to Dashboard button.

The Add to dashboard dialog box will open, allowing me to select my CloudWatchBlog dashboard. Additionally, I can select the widget type I would like to use for the display of my alarm. For the ES Alarm, I will choose the Line widget and complete the process of adding this alarm to my dashboard by clicking the Add to dashboard button.

Upon successfully adding ES Alarm to the CloudWatchBlog dashboard, you will see a confirmation notice displayed in the CloudWatch console.

If I then go to the Dashboard section of the console and select my CloudWatchBlog dashboard, I will see the line widget for my alarm, ES Alarm, on the dashboard. To ensure that my ES Alarm widget is a permanent part of the dashboard, I will click the Save dashboard button to preserve the addition of this widget on the dashboard.

As we discussed, one of the benefits of utilizing a CloudWatch dashboard is the ability to add several alarms from various regions onto a dashboard. Since my scenario is leveraging my dashboard as a troubleshooting mechanism for my Elasticsearch solution, I would like to have several alarms and metrics related to my solution displayed on the CloudWatchBlog dashboard. Given this, I will create another alarm for my Elasticsearch instance and add it to my dashboard.

I will first return to the Alarms section of the console and click the Create Alarm button.

The Create Alarm dialog box is displayed showing all of the current metrics available in this region. From the summary, I can quickly see that there are 21 metrics being tracked for Elasticsearch. I will click on the ES Metrics link to view the individual metrics that can be used to create my alarm.

I can review the individual metrics shown for my Elasticsearch instance, and choose which metric I want to base my new alarm on. In this case, I choose the WriteLatency metric by selecting the checkbox for this metric and then click the Next button.

 

The next screen is where I fill in all the details about my alarm: name, description, alarm threshold, time period, and alarm action. I will name my new alarm, ES Latency Alarm, and complete the rest of the aforementioned data fields. To complete the creation of my new alarm, I click the Create Alarm button.

I will see a confirmation message box at the top of the Alarms console upon successful completion of adding the alarm, and the status of the newly created alarm will be displayed in the alarms list.

Now I will add my ES Latency Alarm to my CloudWatchBlog dashboard. Again, I click on the checkbox by the alarm and then click the Add to Dashboard button.

This time when the Add to Dashboard dialog comes up, I will choose the Stacked area widget to display the ES Latency Alarm on my CloudWatchBlog dashboard. Clicking the Add to Dashboard button will complete the addition of my ES Latency Alarm widget to the dashboard.

Back in the console, I again see a confirmation noting the successful addition of the widget. I go to the Dashboards section, click the CloudWatchBlog dashboard, and can now view the two widgets in my dashboard. To include this widget in the dashboard permanently, I click the Save dashboard button.

The final thing to note about the new CloudWatch feature, Alarms on Dashboards, is that alarms and metrics from other regions can be added to the dashboard for a complete view for troubleshooting. Let’s add a metric to the dashboard with the alarms widget.

Within the console, I will move from my current region, US East (Ohio), to the US East (N. Virginia) region.

Now I will go to the Metric section of the CloudWatch console. This section displays the metrics from services used in the US East (N. Virginia) region.

My Elasticsearch solution triggers Lambda functions to capture all of the EmployeeInfo DynamoDB database CRUD (Create, Read, Update, Delete) changes via DynamoDB streams and write those changes into my Elasticsearch domain, taratestdomain. Therefore, I will add metrics to my CloudWatchBlog dashboard to track table metrics from DynamoDB.

Specifically, I am going to add the EmployeeInfo database ProvisionedWriteCapacityUnits metric to my CloudWatchBlog dashboard.

Back again in the Add to Dashboard dialog, I will select my CloudWatchBlog dashboard and choose to display this metric using the Number widget.

Now the ProvisionedWriteCapacityUnits metric from US East (N. Virginia) is displayed in the CloudWatchBlog dashboard with the Number widget, alongside the alarms from US East (Ohio). To make this update permanent in the dashboard, I will (you guessed it!) click the Save dashboard button.

Summary

Getting started with alarms on dashboards is easy. You can use alarms on dashboards across regions to proactively monitor your resources, build troubleshooting playbooks, and view desired metrics. You can also choose the metric first in the Metric UI and then change the type of widget according to the visualization that fits the metric.

Alarms on Dashboards are supported on Line, Stacked Area, and Number widgets. In addition, you can use Text widgets next to alarms on a dashboard to add steps or observations on how to handle changes in the alarm state. To learn more about Amazon CloudWatch widgets and about the additional dashboard widgets, visit the Amazon CloudWatch documentation and the CloudWatch Getting Started guide.

 

Tara

AWS Achieves FedRAMP Authorization for New Services in the AWS GovCloud (US) Region

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-authorization-for-a-wide-array-of-services/

Today, we’re pleased to announce an array of AWS services that are available in the AWS GovCloud (US) Region and have achieved Federal Risk and Authorization Management Program (FedRAMP) High authorizations. The FedRAMP Joint Authorization Board (JAB) has issued Provisional Authority to Operate (P-ATO) approvals, which are effective immediately. If you are a federal or commercial customer, you can use these services to process and store your critical workloads in the AWS GovCloud (US) Region’s authorization boundary with data up to the high impact level.

The services newly available in the AWS GovCloud (US) Region include database, storage, data warehouse, security, and configuration automation solutions that will help you increase your ability to manage data in the cloud. For example, with AWS CloudFormation, you can deploy AWS resources by automating configuration processes. AWS Key Management Service (KMS) enables you to create and control the encryption keys used to secure your data. Amazon Redshift enables you to analyze all your data cost effectively by using existing business intelligence tools to automate common administrative tasks for managing, monitoring, and scaling your data warehouse.

Our federal and commercial customers can now leverage our FedRAMP P-ATO to access the following services:

  • CloudFormation – CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use sample templates in CloudFormation, or create your own templates to describe the AWS resources and any associated dependencies or run-time parameters required to run your application.
  • Amazon DynamoDB – Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.
  • Amazon EMR – Amazon EMR provides a managed Hadoop framework that makes it efficient and cost effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in EMR, and interact with data in other AWS data stores such as Amazon S3 and DynamoDB.
  • Amazon Glacier – Amazon Glacier is a secure, durable, and low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions.
  • KMS – KMS is a managed service that makes it easier for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys. KMS is integrated with other AWS services to help you protect the data you store with these services. For example, KMS is integrated with CloudTrail to provide you with logs of all key usage and help you meet your regulatory and compliance needs.
  • Redshift – Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost effective to analyze all your data by using your existing business intelligence tools.
  • Amazon Simple Notification Service (SNS) – Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or “fan out” messages to large numbers of recipients. SNS makes it simple and cost effective to send push notifications to mobile device users and email recipients or even send messages to other distributed services.
  • Amazon Simple Queue Service (SQS) – Amazon SQS is a fully managed message queuing service for reliably communicating among distributed software components and microservices—at any scale. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.
  • Amazon Simple Workflow Service (SWF) – Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. SWF is a fully managed state tracker and task coordinator in the cloud.

AWS works closely with the FedRAMP Program Management Office (PMO), National Institute of Standards and Technology (NIST), and other federal regulatory and compliance bodies to ensure that we provide you with the cutting-edge technology you need in a secure and compliant fashion. We are working with our authorizing officials to continue to expand the scope of our authorized services, and we are fully committed to ensuring that AWS GovCloud (US) continues to offer government customers the most comprehensive mix of functionality and security.

– Chad

EC2 Run Command is Now a CloudWatch Events Target

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-run-command-is-now-a-cloudwatch-events-target/

Ok, time for another peanut butter and chocolate post! Let’s combine EC2 Run Command (New EC2 Run Command – Remote Instance Management at Scale) and CloudWatch Events (New CloudWatch Events – Track and Respond to Changes to Your AWS Resources) and see what we get.

EC2 Run Command is part of EC2 Systems Manager. It allows you to operate on collections of EC2 instances and on-premises servers reliably and at scale, in a controlled and selective fashion. You can run scripts, install software, collect metrics and log files, manage patches, and much more, on both Windows and Linux.

CloudWatch Events gives you the ability to track changes to AWS resources in near real-time. You get a stream of system events that you can easily route to one or more targets including AWS Lambda functions, Amazon Kinesis streams, Amazon SNS topics, and built-in EC2 and EBS targets.

Better Together
Today we are bringing these two services together. You can now create CloudWatch Events rules that use EC2 Run Command to perform actions on EC2 instances or on-premises servers. This opens the door to all sorts of interesting ideas; here are a few that I came up with:

Final Log Collection – Collect application or system logs from instances that are being shut down (either manually or as a result of a scale-in operation initiated by Auto Scaling).

Error Log Condition – Collect logs after an application crash or a security incident.

Instance Setup – After an instance has started, download & install applications, set parameters and configurations, and launch processes.

Configuration Updates – When a config file is changed in S3, install it on applicable instances (perhaps determined by tags). For example, you could install an updated Apache web server config file on a set of properly tagged instances, and then restart the server so that it picks up the changes. Or, update an instance-level firewall each time the AWS IP Address Ranges are updated.

EBS Snapshot Testing and Tracking – After a fresh snapshot has been created, mount it on a test instance, check the filesystem for errors, and then index the files in the snapshot.

Instance Coordination – Every time an instance is launched or terminated, inform the others so that they can update internal tracking information or rebalance their workloads.

I’m sure that you have some more interesting ideas; please feel free to share them in the comments.

Time for Action!
Let’s set this up. Suppose I want to run a specific PowerShell script every time Auto Scaling adds another instance to an Auto Scaling Group.

I start by opening the CloudWatch Events Console and clicking on Create rule:

I configure my Event Source to be my Auto Scaling Group (AS-Main-1), and indicate that I want to take action when EC2 instances are launched successfully:

Then I set up the target. I choose SSM Run Command, pick the AWS-RunShellScript document, and indicate that I want the command to be run on the instances that are tagged as coming from my Auto Scaling group:

Then I click on Configure details, give my rule a name and a description, and click on Create rule:

With everything set up, the command service httpd start will be run on each instance launched as a result of a scale-out operation.
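
The same rule and target can also be created from the CLI. This is a sketch; the rule name, IAM role, and account ID are hypothetical placeholders:

aws events put-rule --name "RunOnScaleOut" --event-pattern \
'{"source":["aws.autoscaling"],"detail-type":["EC2 Instance Launch Successful"],
  "detail":{"AutoScalingGroupName":["AS-Main-1"]}}'

aws events put-targets --rule "RunOnScaleOut" --targets \
'[{"Id":"1",
   "Arn":"arn:aws:ssm:us-east-1::document/AWS-RunShellScript",
   "RoleArn":"arn:aws:iam::111122223333:role/CloudWatchEventsRunCommandRole",
   "Input":"{\"commands\":[\"service httpd start\"]}",
   "RunCommandParameters":{"RunCommandTargets":[{"Key":"tag:aws:autoscaling:groupName","Values":["AS-Main-1"]}]}}]'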

Available Now
This new feature is available now and you can start using it today.

Jeff;

 

AWS Hot Startups – February 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-february-2017-2/

As we finish up the month of February, Tina Barr is back with some awesome startups.

-Ana


This month we are bringing you five innovative hot startups:

  • GumGum – Creating and popularizing the field of in-image advertising.
  • Jiobit – Smart tags to help parents keep track of kids.
  • Parsec – Offers flexibility in hardware and location for PC gamers.
  • Peloton – Revolutionizing indoor cycling and fitness classes at home.
  • Tendril – Reducing energy consumption for homeowners.

If you missed any of our January startups, make sure to check them out here.

GumGum (Santa Monica, CA)
GumGum is best known for inventing and popularizing the field of in-image advertising. Founded in 2008 by Ophir Tanz, the company is on a mission to unlock the value held within the vast content produced daily via social media, editorials, and broadcasts in a variety of industries. GumGum powers campaigns across more than 2,000 premium publishers, which are seen by over 400 million users.

In-image advertising was pioneered by GumGum and has given companies a platform to deliver highly visible ads to a place where the consumer’s attention is already focused. Using image recognition technology, GumGum delivers targeted placements as contextual overlays on related pictures, as banners that fit on all screen sizes, or as In-Feed placements that blend seamlessly into the surrounding content. Using Visual Intelligence, GumGum can scour social media and broadcast TV for all images and videos related to a brand, allowing companies to gain a stronger understanding of their audience and how they are relating to that brand on social media.

GumGum relies on AWS for its Image Processing and Ad Serving operations. Using AWS infrastructure, GumGum currently processes 13 million requests per minute across the globe and generates 30 TB of new data every day. The company uses a suite of services including but not limited to Amazon EC2, Amazon S3, Amazon Kinesis, Amazon EMR, AWS Data Pipeline, and Amazon SNS. AWS edge locations allow GumGum to serve its customers in the US, Europe, Australia, and Japan and the company has plans to expand its infrastructure to Australia and APAC regions in the future.

For a look inside GumGum’s startup culture, check out their first Hackathon!

Jiobit (Chicago, IL)
Jiobit was inspired by a real event that took place in a crowded Chicago park. A couple of summers ago, John Renaldi experienced every parent’s worst nightmare – he lost track of his then 6-year-old son in a public park for almost 30 minutes. John knew he wasn’t the only parent with this problem. After months of research, he determined that over 50% of parents have had a similar experience and an even greater percentage are actively looking for a way to prevent it.

Jiobit is the world’s smallest and longest lasting smart tag that helps parents keep track of their kids in every location – indoors and outdoors. The small device is kid-proof: lightweight, durable, and waterproof. It acts as a virtual “safety harness” as it uses a combination of Bluetooth, Wi-Fi, Multiple Cellular Networks, GPS, and sensors to provide accurate locations in real-time. Jiobit can automatically learn routes and locations, and will send parents an alert if their child does not arrive at their destination on time. The talented team of experienced engineers, designers, marketers, and parents has over 150 patents and has shipped dozens of hardware and software products worldwide.

The Jiobit team is utilizing a number of AWS services in the development of their product. Security is critical to the overall product experience, and they are over-engineering security on both the hardware and software side with the help of AWS. Jiobit is also working towards being the first child monitoring device that will have implemented an Alexa Skill via the Amazon Echo device (see here for a demo!). The devices use AWS IoT to send and receive data from the Jio Cloud over the MQTT protocol. Once data is received, they use AWS Lambda to parse the received data and take appropriate actions, including storing relevant data using Amazon DynamoDB, and sending location data to Amazon Machine Learning processing jobs.

Visit the Jiobit blog for more information.

Parsec (New York, NY)
Parsec operates under the notion that everyone should have access to the best computing in the world because access to technology creates endless opportunities. Founded in 2016 by Benjy Boxer and Chris Dickson, Parsec aims to eliminate the burden of hardware upgrades that users frequently experience by building the technology to make a computer in the cloud available anywhere, at any time. Today, they are using their technology to enable greater flexibility in the hardware and location that PC gamers choose to play their favorite games on. Check out this interview with Benjy and our Startups team for a look at how Parsec works.

Parsec built their first product to improve the gaming experience; gamers no longer have to purchase consoles or expensive PCs to access the entertainment they love. Their low latency video streaming and networking technologies allow gamers to remotely access their gaming rig and play on any Windows, Mac, Android, or Raspberry Pi device. With the global reach of AWS, Parsec is able to deliver cloud gaming to the median user in the US and Europe with less than 30 milliseconds of network latency.

Parsec users currently have two options available to start gaming with cloud resources. They can either set up their own machines with the Parsec AMI in their region or rely on Parsec to manage everything for a seamless experience. In either case, Parsec uses the g2.2xlarge EC2 instance type. Parsec is using Amazon Elastic Block Store to store games, Amazon DynamoDB for scalability, and Amazon EC2 for its web servers and various APIs. They also deal with a high volume of logs and take advantage of the Amazon Elasticsearch Service to analyze the data.

Be sure to check out Parsec’s blog to keep up with the latest news.

Peloton (New York, NY)
The idea for Peloton was born in 2012 when John Foley, Founder and CEO, and his wife Jill started realizing the challenge of balancing work, raising young children, and keeping up with personal fitness. This is a common challenge people face – they want to work out, but there are a lot of obstacles that stand in their way. Peloton offers a solution that enables people to join indoor cycling and fitness classes anywhere, anytime.

Peloton has created a cutting-edge indoor bike that streams up to 14 hours of live classes daily and has over 4,000 on-demand classes. Users can access live classes from world-class instructors from the convenience of their home or gym. The bike tracks progress with in-depth ride metrics and allows people to compete in real-time with other users who have taken a specific ride. The live classes even feature top DJs that play current playlists to keep users motivated.

With an aggressive marketing campaign, which has included high-visibility TV advertising, Peloton made the decision to run its entire platform in the cloud. Most recently, they ran an ad during an NFL playoff game and their rate of requests per minute to their site increased from ~2k/min to ~32.2k/min within 60 seconds. As they continue to grow and diversify, they are utilizing services such as Amazon S3 for thousands of hours of archived on-demand video content, Amazon Redshift for data warehousing, and Application Load Balancer for intelligent request routing.

Learn more about Peloton’s engineering team here.

Tendril (Denver, CO)
Tendril was founded in 2004 with the goal of helping homeowners better manage and reduce their energy consumption. Today, electric and gas utilities use Tendril’s data analytics platform on more than 140 million homes to deliver a personalized energy experience for consumers around the world. Using the latest technology in decision science and analytics, Tendril can gain access to real-time, ever-evolving data about energy consumers and their homes so they can improve customer acquisition, increase engagement, and orchestrate home energy experiences. In turn, Tendril helps its customers unlock the true value of energy interactions.

AWS helps Tendril run its services globally, while scaling capacity up and down as needed, and in real-time. This has been especially important in support of Tendril’s newest solution, Orchestrated Energy, a continuous demand management platform that calculates a home’s thermal mass, predicts consumer behavior, and integrates with smart thermostats and other connected home devices. This solution allows millions of consumers to create a personalized energy plan for their home based on their individual needs.

Tendril builds and maintains most of its infrastructure services with open source tools running on Amazon EC2 instances, while also making use of AWS services such as Elastic Load Balancing, Amazon API Gateway, Amazon CloudFront, Amazon Route 53, Amazon Simple Queue Service, and Amazon RDS for PostgreSQL.

Visit the Tendril Blog for more information!

— Tina Barr

How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules

Post Syndicated from Myles Hosford original https://aws.amazon.com/blogs/security/how-to-audit-your-aws-resources-for-security-compliance-by-using-custom-aws-config-rules/

AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes.

AWS provides a number of predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I will show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”

Solution overview

In this post, I walk through the process required to create a custom Config rule by following these steps:

  1. Create a Lambda function containing the logic to determine if a resource is compliant or noncompliant.
  2. Create a custom Config rule that uses the Lambda function created in Step 1 as the source.
  3. Create a Lambda function that polls Config to detect noncompliant resources on a daily basis and send notifications via Amazon SNS.

Prerequisite

You must set up Config before you start creating custom rules. Follow the steps on Set Up AWS Config Using the Console or Set Up AWS Config Using the AWS CLI to enable Config and send the configuration changes to Amazon S3 for storage.
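
If you prefer to set up Config from the CLI, the setup boils down to three calls; the IAM role and S3 bucket below are hypothetical and must already exist:

aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::111122223333:role/config-role \
  --recording-group allSupported=true
aws configservice put-delivery-channel \
  --delivery-channel name=default,s3BucketName=my-config-bucket
aws configservice start-configuration-recorder --configuration-recorder-name default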

Custom rule – Blueprint

The first step is to create a Lambda function that contains the logic to determine if the Amazon VPC has VPC Flow Logs enabled (in other words, it is compliant or noncompliant with requirement 4.3 of the CIS AWS Foundation Benchmark). First, let’s take a look at the components that make up a custom rule, which I will call the blueprint.

#
# Custom AWS Config Rule - Blueprint Code
#

import boto3, json

def evaluate_compliance(config_item, r_id):
    # Placeholder logic: replace with your own check. Valid values for
    # put_evaluations include COMPLIANT, NON_COMPLIANT, and NOT_APPLICABLE.
    return 'NON_COMPLIANT'

def lambda_handler(event, context):
    
    # Create AWS SDK clients & initialize custom rule parameters
    config = boto3.client('config')
    invoking_event = json.loads(event['invokingEvent'])
    compliance_value = 'NOT_APPLICABLE'
    resource_id = invoking_event['configurationItem']['resourceId']
                    
    compliance_value = evaluate_compliance(invoking_event['configurationItem'], resource_id)
              
    # Deliver the evaluation result to AWS Config
    response = config.put_evaluations(
       Evaluations=[
            {
                'ComplianceResourceType': invoking_event['configurationItem']['resourceType'],
                'ComplianceResourceId': resource_id,
                'ComplianceType': compliance_value,
                'Annotation': 'Insert text here to detail why control passed/failed',
                'OrderingTimestamp': invoking_event['notificationCreationTime']
            },
       ],
       ResultToken=event['resultToken'])

The key components in the preceding blueprint are:

  1. The lambda_handler function is the entry point that AWS Lambda executes each time the rule is triggered. I create the necessary SDK clients and set up some initial variables for the rule to use.
  2. The evaluate_compliance function contains my custom rule logic. This is the function that I will tailor later in the post to create the custom rule to detect whether the Amazon VPC has VPC Flow Logs enabled. The result (compliant or noncompliant) is assigned to the compliance_value.
  3. The Config API’s put_evaluations function is called to deliver an evaluation result to Config. You can then view the result of the evaluation in the Config console (more about that later in this post). The annotation parameter is used to provide supplementary information about how the custom evaluation determined compliance.

Custom rule – Flow logs enabled

The example we use for the custom rule is requirement 4.3 from the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.” I update the blueprint rule that I just showed to do the following:

  1. Create an AWS Identity and Access Management (IAM) role that allows the Lambda function to perform the custom rule logic and publish the result to Config. The Lambda function will assume this role.
  2. Specify the resource type of the configuration item as EC2 VPC. This ensures that the rule is triggered when there is a change to any Amazon VPC resources.
  3. Add custom rule logic to the Lambda function to determine whether VPC Flow Logs are enabled for a given VPC.

Create an IAM role for Lambda

To create the IAM role, I go to the IAM console, choose Roles in the navigation pane, click Create New Role, and follow the wizard. In Step 2, I select the service role AWS Lambda, as shown in the following screenshot.

In Step 4 of the wizard, I attach the following managed policies:

  • AmazonEC2ReadOnlyAccess
  • AWSLambdaExecute
  • AWSConfigRulesExecutionRole

Finally, I name the new IAM role vpcflowlogs-role. This allows the Lambda function to call APIs such as the EC2 DescribeFlowLogs API to obtain the result for my compliance check. I assign this role to the Lambda function in the next step.
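
For reference, selecting the AWS Lambda service role in the wizard attaches a standard Lambda trust policy like the following, which permits the Lambda service to assume the role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}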

Create the Lambda function for the custom rule

To create the Lambda function that contains logic for my custom rule, I go to the Lambda console, click Create a Lambda Function, and then choose Blank Function.

When I configure the function, I name it vpcflowlogs-function and provide a brief description of the rule: “A custom rule to detect whether VPC Flow Logs is enabled.”

For the Lambda function code, I use the blueprint code shown earlier in this post and add the additional logic to determine whether VPC Flow Logs is enabled (specifically within the evaluate_compliance and is_flow_logs_enabled functions).

#
# Custom AWS Config Rule - VPC Flow Logs
#

import boto3
import json

def evaluate_compliance(config_item, r_id):
    # Only Amazon VPC resources are applicable to this rule
    if config_item['resourceType'] != 'AWS::EC2::VPC':
        return 'NOT_APPLICABLE'
    elif is_flow_logs_enabled(r_id):
        return 'COMPLIANT'
    else:
        return 'NON_COMPLIANT'

def is_flow_logs_enabled(vpc_id):
    # Check whether at least one flow log is attached to the VPC
    ec2 = boto3.client('ec2')
    response = ec2.describe_flow_logs(
        Filter=[
            {
                'Name': 'resource-id',
                'Values': [
                    vpc_id,
                ]
            },
        ],
    )
    return len(response['FlowLogs']) != 0

def lambda_handler(event, context):

    # Create AWS SDK clients & initialize custom rule parameters
    config = boto3.client('config')
    invoking_event = json.loads(event['invokingEvent'])
    resource_id = invoking_event['configurationItem']['resourceId']

    compliance_value = evaluate_compliance(invoking_event['configurationItem'], resource_id)

    # Deliver the evaluation result back to AWS Config
    response = config.put_evaluations(
        Evaluations=[
            {
                'ComplianceResourceType': invoking_event['configurationItem']['resourceType'],
                'ComplianceResourceId': resource_id,
                'ComplianceType': compliance_value,
                'Annotation': 'CIS 4.3 VPC Flow Logs',
                'OrderingTimestamp': invoking_event['notificationCreationTime']
            },
        ],
        ResultToken=event['resultToken'])

Below the Lambda function code, I configure the handler and role. As shown in the following screenshot, I select the IAM role I just created (vpcflowlogs-role) and create my Lambda function.

When the Lambda function is created, I make a note of the Lambda Amazon Resource Name (ARN), which is the unique identifier used in the next step to specify this function as my Config rule source. (Be sure to replace the placeholder value with your own value.)

Example ARN: arn:aws:lambda:ap-southeast-1:<your-account-id>:function:vpcflowlogs-function

Create a custom Config rule

The last step is to create a custom Config rule and use the Lambda function as the source. To do this, I go to the Config console, choose Add Rule, and choose Add Custom Rule. I give the rule a name, vpcflowlogs-configrule, and description, and I paste the Lambda ARN from the previous section.

Because this rule is specific to VPC resources, I set the Trigger type to Configuration changes and Resources to EC2: VPC, as shown in the following screenshot.

I click Save to create the rule, and it is now live. Any VPC resources that are created or modified will now be checked against my VPC Flow Logs rule for compliance with the CIS Benchmark requirement.
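
If you would rather create the rule programmatically, the following is a minimal boto3 sketch of the equivalent API calls. It is not part of the original walkthrough, and the function name and ARN are placeholders; note that Config must also be granted permission to invoke the Lambda function.

# Minimal sketch: create the custom Config rule via the API
# (placeholder names and ARN; the console steps above achieve the same result)
import boto3

lambda_arn = 'arn:aws:lambda:ap-southeast-1:111111111111:function:vpcflowlogs-function'

# Allow AWS Config to invoke the custom rule's Lambda function
boto3.client('lambda').add_permission(
    FunctionName='vpcflowlogs-function',
    StatementId='AllowConfigInvoke',
    Action='lambda:InvokeFunction',
    Principal='config.amazonaws.com')

# Register the rule, scoped to VPC resources and triggered by configuration changes
boto3.client('config').put_config_rule(ConfigRule={
    'ConfigRuleName': 'vpcflowlogs-configrule',
    'Description': 'CIS 4.3 - Ensure VPC flow logging is enabled in all VPCs',
    'Scope': {'ComplianceResourceTypes': ['AWS::EC2::VPC']},
    'Source': {
        'Owner': 'CUSTOM_LAMBDA',
        'SourceIdentifier': lambda_arn,
        'SourceDetails': [{
            'EventSource': 'aws.config',
            'MessageType': 'ConfigurationItemChangeNotification'
        }]
    }
})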

From the Config console, I can now see if any resources do not comply with the control requirement, as shown in the following screenshot.

When I choose the rule, I see additional detail about the noncompliant resources (see the following screenshot). This allows me to view the Config timeline to determine when the resources became noncompliant, identify the resources’ owners (if the resources follow tagging best practices), and initiate a remediation effort.

Screenshot of the results of resources evaluated

Daily compliance assessment

Having created the custom rule, I now create a Lambda function that polls Config periodically to detect noncompliant resources. My Lambda function will run daily to check for noncompliance with my custom rule. When noncompliant resources are detected, I send a notification by publishing a message to SNS.

Before creating the Lambda function, I create an SNS topic and subscribe the email addresses that I want to receive noncompliance notifications. My SNS topic is called config-rules-compliance.
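
As a sketch, the topic and subscription can also be created with a few boto3 calls; the email address below is a placeholder.

# Minimal sketch: create the SNS topic and an email subscription
import boto3

sns = boto3.client('sns')
topic = sns.create_topic(Name='config-rules-compliance')

# Each subscriber must confirm the subscription via the confirmation email
sns.subscribe(
    TopicArn=topic['TopicArn'],
    Protocol='email',
    Endpoint='security-team@example.com')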

Note: The Lambda function will require permission to query Config and publish a message to SNS. For the purpose of this blog post, I created the following policy that allows publishing of messages to my SNS topic (config-rules-compliance), and I attached it to the vpcflowlogs-role role that my custom Config rule uses.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1485832788000",
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:ap-southeast-1:111111111111:config-rules-compliance"
            ]
        }
    ]
}

To create the Lambda function that performs the periodic compliance assessment, I go to the Lambda console, choose Create a Lambda Function and then choose Blank Function.

When configuring the Lambda trigger, I select CloudWatch Events – Schedule, which allows the function to be executed periodically on a schedule I define. I then select rate(1 day) to get daily compliance assessments. For more information about scheduling events with Amazon CloudWatch, see Schedule Expressions for Rules.
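
The same schedule can be wired up outside the Lambda console. Here is a minimal boto3 sketch using the CloudWatch Events API; the rule name, function name, and ARN are placeholders.

# Minimal sketch: schedule the compliance-check function with CloudWatch Events
import boto3

events = boto3.client('events')
function_arn = 'arn:aws:lambda:ap-southeast-1:111111111111:function:config-compliance-check'

# Create a rule that fires once a day
rule = events.put_rule(Name='daily-compliance-check', ScheduleExpression='rate(1 day)')

# Point the rule at the Lambda function
events.put_targets(
    Rule='daily-compliance-check',
    Targets=[{'Id': '1', 'Arn': function_arn}])

# Allow CloudWatch Events to invoke the function
boto3.client('lambda').add_permission(
    FunctionName='config-compliance-check',
    StatementId='AllowEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'])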


My Lambda function (see the following code) uses the vpcflowlogs-role IAM role that allows publishing of messages to my SNS topic.

'''
Lambda function to poll Config for noncompliant resources
'''

from __future__ import print_function

import boto3

# AWS Config settings
CONFIG_CLIENT = boto3.client('config')
MY_RULE = "vpcflowlogs-configrule"

# AWS SNS settings
SNS_CLIENT = boto3.client('sns')
SNS_TOPIC = 'arn:aws:sns:ap-southeast-1:111111111111:config-rules-compliance'
SNS_SUBJECT = 'Compliance Update'

def lambda_handler(event, context):
    # Get compliance details for the rule, limited to noncompliant resources
    non_compliant_detail = CONFIG_CLIENT.get_compliance_details_by_config_rule(
        ConfigRuleName=MY_RULE, ComplianceTypes=['NON_COMPLIANT'])

    if len(non_compliant_detail['EvaluationResults']) > 0:
        print('The following resource(s) are not compliant with AWS Config rule: ' + MY_RULE)
        non_compliant_resources = ''
        for result in non_compliant_detail['EvaluationResults']:
            print(result['EvaluationResultIdentifier']['EvaluationResultQualifier']['ResourceId'])
            non_compliant_resources += \
                result['EvaluationResultIdentifier']['EvaluationResultQualifier']['ResourceId'] + '\n'

        sns_message = 'AWS Config Compliance Update\n\n Rule: ' \
                      + MY_RULE + '\n\n' \
                      + 'The following resource(s) are not compliant:\n' \
                      + non_compliant_resources

        # Publish the notification to the SNS topic
        SNS_CLIENT.publish(TopicArn=SNS_TOPIC, Message=sns_message, Subject=SNS_SUBJECT)

    else:
        print('No noncompliant resources detected.')

My Lambda function performs two key activities. First, it queries the Config API to determine which resources are noncompliant with my custom rule. This is done by executing the get_compliance_details_by_config_rule API call.

non_compliant_detail = CONFIG_CLIENT.get_compliance_details_by_config_rule(ConfigRuleName=MY_RULE, ComplianceTypes=['NON_COMPLIANT'])

Second, if any resources failed my custom rule evaluation, my Lambda function publishes a message to my SNS topic to notify me that they are noncompliant. This is done using the SNS publish API call.

SNS_CLIENT.publish(TopicArn=SNS_TOPIC, Message=sns_message, Subject=SNS_SUBJECT)

This function provides an example of how to integrate Config and the results of the Config rules compliance evaluation into your operations and processes. You can extend this solution by integrating the results directly with your internal governance, risk, and compliance tools and IT service management frameworks.

Summary

In this post, I showed how to create a custom AWS Config rule to detect noncompliance with security and compliance policies. I also showed how you can create a Lambda function that checks daily for noncompliance by polling Config via API calls. Using custom rules allows you to codify your internal or external security and compliance requirements and gives you a more effective view of your organization’s risks at a given time.

For more information about Config rules and examples of rules created for the CIS Benchmark, go to the aws-security-benchmark GitHub repository. If you have questions about the solution in this post, start a new thread on the AWS Config forum.

– Myles

Note: The content and opinions in this blog post are those of the author. This blog post is intended for informational purposes and not for the purpose of providing legal advice.

How to Remediate Amazon Inspector Security Findings Automatically

Post Syndicated from Eric Fitzgerald original https://aws.amazon.com/blogs/security/how-to-remediate-amazon-inspector-security-findings-automatically/

The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings, but also to automate addressing those findings.

Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings, and then implements the appropriate remediation based on the type of issue.

Use case

In this post’s example, I find a common vulnerability and exposure (CVE) for a missing update and use Lambda to call Amazon EC2 Systems Manager to update the instance. However, this is just one use case; the underlying logic can be applied to many others, such as software and application patching, kernel version updates, security permissions and roles changes, and configuration changes.

The solution

Overview

The solution in this blog post does the following:

  1. Launches a new Amazon EC2 instance, deploying the EC2 Simple Systems Manager (SSM) agent and its role to the instance.
  2. Deploys the Amazon Inspector agent to the instance by using EC2 Systems Manager.
  3. Creates an SNS topic to which Amazon Inspector will publish messages.
  4. Configures an Amazon Inspector assessment template to post finding notifications to the SNS topic.
  5. Creates the Lambda function that is triggered by notifications to the SNS topic and uses EC2 Systems Manager from within the Lambda function to perform automatic remediation on the instance.

1.  Launch an EC2 instance with EC2 Systems Manager enabled

In my previous Security Blog post, I discussed the use of EC2 user data to deploy the EC2 SSM agent to a Linux instance. To enable the type of autoremediation we are talking about, it is necessary to have the EC2 SSM agent installed on your instances. If you already have the agent installed on your instances, you can move on to Step 2. Otherwise, let’s take a minute to review how the process works:

  1. Create an AWS Identity and Access Management (IAM) role so that the on-instance EC2 SSM agent can communicate with EC2 Systems Manager. You can learn more about the process of creating a role while launching an instance.
  2. While launching the instance with the EC2 launch wizard, associate the role you just created with the new instance and provide the appropriate script as user data for your operating system and architecture to install the EC2 Systems Manager agent as the instance is launched. See the process and scripts.

Screenshot of configuring instance details

Note: You must change the scripts slightly when copying them from the instructions to the EC2 user data. The word region in the curl command must be replaced with the AWS region code (for example, us-east-1).

2.  Deploy the Amazon Inspector agent to the instance by using EC2 Systems Manager

You can deploy the Amazon Inspector agent with EC2 Systems Manager, with EC2 instance user data, or by connecting to an EC2 instance via SSH and running the installation steps manually. Because you just installed the EC2 SSM agent, you will use that method.

To deploy the Amazon Inspector agent:

  1. Navigate to the EC2 console in the desired region. In the navigation pane, choose Command History under Commands near the bottom of the list.
  2. Choose Run a command.
  3. Choose the AWS-RunShellScript command document, and then choose Select instances to specify the instance that you created previously. Note: If you do not see the instance in that list, you probably did not successfully install the EC2 SSM agent. This means you have to start over with the previous section. Common mistakes include failing to associate a role with the instance, failing to associate the correct policy with the role, or providing an incorrect user data script.
  4. Paste the following script in the Commands field.
    #!/bin/bash
    cd /tmp
    curl -O https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install
    chmod a+x /tmp/install
    bash /tmp/install

  5. Choose Run to execute the script on the instance.

Screenshot of deploying the Amazon Inspector agent
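
If you prefer to script this step, the same command can be sent with boto3. This sketch reuses the script above; the instance ID is a placeholder.

# Minimal sketch: run the Amazon Inspector agent install script via EC2 Systems Manager
import boto3

ssm = boto3.client('ssm')

ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],  # placeholder instance ID
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': [
        'cd /tmp',
        'curl -O https://d1wk0tztpsntt1.cloudfront.net/linux/latest/install',
        'chmod a+x /tmp/install',
        'bash /tmp/install'
    ]})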

3.  Create an SNS topic to which Amazon Inspector will publish messages

Amazon SNS uses topics: communication channels for sending messages and subscribing to notifications. You will create an SNS topic for this solution to which Amazon Inspector publishes messages whenever there is a security finding. Later, you will create a Lambda function that subscribes to this topic and receives a notification whenever a new security finding is generated.

To create an SNS topic:

  1. In the AWS Management Console, navigate to the SNS console.
  2. Choose Create topic. Type a topic name and a display name, and choose Create topic.
  3. From the list of displayed topics, choose the topic that you just created by selecting the check box to the left of the topic name, and then choose Edit topic policy from the Other topic actions drop-down list.
  4. In the Advanced view tab, find the Principal section of the policy document. In that section, replace the line that says "AWS": "*" with the following text: "Service": "inspector.amazonaws.com" (see the following screenshot).
  5. Choose Update policy to save the changes.
  6. Choose Edit topic policy again. On the Basic view tab, set the topic policy to allow Only me (topic owner) to subscribe to the topic, and choose Update policy to save the changes.

Screenshot of editing the topic policy
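
After the edit in step 4, the statement granting Amazon Inspector permission to publish should resemble the following sketch (the Resource value is your topic's ARN; your default policy may list additional actions):

{
    "Effect": "Allow",
    "Principal": { "Service": "inspector.amazonaws.com" },
    "Action": "SNS:Publish",
    "Resource": "arn:aws:sns:us-east-1:111111111111:inspector-findings"
}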

4.  Configure an Amazon Inspector assessment template to post finding notifications to the SNS topic

An assessment template is a configuration that tells Amazon Inspector how to construct a specific security evaluation. For example, an assessment template can tell Amazon Inspector which EC2 instances to target and which rules packages to evaluate. You can configure a template to tell Amazon Inspector to generate SNS notifications when findings are identified. In order to enable automatic remediation, you either create a new template or modify an existing template to set up SNS notifications to the SNS topic that you just created.
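
For those scripting the setup, the notification hookup can also be made through the Amazon Inspector API. The following boto3 sketch subscribes a topic to finding notifications for an existing assessment template; both ARNs are placeholders.

# Minimal sketch: subscribe an SNS topic to Amazon Inspector finding events
import boto3

inspector = boto3.client('inspector')

inspector.subscribe_to_event(
    resourceArn='arn:aws:inspector:us-east-1:111111111111:target/0-AAAAAAAA/template/0-BBBBBBBB',
    event='FINDING_REPORTED',
    topicArn='arn:aws:sns:us-east-1:111111111111:inspector-findings')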

To enable automatic remediation:

  1. Sign in to the AWS Management Console and navigate to the Amazon Inspector console.
  2. Choose Assessment templates in the navigation pane.
  3. Choose one of your existing Amazon Inspector assessment templates. If you need to create a new Amazon Inspector template, type a name for the template and choose the Common Vulnerabilities and Exposures rules package. Then go back to the list and select the template.
  4. Expand the template so that you can see all the settings by choosing the right-pointing arrowhead in the row for that template.
  5. Choose the pencil icon next to the SNS topics.
  6. Add the SNS topic that you created in the previous section by choosing it from the Select a new topic to notify of events drop-down list (see the following screenshot).
  7. Choose Save to save your changes.

Screenshot of configuring the SNS topic

5.  Create the Lambda autoremediation function

Now, create a Lambda function that listens for Amazon Inspector to notify it of new security findings, and then tells the EC2 SSM agent to run the appropriate system update command (apt-get update or yum update) if the finding is for an unpatched CVE vulnerability.

Step 1: Create an IAM role for the Lambda function to send EC2 Systems Manager commands

A Lambda function needs specific permissions to interact with your AWS resources. You provide these permissions in the form of an IAM role, and the role has a policy attached that permits the Lambda function to receive SNS notifications and to send commands to the Amazon Inspector agent via EC2 Systems Manager.

To create the IAM role:

  1. Sign in to the AWS Management Console, and navigate to the IAM console.
  2. Choose Roles in the navigation pane, and then choose Create new role.
  3. Type a name for the role. You should (but are not required to) use a descriptive name such as Amazon Inspector-agent-autodeploy-lambda. Regardless of the name you choose, remember the name because you will need it in the next section.
  4. Choose the AWS Lambda role type.
  5. Attach the policies AWSLambdaBasicExecutionRole and AmazonSSMFullAccess.
  6. Choose Create the role.

Step 2: Create the Lambda function that will update the host by sending the appropriate commands through EC2 Systems Manager

Now, create the Lambda function. You can download the source code for this function from the .zip file link in the following procedure. Some things to note about the function are:

  • The function listens for notifications on the configured SNS topic, but only acts on notifications that are from Amazon Inspector that report a finding and are reporting a CVE vulnerability.
  • The function checks to ensure that the EC2 SSM agent is installed, running, and healthy on the EC2 instance for which the finding was reported.
  • The function checks the operating system of the EC2 instance and determines if it is a supported Linux distribution (Ubuntu or Amazon Linux).
  • The function sends the distribution-appropriate package update command (apt-get update or yum update) to the EC2 instance via EC2 Systems Manager.
  • The function does not reboot the instance. You either have to add that functionality yourself or reboot the instance manually.
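
Before walking through the console steps, here is a heavily simplified Python sketch of that logic. It is not the downloadable function: the SNS message field names below are assumptions, so inspect a real notification and adapt accordingly.

# Simplified sketch of the autoremediation flow (the message field names
# are assumptions, not the actual lambda-auto-remediate.py code)
import json
import boto3

ssm = boto3.client('ssm')

def lambda_handler(event, context):
    # SNS delivers the Amazon Inspector notification as a JSON string
    message = json.loads(event['Records'][0]['Sns']['Message'])

    # Act only on CVE findings ('finding' is a hypothetical field name)
    if 'CVE' not in message.get('finding', ''):
        return

    instance_id = message.get('instanceId')  # hypothetical field name

    # Send the update command via EC2 Systems Manager:
    # try apt-get first, and fall back to yum
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['apt-get update || yum update -y']})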

To create the Lambda function:

  1. Sign in to the AWS Management Console in the region that you intend to use, and navigate to the Lambda console.
  2. Choose Create a Lambda function.
  3. On the Select a blueprint page, choose the Hello World Python blueprint and choose Next.
  4. On the Configure triggers page, choose SNS as the trigger, and choose the SNS topic that you created in the last section. Choose the Enable trigger check box and choose Next.
  5. Type a name and description for the function. Choose Python 2.7 runtime.
  6. Download and save this .zip file.
  7. Unzip the .zip file, and copy the entire contents of lambda-auto-remediate.py to your clipboard.
  8. Choose Edit code inline under Code entry type in the Lambda function, and replace all the existing text with the text that you just copied from lambda-auto-remediate.py.
  9. Select Choose an existing role from the Role drop-down list, and then in the Existing role box, choose the IAM role that you created in Step 1 of this section.
  10. Choose Next and then Create function to complete the creation of the function.

You now have a working system that monitors Amazon Inspector for CVE findings and will patch affected Ubuntu or Amazon Linux instances automatically. You can view or modify the source code for the function in the Lambda console. Additionally, Lambda and EC2 Systems Manager will generate logs whenever the function causes an agent to patch itself.

Note: If you have multiple CVE findings for an instance, the remediation commands might be executed more than once, but the package managers for Linux handle this efficiently. You still have to reboot the instances yourself, but EC2 Systems Manager includes a feature to do that as well.

Summary

Using Amazon Inspector with Lambda allows you to automate certain security tasks. Because Lambda supports Python and JavaScript, development of such automation is similar to automating any other kind of administrative task via scripting. Even better, you can take actions on EC2 instances in response to Amazon Inspector findings by using Lambda to invoke EC2 Systems Manager. This enables you to take instance-specific actions based on issues that Amazon Inspector finds. Combining these capabilities allows you to build event-driven security automation to help better secure your AWS environment in near real time.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing the solution in this post, start a new thread on the Amazon Inspector forum.

– Eric

Introducing the AWS IoT Button Enterprise Program

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/introducing-the-aws-iot-button-enterprise-program/

The AWS IoT Button first made its appearance on the IoT scene in October of 2015 at AWS re:Invent with the introduction of the AWS IoT service.  That year all re:Invent attendees received the AWS IoT Button, providing them the opportunity to get hands-on with AWS IoT.  Since that time, the AWS IoT Button has been made broadly available to anyone interested in the clickable IoT device.

During this past AWS re:Invent 2016 conference, the AWS IoT Button was launched into the enterprise with the AWS IoT Button Enterprise Program.  This program is intended to help businesses offer new services or improve existing products at the click of a physical button.  With the AWS IoT Button Enterprise Program, enterprises can use a programmable AWS IoT Button to increase customer engagement, expand applications, and offer new innovations to customers by simplifying the user experience.  By harnessing the power of IoT, businesses can respond to customer demand for their products and services in real time while providing a direct line of communication for customers, all via a simple device.

AWS IoT Button Enterprise Program

Let’s discuss how the new AWS IoT Button Enterprise Program works.  Businesses start by placing a bulk order of the AWS IoT buttons and provide a custom label for the branding of the buttons.  Amazon manufactures the buttons and pre-provisions the IoT button devices by giving each a certificate and unique private key to grant access to AWS IoT and ensure secure communication with the AWS cloud.  This allows for easier configuration and helps customers more easily get started with the programming of the IoT button device.

Businesses then design and build their IoT solution with the button devices and create companion applications for them.  The AWS IoT Button Enterprise Program provides businesses some complimentary assistance directly from AWS to ensure a successful deployment.  The deployed devices then only need to be configured with Wi-Fi at user locations in order to function.

For enterprises, there are several use cases that would benefit from the implementation of an IoT button solution. Here are some ideas:

  • Reordering services or custom products such as pizza or medical supplies
  • Requesting a callback from a customer service agent
  • Retail operations such as a call for assistance button in stores or restaurants
  • Inventory systems for capturing products amounts for inventory
  • Healthcare applications such as alert or notification systems for the disabled or elderly
  • Interface with Smart Home systems to turn devices on and off such as turning off outside lights or opening the garage door
  • Guest check-in/check-out systems

AWS IoT Button

At the heart of the AWS IoT Button Enterprise Program is the AWS IoT Button.  The AWS IoT Button is a 2.4 GHz Wi-Fi device with WPA2-PSK security that has three click types: Single click, Double click, and Long press.  Note that a Long press click type is sent if the button is pressed for 1.5 seconds or longer.  The IoT button has a small LED light with color patterns for the status of the IoT button.  A blinking white light signifies that the IoT button is connecting to Wi-Fi and getting an IP address, while a blinking blue light signifies that the button is in wireless access point (AP) mode.  The data payload that is sent from the device when pressed contains the device serial number, the battery voltage, and the click type.
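
A typical data payload from a button press looks like the following; the values shown are illustrative.

{
    "serialNumber": "G030JF0512345678",
    "batteryVoltage": "2322mV",
    "clickType": "SINGLE"
}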

Currently, there are three ways to get started building your AWS IoT Button solution.  The first option is to use the AWS IoT Button companion mobile app.  The mobile app will create the required AWS IoT resources, including the TLS 1.2 certificates, and create an AWS IoT rule tied to AWS Lambda.  Additionally, it will enable the IoT button device via AWS IoT to be an event source that invokes a new AWS Lambda function of your choosing from the Lambda blueprints.  You can download the aforementioned mobile apps for Android and iOS below.

The second option is to use the AWS Lambda Blueprint Wizard as an easy way to start using your AWS IoT Button. Like the mobile app, the wizard will create the required AWS IoT resources for you and add an event source to your button that invokes a new Lambda function.

The third option is to follow the step by step tutorial in the AWS IoT getting started guide and leverage the AWS IoT console to create these resources manually.

Once you have configured your IoT button successfully and created a simple one-click solution using one of the aforementioned getting started guides, you should be ready to start building your own custom IoT button solution.  With the click of a button, your business will be able to build new services for customers, offer new features for existing services, and automate business processes to operate more efficiently.

The basic technical flow of an AWS IoT button solution is as follows:

  • A button is clicked and a secure connection is established with AWS IoT using TLS 1.2
  • The button data payload is sent to the AWS IoT Device Gateway
  • The rules engine evaluates received messages (JSON) published into AWS IoT and performs actions or triggers AWS services based on defined business rules
  • The triggered AWS service executes the defined action
  • The device state can be read, stored, and set with Device Shadows
  • Mobile and web apps can receive and update data based upon the action

Now that you have general knowledge about the AWS IoT button, we should jump into a technical walk-through of building an AWS IoT button solution.

AWS IoT Button Solution Walkthrough

We will dive more deeply into building an AWS IoT Button solution with a quick example of a use case for providing one-click customer service options for a business.

To get started, I will go to the AWS IoT console, register my IoT button as a Thing, and create a Thing type.  In the console, I select the Registry and then Things options in the console menu.

The name of my IoT thing in this example will be TEW-AWSIoTButton.  If you desire to categorize the IoT things, you can create a Thing type and assign a type to similar IoT ‘things’.  I will categorize my IoT thing, TEW-AWSIoTButton, as an IoTButton thing type with a One-click-device attribute key and choose the Create thing button.

After my AWS IoT button device, TEW-AWSIoTButton, is registered in the Thing Registry, the next step is to acquire the required X.509 certificate and keys.  I will have AWS IoT generate the certificate for this device, but the service also allows you to use your own certificates.  Authenticating the connection with the X.509 certificates helps to protect the data exchange between your device and the AWS IoT service.

When the certificates are generated with AWS IoT, it is important that you download and save all of the files created since the public and private keys will not be available after you leave the download page. Additionally, do not forget to download the root CA for AWS IoT from the link provided on the page with your generated certificates.

The newly created certificate will be inactive; therefore, it is vital that you activate the certificate prior to use.  AWS IoT uses the TLS protocol’s client authentication mode to authenticate the certificates.  The certificates enable asymmetric keys to be used with devices, and the AWS IoT service will request and validate the certificate’s status and the AWS account against a registry of certificates.  The service will challenge for proof of ownership of the private key corresponding to the public key contained in the certificate.  The final step in securing the AWS IoT connection to my IoT button is to create and/or attach an AWS IoT policy for authorization.

I will choose the Attach a policy button and then select the Create a Policy option in order to build a specific policy for my IoT button.  In the Name field of the new IoT policy, I will enter IoTButtonPolicy for the name of this new policy.  Since the AWS IoT Button device only supports button presses, our AWS IoT button policy only needs to grant publish permissions.  For this reason, this policy will only allow the iot:Publish action.

For the Resource ARN of the IoT policy, AWS IoT buttons typically follow the format pattern arn:aws:iot:TheRegion:AWSAccountNumber:topic/iotbutton/ButtonSerialNumber.  This means that the Resource ARN for this IoT button policy will be:

I should note that if you are creating a policy for an IoT device that is not an AWS IoT button, the Resource ARN format pattern would be as follows: arn:aws:iot:TheRegion:AWSAccountNumber:topic/YourTopic/OptionalSubTopic

The created policy for our AWS IoT Button, IoTButtonPolicy, looks as follows:

The next step is to return to the AWS IoT console dashboard, select Security and then Certificates menu options.  I will choose the certificate created in the aforementioned steps.

Then, on the selected certificate page, I will select the Actions drop-down at the top right corner.  In order to add the IoTButtonPolicy policy to the certificate, I will click the Attach policy option.

I will repeat all of the steps mentioned above but this time I will add the TEW-AWSIoTButton thing by selecting the Attach thing option.

All that is left is to add the certificate and private key to the physical AWS IoT button and connect the AWS IoT Button to Wi-Fi in order to have the IoT button be fully functional.

Important to note: For businesses that have signed up to participate in the AWS IoT Button Enterprise Program, all of the aforementioned steps (button logo branding, AWS IoT thing creation, certificate and key creation, and adding certificates to buttons) are completed for them by Amazon and AWS.  Again, this is to help make it easier for enterprises to hit the ground running in the development of their desired AWS IoT button solution.

Now, going back to the AWS IoT button used in our example, I will connect the button to Wi-Fi by holding the button until the LED blinks blue; this means that the device has gone into wireless access point (AP) mode.

In order to provide internet connectivity to the IoT button and start configuring the device’s connection to AWS IoT, I will connect to the button’s Wi-Fi network which should start with Button ConfigureMe. The first time the connection is made to the button’s Wi-Fi, a password will be required.  Enter the last 8 characters of the device serial number shown on the back of the physical AWS IoT button device.

The AWS IoT button is now configured and ready to build a system around it. The next step will be to add the actions that will be performed when the IoT button is pressed.  This brings us to the AWS IoT Rules engine, which is used to analyze the IoT device data payload coming from the MQTT topic stream and/or Device Shadow, and trigger AWS Services actions.  We will set up rules to perform varying actions when different types of button presses are detected.

Our AWS IoT button solution will be a simple one: we will set up two AWS IoT rules to respond to the IoT button being clicked and the button’s payload being sent to AWS IoT.  In our scenario, a single button click will represent that a request is being sent by a customer to a fictional organization’s customer service agent.  A double click, however, will represent that a text will be sent containing a customer’s fictional current account status.

The first AWS IoT rule created will receive the IoT button payload and connect directly to Amazon SNS to send an email only if the rule condition is fulfilled that the button click type is SINGLE. The second AWS IoT rule created will invoke a Lambda function that will send a text message containing customer account status only if the rule condition is fulfilled that the button click type is DOUBLE.

In order to create the AWS IoT rule that will send an email to subscribers of an SNS topic when a customer requests a customer service agent’s help, we will go to Amazon SNS and create an SNS topic.

I will create an email subscription to the topic with the fictional subscribed customer service email, which in this case is just my email address.  Of course, this could be several customer service representatives that are subscribed to the topic in order to receive emails for customer assistance requests.

Now returning to the AWS IoT console, I will select the Rules menu and choose the Create rule option. I first provide a name and description for the rule.

Next, I select the SQL version to be used for the AWS IoT rules engine.  I select the latest SQL version; if I did not choose a version, the default version of 2015-10-08 would be used.  The rules engine uses a SQL-like syntax with statements containing SELECT, FROM, and WHERE clauses.  I want to return a literal string for the message, which is not a part of the IoT button data payload.  I also want to return the button serial number as the accountnum, which is likewise not a part of the payload.  Since the latest version, 2016-03-23, supports literal objects, I will be able to send a custom payload to Amazon SNS.
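
As a sketch, the rule query statement for this scenario might look like the following; the topic filter and literal text are illustrative.

SELECT 'Customer assistance requested' AS message, serialNumber AS accountnum
FROM 'iotbutton/+'
WHERE clickType = 'SINGLE'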

I have created the rule; all that is left is to add a rule action to perform when the rule is triggered.  As I mentioned above, an email should be sent to customer service representatives when this rule is triggered by a single IoT button press.  Therefore, my rule action will be Send a message as an SNS push notification to the SNS topic that I created to send an email to our fictional customer service reps, aka me.  Remember that the use of an IAM role is required to provide access to SNS resources; if you are using the console, you have the option to create a new role or update an existing role to provide the correct permissions.  Also, since I am sending a custom message to SNS, I set the Message format type to RAW.

Our rule has been created; now all that is left is to test that an email is successfully sent when the AWS IoT button is pressed once, and the data payload therefore has a click type of SINGLE.

A single press of our AWS IoT Button publishes the custom message to the SNS topic, and the email shown below is sent to the subscribed customer service agents’ email addresses; in this example, to my email address.

In order to create the AWS IoT rule that sends a text message when the IoT button is pressed twice (the scenario in which customers request their account status), we will start by creating an AWS IoT rule with an AWS Lambda action.  To create this IoT rule, we first need to create a Lambda function and an SNS topic with a text-based subscription.

First, I will go to the Amazon SNS console and create an SNS topic. After the topic is created, I will create an SNS text subscription for our SNS topic and add a number that will receive the text messages. I will then copy the SNS topic ARN for use in my Lambda function. Please note that I am creating the SNS topic in a different region than the previously created SNS topic, in order to use a region that supports sending SMS via SNS. In the Lambda function, I will need to ensure the correct region for the SNS topic is used by including the region as a parameter of the constructor of the SNS object. The created SNS topic, aws-iot-button-topic-text, is shown below.

We now go to the AWS Lambda console and create a Lambda function with an AWS IoT trigger, set the IoT Type to IoT Button, and enter the serial number found on the back of our AWS IoT Button as the Device Serial Number. There is no need to generate the certificate and keys in this step because the AWS IoT button is already configured with certificates and keys for secure communication with AWS IoT.

The next step is to create the Lambda function, IoTNotifyByText, with the following code, which will receive the IoT button data payload and create a message to publish to Amazon SNS.

'use strict';

console.log('Loading function');
var AWS = require("aws-sdk");
var sns = new AWS.SNS({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // Serialize the incoming IoT payload for logging
    var iotPayload = JSON.stringify(event, null, 2);

    // Create a text message from the IoT payload
    var snsMessage = "Attention: Customer Info for Account #: " + event.accountnum +
        " Account Status: In Good Standing Balance is: 1234.56";

    // Log the payload and SNS message string to the console and CloudWatch Logs
    console.log("Received AWS IoT payload:", iotPayload);
    console.log("Message to send: " + snsMessage);

    // Populate the parameters for the SNS publish operation
    // - Message : message text
    // - TopicArn : the ARN of the Amazon SNS topic
    var params = {
        Message: snsMessage,
        TopicArn: "arn:aws:sns:us-east-1:xxxxxxxxxxxx:aws-iot-button-topic-text"
    };

    sns.publish(params, context.done);
};

All that is left for us to do is alter the AWS IoT rule that was automatically created when we created the Lambda function with an AWS IoT trigger. Therefore, we will go to the AWS IoT console and select the Rules menu option. We will find and select the IoT button rule created by Lambda, which usually has a name with a suffix equal to the IoT button device serial number.

Once the rule is selected, we will choose the Edit option beside the Rule query statement section.

We change the Select statement to return the serial number as the accountnum and click the Update button to save changes to the AWS IoT rule.
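
The updated query statement might resemble the following sketch; the topic filter is illustrative.

SELECT serialNumber AS accountnum
FROM 'iotbutton/+'
WHERE clickType = 'DOUBLE'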

Time to test. I click the IoT button twice and wait for the green LED light to appear, confirming a successful connection was made and a message was published to AWS IoT. After a few seconds, a text message is received on my phone with the fictitious customer account information.

This was a simple example of how a business could leverage the AWS IoT Button in order to build business solutions for their customers.  With the new AWS IoT Button Enterprise Program, which helps businesses obtain the quantities of AWS IoT buttons needed and provides AWS IoT service pre-provisioning and deployment support, businesses can now easily get started in building their own customized IoT solutions.

Available Now

The original 1st generation of the AWS IoT button is currently available on Amazon.com, and the 2nd generation AWS IoT button will be generally available in February.  The main difference between the IoT buttons is the amount of battery life and/or clicks available for the button.  Please note that right now if you purchase the original AWS IoT button, you will receive $20 in AWS credits when you register.

Businesses can sign up today for the AWS IoT Button Enterprise Program currently in Limited Preview. This program is designed to enable businesses to expand their existing applications or build new IoT capabilities with the cloud and a click of an IoT button device.  You can read more about the AWS IoT button and learn more about building solutions with a programmable IoT button on the AWS IoT Button product page.  You can also dive deeper into the AWS IoT service by visiting the AWS IoT developer guide, the AWS IoT Device SDK documentation, and/or the AWS Internet of Things Blog.

Tara