Tag Archives: chatbots

Enabling AWS Security Hub integration with AWS Chatbot

Post Syndicated from Ross Warren original https://aws.amazon.com/blogs/security/enabling-aws-security-hub-integration-with-aws-chatbot/

In this post, we show you how to configure AWS Chatbot to send findings from AWS Security Hub to Slack. Security Hub gives you a comprehensive view of your high-priority security alerts and security posture across your Amazon Web Services (AWS) accounts. AWS Chatbot is an interactive agent that makes it easy to monitor and interact with your AWS resources from your Slack channels and Amazon Chime chat rooms. This integration enables your security teams to receive alerts in familiar Slack channels, facilitating collaboration and quick response to events.

This post describes the preset formatting integration with AWS Chatbot, which is ideal if you want to use AWS Chatbot's preset formatting as is. The second path is a customized integration, which is useful when you need the finding data to be transformed or enriched; for that, follow the guide in How to Enable Custom Actions in AWS Security Hub.

With AWS Chatbot you can receive alerts and run commands to return diagnostic information, invoke AWS Lambda functions, and create AWS support cases so that your team can collaborate and respond to events faster.

This post is a follow up to a previous post, How to Enable Custom Actions in AWS Security Hub, where custom actions in Security Hub enabled notifications to be sent to Slack. In this post, we simplify the workflow by using an AWS CloudFormation template to create the information flow in Figure 1 below. This configures the custom action in Security Hub, an Amazon EventBridge rule, and an Amazon Simple Notification Service (Amazon SNS) topic to tie them all together.

Figure 1: Information flow showing a Slack channel and Amazon Chime as options for AWS Chatbot integration


Configure AWS Chatbot and Security Hub

To get started, you'll need the following prerequisites:

  • An AWS account with GuardDuty and Security Hub enabled
  • A Slack account

We will now walk through configuring AWS Chatbot and Security Hub. Keep a virtual scratch pad handy to note your Slack workspace ID and Slack channel ID, which you will refer to as you configure the integration.

Security Hub supports two types of integration with EventBridge, both of which are supported by AWS Chatbot:

  • Standard Amazon CloudWatch events. Security Hub automatically sends all findings to EventBridge. Use this method to automatically send all Security Hub findings, or a filtered subset of findings, to an Amazon SNS topic to which AWS Chatbot subscribes.
  • Security Hub custom actions. Define custom actions in Security Hub and configure CloudWatch events rules to respond to those actions. The event rule uses its Amazon SNS topic target to forward notifications to the SNS topic AWS Chatbot is subscribed to.

We are going to focus on Security Hub custom actions. You might not initially want to have all Security Hub findings appear in Slack, so we’re going to create a Security Hub custom action to send only relevant findings to Slack. This workflow gives your security team the ability to manually provide notifications to Slack channels through AWS Chatbot. At the end of this post, I share an EventBridge Rule for those users who want all Security Hub findings in a Slack channel. I also provide some filter examples which will help you select the findings you receive in Slack.
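Because a Security Hub custom action is just an API object, it can also be created programmatically. Below is a hedged boto3 sketch; the action name, description, and ID here are illustrative choices of ours, not the exact values the CloudFormation template later in this post uses:

```python
def build_action_target_params():
    """Illustrative parameters for a Security Hub custom action."""
    return {
        "Name": "Send to Slack",
        "Description": "Manually send the selected finding to Slack",
        "Id": "SendToSlack",  # becomes part of the action target ARN
    }

def create_custom_action():
    """Create the custom action; requires AWS credentials and Security Hub enabled."""
    import boto3  # deferred so the parameters can be inspected without AWS access
    client = boto3.client("securityhub")
    return client.create_action_target(**build_action_target_params())
```

Selecting this action on a finding in the console emits an EventBridge event whose pattern a rule can match.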

Configure a Slack client

To allow AWS Chatbot to send notifications to your Slack channel, you must configure AWS Chatbot to work with Slack. Owners of Slack workspaces can approve the use of AWS Chatbot, and any workspace user can configure the workspace to receive notifications.

  1. Log in to your AWS console and navigate to the AWS Chatbot console.
  2. Select Slack from the dropdown menu and then Configure client.

    Figure 2: Configure a chat client


  3. If you are not yet logged in to Slack, add your workspace name and log in to Slack.

    Figure 3: Slack workspace login


  4. On the next screen, where AWS Chatbot requests permission to access your Slack workspace, choose Allow.
  5. Copy and save the Workspace ID. You will need it for the CloudFormation Template.

    Figure 4: Console with workspace ID


  6. You can now leave the AWS Chatbot console and log in to your Slack workspace where we can get the channel ID.
    1. If you do not have a Slack channel in your organization for findings, or you want to test this integration before deploying it to production, please follow the steps from Slack for creating a channel.
    2. If you are using the Slack desktop client, right-click on the Slack channel name and select Copy Link.

      Figure 5: Copy link from the desktop client


    3. If you are using the Slack web UI, right-click on your Slack channel name, select Additional options, and then select Copy link.

      Figure 6: Copy link from the web UI


  7. The last part of the resulting URL is your channel ID. For example, in the URL https://xxxxxxx.slack.com/archives/CSQRRLTHRT, CSQRRLTHRT is the channel ID. Write down your channel ID to use later.
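If you are scripting this setup, extracting the channel ID is just taking the last path segment of that link. A trivial helper (the function name is ours, not from the post):

```python
def channel_id_from_link(link: str) -> str:
    """Return the last path segment of a Slack channel link."""
    return link.rstrip("/").rsplit("/", 1)[-1]

# Using the example URL from the step above:
print(channel_id_from_link("https://xxxxxxx.slack.com/archives/CSQRRLTHRT"))  # CSQRRLTHRT
```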

Tie it all together

The CloudFormation template is going to create the following:

  • A Security Hub custom action named SendToSlack
  • An Amazon SNS topic named AWS Chatbot SNS Topic
  • An EventBridge rule to tie everything together
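For readers who cannot open the template, here is a minimal sketch of the two resources CloudFormation can express directly. This is our approximation, not the actual template: logical names are assumptions, the custom action itself is created through the Security Hub CreateActionTarget API (often via a custom resource), and the SNS topic policy that lets EventBridge publish to the topic is omitted for brevity.

```yaml
Resources:
  ChatbotSNSTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: SNSTopicAWSChatBot   # the topic name referenced later in this post

  SendToSlackRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Forward Security Hub custom-action events to the Chatbot topic
      EventPattern:
        source: ["aws.securityhub"]
        detail-type: ["Security Hub Findings - Custom Action"]
      Targets:
        - Id: ChatbotTopic
          Arn: !Ref ChatbotSNSTopic
```

AWS Chatbot is then configured (with your workspace and channel IDs) to subscribe to this topic.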
  1. Open the SecurityHub_to_AWSChatBot.yml CloudFormation template.
  2. Right-click and use Save As to save the template to your workstation.

    Note: The CloudFormation template is going to require your Slack workspace ID and channel ID from the previous step.

  3. Open the CloudFormation console.
  4. Select Create stack.
    Figure 7: Create a CloudFormation stack


    1. Select Upload a template file

      Figure 8: Upload CloudFormation Template File


    2. Select Choose file and navigate to the saved CloudFormation template from step 2.
    3. Select Next.
    4. Enter a stack name, such as “SecurityHubToAWSChatBot.”
    5. Enter your Slack channel ID and Slack workspace ID (be careful not to transpose these IDs).
    6. Continue by selecting Next.
    7. On the Configure stack options screen, you can add tags if required by your organization. The rest of the default options will work; select Next.
    8. Review stack details on the Review screen and scroll to the bottom.
    9. Select the I acknowledge that AWS CloudFormation might create IAM resources check box before selecting Create stack.

      Figure 9: IAM capabilities acknowledgment


  5. After the CloudFormation template has completed successfully, you will see CREATE_COMPLETE in the CloudFormation console.

To test the configuration perform the following steps:

  1. Open the AWS Security Hub console, select a finding and choose the Actions drop down. Select Send_To_Slack — the custom action that you just created.

    Figure 10: Security Hub custom action drop down


  2. Go to your Slack workspace and channel to verify receipt of the notification.

    Figure 11: Example Security Hub notification in Slack


Bonus: Send all critical findings to Slack

You can also use this workflow to send all critical Security Hub findings to Slack.

To do this, configure an additional EventBridge rule to use in conjunction with the custom action that we've already deployed. For example, suppose your security team requires that all critical severity findings go to your team's Slack channel, while retaining the ability to manually send other interesting or relevant findings to Slack.

  1. Go to the EventBridge console.
  2. Underneath Events, select Rules.
  3. Select Create Rule.
  4. Give the Rule a name ex: “All_SecurityHub_Findings_to_Slack.”
  5. In the Define Pattern section, select Event pattern and Custom pattern.

    Figure 12: EventBridge event pattern dialogue


  6. Paste the following code into the Event pattern field and select Save.

    Note: You can edit this filter to fit your needs.

      {
        "detail-type": ["Security Hub Findings - Imported"],
        "source": ["aws.securityhub"],
        "detail": {
          "findings": {
            "ProductFields": {
              "aws/securityhub/SeverityLabel": ["CRITICAL"]
            }
          }
        }
      }
  7. Leave the event bus as “AWS default event bus.”
  8. Under Select Targets, select SNS Topic from the drop down.
  9. Choose the Topic with “SNSTopicAWSChatBot” in the name.
  10. Configure any required tags.
  11. Select Create.
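The console steps above can also be scripted. Below is a hedged boto3 sketch: the rule name matches the example in step 4, while the topic ARN is a placeholder you would replace with your stack's actual SNSTopicAWSChatBot topic ARN.

```python
import json

# Event pattern matching only CRITICAL Security Hub findings.
CRITICAL_PATTERN = {
    "detail-type": ["Security Hub Findings - Imported"],
    "source": ["aws.securityhub"],
    "detail": {
        "findings": {
            "ProductFields": {"aws/securityhub/SeverityLabel": ["CRITICAL"]}
        }
    },
}

def create_rule(topic_arn: str):
    """Create the EventBridge rule and point it at the Chatbot SNS topic."""
    import boto3  # deferred: requires AWS credentials
    events = boto3.client("events")
    events.put_rule(
        Name="All_SecurityHub_Findings_to_Slack",
        EventPattern=json.dumps(CRITICAL_PATTERN),
        State="ENABLED",
    )
    events.put_targets(
        Rule="All_SecurityHub_Findings_to_Slack",
        Targets=[{"Id": "ChatbotSnsTopic", "Arn": topic_arn}],
    )
```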

When Security Hub creates findings, it will send any findings with a severity label of Critical to your Slack channel.

Note: Depending on the volume of critical findings in your Security Hub console, the notifications in Slack might be too noisy to provide actionable results. You should look at automating the response and remediation of critical findings by following the best practice guidance in the Security Hub console.


In this post we showed how to send findings from Security Hub to Slack using AWS Chatbot. This can help your team collaborate and respond faster to operational events. In addition, AWS Chatbot provides an easy way to interact with your AWS resources; the documentation topic Running AWS CLI commands from Slack channels includes a list of the commands you can run.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Ross Warren

Ross Warren is a Solution Architect at AWS based in Northern Virginia. Prior to his work at AWS, Ross’ areas of focus included cyber threat hunting and security operations. Ross has worked at a handful of startups and has enjoyed the transition to AWS because he can continue to build solutions for customers on today’s most innovative platform.


Jose Ruiz

Jose Ruiz is a Sr. Solutions Architect – Security Specialist at AWS. He often enjoys “the road less traveled” and knows each technology has a security story often not spoken of. He takes this perspective when working with customers on highly complex solutions and driving security at the beginning of each build.

Serverless Architectures with AWS Lambda: Overview and Best Practices

Post Syndicated from Andrew Baird original https://aws.amazon.com/blogs/architecture/serverless-architectures-with-aws-lambda-overview-and-best-practices/

For some organizations, the idea of “going serverless” can be daunting. But with an understanding of best practices – and the right tools — many serverless applications can be fully functional with only a few lines of code and little else.

Examples of fully-serverless-application use cases include:

  • Web or mobile backends – Create fully-serverless, mobile applications or websites by creating user-facing content in a native mobile application or static web content in an S3 bucket. Then have your front-end content integrate with Amazon API Gateway as a backend service API. Lambda functions will then execute the business logic you’ve written for each of the API Gateway methods in your backend API.
  • Chatbots and virtual assistants – Build new serverless ways to interact with your customers, like customer support assistants and bots ready to engage customers on your company-run social media pages. The Amazon Alexa Skills Kit (ASK) and Amazon Lex have the ability to apply natural-language understanding to user-voice and freeform-text input so that a Lambda function you write can intelligently respond and engage with them.
  • Internet of Things (IoT) backends – AWS IoT has direct-integration for device messages to be routed to and processed by Lambda functions. That means you can implement serverless backends for highly secure, scalable IoT applications for uses like connected consumer appliances and intelligent manufacturing facilities.
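As a tiny illustration of the web or mobile backend pattern above, a Lambda function behind an API Gateway method (proxy integration) might look like the following. This is a sketch under our own assumptions; the route and response shape are not from the whitepaper:

```python
import json

def handler(event, context):
    """Minimal API Gateway proxy-integration handler.

    API Gateway passes the HTTP request as `event`; the returned dict
    becomes the HTTP response.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Your static front end in S3 would call the API Gateway endpoint, which invokes this business logic on demand.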

Using AWS Lambda as the logic layer of a serverless application can enable faster development speed and greater experimentation – and innovation — than in a traditional, server-based environment.

We recently published the “Serverless Architectures with AWS Lambda: Overview and Best Practices” whitepaper to provide the guidance and best practices you need to write better Lambda functions and build better serverless architectures.

Once you’ve finished reading the whitepaper, below are a couple additional resources I recommend as your next step:

  1. If you would like to better understand some of the architecture pattern possibilities for serverless applications: Thirty Serverless Architectures in 30 Minutes (re:Invent 2017 video)
  2. If you’re ready to get hands-on and build a sample serverless application: AWS Serverless Workshops (GitHub Repository)
  3. If you’ve already built a serverless application and you’d like to ensure your application has been Well Architected: The Serverless Application Lens: AWS Well Architected Framework (Whitepaper)

About the Author


Andrew Baird is a Sr. Solutions Architect for AWS. Prior to becoming a Solutions Architect, Andrew was a developer, including time as an SDE with Amazon.com. He has worked on large-scale distributed systems, public-facing APIs, and operations automation.

New – Machine Learning Inference at the Edge Using AWS Greengrass

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-machine-learning-inference-at-the-edge-using-aws-greengrass/

What happens when you combine the Internet of Things, Machine Learning, and Edge Computing? Before I tell you, let’s review each one and discuss what AWS has to offer.

Internet of Things (IoT) – Devices that connect the physical world and the digital one. The devices, often equipped with one or more types of sensors, can be found in factories, vehicles, mines, fields, homes, and so forth. Important AWS services include AWS IoT Core, AWS IoT Analytics, AWS IoT Device Management, and Amazon FreeRTOS, along with others that you can find on the AWS IoT page.

Machine Learning (ML) – Systems that can be trained using an at-scale dataset and statistical algorithms, and used to make inferences from fresh data. At Amazon we use machine learning to drive the recommendations that you see when you shop, to optimize the paths in our fulfillment centers, fly drones, and much more. We support leading open source machine learning frameworks such as TensorFlow and MXNet, and make ML accessible and easy to use through Amazon SageMaker. We also provide Amazon Rekognition for images and for video, Amazon Lex for chatbots, and a wide array of language services for text analysis, translation, speech recognition, and text to speech.

Edge Computing – The power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. AWS Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when not connected to the Internet.

ML Inference at the Edge
Today I would like to toss all three of these important new technologies into a blender! You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields, and homes that I mentioned.

Here are a few of the many ways that you can put Greengrass ML Inference to use:

Precision Farming – With an ever-growing world population and unpredictable weather that can affect crop yields, the opportunity to use technology to increase yields is immense. Intelligent devices that are literally in the field can process images of soil, plants, pests, and crops, taking local corrective action and sending status reports to the cloud.

Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or the video to the cloud and use Amazon Rekognition to take a closer look.

Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference operations on power consumption, noise levels, and vibration to flag anomalies, predict failures, and detect faulty equipment.

Greengrass ML Inference Overview
There are several different aspects to this new AWS feature. Let’s take a look at each one:

Machine Learning Models – Precompiled TensorFlow and MXNet libraries, optimized for production use on the NVIDIA Jetson TX2 and Intel Atom devices, and for development use on 32-bit Raspberry Pi devices. The optimized libraries can take advantage of GPU and FPGA hardware accelerators at the edge in order to provide fast, local inferences.

Model Building and Training – The ability to use Amazon SageMaker and other cloud-based ML tools to build, train, and test your models before deploying them to your IoT devices. To learn more about SageMaker, read Amazon SageMaker – Accelerated Machine Learning.

Model Deployment – SageMaker models can (if you give them the proper IAM permissions) be referenced directly from your Greengrass groups. You can also make use of models stored in S3 buckets. You can add a new machine learning resource to a group with a couple of clicks:
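The same attachment can be done via the API rather than the console. Below is a heavily hedged boto3 sketch against the Greengrass (V1) API; the resource shape and field names reflect our reading of the CreateResourceDefinition API and should be verified against the current documentation, and all ARNs and paths are placeholders:

```python
def build_ml_resource(sagemaker_job_arn: str):
    """Assumed shape of a machine learning resource definition (verify against docs)."""
    return {
        "Resources": [{
            "Id": "my-model",
            "Name": "MyModel",
            "ResourceDataContainer": {
                "SageMakerMachineLearningModelResourceData": {
                    "SageMakerJobArn": sagemaker_job_arn,
                    # Local path where Greengrass Lambda functions see the model:
                    "DestinationPath": "/ml/model",
                }
            },
        }]
    }

def attach_model(sagemaker_job_arn: str):
    import boto3  # deferred: requires AWS credentials and a Greengrass group
    gg = boto3.client("greengrass")
    return gg.create_resource_definition(
        InitialVersion=build_ml_resource(sagemaker_job_arn)
    )
```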

These new features are available now and you can start using them today! To learn more read Perform Machine Learning Inference.



Glenn’s Take on re:Invent Part 2

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-part-2/

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We’ve got a lot of exciting announcements this week. I’m going to check in to the Architecture blog with my take on what’s interesting about some of the announcements from a cloud architectural perspective. My first post can be found here.

The Media and Entertainment industry has been a rapid adopter of AWS due to the scale, reliability, and low costs of our services. This has enabled customers to create new, online, digital experiences for their viewers ranging from broadcast to streaming to Over-the-Top (OTT) services that can be a combination of live, scheduled, or ad-hoc viewing, while supporting devices ranging from high-def TVs to mobile devices. Creating an end-to-end video service requires many different components often sourced from different vendors with different licensing models, which creates a complex architecture and a complex environment to support operationally.

AWS Media Services
Based on customer feedback, we have developed AWS Media Services to help simplify distribution of video content. AWS Media Services is comprised of five individual services that can either be used together to provide an end-to-end service or individually to work within existing deployments: AWS Elemental MediaConvert, AWS Elemental MediaLive, AWS Elemental MediaPackage, AWS Elemental MediaStore and AWS Elemental MediaTailor. These services can help you with everything from storing content safely and durably to setting up a live-streaming event in minutes without having to be concerned about the underlying infrastructure and scalability of the stream itself.

In my role, I participate in many AWS and industry events and often work with the production and event teams that put these shows together. With all the logistical tasks they have to deal with, the biggest question is often: “Will the live stream work?” Compounding this fear is the reality that, as users, we are also quick to jump on social media and make noise when a live stream drops while we are following along remotely. Worse is when I see event organizers actively selecting not to live stream content because of the risk of failure and exposure — leading them to decide to take the safe option and not stream at all.

With AWS Media Services addressing many of the issues around putting together a high-quality media service, live streaming, and providing access to a library of content through a variety of mechanisms, I can’t wait to see more event teams use live streaming without the concern and worry I’ve seen in the past. I am excited for what this also means for non-media companies, as video becomes an increasingly common way of sharing information and adding a more personalized touch to internally- and externally-facing content.

AWS Media Services will allow you to focus more on the content and not worry about the platform. Awesome!

Amazon Neptune
As a civilization, we have been developing new ways to record and store information and model the relationships between sets of information for more than a thousand years. Government census data, tax records, births, deaths, and marriages were all recorded on medium ranging from knotted cords in the Inca civilization, clay tablets in ancient Babylon, to written texts in Western Europe during the late Middle Ages.

One of the first challenges of computing was figuring out how to store and work with vast amounts of information in a programmatic way, especially as the volume of information was increasing at a faster rate than ever before. We have seen different generations of how to organize this information in some form of database, ranging from flat files to the Information Management System (IMS) used in the 1960s for the Apollo space program, to the rise of the relational database management system (RDBMS) in the 1970s. These innovations drove a lot of subsequent innovations in information management and application development as we were able to move from thousands of records to millions and billions.

Today, as architects and developers, we have a vast variety of database technologies to select from, which have different characteristics that are optimized for different use cases:

  • Relational databases are well understood after decades of use in the majority of companies who required a database to store information. Amazon Relational Database Service (Amazon RDS) supports many popular relational database engines such as MySQL, Microsoft SQL Server, PostgreSQL, MariaDB, and Oracle. We have even brought the traditional RDBMS into the cloud world through Amazon Aurora, which provides MySQL and PostgreSQL support with the performance and reliability of commercial-grade databases at 1/10th the cost.
  • Non-relational databases (NoSQL) provided a simpler method of storing and retrieving information that was often faster and more scalable than traditional RDBMS technology. The concept of non-relational databases has existed since the 1960s but really took off in the early 2000s with the rise of web-based applications that required performance and scalability that relational databases struggled with at the time. AWS published this Dynamo whitepaper in 2007, with DynamoDB launching as a service in 2012. DynamoDB has quickly become one of the critical design elements for many of our customers who are building highly-scalable applications on AWS. We continue to innovate with DynamoDB, and this week launched global tables and on-demand backup at re:Invent 2017. DynamoDB excels in a variety of use cases, such as tracking of session information for popular websites, shopping cart information on e-commerce sites, and keeping track of gamers’ high scores in mobile gaming applications, for example.
  • Graph databases focus on the relationship between data items in the store. With a graph database, we work with nodes, edges, and properties to represent data, relationships, and information. Graph databases are designed to make it easy and fast to traverse and retrieve complex hierarchical data models. Graph databases share some concepts from the NoSQL family of databases such as key-value pairs (properties) and the use of a non-SQL query language such as Gremlin. Graph databases are commonly used for social networking, recommendation engines, fraud detection, and knowledge graphs. We released Amazon Neptune to help simplify the provisioning and management of graph databases as we believe that graph databases are going to enable the next generation of smart applications.
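To make the node/edge/property vocabulary above concrete, here is a toy, in-memory illustration. This is plain Python for explanation only, not Neptune's API or Gremlin; it mirrors the property-graph model (nodes and edges carrying properties, traversal following edges) that such databases expose:

```python
# Nodes and edges with properties; a recommendation-style two-hop traversal.
nodes = {
    "alice": {"type": "person"},
    "bob": {"type": "person"},
    "gizmo": {"type": "product"},
}
edges = [
    ("alice", "knows", "bob"),
    ("bob", "bought", "gizmo"),
]

def out_neighbors(node, label):
    """Nodes reachable from `node` over edges with the given label."""
    return [dst for src, lbl, dst in edges if src == node and lbl == label]

# What have the people Alice knows bought? (the kind of query behind
# recommendation engines mentioned above)
recommended = [p for friend in out_neighbors("alice", "knows")
               for p in out_neighbors(friend, "bought")]
print(recommended)  # ['gizmo']
```

A graph database performs this kind of traversal natively and at scale, which is what makes the use cases listed above fast to query.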

A common use case I hear every week as I talk to customers is how to incorporate chatbots within their organizations. Amazon Lex and Amazon Polly have made it easy for customers to experiment and build chatbots for a wide range of scenarios, but one of the missing pieces of the puzzle was how to model decision trees and knowledge graphs so the chatbot could guide the conversation in an intelligent manner.

Graph databases are ideal for this particular use case, and Amazon Neptune simplifies the deployment of a graph database while providing high performance, scalability, availability, and durability as a managed service. Security of your graph database is critical. To help ensure this, you can encrypt your data at rest using encryption integrated with AWS Key Management Service (AWS KMS) and run Amazon Neptune within your Amazon Virtual Private Cloud (Amazon VPC). Neptune also supports AWS Identity and Access Management (AWS IAM) to help further protect and restrict access.

Our customers now have the choice of many different database technologies to ensure that they can optimize each application and service for their specific needs. Just as DynamoDB has unlocked and enabled many new workloads that weren’t possible in relational databases, I can’t wait to see what new innovations and capabilities are enabled from graph databases as they become easier to use through Amazon Neptune.

Look for more on DynamoDB and Amazon S3 from me on Monday.


Glenn at Tour de Mont Blanc