Tag Archives: automation

Building an automated knowledge repo with Amazon EventBridge and Zendesk

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-an-automated-knowledge-repo-with-amazon-eventbridge-and-zendesk/

Zendesk Guide is a smart knowledge base that helps customers harness the power of institutional knowledge. It enables users to build a customizable help center and customer portal.

This post shows how to implement a bidirectional event orchestration pattern between AWS services and an Amazon EventBridge third-party integration partner. This example uses support ticket events to build a customer self-service knowledge repository. It uses the EventBridge partner integration with Zendesk to accelerate the growth of a customer help center.

The examples in this post are part of a serverless application called FreshTracks. This is built in Vue.js and demonstrates SaaS integrations with Amazon EventBridge. To test this example, ask a question on the Fresh Tracks application.

The backend components for this EventBridge integration with Zendesk have been extracted into a separate example application in this GitHub repo.

How the application works

Routing Zendesk events with Amazon EventBridge.

  1. A user searches the knowledge repository via a widget embedded in the web application.
  2. If there is no answer, the user submits the question via the web widget.
  3. Zendesk receives the question as a support ticket.
  4. Zendesk emits events when the support ticket is resolved.
  5. These events are streamed into a custom SaaS event bus in EventBridge.
  6. Event rules match events and send them downstream to an AWS Step Functions Express Workflow.
  7. The Express Workflow orchestrates Lambda functions to retrieve additional information about the event with the Zendesk API.
  8. A Lambda function uses the Zendesk API to publish a new help article from the support ticket data.
  9. The new article is searchable on the website widget for other users to read.

Before deploying this application, you must generate an API key from within Zendesk.

Creating the Zendesk API resource

The application calls the Zendesk API from AWS, so it needs an API token to authenticate those calls. Follow these steps to generate a Zendesk API token.

To generate an API token

  1. Log in to the Zendesk dashboard.
  2. Click the Admin icon in the sidebar, then select Channels > API.
  3. Click the Settings tab, and make sure that Token Access is enabled.
  4. Click the + button to the right of Active API Tokens.

    Creating a Zendesk API token.

  5. Copy the token, and store it securely. Once you close this window, the full token is not displayed again.
  6. Click Save to return to the API page, which shows a truncated version of the token.

    Zendesk API token.

Configuring Zendesk with Amazon EventBridge

Step 1. Configure your Zendesk event source:

  1. Go to your Zendesk Admin Center and select Integrations.
  2. Choose Connect in the Events Connector for Amazon EventBridge section to open the page where you configure your Zendesk event source.

    Zendesk integrations

  3. Enter your AWS account ID in the Amazon Web Services account ID field, and select the Region to receive events.
  4. Choose Save.

    Zendesk Amazon EventBridge configuration.

Step 2. Associate the Zendesk event source with a new event bus: 

  1. Log into the AWS Management Console and navigate to Services > Amazon EventBridge > Partner event sources.
    New event source

  2. Select the radio button next to the new event source and choose Associate with event bus.
    Associating event source with event bus.

  3. Choose Associate.

Deploying the backend application

After associating the Event source with a new partner event bus, you can deploy backend services to receive events.

To set up the example application, visit the GitHub repo and follow the instructions in the README.md file.

When deploying the application stack, make sure to provide the custom event bus name, and Zendesk API credentials with --parameter-overrides.

sam deploy --parameter-overrides ZendeskEventBusName=aws.partner/zendesk.com/123456789/default ZenDeskDomain=MyZendeskDomain ZenDeskPassword=myAPIToken ZenDeskUsername=myZendeskAgentUsername

You can find the name of the new Zendesk custom event bus in the custom event bus section of the EventBridge console.
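
You can also look the name up programmatically. The following is a minimal sketch using the AWS SDK for Python (boto3); the 'aws.partner/zendesk.com' prefix follows the partner bus naming convention and may differ slightly in your account.

import boto3

events = boto3.client('events')

# List event buses whose names start with the Zendesk partner prefix.
response = events.list_event_buses(NamePrefix='aws.partner/zendesk.com')

for bus in response['EventBuses']:
    print(bus['Name'], bus['Arn'])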

Routing events with rules

When a support ticket is updated in Zendesk, a number of individual events are streamed to EventBridge. These include an event for each of the following:

  • Agent Assignment Changed
  • Comment Created
  • Status Changed
  • Brand Changed
  • Subject Changed

An EventBridge rule is used to filter these events. The AWS Serverless Application Model (SAM) template defines the rule with the `AWS::Events::Rule` resource type. This routes the event downstream to an AWS Step Functions Express Workflow. The rule, including its EventPattern, is shown below:

  ZendeskNewWebQueryClosed: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "New Web Query"
      EventBusName: 
         Ref: ZendeskEventBusName
      EventPattern: 
        account:
        - !Sub '${AWS::AccountId}'
        detail-type: 
        - "Support Ticket: Comment Created"
        detail:
          ticket_event:
            ticket:
              status: 
              - solved
              tags:
              - web_widget
              tags: 
              - guide
      Targets: 
        - RoleArn: !GetAtt [ MyStatesExecutionRole, Arn ]
          Arn: !Ref FreshTracksZenDeskQueryMachine
          Id: NewQuery

The tickets must have two specific tags (web_widget and guide) for this pattern to match. These are defined as separate fields to create an AND matching rule, instead of being declared within the same array field, which would create an OR rule. A new comment on a support ticket triggers the event.

The Step Functions Express Workflow

The application routes events to a Step Functions Express Workflow that is defined in the application’s SAM template:

FreshTracksZenDeskQueryMachine:
    Type: "AWS::StepFunctions::StateMachine"
    Properties:
      StateMachineType: EXPRESS
      DefinitionString: !Sub |
               {
                    "Comment": "Create a new article from a zendeskTicket",
                    "StartAt": "GetFullZendeskTicket",
                    "States": {
                      "GetFullZendeskTicket": {
                      "Comment": "Get Full Ticket Details",
                      "Type": "Task",
                      "ResultPath": "$.FullTicket",
                      "Resource": "${GetFullZendeskTicket.Arn}",
                      "Next": "GetFullZendeskUser"
                      },
                      "GetFullZendeskUser": {
                      "Comment": "Get Full User Details",
                      "Type": "Task",
                      "ResultPath": "$.FullUser",
                      "Resource": "${GetFullZendeskUser.Arn}",
                      "Next": "PublishArticle"
                      },
                      "PublishArticle": {
                      "Comment": "Publish as an article",
                      "Type": "Task",
                      "Resource": "${CreateZendeskArticle.Arn}",
                      "End": true
                      }
                    }
                }
      RoleArn: !GetAtt [ MyStatesExecutionRole, Arn ]

This application is suited for a Step Functions Express Workflow because it orchestrates short-duration, high-volume, event-based workloads. Each workflow task is idempotent and stateless. The Express Workflow carries the workload’s state by passing the output of one task to the input of the next. The Amazon States Language ResultPath definition is used to control where each task’s output is appended to the workflow’s state before it is passed to the next task.
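
The following is a simplified, hypothetical illustration of that behavior: with ResultPath, each task’s output is appended to the state under a new key rather than replacing the input.

# Hypothetical, simplified state passed between the workflow tasks.
event_in = {"detail": {"ticket_event": {"ticket": {"id": 123}}}}

# After GetFullZendeskTicket, with "ResultPath": "$.FullTicket",
# the task output is added under FullTicket and the original input is kept.
state_after_ticket = {**event_in, "FullTicket": {"subject": "Example question"}}

# After GetFullZendeskUser, with "ResultPath": "$.FullUser".
state_after_user = {**state_after_ticket, "FullUser": {"name": "Example user"}}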

 

AWS StepFunctions Express workflow

Lambda functions

Each task in this Express Workflow invokes a Lambda function defined within the example application’s SAM template. The Lambda functions use the Node.js Axios package to make a request to Zendesk’s API.  The Zendesk API credentials are stored in the Lambda function’s environment variables and accessible via ‘process.env’.

The first two Lambda functions in the workflow make a GET request to Zendesk. This retrieves additional data about the support ticket, the author, and the agent’s response.

The final Lambda function makes a POST request to the Zendesk API. This creates and publishes a new article using this data. The permission_group and section defined in this function must be set to your Zendesk account’s default permission group ID and FAQ section ID.
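
The application’s functions are written in Node.js with Axios, but the call can be sketched in Python for illustration. The endpoint and field names below follow Zendesk’s Help Center articles API and should be verified against your account; the environment variable names, section ID, and permission group ID are assumptions and placeholders.

import os
import requests

# Placeholders - replace with your Zendesk subdomain, FAQ section ID,
# and default permission group ID.
domain = os.environ['ZenDeskDomain']          # assumed environment variable name
section_id = '360000000000'                   # placeholder FAQ section ID
payload = {
    'article': {
        'title': 'Example question',
        'body': '<p>Example answer taken from the resolved ticket.</p>',
        'locale': 'en-us',
        'permission_group_id': 1234567,       # placeholder permission group ID
        'user_segment_id': None,
    }
}

# Zendesk API token authentication uses "email/token" as the user name.
response = requests.post(
    f'https://{domain}.zendesk.com/api/v2/help_center/sections/{section_id}/articles.json',
    json=payload,
    auth=(f"{os.environ['ZenDeskUsername']}/token", os.environ['ZenDeskPassword']),
)
response.raise_for_status()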

AWS Lambda function code

Integrating with your front-end application

Follow the instructions in the Fresh Tracks repo on GitHub to deploy the front-end application. This application includes Zendesk’s web widget script in the index.html page. The widget has been customized using Zendesk’s JavaScript API. This is implemented in the navigation component to insert custom forms into the widget and prefill the email address field for authenticated users. The backend application starts receiving Zendesk-emitted events immediately.

The video below demonstrates the implementation from end to end.

Conclusion

This post explains how to set up EventBridge’s third-party integration with Zendesk to capture events. The example backend application demonstrates how to filter these events and send them downstream to a Step Functions Express Workflow. The Express Workflow orchestrates a series of stateless Lambda functions to gather additional data about the event. Zendesk’s API is then used to publish a new help guide article from this data.

This pattern provides a framework for bidirectional event orchestration between AWS services, custom web applications and third party integration partners. This can be replicated and applied to any number of third party integration partners.

This is implemented with minimal code to provide near real-time streaming of events and without adding latency to your application.

The possibilities are vast. I am excited to see how builders use this bidirectional serverless pattern to add even more value to their third party services.

Start here to learn about other SaaS integrations with Amazon EventBridge.

Automatic Instacart Bots

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/automatic_insta.html

Instacart is taking legal action against bots that automatically place orders:

Before it closed, to use Cartdash users first selected what items they want from Instacart as normal. Once that was done, they had to provide Cartdash with their Instacart email address, password, mobile number, tip amount, and whether they prefer the first available delivery slot or are more flexible. The tool then checked that their login credentials were correct, logged in, and refreshed the checkout page over and over again until a new delivery window appeared. It then placed the order, Koch explained.

I am writing a new book about hacking in general, and want to discuss this. First, does this count as a hack? I feel like it is, since it’s a way to subvert the Instacart ordering system.

When asked if this tool may give people an unfair advantage over those who don’t use the tool, Koch said, “at this point, it’s a matter of awareness, not technical ability, since people who can use Instacart can use Cartdash.” When pushed on how, realistically, not every user of Instacart is going to know about Cartdash, even after it may receive more attention, and the people using Cartdash will still have an advantage over people who aren’t using automated tools, Koch again said, “it’s a matter of awareness, not technical ability.”

Second, should Instacart take action against this? On the one hand, it isn’t “fair” in that Cartdash users get an advantage in finding a delivery slot. But it’s not really any different than programs that “snipe” on eBay and other bidding platforms.

Third, does Instacart even stand a chance in the long run? As various AI technologies give us more agents and bots, this is going to increasingly become the new normal. I think we need to figure out a fair allocation mechanism that doesn’t rely on the precise timing of submissions.

Introducing Dispatch

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/introducing-dispatch-da4b8a2a8072

By Kevin Glisson, Marc Vilanova, Forest Monsen

Netflix is pleased to announce the open-source release of our crisis management orchestration framework: Dispatch!

Okay, but what is Dispatch? Put simply, Dispatch is:

All of the ad-hoc things you’re doing to manage incidents today, done for you, and a bunch of other things you should’ve been doing, but have not had the time!

Dispatch helps us effectively manage security incidents by deeply integrating with existing tools used throughout an organization (Slack, GSuite, Jira, etc.). Dispatch leverages the existing familiarity of these tools to provide orchestration instead of introducing another tool.

This means you can let Dispatch focus on creating resources, assembling participants, sending out notifications, tracking tasks, and assisting with post-incident reviews; allowing you to focus on actually fixing the issue! Sounds interesting? Continue reading!

The Challenge of Crisis Management

Managing incidents is a stressful job. You are dealing with many questions all at once: What’s the scope? Who can help me? Who do I need to engage? How do I manage all of this?

In general, every incident is unique and extraordinary; if the same incidents are happening over and over, you’re firefighting.

There are four main components to Crisis Management that we are attempting to address:

  1. Resource Management — The management of not only data collected about the incident itself but all of the metadata about the response.
  2. Individual Engagement — Understanding the best way to engage individuals and teams, and doing so based on incident context.
  3. Life Cycle Management — Providing the Incident Commander (IC) tools to easily manage the life cycle of the incident.
  4. Incident Learning — Building on past incidents to speed up the resolution of future incidents.

We will use the following terminology throughout the rest of the discussion:

  • Incident Commanders are individuals responsible for driving the incident to resolution.
  • Incident Participants are Subject Matter Experts (SMEs) who have been engaged to help resolve the incident.
  • Resources are documents, screenshots, logs or any other piece of digital information that is used during an incident.

The Checklist

For an average incident, there are quite a few steps to manage, and much of the work is typically handled on an ad-hoc basis by a human. Let’s enumerate them:

  1. Declare an Incident — There are many different entry points to a potential incident: automated alerts, an internal notification, or an external notification.
  2. Determine Incident Commander — Determining the sole individual responsible for driving a particular incident to resolution based on the incident source, type, and priority.
  3. Create Communication Channels — Communication during incidents is key. Establishing dedicated and standardized channels for communication prevents the creation of communication silos.
  4. Create Incident Document — The central document responsible for containing up-to-date incident information, including a description of the incident, links to resources, rough notes from in-person meetings, open questions, action items, and timeline information.
  5. Engage Individual Resources — An incident commander will not be able to resolve an incident by themselves; they must identify and engage additional resources within the organization to help them.
  6. Orient Individual Resources — Engaging additional resources is not enough; the Incident Commander needs to orient these resources to the situation at hand.
  7. Notify Key Stakeholders — For any given incident, key stakeholders not directly involved in resolving the incident need to be made aware of the incident.
  8. Drive Incident to Resolution — The actual resolution of the incident, creating tasks, asking questions, and tracking answers. Making note of key learnings to be addressed after resolution.
  9. Perform Post Incident Review (PIR) — Review how the incident process was performed, tracking actions to be performed after the incident, and driving learning through structuring informal knowledge.

Each of these steps has the incident commander and incident participants moving through various systems and interfaces. Each context switch adds to the cognitive load on the responder, distracting them from resolving the incident itself.

Toward Better Crisis Management

Crisis management is not a new challenge; tools like Jira, PagerDuty, and VictorOps all help organizations manage and respond to incidents. When setting out to automate our incident management process, we had two main goals:

  1. Re-use existing tools users were already familiar with, reducing the learning curve for contributing to incidents.
  2. Catalog, store and analyze our incident data to speed up resolution.

Meet Dispatch!

Dispatch

Dispatch is a crisis management orchestration framework that manages incident metadata and resources. It uses tools already in use throughout an organization, providing incident participants a comprehensive crisis management toolset, allowing them to focus on resolving the incident.

Unlike many of our tools, Dispatch is not tightly bound to AWS; Dispatch does not use any AWS APIs at all! While Dispatch doesn’t use AWS APIs, it leverages multiple APIs that are deeply embedded in the organization (e.g. Slack, GSuite, PagerDuty, etc.). In addition to all of the built-in integrations, Dispatch provides multiple integration points that allow it to fit into just about any existing environment.

Although developed as a tool to help Netflix manage security incidents, nothing about Dispatch is specific to a security use-case. At its core, Dispatch aims to manage the entire lifecycle of an incident, focusing on engaging individuals and providing them the context they need to drive the incident to resolution.

Workflow

Let’s take a look at what an incident commander’s new workflow would look like using Dispatch:

Some key benefits of the new workflow are:

  • The incident commander no longer needs to manage access to resources or multiple data streams.
  • Communications are standardized (both in style and interval) across incidents.
  • Incident participants are automatically engaged based on the type, priority, and description of the incident.
  • Incident tasks are tracked and owners are reminded if they’re not completed on time.
  • All incident data is centrally tracked.
  • A common API is provided for internal users and tools.

We want to make reporting incidents as frictionless as possible, giving users a straightforward path to engage the resources they need in a time of crisis.

Jumping between different tools and ensuring data is correct and in sync is a low-value exercise for an incident commander. Instead, we centralized on two common tools to manage the entire lifecycle: Slack for managing incident metadata (e.g. status, title, description, priority) and Google Docs and Google Drive for managing the data itself.

When teams need to look across many incidents, Dispatch provides an Admin UI. This interface is also where incident knowledge is managed, from common terms and their definitions to individuals, teams, and services. The Admin UI is how we manage incident knowledge for use in future incidents.

Architecture

Dispatch makes use of the following components:

  • Python 3.8 with FastAPI (including helper packages)
  • VueJS UI
  • Postgres

We’re shipping Dispatch with built-in plugins that allow you to create and manage resources with GSuite (Docs, Drive, Sheets, Calendar, Groups), Jira, PagerDuty, and Slack. But the plugin architecture allows for integrations with whatever tools your organization is already using.

Getting Started

Dispatch is available now on the Netflix Open Source site. You can try out Dispatch using Docker. Detailed instructions on setup and configuration are available in our docs.

Interested in Contributing?

Feel free to reach out or submit pull requests if you have any suggestions. We’re looking forward to seeing what new plugins you create to make Dispatch work for you! We hope you’ll find Dispatch as useful as we do!

Oh, and we’re hiring — if you’d like to help us solve these sorts of problems, take a look at https://jobs.netflix.com/teams/security, and reach out!


Introducing Dispatch was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Automated Response and Remediation with AWS Security Hub

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/automated-response-and-remediation-with-aws-security-hub/

AWS Security Hub is a service that gives you aggregated visibility into your security and compliance status across multiple AWS accounts. In addition to consuming findings from Amazon services and integrated partners, Security Hub gives you the option to create custom actions, which allow a customer to manually invoke a specific response or remediation action on a specific finding. You can send custom actions to Amazon CloudWatch Events as a specific event pattern, allowing you to create a CloudWatch Events rule that listens for these actions and sends them to a target service, such as a Lambda function or Amazon SQS queue.

By creating custom actions mapped to specific finding types and by developing a corresponding Lambda function for that custom action, you can achieve targeted, automated remediation for these findings. This allows a customer to specifically decide if he or she wants to invoke a remediation action on a specific finding. A customer can also use these Lambda functions as the target of fully automated remediation actions that do not require any human review.
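
In this post you create custom actions in the console, but for reference, the same thing can be done with the AWS SDK. The following is a minimal sketch in Python; the name, description, and ID are placeholders.

import boto3

securityhub = boto3.client('securityhub')

# Create a custom action; the returned ARN is what the CloudWatch Events
# rule matches on later in this post.
response = securityhub.create_action_target(
    Name='CIS 2.7 RR',
    Description='Response and remediation for CIS 2.7',
    Id='CIS27RR',
)
print(response['ActionTargetArn'])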

In this blog post, I’ll show you how to build custom actions, CloudWatch Event rules, and Lambda functions for a dozen targeted actions that can help you remediate CIS AWS Foundations Benchmark-related compliance findings. I’ll also cover use cases for sending findings to an issue management system and for automating security patching. To promote rapid deployment and adoption of this solution, you’ll deploy a majority of the necessary components via AWS CloudFormation.

Note: The full repository for current and future response and remediation templates is hosted on GitHub and includes additional technical guidance for expanding the solution provided in this post.

Solution architecture

Figure 1 – Solution Architecture Overview

Figure 1 shows how a finding travels from an integrated service to a custom action:

  1. Integrated services send their findings to Security Hub.
  2. From the Security Hub console, you’ll choose a custom action for a finding. Each custom action is then emitted as a CloudWatch Event.
  3. The CloudWatch Event rule triggers a Lambda function. This function is mapped to a custom action based on the custom action’s ARN.
  4. Depending on the particular rule, the Lambda function that is invoked will perform a remediation action on your behalf.

For the purpose of this blog post, I’ll refer to the end-to-end combination of a custom action, a CloudWatch Event rule, a Lambda function, plus any supporting services needed to perform a specific action as a “playbook.” To demonstrate how a remediation solution works end-to-end, I’ll show you how to build your first playbook manually. You’ll deploy the remainder of the playbooks via CloudFormation.

I’ll also show you how to modify four of the playbooks (three of which are marked with an asterisk in the list below), because they use AWS Lambda environment variables to perform their actions; we’ll walk through populating these later.

Based on feedback from Security Hub customers, the following controls from the CIS AWS Foundations Benchmark will be supported by this blog post:

  • 1.3 – “Ensure credentials unused for 90 days or greater are disabled”
  • 1.4 – “Ensure access keys are rotated every 90 days or less”
  • 1.5 – “Ensure IAM password policy requires at least one uppercase letter”
  • 1.6 – “Ensure IAM password policy requires at least one lowercase letter”
  • 1.7 – “Ensure IAM password policy requires at least one symbol”
  • 1.8 – “Ensure IAM password policy requires at least one number”
  • 1.9 – “Ensure IAM password policy requires a minimum length of 14 or greater”
  • 1.10 – “Ensure IAM password policy prevents password reuse”
  • 1.11 – “Ensure IAM password policy expires passwords within 90 days or less”
  • 2.2 – “Ensure CloudTrail log file validation is enabled”
  • 2.3 – “Ensure the S3 bucket CloudTrail logs to is not publicly accessible”
  • 2.4 – “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs”*
  • 2.6 – “Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket”*
  • 2.7 – “Ensure CloudTrail logs are encrypted at rest using AWS KMS CMKs”
  • 2.8 – “Ensure rotation for customer created CMKs is enabled”
  • 2.9 – “Ensure VPC flow logging is enabled in all VPCs”*
  • 4.1 – “Ensure no security groups allow ingress from 0.0.0.0/0 to port 22”
  • 4.2 – “Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389”
  • 4.3 – “Ensure the default security group of every VPC restricts all traffic”

You’ll also deploy and modify an additional playbook, “Send findings to JIRA.” You can find the high-level description of each playbook in each custom action creation script, as well as in the CloudFormation resource descriptions.

Note: If you want to send Security Hub findings to a security information and event management (SIEM) tool such as Amazon Elasticsearch Service or a third-party solution, you must change the CloudWatch Events event pattern to match all findings and use different targets, such as Amazon Kinesis Data Streams to Kinesis Data Firehose, to load your SIEM. This process is out of scope for this post.

Prerequisites

Ensure you have Security Hub and AWS Config turned on in your Region. Also, note that the solution in this blog post is meant to support a single account and will not support cross-account remediation as deployed. Refer to the Knowledge Center article How can I configure a Lambda function to assume a role from another AWS account? for basic information on cross-account roles for Lambda.

For the playbook “Apply Security Patches,” your EC2 instances must be managed by Systems Manager. For more information on managed instances, see AWS Systems Manager Managed Instances in the AWS Systems Manager User Guide.

Manually create a remediation playbook

To demonstrate the end-to-end process of building a playbook, I’ll first show you how to create one manually, before you deploy the remaining playbooks via CloudFormation. You’ll build a playbook to remediate Control 2.7 of the AWS Foundations Benchmark, “ensure CloudTrail logs are encrypted at rest using AWS Key Management Service (KMS) Customer Managed Keys (CMK).” Configuring CloudTrail to use KMS encryption (called SSE-KMS) provides additional confidentiality controls on your log data. To access your CloudTrail logs, users must not only have S3 read permissions for the corresponding log bucket, they must also be granted decrypt permissions by the KMS key policy.

Important note: This remediation, like all the other remediation code in this post, is written to target only one finding at a time via the Actions menu.

You’ll achieve automated remediation by using a Lambda function to create a new KMS CMK and alias which identifies the non-compliant CloudTrail trail. You’ll then attach a KMS key policy that only allows the AWS account that owns the trail to decrypt the logs by using the IAM condition for StringEquals: kms:CallerAccount. You only need to run this playbook once per non-compliant CloudTrail trail.

To get started, follow these steps:

  1. Navigate to the Security Hub console, select Settings from the navigation pane, then select the Custom Actions tab.
  2. Choose Create custom action and enter values for Action name, Description, and Custom action ID, then choose Create custom action again, as shown in Figure 2.

    For the purpose of this blog post, I’ll refer to my action name as “CIS 2.7 RR” where the “RR” stands for “Response and remediation.”
     

    Figure 2 – Create custom action

  3. Copy the Amazon Resource Name (ARN) down, as you’ll need it in step 12.
  4. Navigate to the Lambda console and select Create function.
  5. Enter a function name, choose Python 3.7 runtime, and under Permissions select Create a new role with basic Lambda permissions. Then choose Create function.
  6. Scroll down to Execution role and select the hyperlink under Existing role. This will open a new tab in the IAM console.
  7. From the IAM console, select Add inline policy, then select the JSON tab, paste in the below IAM policy JSON, and select Review Policy.
    
    {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "kmssid",
            "Action": [
              "kms:CreateAlias",
              "kms:CreateKey",
              "kms:PutKeyPolicy"
            ],
            "Effect": "Allow",
            "Resource": "*"
          },
          {
            "Sid": "cloudtrailsid",
            "Action": [
              "cloudtrail:UpdateTrail"
            ],
            "Effect": "Allow",
            "Resource": "*"
          }
        ]
      }
    

  8. Give the in-line policy a name and select Create policy.
  9. Back in the Lambda console, increase Timeout to 1 minute and Memory to 256MB. Scroll up to Function code, paste in the below code, and select Save.
    
     # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
     # SPDX-License-Identifier: MIT-0
     #
     # Permission is hereby granted, free of charge, to any person obtaining a copy of this
     # software and associated documentation files (the "Software"), to deal in the Software
     # without restriction, including without limitation the rights to use, copy, modify,
     # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
     # permit persons to whom the Software is furnished to do so.
     #
     # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
     # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
     # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
     # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
     # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
     # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
    
    import boto3
    import json
    import time
    
    def lambda_handler(event, context):
        # parse non-compliant trail from Security Hub finding
        noncompliantTrail = str(event['detail']['findings'][0]['Resources'][0]['Details']['Other']['name'])
        # parse account ID from Security Hub finding, will be needed for Key Policy
        accountID = str(event['detail']['findings'][0]['AwsAccountId'])
        
        # import boto3 clients for KMS and CloudTrail
        kms = boto3.client('kms')
        cloudtrail = boto3.client('cloudtrail')
    
        # create a new KMS CMK to encrypt the non-compliant trail
        try:
            createKey = kms.create_key(
            Description='Generated by Security Hub to remediate CIS 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs',
            KeyUsage='ENCRYPT_DECRYPT',
            Origin='AWS_KMS'
            )
            # save key id as a variable
            cloudtrailKey = str(createKey['KeyMetadata']['KeyId'])
            print("Created Key" + " " + cloudtrailKey)
        except Exception as e:
            print(e)
            print("KMS CMK creation failed")
            raise
            
        # wait 2 seconds for key creation to propagate
        time.sleep(2)
    
        # attach an alias for easy identification to the key - must always begin with "alias/"
        try:
            createAlias = kms.create_alias(
            AliasName='alias/' + noncompliantTrail + '-CMK',
            TargetKeyId=cloudtrailKey
            )
            print(createAlias)
        except Exception as e:
            print(e)
            print("Failed to create KMS Alias")
            raise
        
        # wait 1 second
        time.sleep(1)
    
        # policy name for PutKeyPolicy is always "default"
        policyName = 'default'
        # set Key Policy as JSON object
        keyPolicy={
            "Version": "2012-10-17",
            "Id": "Key policy created by CloudTrail",
            "Statement": [
                {
                    "Sid": "Enable IAM User Permissions",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": [ "arn:aws:iam::" + accountID + ":root" ]
                    },
                    "Action": "kms:*",
                    "Resource": "*"
                },
                {
                    "Sid": "Allow CloudTrail to encrypt logs",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudtrail.amazonaws.com"
                    },
                    "Action": "kms:GenerateDataKey*",
                    "Resource": "*",
                    "Condition": {
                        "StringLike": {
                            "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:" + accountID + ":trail/" + noncompliantTrail
                        }
                    }
                },
                {
                    "Sid": "Allow CloudTrail to describe key",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "cloudtrail.amazonaws.com"
                    },
                    "Action": "kms:DescribeKey",
                    "Resource": "*"
                },
                {
                    "Sid": "Allow principals in the account to decrypt log files",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "*"
                    },
                    "Action": [
                        "kms:Decrypt",
                        "kms:ReEncryptFrom"
                    ],
                    "Resource": "*",
                    "Condition": {
                        "StringEquals": {
                            "kms:CallerAccount": accountID
                        },
                        "StringLike": {
                            "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:" + accountID + ":trail/" + noncompliantTrail
                        }
                    }
                },
                {
                    "Sid": "Allow alias creation during setup",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "*"
                    },
                    "Action": "kms:CreateAlias",
                    "Resource": "*",
                    "Condition": {
                        "StringEquals": {
                            "kms:CallerAccount": accountID
                        }
                    }
                }
            ]
        }
        # attaches above key policy to key
        try:
            attachKeyPolicy = kms.put_key_policy(
                KeyId=cloudtrailKey,
                Policy=json.dumps(keyPolicy),
                PolicyName=policyName
            )
            print(attachKeyPolicy)
        except Exception as e:
            print(e)
            print("Failed to attach key policy to Key:" + " " + cloudtrailKey)
    
        # update CloudTrail with the new CMK
        try:
            encryptTrail = cloudtrail.update_trail(
            Name=noncompliantTrail,
            KmsKeyId=cloudtrailKey
            )
            print(encryptTrail)
            print("CloudTrail trail" + " " + noncompliantTrail + " " + "has been successfully encrypted!")
        except Exception as e:
            print(e)
            print("Failed to attach KMS CMK to CloudTrail")
    

  10. Navigate to the CloudWatch console, and choose Events, then Create rule.
  11. On the left, next to Event Pattern Preview, select Edit.
  12. Under Event Source, find Build custom event pattern and paste the JSON below into the text box. Replace the ARN under “resources” with the ARN of the custom action you created in step 3.
    
    {
      "source": [
        "aws.securityhub"
      ],
      "detail-type": [
        "Security Hub Findings - Custom Action"
      ],
      "resources": [
        "arn:aws:securityhub:us-west-2:123456789012:action/custom/test-action1"
      ]
    }
    

  13. On the same screen, under Targets on the right, select add Target, choose the Lambda function you created in step 4, then select Configure details.
  14. On the next screen, enter values for Name and Description, then select Create rule.
  15. Back in the Security Hub console, select Compliance standards from the menu on the left, then select View results to see the CIS AWS Benchmarks.
  16. Find rule 2.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs and select the hyperlink to see all findings related to that control.
  17. Select a finding that is FAILED, with a Resource type that reads AwsCloudTrailTrail.
  18. From the top right, select the Actions dropdown menu, then select CIS 2.7 RR (or whatever you named this action in step 2), as shown in Figure 3.

    Selecting this action will execute the rule you created in step 13, which will invoke the Lambda function you created in step 9.
     

    Figure 3 – Security Hub Custom Actions

  19. Navigate to the CloudTrail console to ensure your CloudTrail trail is updated with SSE-KMS.

This creation flow via the console is universal for all playbook development with Security Hub. You’ll use the Actions menu in the same way to trigger the playbooks you deploy via CloudFormation in the next section of this walkthrough.

Note: To monitor the actions that are taken by the playbooks’ Lambda functions, refer to the functions’ logs. Both success and error messages will appear to help you diagnose as needed.

Deploy remediation playbooks via CloudFormation

Download the CloudFormation template from GitHub and create a CloudFormation stack. For more information about how to create a CloudFormation stack, see Getting Started with AWS CloudFormation in the AWS CloudFormation User Guide.

After your stack has finished deploying, navigate to the Resources tab and select the hyperlink for each resource to be taken to their respective consoles. The logical ID for each resource will be prepended by the action the resource corresponds to. For example, in figure 4, CIS13RRCWEPermissions denotes CloudWatch Event permissions for AWS CIS Benchmark Control 1.3. This logical ID structure is used throughout the template.
 

Figure 4 – CloudFormation Resources

Next, I’ll show you how to modify the Lambda functions associated with the following playbooks: “Send findings to JIRA,” CIS 2.4, CIS 2.6, and CIS 2.9.

Playbook modification: send findings to JIRA

Note: If you don’t currently have a JIRA Software Data Center deployment set up in your account, you can deploy one with a free evaluation period by following this Quick Start. If you are not interested in using JIRA or you use a different issue management tool, you can skip this section.

This playbook works by using the associated Lambda function to execute a Systems Manager automation document called AWS-CreateJiraIssue.

This Systems Manager document will in turn deploy a CloudFormation stack and create a custom Lambda function to map environmental variables (listed below) into JIRA via your playbook’s Lambda function.

Once complete, the CloudFormation stack will self-delete and the automation will be complete. To see this flow, refer to figure 5, below:
 

Figure 5 – Send to JIRA Architecture Diagram

  1. A finding is selected and the custom action “Create JIRA Issue” is invoked, which triggers a CloudWatch Event.
  2. The CloudWatch Event rule will trigger a Lambda function.
  3. Lambda will invoke the AWS-CreateJiraIssue document via Systems Manager Automation.
  4. Systems Manager Automation will pass your Lambda environmental variables and Security Hub finding as document parameters.
  5. The document will create a CloudFormation Stack that contains another custom Lambda to invoke JIRA APIs for creating issues.
  6. The document’s Lambda function uses your parameters to create an issue in JIRA.
  7. JIRA will send back a response to note failure or success, and the CloudFormation stack will self-delete.

To get started, navigate to the Lambda console and find the function named SendToJIRA, then scroll down to Environmental variables. You should see four, each with the text “placeholder” as its value. The following steps walk you through how to populate them:

  1. Fill out the JIRA_API_PARAMETER field:
    1. Refer to Atlassian’s instructions to generate a JIRA API token
    2. Follow the Systems Manager Parameter Store walkthrough to create a parameter.
      1. In Step 6 of the linked instructions, choose Secure String and choose the default KMS key.
      2. In Step 7 of the linked instructions, paste in your newly generated JIRA API token.
    3. Copy the name of the parameter and paste it as the value of the JIRA_API_PARAMETER field.
  2. Fill out the JIRA_PROJECT field by pasting in your JIRA Software project’s Project Key.
  3. Fill out the JIRA_SECURITY_ISSUE_USER field:
    1. This value maps to a user in JIRA Software; refer to Atlassian’s instructions for how to create a user. Use the API key you generated in step 1.A as this user’s password.
    2. Paste the user name into the JIRA_SECURITY_ISSUE_USER field.
  4. Fill out the JIRA_URL field:
    1. From JIRA Software navigate to the System sub-menu of JIRA Administration and look for the Base URL field, underneath Settings – General Settings (see figure 6).
    2. Copy the Base URL value and paste it into the JIRA_URL field.
       
      Figure 6 – Base URL in JIRA

  5. When finished, select Save at the top of the Lambda console, then navigate to the Security Hub console and select Findings from the left-hand menu.
  6. Select any finding, and then, from the Actions dropdown in the top right, select Create JIRA Issue.
  7. Navigate to your JIRA instance and wait a few minutes for the new issue to appear, as shown in Figure 7.
     
    Figure 7 – Security Hub Finding in JIRA

    Note: If your issue hasn’t populated in JIRA after a few minutes, refer to the Systems Manager Automation console or CloudFormation console to find the stack that was created by Systems Manager and refer to the failure messages to troubleshoot further.

CIS 2.4 response & remediation playbook modification

CIS Control 2.4 is “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs.” The intent of this recommendation is to ensure that API activity recorded by CloudTrail is available to query in near real-time for the purpose of troubleshooting or security incident investigation with CloudWatch.

To send your CloudTrail logs to CloudWatch, the Lambda function for this playbook will create a brand new CloudWatch Logs group that has the name of the non-compliant CloudTrail trail in it for easy identification and then update your non-compliant CloudTrail trail to send its logs to the newly created log group.

To accomplish this, CloudTrail needs an IAM role and permissions to be allowed to publish logs to CloudWatch. To avoid creating multiple new IAM roles and policies via Lambda, you’ll populate the ARN of this IAM role in the Lambda environmental variables for this playbook.
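
A rough sketch of the core API calls such a remediation makes is shown below. This is not the playbook’s exact code; in practice the trail name is parsed from the Security Hub finding, and the role ARN comes from the environment variable described next.

import os
import boto3

logs = boto3.client('logs')
cloudtrail = boto3.client('cloudtrail')

trail_name = 'example-trail'                   # parsed from the finding in practice
log_group_name = 'CloudTrail/' + trail_name    # name includes the trail for easy identification

# Create the log group and look up its ARN.
logs.create_log_group(logGroupName=log_group_name)
log_group_arn = logs.describe_log_groups(
    logGroupNamePrefix=log_group_name
)['logGroups'][0]['arn']

# Point the non-compliant trail at the new log group, using the IAM role
# CloudTrail assumes to publish to CloudWatch Logs.
cloudtrail.update_trail(
    Name=trail_name,
    CloudWatchLogsLogGroupArn=log_group_arn,
    CloudWatchLogsRoleArn=os.environ['CLOUDTRAIL_CW_LOGGING_ROLE_ARN'],
)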

Note: If you don’t currently have an IAM role for CloudTrail, follow these instructions from the CloudTrail user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-4_RR, then scroll down to Environmental variables.
  2. Find the variable CLOUDTRAIL_CW_LOGGING_ROLE_ARN with text that reads “placeholder” as the value.
  3. Paste the ARN of the IAM role into the field for this value, then select Save.

CIS 2.6 response & remediation playbook modification

CIS Control 2.6 is “Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket.” An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. AWS recommends that you enable bucket access logging on the CloudTrail S3 bucket. By enabling S3 bucket logging on target S3 buckets, you can capture all the events that might affect objects in the bucket. Configuring logs to be placed in a separate bucket enables centralized collection of access log information, which can be useful in security and incident response workflows.

Note: Security Hub supports CIS AWS Foundations controls only on resources in the same Region and owned by the same account as the one in which Security Hub is enabled and being used. For example, if you’re using Security Hub in the us-east-2 Region, and you’re storing CloudTrail logs in a bucket in the us-west-2 Region, Security Hub cannot find the bucket in the us-west-2 Region. The control returns a warning that the resource cannot be located. Similarly, if you’re aggregating logs from multiple accounts into a single bucket, the CIS control fails for all accounts except the account that owns the bucket.

To ensure the S3 bucket that contains your CloudTrail logs has access logging enabled, the Lambda function for this playbook invokes the Systems Manager document AWS-ConfigureS3BucketLogging. This document will enable access logging for that bucket. To avoid statically populating your S3 access logging bucket in the Lambda function’s code, you’ll pass that value in via an environmental variable.
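
For reference, the direct S3 API equivalent of what the Systems Manager document automates looks roughly like the sketch below. The playbook itself goes through AWS-ConfigureS3BucketLogging rather than calling S3 directly, and the bucket name here is a placeholder.

import os
import boto3

s3 = boto3.client('s3')

trail_bucket = 'example-cloudtrail-bucket'   # parsed from the finding in practice

# Enable server access logging on the CloudTrail bucket, delivering logs
# to the bucket supplied via the playbook's environment variable.
s3.put_bucket_logging(
    Bucket=trail_bucket,
    BucketLoggingStatus={
        'LoggingEnabled': {
            'TargetBucket': os.environ['ACCESS_LOGGING_BUCKET'],
            'TargetPrefix': trail_bucket + '/',
        }
    },
)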

Note: If you do not currently have an S3 bucket configured to receive access logs, follow the directions from the S3 user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-6_RR, then scroll down to Environmental variables.
  2. Find the variable ACCESS_LOGGING_BUCKET with text that reads ‘placeholder’ as the value.
  3. Paste the name of the S3 bucket that will receive the access logs into the field for this value and select Save.

CIS 2.9 response & remediation playbook modification

CIS Control 2.9 is “Ensure VPC flow logging is enabled in all VPCs.” The Amazon Virtual Private Cloud (VPC) flow logs feature enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you’ve created a flow log, you can view and retrieve its data in CloudWatch Logs. AWS recommends that you enable flow logging for packet rejects for VPCs. Flow logs provide visibility into network traffic that traverses the VPC and can detect anomalous traffic or provide insight into your security workflow.

To enable VPC flow logging for rejected packets, the Lambda function for this playbook will create a new CloudWatch Logs group. For easy identification, the name of the group will include the non-compliant VPC name. The Lambda function will programmatically update your VPC to enable flow logs to be sent to the newly created log group.
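
A minimal sketch of those calls is shown below. This is not the playbook’s exact code; in practice the VPC ID is parsed from the Security Hub finding, and the role ARN comes from the environment variable described next.

import os
import boto3

ec2 = boto3.client('ec2')
logs = boto3.client('logs')

vpc_id = 'vpc-0123456789abcdef0'          # parsed from the finding in practice
log_group_name = vpc_id + '-flowlogs'     # name includes the VPC for easy identification

# Create the log group, then enable flow logs for rejected packets on the VPC.
logs.create_log_group(logGroupName=log_group_name)
ec2.create_flow_logs(
    ResourceIds=[vpc_id],
    ResourceType='VPC',
    TrafficType='REJECT',
    LogGroupName=log_group_name,
    DeliverLogsPermissionArn=os.environ['flowLogRoleARN'],
)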

Similar to CloudTrail logging, VPC flow logs need an IAM role and permissions to be allowed to publish logs to CloudWatch. To avoid creating multiple new IAM roles and policies via Lambda, you’ll populate the ARN of this IAM role in the Lambda environmental variables for this playbook.

Note: If you don’t currently have an IAM role that VPC flow logs can use to deliver logs to CloudWatch, follow the directions from the VPC user guide to create one.

To update this playbook:

  1. Navigate to the Lambda console and find the function named CIS_2-9_RR, then scroll down to Environmental variables.
  2. Find the variable flowLogRoleARN with text that reads ‘placeholder’ as the value.
  3. Paste the ARN of the VPC flow logs IAM role into the field for this value, and select Save.

Conclusion

In this blog post, I showed you how to create, deploy, and execute response and remediation playbooks for Security Hub. By combining custom actions, CloudWatch Event rules, and Lambda functions, you can quickly remediate non-compliant resources. I also showed you how to take pre-defined actions such as sending findings to JIRA, and how to deploy an additional seven playbooks via CloudFormation.

You can create playbooks that take other actions, such as updating Network Access Control Lists to help block malicious traffic from a TOR exit node via the UnauthorizedAccess:EC2/TorIPCaller finding from GuardDuty. Using playbooks with Security Hub is one more way to build toward the security and compliance of your AWS resources.

To avoid incurring additional charges from AWS resources, delete the CloudFormation stack you deployed as well as the resources you manually created for the CIS 2.7 response and remediation playbook.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

How FactSet automated exporting data from Amazon DynamoDB to Amazon S3 Parquet to build a data analytics platform

Post Syndicated from Arvind Godbole original https://aws.amazon.com/blogs/big-data/how-factset-automated-exporting-data-from-amazon-dynamodb-to-amazon-s3-parquet-to-build-a-data-analytics-platform/

This is a guest post by Arvind Godbole, Lead Software Engineer with FactSet and Tarik Makota, AWS Principal Solutions Architect. In their own words “FactSet creates flexible, open data and software solutions for tens of thousands of investment professionals around the world, which provides instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, we are always working to improve the value that our products provide.”

One area that we’ve been looking into is the relevancy of search results for our clients. Given the wide variety of client use cases and the large number of searches per day, we needed a platform to store anonymized usage data and allow us to analyze that data to boost results using our custom scoring algorithm. Amazon EMR was the obvious choice to host the calculations, but the question arose on how to get our anonymized data into a form that Amazon EMR could use. We worked with AWS and chose to use Amazon DynamoDB to prepare the data for usage in Amazon EMR.

This post walks you through how FactSet takes data from a DynamoDB table and converts that data into Apache Parquet. We store the Parquet files in Amazon S3 to enable near real-time analysis with Amazon EMR. Along the way, we encountered challenges related to data type conversion, which we explain below, along with how we overcame them.

Workflow overview

Our workflow contained the following steps:

  1. Anonymized log data is stored into DynamoDB tables. These entries have different fields, depending on how the logs were generated. Whenever we create items in the tables, we use DynamoDB Streams to write out a record. The stream records contain information from a single item in a DynamoDB table.
  2. An AWS Lambda function is hooked into the DynamoDB stream to capture the new items stored in a DynamoDB table. We built our Lambda function off of the lambda-streams-to-firehose project on GitHub to convert the DynamoDB stream image to JSON, which we stringify and push to Amazon Kinesis Data Firehose.
  3. Kinesis Data Firehose transforms the JSON data into Parquet using data contained within an AWS Glue Data Catalog table.
  4. Kinesis Data Firehose stores the Parquet files in S3.
  5. An AWS Glue crawler discovers the schema of DynamoDB items and stores the associated metadata into the Data Catalog.

The following diagram illustrates this workflow.

AWS Glue provides tools to help with data preparation and analysis. A crawler can run on a DynamoDB table to take inventory of the table data and store that information in a Data Catalog. Other services can use the Data Catalog as an index to the location, schema, and types of the table data. There are other ways to add metadata into a Data Catalog, but the key idea is that you can update and modify the metadata easily. For more information, see Populating the AWS Glue Data Catalog.

Problem: Data type disparities

Using a variety of technologies to build a solution often requires mapping and converting data types between these technologies. The cloud is no exception. In our case, log items stored in DynamoDB contained attributes of type String Set. String Set values caused data conversion exceptions when Kinesis tried to transform the data to Parquet. After investigating the problem, we found the following:

  • As the crawler indexes the DynamoDB table, Set data types (StringSet, NumberSet) are stored in the Glue metadata catalog as set<string> and set<bigint>.
  • Kinesis Data Firehose uses that same catalog when it performs the conversion to Apache Parquet. The conversion requires valid Hive data types.
  • set<string> and set<bigint> are not valid Hive data types, so the conversion fails, and an exception is generated. The exception looks similar to the following code:
    [{
       "lastErrorCode": "DataFormatConversion.InvalidSchema",
       "lastErrorMessage": "The schema is invalid. Error parsing the schema: Error: type expected at the position 38 of 'array,used:bigint>>' but 'set' is found."
    }]

Solution: Construct data mapping

While working with the AWS team, we confirmed that the Kinesis Data Firehose converter needs valid Hive data types in the Data Catalog to succeed. When it comes to complex data types, Hive doesn’t support set<data_type>, but it does support the following:

  • ARRAY<data_type>
  • MAP<primitive_type, data_type>
  • STRUCT<col_name : data_type [COMMENT col_comment], ...>
  • UNIONTYPE<data_type, data_type, ...>

In our case, this meant that we must convert set<string> and set<bigint> into array<string> and array<bigint>. Our first step was to manually change the types directly in the Data Catalog. After we updated the Data Catalog to change all occurrences of set<data_type> to array<data_type>, the Kinesis transformation to Parquet completed successfully.

Our business case calls for a data store that can store items with different attributes in the same table and the addition of new attributes on-the-fly. We took advantage of DynamoDB’s schema-less nature and ability to scale up and down on-demand so we could focus on our functionality and not the management of the underlying infrastructure. For more information, see Should Your DynamoDB Table Be Normalized or Denormalized?

If our data had a static schema, a manual change would be good enough. Given our business case, a manual solution wasn’t going to scale. Every time we introduced new attributes to the DynamoDB table, we needed to run the crawler, which re-created the metadata and overwrote the change.

Serverless event architecture

To automate the data type updates to the Data Catalog, we used Amazon EventBridge and Lambda to implement the modifications to the data type mapping. EventBridge is a serverless event bus that connects applications using events. An event is a signal that a system’s state has changed, such as the status of a Data Catalog table.

The following diagram shows the previous workflow with the new architecture.

  1. The crawler stays as-is and crawls the DynamoDB table to obtain the metadata.
  2. The metadata obtained by the crawler is stored in the Data Catalog. Previous metadata is updated or removed, and changes (manual or automated) are overwritten.
  3. An EventBridge rule (GlueTableChanged) listens for changes to the Data Catalog tables. When a table change event is received, the rule triggers the Lambda function.
  4. The Lambda function uses the AWS SDK to update the Glue Data Catalog table through the glue.update_table() API, replacing occurrences of set<data_type> with array<data_type>.

To set up the EventBridge rule, we set Event pattern to Pre-defined pattern by service, chose AWS as the service provider, Glue as the service, and Glue Data Catalog Table State Change as the event type. The following screenshot shows the EventBridge configuration that sends events to the Lambda function that updates the Data Catalog.
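The same rule can also be created programmatically. The following is a minimal sketch using boto3 (the rule name, target ID, and Lambda ARN are placeholders); it mirrors the console configuration above by matching Glue Data Catalog Table State Change events and targeting the Lambda function:

import json
import boto3

events = boto3.client('events')

# Match the same events selected in the console: Glue Data Catalog table state changes
event_pattern = {
    "source": ["aws.glue"],
    "detail-type": ["Glue Data Catalog Table State Change"]
}

# Rule name, target ID, and function ARN are placeholders for illustration
events.put_rule(
    Name='GlueTableChanged',
    EventPattern=json.dumps(event_pattern),
    State='ENABLED'
)
events.put_targets(
    Rule='GlueTableChanged',
    Targets=[{
        'Id': 'UpdateDataCatalogLambda',
        'Arn': 'arn:aws:lambda:us-east-1:111111111111:function:update-data-catalog'
    }]
)

Note that the Lambda function also needs a resource-based permission that allows EventBridge to invoke it (for example, with the Lambda AddPermission API); the console adds this for you when you pick the function as a target.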

The following is the baseline Lambda code:

# This is NOT production-worthy code; modify it and implement error handling routines as appropriate
import json
import logging
import boto3

glue = boto3.client('glue')

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Replaces set<> column types with array<> in the Glue Data Catalog table definition
def table_contains_set(databaseName, tableName):
    
    # returns Glue Catalog description for Table structure
    response = glue.get_table( DatabaseName=databaseName,Name=tableName)
    logger.info(response)  
    
    # loop through all the columns of the table
    isModified = False
    for i in response['Table']['StorageDescriptor']['Columns']: 
        logger.info("## Column: " + str(i['Name']))
        # if Column datatype starts with set< then change it to array<
        if i['Type'].find("set<") != -1:
            i['Type'] = i['Type'].replace("set<", "array<")
            isModified = True
            logger.info(i['Type'])
    
    if isModified:
        # following 3 statements simply clean up the response JSON so that update_table API call works
        del response['Table']['DatabaseName']
        del response['Table']['CreateTime']
        del response['Table']['UpdateTime']
        glue.update_table(DatabaseName=databaseName,TableInput=response['Table'],SkipArchive=True)
        
    logger.info("============ ### =============") 
    logger.info(response)
    
    # Report whether any set<> types were found and converted
    return isModified
    
def lambda_handler(event, context):
    logger.info('## EVENT')
    # logger.info(event)
    # This is Sample of the event payload that would be received
    # { 'version': '0', 
    #   'id': '2b402842-21f5-1d76-1a9a-c90076d1d7da', 
    #   'detail-type': 'Glue Data Catalog Table State Change', 
    #   'source': 'aws.glue', 
    #   'account': '1111111111', 
    #   'time': '2019-08-18T02:53:41Z', 
    #   'region': 'us-east-1', 
    #   'resources': ['arn:aws:glue:us-east-1:111111111:table/ddb-glue-fh/ddb_glu_fh_sample'], 
    #   'detail': {
    #           'databaseName': 'ddb-glue-fh', 
    #           'changedPartitions': [], 
    #           'typeOfChange': 'UpdateTable', 
    #           'tableName': 'ddb_glu_fh_sample'
    #    }
    # }
    
    # get the database and table name of the Glue table triggered the event
    databaseName = event['detail']['databaseName']
    tableName = event['detail']['tableName']
    logger.info("DB: " + databaseName + " | Table: " + tableName)
    
    table_contains_set(databaseName, tableName)
   
    # TODO implement and modify
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

The Lambda function is straightforward; this post provides a basic skeleton. You can use this as a template to implement your own functionality for your specific data.
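Because the handler only needs the databaseName and tableName fields from the event detail, you can exercise the skeleton locally before wiring it up to EventBridge. The following is a small, illustrative test harness that reuses the sample payload from the comments above (it assumes the code is saved as lambda_function.py and that your local AWS credentials have Glue permissions):

# Illustrative local test only; the names below come from the sample payload above
from lambda_function import lambda_handler  # assumes the handler is saved as lambda_function.py

sample_event = {
    "detail": {
        "databaseName": "ddb-glue-fh",
        "tableName": "ddb_glu_fh_sample"
    }
}

# The handler does not use the context argument, so None is enough for a quick local check
print(lambda_handler(sample_event, None))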

Conclusion

Simple things such as data type conversion and mapping can create unexpected outcomes and challenges when data crosses service boundaries. One of the advantages of AWS is the wide variety of tools with which you can create robust and scalable solutions tailored to your needs. Using event-driven architecture, we solved our data type conversion errors and automated the process to eliminate the issue as we move forward.

About the Authors

Arvind Godbole is a Lead Software Engineer at FactSet Research Systems. He has experience in building high-performance, high-availability client facing products and services, ranging from real-time financial applications to search infrastructure. He is currently building an analytics platform to gain insights into client workflows. He holds a B.S. in Computer Engineering from the University of California, San Diego

Tarik Makota is a Principal Solutions Architect with the Amazon Web Services. He provides technical guidance, design advice and thought leadership to AWS’ customers across US Northeast. He holds an M.S. in Software Development and Management from Rochester Institute of Technology.


Protect your veggies from hail with a Raspberry Pi Zero W

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/protect-your-veggies-from-hail-with-a-raspberry-pi-zero-w/

Tired of losing vegetable crops to frequent summertime hail storms, Nick Rogness decided to build something to protect them. And the result is brilliant!

Digital Garden with hail protection

Tired of getting your garden destroyed by hail storms? I was, so I did something about it…maker style!

“I live in a part of the country where hail and severe weather are commonplace during the summer months,” Nick explains in his Hackster tutorial. “I was getting frustrated every year when my wife’s garden was getting demolished by the nightly hail storms, losing our entire haul of vegetable goodies!”

Nick drew up plans for a solution to his hail problem, incorporating linear actuators bolted to a 12ft × 12ft frame that surrounds the vegetable patch. When a storm is on the horizon, the actuators pull a heavy-duty tarp over the garden.

Nick connected two motor controllers to a Raspberry Pi Zero W. The Raspberry Pi then controls the actuators to pull the tarp, either when a manual rocker switch is flipped or when it’s told to do so via weather-controlled software.

“Software control of the garden was accomplished by using a Raspberry Pi and MQTT to communicate via Adafruit IO to reach the mobile app on my phone,” Nick explains. The whole build is powered by a 12V Marine deep-cycle battery that’s charged using a solar panel.
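Nick’s code isn’t reproduced here, but the control loop is conceptually simple. The following is an illustrative sketch (not Nick’s actual code) using the paho-mqtt library, with a made-up feed name and a placeholder drive_actuators() function, showing how a Raspberry Pi could listen for a cover/uncover command published through Adafruit IO:

import paho.mqtt.client as mqtt

ADAFRUIT_IO_USER = "your-username"   # placeholder
ADAFRUIT_IO_KEY = "your-aio-key"     # placeholder
FEED = f"{ADAFRUIT_IO_USER}/feeds/garden-cover"  # hypothetical feed name

def drive_actuators(extend):
    # Placeholder: drive the motor controller GPIO pins here
    print("Extending tarp" if extend else "Retracting tarp")

def on_connect(client, userdata, flags, rc):
    client.subscribe(FEED)

def on_message(client, userdata, msg):
    command = msg.payload.decode().strip().lower()
    if command == "cover":
        drive_actuators(True)
    elif command == "uncover":
        drive_actuators(False)

client = mqtt.Client()
client.username_pw_set(ADAFRUIT_IO_USER, ADAFRUIT_IO_KEY)
client.on_connect = on_connect
client.on_message = on_message
client.connect("io.adafruit.com", 1883)
client.loop_forever()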

You can view the full tutorial on Hackster, including the code for the project.

The post Protect your veggies from hail with a Raspberry Pi Zero W appeared first on Raspberry Pi.

Orchestrating a security incident response with AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/orchestrating-a-security-incident-response-with-aws-step-functions/

In this post I will show how to implement the callback pattern of an AWS Step Functions Standard Workflow. This is used to add a manual approval step into an automated security incident response framework. The framework could be extended to remediate automatically, according to the individual policy actions defined. For example, applying alternative actions, or restricting actions to specific ARNs.

The application uses Amazon EventBridge to trigger a Step Functions Standard Workflow on an IAM policy creation event. The workflow compares the policy action against a customizable list of restricted actions. It uses AWS Lambda and Step Functions to roll back the policy temporarily, then notify an administrator and wait for them to approve or deny.

Figure 1: High-level architecture diagram.

Important: the application uses various AWS services, and there are costs associated with these services after the Free Tier usage. Please see the AWS pricing page for details.

You can deploy this application from the AWS Serverless Application Repository. You then create a new IAM Policy to trigger the rule and run the application.

Deploy the application from the Serverless Application Repository

  1. Find the “Automated-IAM-policy-alerts-and-approvals” app in the Serverless Application Repository.
  2. Complete the required application settings
    • Application name: an identifiable name for the application.
    • EmailAddress: an administrator’s email address for receiving approval requests.
    • restrictedActions: the IAM Policy actions you want to restrict.

      Figure 2 Deployment Fields

  3. Choose Deploy.

Once the deployment process is completed, 21 new resources are created. This includes:

  • Five Lambda functions that contain the business logic.
  • An Amazon EventBridge rule.
  • An Amazon SNS topic and subscription.
  • An Amazon API Gateway REST API with two resources.
  • An AWS Step Functions state machine.

To receive Amazon SNS notifications as the application administrator, you must confirm the subscription to the SNS topic. To do this, choose the Confirm subscription link in the verification email that was sent to you when deploying the application.

EventBridge receives new events in the default event bus. Here, the event is compared with associated rules. Each rule has an event pattern defined, which acts as a filter to match inbound events to their corresponding rules. In this application, a matching event rule triggers an AWS Step Functions execution, passing in the event payload from the policy creation event.
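The exact rule is defined in the deployed application’s template, but a pattern along the following lines is what matches IAM policy creation. IAM management events reach EventBridge through AWS CloudTrail, so the rule filters on the CloudTrail detail type and the CreatePolicy API name (shown here as a Python dictionary purely for illustration):

# Illustrative event pattern only; the application's template defines the real rule
iam_policy_created_pattern = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreatePolicy"]
    }
}

A rule matching this pattern passes the full CloudTrail event detail, including the requested policy document, to the Step Functions execution as its input.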

Running the application

Trigger the application by creating a policy either via the AWS Management Console or with the AWS Command Line Interface.

Using the AWS CLI

First install and configure the AWS CLI, then run the following command:

aws iam create-policy --policy-name my-bad-policy1234 --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketObjectLockConfiguration",
                "s3:DeleteObjectVersion",
                "s3:DeleteBucket"
            ],
            "Resource": "*"
        }
    ]
}'

Using the AWS Management Console

  1. Go to Services > Identity Access Management (IAM) dashboard.
  2. Choose Create policy.
  3. Choose the JSON tab.
  4. Paste the following JSON:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketObjectLockConfiguration",
                    "s3:DeleteObjectVersion",
                    "s3:DeleteBucket"
                ],
                "Resource": "*"
            }
        ]
    }
  5. Choose Review policy.
  6. In the Name field, enter my-bad-policy.
  7. Choose Create policy.

Either of these methods creates a policy with the permissions required to delete Amazon S3 buckets. Deleting an S3 bucket is one of the restricted actions set when the application is deployed:

Figure 3 default restricted actions

This sends the event to EventBridge, which then triggers the Step Functions state machine. The Step Functions state machine holds each state object in the workflow. Some of the state objects use the Lambda functions created during deployment to process data.

Others use Amazon States Language (ASL) enabling the application to conditionally branch, wait, and transition to the next state. Using a state machine decouples the business logic from the compute functionality.

After triggering the application, go to the Step Functions dashboard and choose the newly created state machine. Choose the currently running execution from the executions table.

Figure 4 State machine executions.

You see a visual representation of the current execution, with the workflow paused at the AskUser state.

Figure 5 Workflow Paused

These are the states in the workflow:

ModifyData
State Type: Pass
Re-structures the input data into an object that is passed throughout the workflow.

ValidatePolicy
State type: Task. Services: AWS Lambda
Invokes the ValidatePolicy Lambda function that checks the new policy document against the restricted actions.

ChooseAction
State type: Choice
Branches depending on input from ValidatePolicy step.

TempRemove
State type: Task. Service: AWS Lambda
Creates a new default version of the policy with only permissions for Amazon CloudWatch Logs and deletes the previously created policy version (see the sketch after this list of states).

AskUser
State type: Task. Service: AWS Lambda
Sends an approval email to user via SNS, with the task token that initiates the callback pattern.

UsersChoice
State type: Choice
Branch based on the user action to approve or deny.

Denied
State type: Pass
Ends the execution with no further action.

Approved
State type: Task. Service: AWS Lambda
Restores the initial policy document by creating it as a new version.

AllowWithNotification
State type: Task. Services: AWS Lambda
With no restricted actions detected, the user is still notified of the change (via an email from SNS) before execution ends.
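The TempRemove and Approved states work with IAM policy versions. The application’s Lambda functions implement this logic; the following is an independent, illustrative boto3 sketch of the kind of calls TempRemove makes, with a placeholder policy ARN and version ID:

import json
import boto3

iam = boto3.client('iam')

# Placeholder values for illustration
policy_arn = 'arn:aws:iam::111111111111:policy/my-bad-policy1234'
offending_version_id = 'v1'

# Minimal permissions kept in place while the change waits for approval
placeholder_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "*"
    }]
}

# Create a restricted default version, then remove the offending version
iam.create_policy_version(
    PolicyArn=policy_arn,
    PolicyDocument=json.dumps(placeholder_document),
    SetAsDefault=True
)
iam.delete_policy_version(PolicyArn=policy_arn, VersionId=offending_version_id)

The Approved state reverses this by re-creating the original document as a new default version.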

The callback pattern

An important feature of this application is the ability for an administrator to approve or deny a new policy. The Step Functions callback pattern makes this possible.

The callback pattern allows a workflow to pause during a task and wait for an external process to return a task token. The task token is generated when the task starts. When the AskUser function is invoked, it is passed a task token. The task token is published to the SNS topic along with the API resources for approval and denial. These API resources are created when the application is first deployed.

When the administrator chooses the approve or deny link, the token is passed with the API request to the receiveUser Lambda function. This Lambda function uses the incoming task token to resume the AskUser state.

The lifecycle of the task token as it transitions through each service is shown below:

Figure 6 Task token lifecycle

  1. To invoke this callback pattern, the askUser state definition is declared using the .waitForTaskToken identifier, with the task token passed into the Lambda function as a payload parameter:
    "AskUser":{
     "Type": "Task",
     "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
     "Parameters":{  
     "FunctionName": "${AskUser}",
     "Payload":{  
     "token.$":"$$.Task.Token"
      }
     },
      "ResultPath":"$.taskresult",
      "Next": "usersChoice"
      },
  2. The askUser Lambda function can then access this token within the event object:
    exports.handler = async (event, context) => {
        let approveLink = `${process.env.APIAllowEndpoint}?token=${JSON.stringify(event.token)}`
        let denyLink = `${process.env.APIDenyEndpoint}?token=${JSON.stringify(event.token)}`
    //code continues
  3. The task token is published to an SNS topic along with the message text parameter:
    let params = {
        TopicArn: process.env.Topic,
        Message: `A restricted Policy change has been detected Approve:${approveLink} Or Deny:${denyLink}`
    }
    let res = await sns.publish(params).promise()
    //code continues
  4. The administrator receives an email with two links, one to approve and one to deny. The task token is appended to these links as a request query string parameter named token:

    Figure 7 Approve / deny email.

  5. Using the Amazon API Gateway proxy integration, the task token is passed directly to the receiveUser Lambda function from the API resource, and is accessible within the function code as part of the event’s queryStringParameters object:
    exports.handler = async(event, context) => {
    //some code
        let taskToken = event.queryStringParameters.token
    //more code
    
  6. The token is then sent back to the AskUser state via an API call from within the receiveUser Lambda function. This API call also defines the next course of action for the workflow to take.
    //some code 
    let params = {
            output: JSON.stringify({"action":NextAction}),
            taskToken: taskTokenClean
        }
    let res = await stepfunctions.sendTaskSuccess(params).promise()
    //code continues
    

Each Step Functions execution can last for up to a year, allowing for long wait periods for the administrator to take action. There is no extra cost for a longer wait time as you pay for the number of state transitions, and not for the idle wait time.

Conclusion

Using EventBridge to route IAM policy creation events directly to AWS Step Functions reduces the need for unnecessary communication layers. It helps promote good use of compute resources, ensuring Lambda is used to transform data, and not transport or orchestrate.

Using Step Functions to invoke services sequentially has two important benefits for this application. First, you can identify the use of restricted policies quickly and automatically. Also, these policies can be removed and held in a ‘pending’ state until approved.

Step Functions Standard Workflow’s callback pattern can create a robust orchestration layer that allows administrators to review each change before approving or denying.

For the full code base see the GitHub repository https://github.com/bls20AWS/AutomatedPolicyOrchestrator.

For more information on other Step Functions patterns, see our documentation on integration patterns.

Multi-branch CodePipeline strategy with event-driven architecture

Post Syndicated from Henrique Bueno original https://aws.amazon.com/blogs/devops/multi-branch-codepipeline-strategy-with-event-driven-architecture/

Henrique Bueno, DevOps Consultant, Professional Services

This blog post presents a solution for automated pipelines creation in AWS CodePipeline when a new branch is created in an AWS CodeCommit repository. A use case for this solution is when a GitFlow approach using CodePipeline is required. The strategy presented here is used by AWS customers to enable the use of GitFlow using only AWS tools.

CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.

CodeCommit is a fully managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.

GitFlow is a branching model designed around the project release. This provides a robust framework for managing larger projects. Gitflow is ideally suited for projects that have a scheduled release cycle.

Applicability

When using CodePipeline to orchestrate pipelines and CodeCommit as a code source, in addition to setting a repository, you must also set which branch will trigger the pipeline. This configuration works perfectly for the trunk-based strategy, in which you have only one main branch and all the developers interact with this single branch. However, when you need to work with a multi-branching strategy like GitFlow, the requirement to set a pipeline for each branch brings additional challenges.

It’s important to note that trunk-based is, by far, the best strategy for taking full advantage of a DevOps approach; this is the branching strategy that AWS recommends to its customers. On the other hand, many customers like to work with multiple branches and believe it justifies the effort and complexity in dealing with branching merges. This solution is for these customers.

Solution Overview

One of the great benefits of working with Infrastructure as Code is the ability to create multiple identical environments through a single template. This example uses AWS CloudFormation templates to provision pipelines and other necessary resources, as shown in the following diagram.

Multi-branch CodePipeline strategy with event-driven architecture

Multi-branch CodePipeline strategy with event-driven architecture

The template is hosted in an Amazon S3 bucket. An AWS Lambda function deploys a new AWS CloudFormation stack based on this template. This Lambda function is triggered by an Amazon CloudWatch Events rule that looks for events from the CodeCommit repository.

The CreatePipeline Events rule

The AWS CloudFormation snippet that creates the Events rule follows. The Events rule monitors branch create and delete events in all repositories, triggering the CreatePipeline Lambda function.

#----------------------------------------------------------------------#
# EventRule to trigger LambdaPipeline lambda
#----------------------------------------------------------------------#
  CreatePipelineRule:
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - 'CodeCommit Repository State Change'
        detail:
          event:
              - referenceDeleted
              - referenceCreated
          referenceType:
            - branch
      State: ENABLED
      Targets: 
      - Arn: !GetAtt CreatePipeline.Arn
        Id: CreatePipeline

 

The CreatePipeline Lambda function

The Lambda function receives the event details, parses the variables, and executes the appropriate actions. If the event is referenceCreated, then the stack is created; otherwise the stack is deleted. The name of the stack that is created or deleted is the repository name joined with the new branch name. This is a very simple function.

#----------------------------------------------------------------------#
# Lambda for Stack Creation
#----------------------------------------------------------------------#
import boto3
def lambda_handler(event, context):
    Region = event['region']
    Account = event['account']
    RepositoryName = event['detail']['repositoryName']
    NewBranch = event['detail']['referenceName']
    Event = event['detail']['event']
    if NewBranch == "master":
        quit()
    if Event == "referenceCreated":
        cf_client = boto3.client('cloudformation')
        cf_client.create_stack(
            StackName=f'Pipeline-{RepositoryName}-{NewBranch}',
            TemplateURL=f'https://s3.amazonaws.com/{Account}-templates/TemplatePipeline.yaml',
            Parameters=[
                {
                    'ParameterKey': 'RepositoryName',
                    'ParameterValue': RepositoryName,
                    'UsePreviousValue': False
                },
                {
                    'ParameterKey': 'BranchName',
                    'ParameterValue': NewBranch,
                    'UsePreviousValue': False
                }
            ],
            OnFailure='ROLLBACK',
            Capabilities=['CAPABILITY_NAMED_IAM']
        )
    else:
        cf_client = boto3.client('cloudformation')
        cf_client.delete_stack(
            StackName=f'Pipeline-{RepositoryName}-{NewBranch}'
        )

The logic for creating either a CI-only pipeline or a CI+CD pipeline is in the AWS CloudFormation template. The Conditions section of AWS CloudFormation evaluates the new branch name.

Conditions: 
    BranchMaster: !Equals [ !Ref BranchName, "master" ]
    BranchDevelop: !Equals [ !Ref BranchName, "develop"]
    Setup: !Equals [ !Ref Setup, true ]

  • If the new branch is named master, then a stack will be created containing CI+CD pipelines, with deploy stages in the homologation and production environments.
  • If the new branch is named develop, then a stack will be created containing CI+CD pipelines, with a deploy stage in the Dev environment.
  • If the new branch has any other name, then the stack will be created with only a CI pipeline.

Since the purpose of this blog post is to present only a sample of automated pipelines creation, the pipelines used here are for examples only: they don’t deploy to any environment.

 

Applicability

This event-driven strategy permits pipelines to be created or deleted along with the branches. Since the entire environment is created using Infrastructure as Code and the template is the same for all pipelines, there is no possibility of different configuration issues between environments and pipeline stages.

A GitFlow simulation could resemble that shown in the following diagram:

GitFlow Diagram

GitFlow Diagram

  1. First, a CodeCommit repository is created, along with the master branch and its respective pipeline (CI+CD).
  2. The developer creates a branch called develop based on the master branch. The pipeline (CI+CD at Dev) is automatically created.
  3. The developer creates a feature-branch called feature-a based on the develop branch. The CI pipeline for this branch is automatically created.
  4. The developer creates a Pull Request from the feature-a branch to the develop branch. As soon as the Pull Request is accepted and merged and the feature-a branch is deleted, its pipeline is automatically deleted.

The same process can be followed for the release branch and hotfix branch. Once the branch is created, a new pipeline is created for it which follows its branch lifecycle.

Pipelines Stacks

 

Implementation

Before you start, make sure that the AWS CLI is installed and configured on your machine by following these steps:

  1. Clone the repository.
  2. Create the prerequisites stack.
  3. Copy the AWS CloudFormation template to Amazon S3.
  4. Copy the seed.zip file to the Amazon S3 bucket.
  5. Create the first repository and its pipeline.
  6. Create the develop branch.
  7. Create the first feature branch.
  8. Create the first Pull Request.
  9. Execute the Pull Request approval.
  10. Cleanup.

 

1. Clone the repository

Clone the repository with the sample code.

The main files are:

  • Setup.yaml: an AWS CloudFormation template for creating pipeline prerequisites.
  • TemplatePipeline.yaml: an AWS CloudFormation template for pipeline creation.
  • seed/buildspec/CIAction.yaml: a configuration file for an AWS CodeBuild project at the CI stage.
  • seed/buildspec/CDAction.yaml: a configuration file for a CodeBuild project at the CD stage.
# Command to clone the repository
git clone https://github.com/aws-samples/aws-codepipeline-multi-branch-strategy.git
cd aws-codepipeline-multi-branch-strategy

 

2. Create the prerequisite stack

The Setup stack creates the resources that are prerequisites for pipeline creation, as shown in the following chart.

These resources are created only once and they fit all the pipelines created in this example.

Resources

# Command to create Setup stack
aws cloudformation deploy --stack-name Setup-Pipeline \
--template-file Setup.yaml --region us-east-1 --capabilities CAPABILITY_NAMED_IAM

 

3. Copy the AWS CloudFormation template to Amazon S3

For the Lambda function to deploy a new pipeline stack, it needs to get the AWS CloudFormation template from somewhere. To enable it to do so, you need to save the template inside the Amazon S3 bucket that you just created at the Setup stack.

# Command that copy Template to S3 Bucket
aws s3 cp TemplatePipeline.yaml s3://"$(aws sts get-caller-identity --query Account --output text)"-templates/ --acl private

 

4. Copy the seed.zip file to the Amazon S3 bucket

CodeCommit permits you to populate a repository at the moment of its creation as a first commit. The content of this first commit can be saved in a .zip file in an Amazon S3 bucket. Use this CodeCommit option to populate your repository with BuildSpec files for CodeBuild.

# Command to create zip file with the Buildspec folder content.
zip -r seed.zip buildspec

# Command that copy seed.zip file to S3 Bucket.
aws s3 cp seed.zip s3://"$(aws sts get-caller-identity --query Account --output text)"-templates/ --acl private

 

5. Create the first repository and its pipeline

Now that the Setup stack is created and the seed file is stored in an Amazon S3 bucket, create the first CodeCommit repository. Every time that you want to create a new repository, execute the command below to create a new stack.

# Command to create the stack with the CodeCommit repository,
# CodeBuild Projects and the Pipeline for the master branch.
# Note: Change "myapp" by the name you want.

RepoName="myapp"
aws cloudformation deploy --stack-name Repo-$RepoName --template-file TemplatePipeline.yaml \
--parameter-overrides RepositoryName=$RepoName Setup=true \
--region us-east-1 --capabilities CAPABILITY_NAMED_IAM

When the stack is created, in addition to the CodeCommit repository, the CodeBuild projects and the master branch pipeline are also created. By default, a CodeCommit repository is created empty, with no branch. When the repository is populated with the seed.zip file, the master branch is created.

Resources

Access the CodeCommit repository to see the seed files on the master branch. Access the CodePipeline console to see that there’s a new pipeline with the same name as the repository. This pipeline contains the CI+CD stages (homolog and prod).

 

6. Create the develop branch

To simulate a real development scenario, create a new branch called develop based on the master branch. In the GitFlow concept these two (master and develop) branches are fixed and never deleted.

When this new branch is created, the Events rule identifies that there’s a change on this repository and triggers the CreatePipeline Lambda function to create a new pipeline for this branch. Access the CodePipeline console to see that there’s a new pipeline with the name of the repository plus the branch name. This pipeline contains the CI+CD stages (Dev).

# Configure Git Credentials using AWS CLI Credential Helper
mkdir myapp
cd myapp
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone the CodeCommit repository
# You can get the URL in the CodeCommit Console
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp .

# Create the develop branch
# For more details: https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-branch.html
git checkout -b develop
git push origin develop

 

7. Create the first feature branch

Now that there are two main and fixed branches (master and develop), you can create a feature branch. In the GitFlow concept, feature branches have a short lifetime and are frequently merged to the develop branch. This type of branch only exists during the development period. When the feature development is finished, it is merged to the develop branch and the feature branch is deleted.

# Create the feature-branch branch
# make sure that you are at develop branch
git checkout -b feature-abc
git push origin feature-abc

Just as with the develop branch, when this new branch is created, the Events rule triggers the CreatePipeline Lambda function to create a new pipeline for this branch. Access the CodePipeline console to see that there’s a new pipeline with the name of the repository plus the branch name. This pipeline contains only the CI stage, without a CD stage.

Pipelines-2

 

8. Create the first Pull Request

It’s possible to simulate the end of a feature development, when the branch is ready to be merged with the develop branch. Keep in mind that the feature branch has a short lifecycle:  when the merge is done, the feature branch is deleted along with its pipeline.

# Create the Pull Request
aws codecommit create-pull-request --title "My Pull Request" \
--description "Please review these changes by Tuesday" \
--targets repositoryName=$RepoName,sourceReference=feature-abc,destinationReference=develop \
--region us-east-1

 

9. Execute the Pull Request approval

To merge the feature branch to the develop branch, the Pull Request needs to be approved. In a real scenario, a teammate should do a peer review before approval.

# Accept the Pull Request
# You can get the Pull-Request-ID in the json output of the create-pull-request command
aws codecommit merge-pull-request-by-fast-forward --pull-request-id <PULL_REQUEST_ID_FROM_PREVIOUS_COMMAND> \
--repository-name $RepoName --region us-east-1

# Delete the feature-branch
aws codecommit delete-branch --repository-name $RepoName --branch-name feature-abc --region us-east-1

The new code is integrated with the develop branch. If there’s a conflict, it needs to be resolved. After that, the feature branch is deleted together with its pipeline. The Events rule triggers the CreatePipeline Lambda function to delete the pipeline for that branch. Access the CodePipeline console to see that the pipeline for the feature branch is deleted.

Pipelines

 

10. Cleanup

To remove the resources created as part of this blog post, follow these steps:

Delete Pipeline Stacks

# Delete all the pipeline Stacks created by CreatePipeline Lambda
Pipelines=$(aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --region us-east-1 --query 'StackSummaries[? StackStatus==`CREATE_COMPLETE` && starts_with(StackName, `Pipeline`) == `true`].[StackName]' --output text)
while read -r Pipeline rest; do aws cloudformation delete-stack --stack-name $Pipeline --region us-east-1 ; done <<< $Pipelines

Delete Repository Stacks

# Delete all the repository stacks created by Step 5.
Repos=$(aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --region us-east-1 --query 'StackSummaries[? StackStatus==`CREATE_COMPLETE` && starts_with(StackName, `Repo-`) == `true`].[StackName]' --output text)
while read -r Repo rest; do aws cloudformation delete-stack --stack-name $Repo --region us-east-1 ; done <<< $Repos

Delete Setup-Pipeline Stack

# Cleaning Bucket before Stack deletion 
aws s3 rm s3://"$(aws sts get-caller-identity --query Account --output text)"-templates --recursive

# Delete Setup Stack
aws cloudformation delete-stack --stack-name Setup-Pipeline --region us-east-1 

 

Conclusion

This blog post discussed how you can work with event-driven strategy and Infrastructure as Code to implement a multi-branch pipeline flow using CodePipeline. It demonstrated how an Events rule and Lambda function can be used to fully orchestrate the creation and deletion of pipelines.

 

About the Author

Henrique Bueno

Henrique Bueno

Henrique is a DevOps Consultant in the Brazilian Professional Services Team at Amazon Web Services. He has helped AWS customers design, build and deploy cloud native applications by following the Twelve-Factor App methodology.

Integrating SonarQube as a pull request approver on AWS CodeCommit

Post Syndicated from David Jackson original https://aws.amazon.com/blogs/devops/integrating-sonarqube-as-a-pull-request-approver-on-aws-codecommit/

Integrating SonarQube as a pull request approver on AWS CodeCommit

On Nov 25th, AWS CodeCommit launched a new feature that allows customers to configure approval rules on pull requests. Approval rules act as a gate on your source code changes. Pull requests which fail to satisfy the required approvals cannot be merged into your important branches. Additionally, CodeCommit launched the ability to create approval rule templates, which are rulesets that can automatically be applied to all pull requests created for one or more repositories in your AWS account. With templates, it becomes simple to create rules like “require one approver from my team” for any number of repositories in your AWS account.

A common problem for software developers is accidentally or unintentionally merging code with bugs, defects, or security vulnerabilities into important master branches. Once bad code is merged into a master branch, it can be difficult to remove. It’s also potentially costly if the code is deployed into production environments and causes outages or other serious issues. Using CodeCommit’s new features, adding required approvers to your repository pull requests can help identify and mitigate those issues before they are merged into your master branches.

The most rudimentary use of required approvers is to require at least one team member to approve each pull request. While adding human team members as approvers is an important part of the pull request workflow, this feature can also be used to require ‘robot’ approvers of your pull requests, and you can trigger them automatically on each new or updated pull request. Robotic approvers can help find issues that humans miss and enforce best practices regarding code style, test coverage, and more.

Customers have been asking us how we can integrate code review tools with AWS CodeCommit pull requests. I encourage you to check out Amazon CodeGuru Reviewer, which is a service that uses program analysis and machine learning to detect potential defects that are difficult for developers to find and recommends fixes in your Java code, and was launched in preview at the AWS Re:Invent 2019 conference. Another popular tool is SonarQube, which is an open-source platform for performing code quality analysis. It helps detect defects, bugs, and security vulnerabilities in your pull requests. This blog post shows you how to integrate SonarQube into the pull requests workflow.

This post shows…

Time to read: 10 minutes
Time to complete: 20 minutes
Cost to complete (estimated): $0.40/month for the secret, ~$0.02 per build on CodeBuild, and $0-1 for the CodeCommit user depending on current Free Tier status (at publication time)
Learning level: Intermediate (200)
Services used: AWS CodeCommit, AWS CodeBuild, AWS CloudFormation, Amazon Elastic Compute Cloud (EC2), Amazon CloudWatch Events, AWS Identity and Access Management, AWS Secrets Manager

Solution overview

In this solution, you create a CodeCommit repository that requires a successful SonarQube quality analysis before pull requests can be merged. You can create the required AWS resources in your account by using the provided AWS CloudFormation template. This template creates the following resources:

  • A new CodeCommit repository, containing a starter Java project that uses the Apache Maven build system, as well as a custom buildspec.yml file to facilitate communication with SonarQube and CodeCommit.
  • An AWS CodeBuild project which invokes your SonarQube instance on build, then reports the status of the analysis back to CodeCommit.
  • An Amazon CloudWatch Events Rule, which listens for pullRequestCreated and pullRequestSourceBranchUpdated events from CodeCommit, and invokes your CodeBuild project.
  • An AWS Secrets Manager secret, which securely stores and provides the username and password of your SonarQube user to the CodeBuild project on-demand.
  • IAM roles for CodeBuild and CloudWatch events.

Although this tutorial showcases a Java project with Maven, the design principles should also apply for other languages and build systems with SonarQube integrations.

Design

The following diagram shows the flow of data, starting with a new or updated pull request on CodeCommit. CloudWatch Events listens for these events and invokes your CodeBuild project. The CodeBuild container clones your repository source commit, performs a Maven install, and invokes the quality analysis on SonarQube, using the credentials obtained from AWS Secrets Manager. When finished, CodeBuild leaves a comment on your pull request, and potentially approves your pull request.

 

Diagram showing the flow of data between the AWS service components, as well as the SonarQube.
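The sample buildspec in the repository performs this reporting step; the following is an independent, illustrative boto3 sketch of how a build step could do it, assuming the pull request ID and repository name are passed to the build (for example, as environment variables set from the triggering event). It posts a comment on the pull request and, if the quality gate passed, records an approval:

import os
import boto3

codecommit = boto3.client('codecommit')

# Assumed to be supplied to the build, for example as environment variables
pull_request_id = os.environ['PULL_REQUEST_ID']
repository_name = os.environ['REPOSITORY_NAME']
quality_gate_passed = True  # result of the SonarQube analysis, determined earlier in the build

pr = codecommit.get_pull_request(pullRequestId=pull_request_id)['pullRequest']
target = pr['pullRequestTargets'][0]

# Leave a comment summarizing the analysis result
codecommit.post_comment_for_pull_request(
    pullRequestId=pull_request_id,
    repositoryName=repository_name,
    beforeCommitId=target['destinationCommit'],
    afterCommitId=target['sourceCommit'],
    content='SonarQube quality gate passed.' if quality_gate_passed else 'SonarQube quality gate failed.'
)

# Approve the pull request only when the quality gate passed
if quality_gate_passed:
    codecommit.update_pull_request_approval_state(
        pullRequestId=pull_request_id,
        revisionId=pr['revisionId'],
        approvalState='APPROVE'
    )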

Prerequisites

For this walkthrough, you require:

  • An AWS account
  • A SonarQube server instance (Optional setup instructions included if you don’t have one already)

SonarQube instance setup (Optional)

This tutorial shows a basic setup of SonarQube on Amazon EC2 for informational purposes only. It does not include details about securing your Amazon EC2 instance or SonarQube installation. Please be sure you have secured your environments before placing sensitive data on them.

  1. To start, get a SonarQube server instance up and running. If you are already using SonarQube, feel free to skip these instructions and just note down your host URL and port number for later. If you don’t have one already, I recommend using a fresh Amazon EC2 instance for the job. You can get up and running quickly in just a few commands. I’ve selected an Amazon Linux 2 AMI for my EC2 instance.
  2. Download and install the latest JDK 11 module. Because I am using an Amazon Linux 2 EC2 instance, I can directly install Amazon Corretto 11 with yum.

$ sudo yum install java-11-amazon-corretto-headless

  3. After it’s installed, verify you’re using this version of Java:

$ sudo alternatives --config java

  4. Choose the Java 11 version you just installed.
  5. Download the latest SonarQube installation.
  6. Copy the zip-file onto your Amazon EC2 instance.
  7. Unzip the file into your home directory:

$ unzip sonarqube-8.0.zip -d ~/

This will copy the files into a directory like /home/ec2-user/sonarqube-8.0.

Now, start the server!

$ ~/sonarqube-8.0/bin/linux-x86-64/sonar.sh start

This should start a SonarQube server running on an address like http://<instance-address>:9000. It may take a few moments for the server to start.

Steps

Follow these steps to create automated pull request approvals.

Create a SonarQube User

Get started by creating a SonarQube user from your SonarQube webpage. This user is the identity used by the robot caller to your SonarQube for this workflow.

  1. Go to the Administration tab on your SonarQube instance.
  2. Choose Security, then Users, as shown in the following screenshot.
     Screenshot showing where to find the user management options inside SonarQube.
  3. Choose Create User. Fill in the form, and note down the Login and Password. You will need to provide these values when creating the following AWS resources.
  4. Choose Create.

Create AWS resources

For this integration, you need to create some AWS resources:

  • AWS CodeCommit repository
  • AWS CodeBuild project
  • Amazon CloudWatch Events rule (to trigger builds when pull requests are created or updated)
  • IAM role (for CodeBuild to assume)
  • IAM role (for CloudWatch Events to assume and invoke CodeBuild)
  • AWS Secrets Manager secret (to store and manage your SonarQube user credentials)

I have created an AWS CloudFormation template to provision these resources for you. You can download the template from the sample repository on GitHub for this blog demo. This repository also contains the sample code which will be uploaded to your CodeCommit repository. The contents of this GitHub repository will automatically be copied into your new CodeCommit repository for you when you create this CloudFormation stack. This is because I’ve conveniently uploaded a zip-file of the contents into a publicly-readable S3 bucket, and am using it within this CloudFormation template.

  1. Download or copy the CloudFormation template from GitHub and save it as template.yaml on your local computer.
  2. At the CloudFormation console, choose Create Stack (with new resources).
  3. Choose Upload a template file.
  4. Choose Choose file and select the template.yaml file you just saved.
  5. Choose Next.
  6. Give your stack a name, optionally update the CodeCommit repository name and description, and paste in the username and password of the SonarQube user you created.
  7. Choose Next.
  8. Review the stack options and choose Next.
  9. On Step 4, review your stack, acknowledge the required capabilities, and choose Create Stack.
  10. Wait for the stack creation to complete before proceeding.
  11. Before leaving the AWS CloudFormation console, choose the Resources tab and note down the newly created CodeBuildRole’s Physical Id, as shown in the following screenshot. You need this in the next step.
      Screenshot showing the Physical Id of the CodeBuild role created through CloudFormation.

Create an Approval Rule Template

Now that your resources are created, create an Approval Rule Template in the CodeCommit console. This template allows you to define a required approver for new pull requests on specific repositories.

  1. On the CodeCommit console home page, choose Approval rule templates in the left panel. Choose Create template.
  2. Give the template a name (like Require SonarQube approval) and optionally, a description.
  3. Set the number of approvals needed as 1.
  4. Under Approval pool members, choose Add.
  5. Set the approver type to Fully qualified ARN. Since the approver will be the identity obtained by assuming the CodeBuild execution role, your approval pool ARN should be the following string:
    arn:aws:sts::<Your AccountId>:assumed-role/<Your CodeBuild IAM role name>/*
    The CodeBuild IAM role name is the Physical Id of the role you created and noted down above. You can also find the full name either in the IAM console or the AWS CloudFormation stack details. Adding this role to the approval pool allows any identity assuming your CodeBuild role to satisfy this approval rule.
  6. Under Associated repositories, find and choose your repository (PullRequestApproverBlogDemo). This ensures that any pull requests subsequently created on your repository will have this rule by default.
  7. Choose Create.

Update the repository with a SonarQube endpoint URL

For this step, you update your CodeCommit repository code to include the endpoint URL of your SonarQube instance. This allows CodeBuild to know where to go to invoke your SonarQube.

You can use the AWS Management Console to make this code change.

  1. Head back to the CodeCommit home page and choose your repository name from the Repositories list.
  2. You need a new branch on which to update the code. From the repository page, choose Branches, then Create branch.
  3. Give the new branch a name (such as update-url) and make sure you are branching from master. Choose Create branch.
  4. You should now see two branches in the table. Choose the name of your new branch (update-url) to start browsing the code on this branch. On the update-url branch, open the buildspec.yml file by choosing it.
  5. Choose Edit to make a change.
  6. In the pre_build steps, modify line 17 with your SonarQube instance URL and listen port number, as shown in the following screenshot.
     Screenshot showing buildspec yaml code.
  7. To save, scroll down and fill out the author, email, and commit message. When you’re happy, commit this by choosing Commit changes.

Create a Pull Request

You are now ready to create a pull request!

  1. From the CodeCommit console main page, choose Repositories and PullRequestApproverBlogDemo.
  2. In the left navigation panel, choose Pull Requests.
  3. Choose Create pull request.
  4. Select master as your destination branch, and your new branch (update-url) as the source branch.
  5. Choose Compare.
  6. Give your pull request a title and description, and choose Create pull request.

It’s time to see the magic in action. Now that you’ve created your pull request, you should already see that your pull request requires one approver but is not yet approved. This rule comes from the template you created and associated earlier.

You’ll see images like the following screenshot if you browse through the tabs on your pull request:

Screenshot showing that your pull request has 0 of 1 rule satisfied, with 0 approvals. Screenshot showing a table of approval rules on this pull request which were applied by a template. Require SonarQube approval is listed but not yet satisfied.

Thanks to the CloudWatch Events Rule, CodeBuild should already be hard at work cloning your repository, performing a build, and invoking your SonarQube instance. It is able to find the SonarQube URL you provided because CodeBuild is cloning the source branch of your pull request. If you choose to peek at your project in the CodeBuild console, you should see an in-progress build.

Once the build has completed, head back over to your CodeCommit pull request page. If all went well, you’ll be able to see that SonarQube approved your pull request and left you a comment. (Or alternatively, failed and also left you a comment while not approving).

The Activity tab should resemble that in the following screenshot:

Screenshot showing that a comment was made by SonarQube through CodeBuild, and that the quality gate passed. The comment includes a link back to the SonarQube instance.

The Approvals tab should resemble that in the following screenshot:

Screenshot of Approvals tab on the pull request. The approvals table shows an approval by the SonarQube and that the rule to require SonarQube approval is satisfied.

Suppose you need to make a change to your pull request. If you perform updates to your source branch, the approval status will be reset. As your push completes, a new SonarQube analysis will begin just as it did the first time.

Once your SonarQube thresholds are satisfied and your pull request is approved, feel free to merge it!

Cleanup

To avoid incurring additional charges, you may want to delete the AWS resources you created for this project. To do this, simply navigate to the CloudFormation console, select the stack you created above, and choose Delete. If you are sure you want to delete, confirm by choosing Delete stack. CloudFormation will delete all the resources you created with this stack.

Conclusion

In this tutorial, you created a workflow to watch for pull request changes to your repository, triggered a CodeBuild project execution which invoked your SonarQube for code quality analysis, and then reported back to CodeCommit to approve your pull request.

I hope this guide illustrates the potential power of combining pull request approval rules with robotic approvers. While this example is specifically about integrating SonarQube, the same pattern can be used to invoke other robotic approvers using CodeBuild, or by invoking an AWS Lambda function instead.

This tutorial was written and tested using SonarQube Version 8.0 (build 29455).

Transforming DevOps at Broadridge on AWS

Post Syndicated from Som Chatterjee original https://aws.amazon.com/blogs/devops/transforming-devops-for-a-fintech-on-aws/

by Tom Koukourdelis (Broadridge – Vice President, Head of Global Cloud Platform Development and Engineering), Sreedhar Reddy (Broadridge – Vice President, Enterprise Cloud Architecture)

We have seen large enterprises in all industry segments meaningfully utilizing AWS to build new capabilities and deliver business value. While doing so, enterprises have to balance existing systems, processes, tools, and culture while innovating at pace with industry disruptors. Broadridge Financial Solutions, Inc. (NYSE: BR) is no exception. Broadridge is a $4 billion global FinTech leader and a leading provider of investor communications and technology-driven solutions to banks, broker-dealers, asset and wealth managers, and corporate issuers.

This blog post explores how we adopted AWS at scale while being secured and compliant, as well as delivering a high degree of productivity for our builders on AWS. It also describes the steps we took to create technical (a cloud solution as a foundation based on AWS) and procedural (organizational) capabilities by leveraging AWS cloud adoption constructs. The improvement in our builder productivity and agility directly contributes to rolling out differentiated business capabilities addressing our customer needs in a timely manner. In this post, we share real-life learnings and takeaways to adopt AWS at scale, transform business and application team experiences, and deliver customer delight.

Background

At Broadridge we have a number of distributed and mainframe systems supporting multiple financial services domains and sub-domains such as post trade, proxy communications, financial and regulatory reporting, portfolio management, and financial operations. The majority of these systems were built and deployed years ago at on-premises data centers all over the US and abroad.

Builder personas at Broadridge are diverse in terms of location, culture, and the technology stack they use to build and support applications (we use a number of front-end JS frameworks; .NET; Java; ColdFusion for web development; ORMs for data entity relational mapping; IBM MQ; Apache Camel for messaging; databases like SQL, Oracle, Sybase, and other open source stacks for transaction management; databases, and batch processing on virtualized and bare metal instances). With more than 200 on-premises distributed applications and mainframe systems across front-, mid-, and back-office ecosystems, we wanted to leverage AWS to improve efficiency and build agility, and to reduce costs. The ability to reach customers at new geographies, reduced time to market, and opportunities to build new business competencies were key parameters as well.

Broadridge’s core tenets for cloud adoption

When AWS adoption within Broadridge attained a critical mass (known as the Foundation stage of adoption), the business and technology leadership teams defined our cloud adoption posture and shared it with teams across the organization using the following tenets. Enterprises looking to adopt AWS at scale should define similar tenets that fit their organizations, in plain language understandable by everyone across the board.

  • Iterate: Understanding that we cannot disrupt ongoing initiatives, we adopted a small, iterative approach of moving workloads to the cloud in waves (rinse and repeat). Long-drawn, capital-intensive big bangs were to be avoided.
  • Fully automate: Starting from infrastructure deployment to application build, test, and release, we decided early on that automation and no-touch deployment are the right approach both to leverage cloud capabilities and to fuel a shift toward a matured DevOps culture.
  • Trust but verify only exceptions: Security and regulatory compliance are paramount for an organization like Broadridge. Guardrails (such as service control policies, managed AWS Config rules, multi-account strategy) and controls (such as PCI, NIST control frameworks) are iteratively developed to baseline every AWS account and AWS resource deployed. Manual security verification of workloads isn’t needed unless an exception is raised. Defense in depth (distancing attack surface from sensitive data and resources using multi-layered security) strategies were to be applied.
  • Go fast; re-hosting is acceptable: Not every workload needs to go through years of rewriting and refactoring before it is deemed suitable for the cloud. Minor tweaking (light touch re-platforming) to go fast (such as on-premises Oracle to RDS for Oracle) is acceptable.
  • Timeliness and small wins are key: Organizations spend large sums of capital to completely rewrite applications, and by the time they are done, the business goals and customer expectations have changed. That leads to material dissatisfaction among customers. We wanted to avoid that by setting small, measurable targets.
  • Cloud fluency: Investment in training and upskilling builders and leaders across the organization (developers, infra-ops, sec-ops, managers, salesforce, HR, and executive leadership) was to be made to build fluency on the cloud.

The first milestone

The first milestone in our adoption journey was synonymous with the Project stage of adoption and had the following characteristics.

A controlled sprawl of shadow IT

We first gave small teams with little to no exposure to critical business functions (such as customer data and SLA-oriented workloads) sandboxes to test out proofs of concept (PoCs) on AWS. We created the cloud sandboxes with least privilege, and added additional privileges upon request after verification. During this time, our key AWS usage characteristics were:

  • Manual AWS account setup with least privilege
  • Manual IAM role creation with role boundaries and authentication and authorization from the existing enterprise Active Directory
  • Integration with existing Security Information and Event Management (SIEM) tools to audit role sprawl and config changes
  • Proofs of concepts only
  • Account tagging for chargeback and tracking purposes
  • No automated build, test, deploy, or integration with existing delivery pipeline
  • Small and definitive timeframes for PoCs with defined goals

A typical AWS environment at this stage will resemble that shown in the following diagram:

Representative AWS usage during first milestone

As shown above, at this time the corporate assets were connected to a highly restrictive AWS environment through VPN. Access to the AWS environment was set up based on AWS identity primitives, or IAM roles mapped to and federated with the on-premises Active Directory. There was a single VPC set up for a sandbox account with no egress to the internet. No customer data was hosted in this AWS environment, and the environment was connected with our SIEM of choice.

Early adopters became first educators and mentors

Members of the first teams to carry out proofs of concept on AWS shared learnings with each other and with the leadership team within Broadridge. This helped build communities of practice (CoPs) over time. The initial CoPs established were for networking and security, and were later extended to various practices like Terraform, Chef, and Jenkins.

Tech PMO team within Broadridge as the quasi-central cloud team

Ownership is vital, no matter how small the effort or how insignificant the impact of risky experimentation. The need to own account setup, role creation, integration with on-premises AD and SIEM, and oversight to ensure that the experimentation did not pose any risk to the brand led us to build a central cloud team with experienced AWS and infrastructure practitioners. This team created a process for cloud migration with the first manual guardrails of allowed and disallowed actions, manual interventions, and checkpoints built into every step.

At this stage, a representative pattern of work products across teams resembles what is shown below.

Work products across teams during initial stages of AWS usage

As the diagram suggests, individual application teams built overlapping—and, in many cases, identical—technical building blocks across the teams. This was acceptable as the teams were experimenting and running PoCs on AWS. In an actual production application delivery, the blocks marked with a * would be considered technical and functional waste—that is, undifferentiated lift which increases the cost of doing business.

The second milestone

In hindsight, this is perhaps the most important milestone in our cloud adoption journey. This step was marked by the following key characteristics:

  • Every new team doing PoCs rebuilt the same building blocks: This included networking (VPCs and security groups), identity primitives (accounts, roles, and policies), monitoring (Amazon CloudWatch setup and custom metrics), and compute (images with org-mandated security patches).
  • Teams usually asked the same fundamental questions first: These included: What is an ideal CIDR block range? How do we integrate with SIEM? How do we spin up web servers on Amazon EC2? How do we secure access to data? How do we set up workload monitoring?
  • Security reviews rarely found new security gaps but added time to the process: A central security group within the central cloud team reviewed every new account request and every new service usage request without finding new security gaps when the application team used the baseline guardrails.
  • Manual effort was spent on tagging, chargeback, and other approvals: A portion of the application PoC/minimum viable product (MVP) lifecycle was spent on housekeeping. While housekeeping was necessary, the effort spent was undifferentiated.

The following diagram represents the efforts for every team during the first phase.

Team wise efforts showing duplicative work

As shown above, every application team spent effort building nearly the same capabilities before they could begin developing their team-specific application functionality and assets. These common blocks of work are undifferentiated and lead to wasted effort, which also varies depending on the efficiency of the team.

During this step, learnings from the PoCs led us to establish the tenets shared earlier in this post. To address the learnings, Broadridge established a cloud platform team. The cloud platform team, also referred to as the cloud enablement engine (CEE), is a team of builders who create the foundational building blocks on AWS that address common infrastructure, security, monitoring, auditing, and break-glass controls. At the same time, we established a cloud business office (CBO) as a liaison between the application and business teams and the CEE. CBO exists to manage and prioritize foundational requirements from multiple application teams as they go online on AWS and helps create the product backlog for CEE.

Cloud Enablement Engine Responsibilities:

  • Build out foundational building blocks utilizing AWS multi-account strategy
  • Build security guardrails, compliance controls, infrastructure as code automation, auditing and monitoring controls
  • Implement the cloud platform backlog that funnels from the CBO as common asks from application teams
  • Work with our AWS team to understand service roadmap, future releases, and provide feedback

Cloud Business Office Responsibilities:

  • Identify and prioritize repeating technical building blocks that cut across multiple teams
  • Establish acceptable architecture patterns based on application use cases
  • Manage cloud programs to ensure CEE deliverables and business expectations align
  • Identify skilling needs, budget, and track spend
  • Contribute to the cloud platform backlog
  • Work with AWS team to understand service roadmap, future releases, and provide feedback

These teams were set up to scale AWS adoption, put building blocks into the hands of the application teams, and ultimately deliver differentiated capabilities to Broadridge’s business teams and end customers. The following diagram illustrates the relationship and modus operandi among the teams:

CEE and CBO working model

Upon establishing the conceptual working model, the CBO and CEE teams looked at solutions from AWS to enable them to achieve the working model quickly. The starting point was AWS Landing Zone (ALZ). ALZ is an AWS solution based on the AWS multi-account strategy. It is a set of vetted constructs and best practices that we use as mechanisms to accelerate AWS adoption.

AWS multi-account strategy

The multi-account strategy employs best practices around separation of concerns, reduction of blast radius, account setup based on Software Development Life Cycle (SDLC) phases, and base operational roles for auditing, monitoring, security, and compliance, as shown in the above diagram. This strategy defines the need for centralized shared or core accounts, which work as the master accounts for monitoring, governance, security, and auditing. A number of AWS services like Amazon GuardDuty, AWS Security Hub, and AWS Config are configured in these centralized accounts. Spoke or child accounts are vended per a team’s requirements and are spun up with these governance, monitoring, and security defaults, connected to the centralized accounts for log capture, threat detection, configuration management, and security management.
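
To make the hub-and-spoke relationship concrete, here is a minimal, illustrative boto3 sketch of how a centralized security account could enroll a newly vended spoke account as an Amazon GuardDuty member. The account ID, email address, and Region are placeholders, and an actual Landing Zone deployment wires this up through its own automation rather than an ad hoc script:

# Illustrative sketch only: enroll a spoke account as a GuardDuty member
# from the centralized security account. All values below are placeholders.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Reuse the security account's existing detector (assumes one already exists).
detector_id = guardduty.list_detectors()["DetectorIds"][0]

spoke_account_id = "111122223333"               # placeholder spoke account
spoke_account_email = "spoke-root@example.com"  # placeholder root email

# Register the spoke account and send it a membership invitation.
guardduty.create_members(
    DetectorId=detector_id,
    AccountDetails=[{"AccountId": spoke_account_id, "Email": spoke_account_email}],
)
guardduty.invite_members(
    DetectorId=detector_id,
    AccountIds=[spoke_account_id],
    Message="Joining centralized threat detection",
)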

The third milestone

The third milestone is synonymous with the Foundation stage of adoption.

Using the ALZ construct, our CEE team developed a core set of principles to be used by every application team. Based on our core tenets, the CEE team built out an entry point (a web-based UI workflow application). This web UI was the entry point for any application team requesting an environment within AWS for experimentation or to begin the application development life cycle. Simplistically, the web UI sat on top of an automation engine built using APIs from AWS, ALZ components (Account Vending Machine, Shared Services Account, Logging Account, Security Account, default security groups, default IAM roles, and AD groups), and Terraform-based code. The CBO team helped establish the common architecture patterns that were codified into this engine.

Team on-boarding workflow using foundational building blocks on AWS

An Angular-based web UI is the starting point for application teams to request AWS accounts. The web UI asks a number of questions to validate the type of account requested, along with its intended purpose, ingress/egress requirements, high availability and disaster recovery requirements, and the business unit for chargeback and ownership purposes. Once all the information is entered, it sends out a notification based on a preset organizational dispatch matrix rule. Upon receiving the request, the approver can approve it or ask further clarifying questions. Once the questions are answered satisfactorily, the approver approves the account vending request and Terraform code runs to create the default account.

When an account is created through this process, the following defaults are set up for a secure environment for development, testing, and staging. Similar guardrails are deployed in the production accounts as well.

  • Creates a new account under an existing AWS Organizational Unit (OU) based on the input parameters. Tags the chargeback codes, custom tags, and also integrates the resources with existing CMDB
  • Connects the new account to the master shared services and logging account as per the AWS Landing Zone constructs
  • Integrates with the CloudWatch event bus as a sender account
  • Runs stsAssumeRole commands on the new account to create infosec cross-account roles
  • Defines actions, conditions, role limits, and account policies
  • Creates environment variables related to the account in the parameter store within AWS Systems Manager
  • Connects the new account to TrendMicro for AV purposes
  • Attaches the default VPC of the new account to an existing AWS Transit Gateway
  • Generates a Splunk key for the account to store in the Splunk KV store
  • Uses AWS APIs to attach Enterprise support to the new account
  • Creates or amends a new AD group based on the IAM role
  • Integrates as an Amazon Macie member account
  • Enables AWS Security Hub for the account by running an enable-security-hub call
  • Sets up Chef runner for the new account
  • Runs account setting lock procedures to set Amazon S3 public access settings and the EBS default encryption setting
  • Enables a firewall by setting AWS WAF rules for the account
  • Integrates the newly created account with CloudHealth and Dome9
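
A few of the defaults above can be illustrated with a short, hedged boto3 sketch: assume a cross-account role into the new account, then turn on EBS default encryption, the account-level S3 public access block, and AWS Security Hub. The role name and account ID are placeholders, and the actual pipeline drives these steps through Terraform rather than a script like this:

# Illustrative sketch only: apply a few of the account defaults listed above.
# The role name and account ID are placeholders; the real pipeline uses Terraform.
import boto3

new_account_id = "111122223333"  # placeholder for the newly vended account

# Assume a cross-account role in the new account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::" + new_account_id + ":role/AccountBaselineRole",  # placeholder role
    RoleSessionName="account-baseline",
)["Credentials"]

session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
    region_name="us-east-1",
)

# Turn on EBS encryption by default for the account in this Region.
session.client("ec2").enable_ebs_encryption_by_default()

# Block public access to S3 at the account level.
session.client("s3control").put_public_access_block(
    AccountId=new_account_id,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enable AWS Security Hub in the new account.
session.client("securityhub").enable_security_hub()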

Deploying all these guardrails in any new accounts removes the need for manual setup and intervention. This gives application developers the needed freedom to stop worrying about infrastructure and access provisioning while giving them a higher speed to value.

Using these technical and procedural cloud adoption constructs, we have been able to reduce application onboarding time. This has led to quicker delivery of business capability, with the application teams focusing only on what differentiates their business rather than repeatedly building undifferentiated work products. It has also led to the creation of mature building blocks over time for use by the application teams. Using these building blocks, the teams are also modernizing applications by iteratively replacing old application blocks.

Conclusion

In summary, we are able to deliver better business outcomes and differentiated customer experience by:

  • Building common asks as reusable and automated enterprise assets and improving the overall enterprise-wide maturity by indexing on and growing these assets.
  • Depending on an experienced team to deliver baseline operational controls and guardrails.
  • Improving our security posture with higher-level and managed AWS security services instead of rebuilding everything from the ground up.
  • Using the Cloud Business Office to improve funneling of common asks. This helps the next team on AWS to benefit from a readily available set of approved services and application blueprints.

We will continue to build on and mature these reusable building blocks by using AWS services and new feature releases.

 

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

Integrating CodePipeline with on-premises Bitbucket Server

Post Syndicated from Alex Rosa original https://aws.amazon.com/blogs/devops/integrating-codepipeline-with-on-premises-bitbucket-server/

This blog post demonstrates how to integrate AWS CodePipeline with an on-premises Bitbucket Server. If you want to integrate with Bitbucket Cloud, see Integrating Git with AWS CodePipeline. The AWS Lambda function provided can get the source code from a Bitbucket Server repository whenever the user sends a new code push and store it in a designated Amazon Simple Storage Service (Amazon S3) bucket.

The Bitbucket Server integration uses webhooks configured in the Bitbucket repository. Webhooks are ideal for this case and avoid the need for performing frequent polling to check for changes in the repository.

Some security protections are available with this solution:

  • The Amazon S3 bucket has encryption enabled using SSE-AES, and every object created is encrypted by default.
  • The Lambda function accepts only events signed by the Bitbucket Server.
  • All environment variables used by the Lambda function are encrypted at rest using AWS KMS.
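
As a rough illustration of that signature check, the following sketch shows how a Lambda handler might validate an incoming webhook event against the shared secret before doing anything else. The header name and event shape are assumptions based on Bitbucket Server’s HMAC-SHA256 webhook signatures and on how API Gateway proxies requests; the function in the GitHub repository is the authoritative implementation:

# Illustrative sketch only: verify a Bitbucket Server webhook signature
# (HMAC-SHA256 with the shared secret) before trusting the event.
# The header name and event shape are assumptions, not the repo's exact code.
import hashlib
import hmac
import os

def is_signature_valid(event):
    secret = os.environ["BITBUCKET_SECRET"].encode("utf-8")
    body = (event.get("body") or "").encode("utf-8")
    # Bitbucket Server sends something like "sha256=<hexdigest>" in X-Hub-Signature.
    received = event.get("headers", {}).get("X-Hub-Signature", "")
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(received, expected)

def handler(event, context):
    if not is_signature_valid(event):
        return {"statusCode": 401, "body": "invalid signature"}
    # ...continue: call the Bitbucket API and copy the branch archive to S3.
    return {"statusCode": 200, "body": "ok"}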

Overview

During the creation of the CloudFormation stack, you can select either Amazon API Gateway or Elastic Load Balancing to communicate with the Lambda function. The following diagram shows how the integration works.

 

This diagram explains the solution flow, from a user’s code push to the Bitbucket server, through the CodePipeline trigger, and what happens in between.

 

  1. The user pushes code to the Bitbucket repository.
  2. Based on that user action, the Bitbucket server generates a new webhook event and sends it to Elastic Load Balancing or API Gateway based on which endpoint type you selected during AWS CloudFormation stack creation.
  3. API Gateway or Elastic Load Balancing forwards the request to the Lambda function, which checks the message signature using the secret configured in the webhook. If the signature is valid, then the Lambda function moves to the next step.
  4. The Lambda function calls the Bitbucket server API and requests that it generate a ZIP package with the content of the branch modified by the user in Step 1.
  5. The Lambda function sends the ZIP package to the Amazon S3 bucket.
  6. CodePipeline is triggered when it detects a new or updated file in the Amazon S3 bucket path.
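
Steps 4 and 5 can be sketched in a few lines of Python. This is not the repository’s actual function (which is written in Node.js), and the archive endpoint path, environment variable names, and key layout are assumptions chosen to mirror the Project Name/Repository Name/Branch Name.zip structure described later in this post:

# Illustrative sketch only: download a branch archive from Bitbucket Server
# and copy it to S3. The REST path, env var names, and key layout are assumptions.
import os
import urllib.request

import boto3

def copy_branch_to_s3(project, repo, branch):
    base_url = os.environ["BITBUCKET_SERVER_URL"]  # e.g. https://server:port
    token = os.environ["BITBUCKET_TOKEN"]          # personal access token
    bucket = os.environ["S3_BUCKET"]               # pipeline source bucket

    # Archive endpoint exposed by recent Bitbucket Server versions (assumed path).
    url = (base_url + "/rest/api/1.0/projects/" + project + "/repos/" + repo
           + "/archive?at=refs/heads/" + branch + "&format=zip")
    request = urllib.request.Request(url, headers={"Authorization": "Bearer " + token})
    archive = urllib.request.urlopen(request).read()

    # Key layout mirrors Project Name/Repository Name/Branch Name.zip.
    key = project + "/" + repo + "/" + branch + ".zip"
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=archive)
    return key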

Requirements

Before starting the solution setup, make sure you have:

  • An Amazon S3 bucket available to store the Lambda function setup files
  • NPM or Yarn to install the package dependencies
  • AWS CLI

Setup

Follow these steps for setup.

Creating a personal token on the Bitbucket server

Create a personal token on the Bitbucket server that the Lambda function uses to access the repository.

  1. Log in to the Bitbucket server.
  2. Choose your user avatar, then choose Manage Account.
  3. On the Account screen, choose Personal access tokens.
  4. Choose Create a token.
  5. Fill out the form with the token name. In the Permissions section, leave Read for Projects and Repositories as is.
  6. Choose Create to finish.

Launch a CloudFormation stack

Using the steps below, you will upload the Lambda function and Lambda layer ZIP files to an Amazon S3 bucket and launch the AWS CloudFormation stack to create the resources in your AWS account.

1. Clone the Git repository containing the solution source code:

git clone https://github.com/aws-samples/aws-codepipeline-bitbucket-integration.git

2. Install the NodeJS packages with npm:

cd code
npm install
cd ..

3. Prepare the packages for deployment.

aws cloudformation package --template-file ./infra/infra.yaml --s3-bucket your_bucket_name --output-template-file package.yaml

4. Edit the AWS CloudFormation parameters file.

Open the file located at infra/parameters.json in your favorite text editor and replace the parameters as follows:

BitbucketSecret – Bitbucket webhook secret used to sign webhook events. You should define the secret and use the same value here and in the Bitbucket server webhook.
BitbucketServerUrl – URL of your Bitbucket Server, such as https://server:port.
BitbucketToken – Bitbucket server personal token used by the Lambda function to access the Bitbucket API.
EndpointType – Select the type of endpoint to integrate with the Lambda function. It can be the Application Load Balancer or the API Gateway.
LambdaSubnets – Subnets where the Lambda function runs.
LBCIDR – CIDR allowed to communicate with the Load Balancer. It should allow the Bitbucket server IP address. Leave it blank if you are using the API Gateway endpoint type.
LBSubnets – Subnets where the Application Load Balancer runs. Leave it blank if you are using the API Gateway endpoint type.
LBSSLCertificateArn – SSL Certificate to associate with the Application Load Balancer. Leave it blank if you are using the API Gateway endpoint type.
S3BucketCodePipelineName – Amazon S3 bucket name that this stack creates to store the Bitbucket repository content.
VPCID – VPC ID where the Application Load Balancer and the Lambda function run.
WebProxyHost – Hostname of your proxy server used by the Lambda function to access the Bitbucket server, such as myproxy.mydomain.com. If you don’t need a web proxy, leave it blank.
WebProxyPort – Port of your proxy server used by the Lambda function to access the Bitbucket server, such as 8080. If you don’t need a web proxy, leave it blank.

5. Create the CloudFormation stack:

aws cloudformation create-stack --stack-name CodePipeline-Bitbucket-Integration --template-body file://package.yaml --parameters file://infra/parameters.json --capabilities CAPABILITY_NAMED_IAM

Creating a webhook on the Bitbucket Server

Next, create the webhook on Bitbucket server to notify the Lambda function of push events to the repository:

  1. Log into the Bitbucket server and navigate to the repository page.
  2. Choose Repository settings.
  3. Select Webhook.
  4. Choose Create webhook.
  5. Fill out the form with the name of the webhook, such as CodePipeline.
  6. Fill out the URL field with the API Gateway or Load Balancer URL. To obtain this URL, choose the Outputs tab of the AWS CloudFormation stack.
  7. Fill out the Secret field with the same value used in the AWS CloudFormation stack.
  8. In the Events section, ensure Push is selected.
  9. Choose Create to finish.
  10. Repeat these steps for each repository in which you want to enable the integration.

Configuring your pipeline

Finally, change your pipeline on CodePipeline to use the Amazon S3 bucket created by the AWS CloudFormation stack as the source of your pipeline.

The Lambda function uploads the files to the Amazon S3 bucket using the following path structure:

Project Name/Repository Name/Branch Name.zip

Now, every time someone pushes code to the Bitbucket repository, your pipeline starts automatically.

Cleaning up

If you want to remove the integration and clean up the resources created at AWS, you need to delete the CloudFormation stack. Run the command below to delete the stack and associated resources.

aws cloudformation delete-stack --stack-name CodePipeline-Bitbucket-Integration 

Conclusion

This post demonstrated how to integrate your on-premises Bitbucket Server with CodePipeline.

Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy

Post Syndicated from Noha Ghazal original https://aws.amazon.com/blogs/devops/setting-up-a-ci-cd-pipeline-by-integrating-jenkins-with-aws-codebuild-and-aws-codedeploy/

In this post, I explain how to use the Jenkins open-source automation server to deploy AWS CodeBuild artifacts with AWS CodeDeploy, creating a functioning CI/CD pipeline. When properly implemented, the CI/CD pipeline is triggered by code changes pushed to your GitHub repo, automatically fed into CodeBuild, then the output is deployed on CodeDeploy.

Solution overview

The functioning pipeline creates a fully managed build service that compiles your source code. It then produces code artifacts that can be used by CodeDeploy to deploy to your production environment automatically.

The deployment workflow starts by placing the application code on the GitHub repository. To automate this scenario, I added source code management to the Jenkins project under the Source Code section. I chose the GitHub option, which by design clones a copy from the GitHub repo content in the Jenkins local workspace directory.

In the second step of my automation procedure, I enabled a trigger for the Jenkins server using the “Poll SCM” option. This option makes Jenkins check the configured repository for any new commits/code changes with a specified frequency. In this testing scenario, I configured the trigger to run every two minutes. The automated Jenkins deployment process works as follows:

  1. Jenkins checks for any new changes on GitHub every two minutes.
  2. Change determination:
    1. If Jenkins finds no changes, Jenkins exits the procedure.
    2. If it does find changes, Jenkins clones all the files from the GitHub repository to the Jenkins server workspace directory.
  3. The AWS CodeBuild plugin zips the files and sends them to a predefined Amazon S3 bucket location, then initiates the CodeBuild project, which obtains the code from the S3 bucket. The project then creates the output artifact zip file, and stores that file again on the S3 bucket.
  4. The File Operation plugin deletes all the files cloned from GitHub. This keeps the Jenkins workspace directory clean before the output artifact is downloaded in the next step.
  5. The HTTP Request plugin downloads the CodeBuild output artifacts from the S3 bucket.
    I edited the S3 bucket policy to allow access from the Jenkins server IP address. See the following example policy:

    {
      "Version": "2012-10-17",
      "Id": "S3PolicyId1",
      "Statement": [
        {
          "Sid": "IPAllow",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::examplebucket/*",
          "Condition": {
            "IpAddress": {"aws:SourceIp": "x.x.x.x/x"}
          }
        }
      ]
    }

    Replace x.x.x.x/x with the IP address (CIDR) of the Jenkins server.
    
    

    This policy enables the HTTP request plugin to access the S3 bucket. This plugin doesn’t use the IAM instance profile or the AWS access keys (access key ID and secret access key).

  6. The output artifact is a compressed ZIP file. The CodeDeploy plugin requires the individual files to be available (unzipped) so that it can zip them itself and send them to the S3 bucket for the CodeDeploy deployment. For that, I used the File Operation plugin to perform the following:
    1. Unzip the CodeBuild zipped artifact output in the Jenkins root workspace directory. At this point, the workspace directory should include the original zip file downloaded from the S3 bucket from Step 5 and the files extracted from this archive.
    2. Delete the original .zip file, and leave only the source bundle contents for the deployment.
  7. The CodeDeploy plugin selects and zips all workspace directory files. This plugin uses the CodeDeploy application name, deployment group name, and deployment configurations that you configured to initiate a new CodeDeploy deployment. The CodeDeploy plugin then uploads the newly zipped file according to the S3 bucket location provided to CodeDeploy as a source code for its new deployment operation.
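
Under the hood, the deployment that the CodeDeploy plugin kicks off is roughly equivalent to the create_deployment API call sketched below. This is only an illustration of what the plugin does for you; the application, deployment group, bucket, and key names are placeholders that correspond to values you configure in the walkthrough that follows:

# Illustrative sketch only: the CodeDeploy plugin effectively performs a call
# like this after zipping the workspace and uploading it to S3.
# All names below are placeholders.
import boto3

codedeploy = boto3.client("codedeploy", region_name="eu-central-1")

response = codedeploy.create_deployment(
    applicationName="MyCodeDeployApplication",
    deploymentGroupName="MyDeploymentGroup",
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "mybucketname",
            "key": "jenkins-revision.zip",
            "bundleType": "zip",
        },
    },
)
print("Started deployment:", response["deploymentId"])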

Walkthrough

In this post, I walk you through the following steps:

  • Creating resources to build the infrastructure, including the Jenkins server, CodeBuild project, and CodeDeploy application.
  • Accessing and unlocking the Jenkins server.
  • Creating a project and configuring the CodeDeploy Jenkins plugin.
  • Testing the whole CI/CD pipeline.

Create the resources

In this section, I show you how to launch an AWS CloudFormation template, a tool that creates the following resources:

  • Amazon S3 bucket—Stores the GitHub repository files and the CodeBuild artifact application file that CodeDeploy uses.
  • IAM S3 bucket policy—Allows the Jenkins server access to the S3 bucket.
  • JenkinsRole—An IAM role and instance profile for the Amazon EC2 instance for use as a Jenkins server. This role allows Jenkins on the EC2 instance to access the S3 bucket to write files and to create CodeDeploy deployments.
  • CodeDeploy application and CodeDeploy deployment group.
  • CodeDeploy service role—An IAM role to enable CodeDeploy to read the tags applied to the instances or the EC2 Auto Scaling group names associated with the instances.
  • CodeDeployRole—An IAM role and instance profile for the EC2 instances of CodeDeploy. This role has permissions to write files to the S3 bucket created by this template and to create deployments in CodeDeploy.
  • CodeBuildRole—An IAM role to be used by CodeBuild to access the S3 bucket and create the build projects.
  • Jenkins server—An EC2 instance running Jenkins.
  • CodeBuild project—This is configured with the S3 bucket and S3 artifact.
  • Auto Scaling group—Contains EC2 instances running Apache and the CodeDeploy agent fronted by an Elastic Load Balancer.
  • Auto Scaling launch configurations—For use by the Auto Scaling group.
  • Security groups—For the Jenkins server, the load balancer, and the CodeDeploy EC2 instances.

 

  1. To create the CloudFormation stack (for example, in the AWS Frankfurt Region), click the link below:
  2. Choose Next and provide the following values on the Specify Details page:
    • For Stack name, name your stack as you prefer.
    • For CodedeployInstanceType, choose the default of t2.medium.
      To check the supported instance types by AWS Region, see Supported Regions.
    • For InstanceCount, keep the default of 3, to launch three EC2 instances for CodeDeploy.
    • For JenkinsInstanceType, keep the default of t2.medium.
    • For KeyName, choose an existing EC2 key pair in your AWS account. Use this to connect by using SSH to the Jenkins server and the CodeDeploy EC2 instances. Make sure that you have access to the private key of this key pair.
    • For PublicSubnet1, choose a public subnet from which the load balancer, Jenkins server, and CodeDeploy web servers launch.
    • For PublicSubnet2, choose a public subnet from which the load balancers and CodeDeploy web servers launch.
    • For VpcId, choose the VPC for the public subnets you used in PublicSubnet1 and PublicSubnet2.
    • For YourIPRange, enter the CIDR block of the network from which you connect to the Jenkins server using HTTP and SSH. If your local machine has a static public IP address, go to https://www.whatismyip.com/ to find your IP address, and then enter your IP address followed by /32. If you don’t have a static IP address (or aren’t sure if you have one), enter 0.0.0.0/0. Then, any address can reach your Jenkins server.
  3. Choose Next.
  4. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box.
  5. Choose Create and wait for the CloudFormation stack status to change to CREATE_COMPLETE. This takes approximately 6–10 minutes.
  6. Check the resulting values on the Outputs tab. You need them later.
  7. Browse to the ELBDNSName value from the Outputs tab, verifying that you can see the Sample page. You should see a congratulatory message.
  8. Your Jenkins server should be ready to deploy.

Access and unlock your Jenkins server

In this section, I discuss how to access, unlock, and customize your Jenkins server.

  1. Copy the JenkinsServerDNSName value from the Outputs tab of the CloudFormation stack, and paste it into your browser.
  2. To unlock the Jenkins server, SSH to the server using the IP address and key pair, following the instructions from Unlocking Jenkins.
  3. Use the root user to cat the log file (/var/log/jenkins/jenkins.log) and copy the automatically generated alphanumeric password (between the two sets of asterisks). Then, use the password to unlock your Jenkins server, as shown in the following screenshots.
  4. On the Customize Jenkins page, choose Install suggested plugins.

  5. Wait until Jenkins installs all the suggested plugins. When the process completes, you should see the check marks alongside all of the installed plugins.
  6. On the Create First Admin User page, enter a user name, password, full name, and email address of the Jenkins user.
  7. Choose Save and continue, Save and finish, and Start using Jenkins.
    After you install all the needed Jenkins plugins along with their required dependencies, the Jenkins server restarts. This step should take about two minutes. After Jenkins restarts, refresh the page. Your Jenkins server should be ready to use.

Create a project and configure the CodeDeploy Jenkins plugin

Now, to create our project in Jenkins, we need to configure the required Jenkins plugins.

  1. Sign in to Jenkins with the user name and password that you created earlier, then choose Manage Jenkins and Manage Plugins.
  2. From the Available tab, search for and select the following plugins, then choose Install without restart:
    AWS CodeDeploy
    AWS CodeBuild
    Http Request
    File Operations
  3. Select Restart Jenkins when installation is complete and no jobs are running.
    Jenkins takes a couple of minutes to download the plugins along with their dependencies, and then restarts.
  4. Log in, then choose New Item, Freestyle project.
  5. Enter a name for the project (for example, CodeDeployApp), and choose OK.
  6. On the project configuration page, under Source Code Management, choose Git. For Repository URL, enter the URL of your GitHub repository.
  7. For Build Triggers, select the Poll SCM check box. In the Schedule, for testing enter H/2 * * * *. This entry tells Jenkins to poll GitHub every two minutes for updates.
  8. Under Build Environment, select the Delete workspace before build starts check box. Each Jenkins project has a dedicated workspace directory. This option allows you to wipe out your workspace directory with each new Jenkins build, to keep it clean.
  9. Under Build Actions, add a Build Step and choose AWS CodeBuild. In the AWS Configurations section, choose Manually specify access and secret keys and provide the keys.
  10. From the CloudFormation stack Outputs tab, copy the AWS CodeBuild project name (myProjectName) and paste it in the Project Name field. Also, set the Region that you are using and choose Use Jenkins source.
    It is a best practice to store AWS credentials for CodeBuild in the native Jenkins credential store. For more information, see the Jenkins AWS CodeBuild Plugin wiki.
  11. To make sure that all files cloned from the GitHub repository are deleted, choose Add build step and select the File Operation plugin, then click Add and select File Delete. Under the File Delete operation, in the Include File Pattern field, type an asterisk.
  12. Under Build, configure the following:
    1. Choose Add a Build step.
    2. Choose HTTP Request.
    3. Copy the S3 bucket name from the CloudFormation stack Outputs tab and paste it after (http://s3-eu-central-1.amazonaws.com/) along with the name of the zip file codebuild-artifact.zip as the value for HTTP Plugin URL.
      Example: (http://s3-eu-central-1.amazonaws.com/mybucketname/codebuild-artifact.zip)
    4. For Ignore SSL errors?, choose Yes.
  13. Under HTTP Request, choose Advanced and leave the default values for Authorization, Headers, and Body. Under Response, for Output response to file, enter the codebuild-artifact.zip file name.
  14. Add the two build steps for the File Operations plugin, in the following order:
    1. Unzip action: This build step unzips the codebuild-artifact.zip file and places the contents in the root workspace directory.
    2. File Delete action: This build step deletes the codebuild-artifact.zip file, leaving only the source bundle contents for deployment.
  15. On the Post-build Actions, choose Add post-build actions and select the Deploy an application to AWS CodeDeploy check box.
  16. Enter the following values from the Outputs tab of your CloudFormation stack and leave the other settings at their default (blank):
    • For AWS CodeDeploy Application Name, enter the value of CodeDeployApplicationName.
    • For AWS CodeDeploy Deployment Group, enter the value of CodeDeployDeploymentGroup.
    • For AWS CodeDeploy Deployment Config, enter CodeDeployDefault.OneAtATime.
    • For AWS Region, choose the Region where you created the CodeDeploy environment.
    • For S3 Bucket, enter the value of S3BucketName.
      The CodeDeploy plugin uses the Include Files option to filter the files based on specific file names existing in your current Jenkins deployment workspace directory. The plugin zips specified files into one file. It then sends them to the location specified in the S3 Bucket parameter for CodeDeploy to download and use in the new deployment.
      As shown below, in the optional Include Files field, I used (**) so all files in the workspace directory get zipped.
  17. Choose Deploy Revision. This option registers the newly created revision to your CodeDeploy application and gets it ready for deployment.
  18. Select the Wait for deployment to finish? check box. This option allows you to view the CodeDeploy deployments logs and events on your Jenkins server console output.
    Now that you have created a project, you are ready to test deployment.

Testing the whole CI/CD pipeline

To test the whole solution, put an application on your GitHub repository. You can download the sample from here.

The following screenshot shows an application tree containing the application source files, including text and binary files, executables, and packages:

In this example, the application files are the templates directory, test_app.py file, and web.py file.

The appspec.yml file is the main application specification file telling CodeDeploy how to deploy your application. Jenkins uses the AppSpec file to manage each deployment as a series of lifecycle event “hooks”, as defined in the file. For information about how to create a well-formed AppSpec file, see AWS CodeDeploy AppSpec File Reference.

The buildspec.yml file is a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build. You can include a build spec as part of the source code, or you can define a build spec when you create a build project. For more information, see How AWS CodeBuild Works.

The scripts folder contains the scripts that you would like to run during the CodeDeploy LifecycleHooks execution with respect to your application requirements. For more information, see Plan a Revision for AWS CodeDeploy.

To test this solution, perform the following steps:

  1. Unzip the application files and send them to your GitHub repository by running the following git commands from the path where you placed your sample application:
    $ git add -A
    
    $ git commit -m 'Your first application'
    
    $ git push
  2. On the Jenkins server dashboard, wait for two minutes until the previously set project trigger starts working. After the trigger starts working, you should see a new build taking place.
  3. In the Jenkins server Console Output page, check the build events and review the steps performed by each Jenkins plugin. You can also review the CodeDeploy deployment in detail, as shown in the following screenshot:

On completion, Jenkins should report that you have successfully deployed a web application. You can also use your ELBDNSName value to confirm that the deployed application is running successfully.

Conclusion

In this post, I outlined how you can use a Jenkins open-source automation server to deploy CodeBuild artifacts with CodeDeploy. I showed you how to construct a functioning CI/CD pipeline with these tools. I walked you through how to build the deployment infrastructure and automatically deploy application version changes from GitHub to your production environment.

Hopefully, you have found this post informative and the proposed solution useful. As always, AWS welcomes all feedback or comment.

About the Author


 

Noha Ghazal is a Cloud Support Engineer at Amazon Web Services. She is a subject matter expert for AWS CodeDeploy. In her role, she enjoys supporting customers with their CodeDeploy and other DevOps configurations. Outside of work she enjoys drawing portraits, fishing and playing video games.

 

 

The grilled cheese-making robot of your dreams

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/the-grilled-cheese-making-robot-of-your-dreams/

Ummm…YES PLEASE!

Cheeseborg: The Grilled Cheese Robot!

More cool stuff at http://www.tabb.me and http://www.evankhill.com Cheeseborg has one purpose: to create the best grilled cheese it possibly can! Cheeseborg is fully automated, voice activated, and easy to move. With Google Assistant SDK integration, Cheeseborg can even be used as a part of your smart home.

Does it use a Raspberry Pi, please?

Sometimes we’ll see a project online and find ourselves hoping and praying that it uses a Raspberry Pi, just so we have a reason to share it with you all.

That’s how it was when I saw Cheeseborg, the grilled cheese robot, earlier this week. “Please, please, please…” I prayed to the robot gods, as I chowed down on a grilled cheese at my desk (true story), and, by the grace of all that is good in this world, my plea was answered.

Cheeseborg: the grilled cheese robot

Cheeseborg uses both an Arduino Mega and a Raspberry Pi 3 in its quest to be the best ever automated chef in the world. The Arduino handles the mechanics, while our deliciously green wonder board runs the Google Assistant SDK, allowing you to make grilled cheese via voice command.

Saying “Google, make me a grilled cheese” will set in motion a series of events leading to the production of a perfectly pressed sammie, ideal for soup dunking or solo snacking.

The robot uses a vacuum lifter to pick up a slice of bread, dropping it onto an acrylic tray before repeating the process with a slice of cheese and then a second slice of bread. Then the whole thing is pushed into a panini press that has been liberally coated in butter spray (not shown for video aesthetics), and the sandwich is toasted, producing delicious ooey-gooey numminess out the other side.

Pareidolia much?

Here at Raspberry Pi, we give the Cheeseborg five slices out of five, and look forward to one day meeting Cheeseborg for real, so we can try out its scrummy wares.

ooooey-gooey numminess

You can find out more about Cheeseborg here.

Toastie or grilled cheese


Yes, there’s a difference: but which do you prefer? What makes them different? And what’s your favourite filling for this crispy, cheesy delight?

The post The grilled cheese-making robot of your dreams appeared first on Raspberry Pi.

Building and testing polyglot applications using AWS CodeBuild

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/building-and-testing-polyglot-applications-using-aws-codebuild/

Prakash Palanisamy, Solutions Architect

Microservices are becoming the new normal, and it’s natural to use multiple different programming languages for different microservices in the same application. This blog post explains how easy it is to build polyglot applications, test them, and package them for deployment using a single AWS CodeBuild project.

CodeBuild adds support for Polyglot builds using runtime-versions. With CodeBuild, it is possible to specify multiple runtimes in the buildspec file as part of the install phase. Runtimes can be composed of different major versions of the same programming language, or different programming languages altogether. For a complete list of supported runtime versions and how they must be specified in the buildspec file, see the build specification reference for CodeBuild.

This post provides an example of a microservices application comprised of three different microservices written using different programming languages. The complete code is available in the GitHub repository aws-codebuild-polyglot-application.

  • A microservice providing a greeting message (microservice-greeting) is written in Python.
  • A microservice obtaining users’ details (microservice-name) from an Amazon DynamoDB table is written using Node.js.
  • Another microservice, which makes API calls to the above-mentioned name and greeting services and provides a personalized greeting (microservice-webapp), is written in Java.

All of these microservices are deployed as serverless functions in AWS Lambda and front-ended by an API interface using Amazon API Gateway. To enable automated packaging and deployment, use AWS Serverless Application Model (AWS SAM) and the AWS SAM CLI. The complete application is deployed locally using DynamoDB Local and the sam local command. Automated UI testing uses the built-in headless browsers in the standard CodeBuild containers. Because the Docker runtime is used to run DynamoDB Local and SAM Local, privileged mode should be enabled in the CodeBuild project.

The example buildspec file contains the following code:

version: 0.2

env:
  variables:
    ARTIFACT_BUCKET: "polyglot-codebuild-artifact-eu-west-1"

phases:
  install:
    runtime-versions:	
      nodejs: 10
      java: corretto8
      python: 3.7
      docker: 18
    commands:
      - pip3 install --upgrade aws-sam-cli selenium pylint
  pre_build:
    commands:
      - python --version
      - pylint $CODEBUILD_SRC_DIR/microservices-greeting/greeting_handler.py
      - own_ip=`hostname -i`
      - sed -ie "s/CODEBUILD_IP/$own_ip/g" $CODEBUILD_SRC_DIR/env-webapp.json
  build:
    commands:
      - node --version
      - cd $CODEBUILD_SRC_DIR/microservices-name
      - npm install
      - npm run build
      - java -version
      - cd $CODEBUILD_SRC_DIR/microservices-webapp
      - mvn clean package -Plambda
      - cd $CODEBUILD_SRC_DIR
      - |
        sam package --template-file greeting-sam.yaml \
        --output-template-file greeting-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
      - |
        sam package --template-file name-sam.yaml \
        --output-template-file name-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
      - |
        sam package --template-file webapp-sam.yaml \
        --output-template-file webapp-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - echo "Starting local DynamoDB instance"
      - docker network create codebuild-local
      - |
        docker run -d -p 8000:8000 --network codebuild-local \
        --name dynamodb amazon/dynamodb-local
      - |
        aws dynamodb create-table --endpoint-url http://127.0.0.1:8000 \
        --table-name NamesTable \
        --attribute-definitions AttributeName=Id,AttributeType=S \
        --key-schema AttributeName=Id,KeyType=HASH \
        --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
      - |
        aws dynamodb --endpoint-url http://127.0.0.1:8000 batch-write-item \
        --cli-input-json file://ddb_names.json
      - echo "Starting APIs locally..."
      - |
        nohup sam local start-api --template greeting-sam.yaml \
        --port 3000 --host 0.0.0.0 &> greeting.log &
      - |
        nohup sam local start-api --template name-sam.yaml \
        --port 3001 --host 0.0.0.0 --env-vars env.json \
        --docker-network codebuild-local &> name.log &
      - |
        nohup sam local start-api --template webapp-sam.yaml \
        --port 3002 --host 0.0.0.0 \
        --env-vars env-webapp.json &> webapp.log &
      - cd $CODEBUILD_SRC_DIR/website
      - nohup python3 -m http.server 8080 &>webserver.log &
      - sleep 20
      - echo "Starting headless UI testing..."
      - python $CODEBUILD_SRC_DIR/tests/testsuite.py
artifacts:
  files:
    - greeting-sam-packaged.yaml
    - name-sam-packaged.yaml
    - webapp-sam-packaged.yaml

In the env section, an environment variable named ARTIFACT_BUCKET is declared and initialized. It contains the name of the S3 bucket to which the sam package command uploads the packaged artifacts referenced by the generated AWS SAM template. This variable can be overridden either in the build project or while running the build. For more information about the order of precedence, see Run a Build (Console).

In the install phase, there are two sequences: runtime-versions and commands. As part of the runtime versions, four runtimes are specified: Three programming languages’ runtime (Node.js, Java, and Python), which are required to build the microservices, and Docker runtime to deploy the application locally using the sam local command. In the commands sequence, the required packages are installed and updated.

In the pre_build phase, use pylint to lint the Python-based microservice. Get the IP address of the CodeBuild container, to be used later while running the application locally.

In the build phase, use the command sequence to build individual microservices based on their programming language. Use the sam package command to create the Lambda deployment package and generate an updated AWS SAM template.

In the post_build phase, start a DynamoDB Local container as a daemon, create a table in that database, and insert the user details in to that table. After that, start a local instance of greeting, name, and web app microservices using the sam local command. The repository also contains a simple static website that can make API calls to these microservices and provide a user-friendly response. This website is hosted locally using Python’s built-in HTTP server. After the application starts, execute the automated UI testing by running the testsuite.py script. This test script uses Selenium WebDriver. It runs UI tests on headless Firefox and Chrome browsers to validate whether the local deployment of the application works as expected.
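
For reference, a single headless check in a script like testsuite.py might look like the minimal sketch below. The URL, element ID, and expected text are placeholders; see the tests directory in the GitHub repository for the actual test suite:

# Illustrative sketch only: a minimal headless Firefox check against the
# locally hosted website. URL, element ID, and expected text are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument("-headless")  # run Firefox without a display

driver = webdriver.Firefox(options=options)
try:
    driver.get("http://127.0.0.1:8080/index.html")
    greeting = driver.find_element(By.ID, "greeting").text
    assert "Hello" in greeting, "unexpected greeting: " + greeting
    print("UI test passed")
finally:
    driver.quit()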

If all the phases complete successfully, the updated AWS SAM template files are stored as artifacts based on the specification in artifacts section.

The repository also contains an AWS CloudFormation template that creates a pipeline on AWS CodePipeline with AWS CodeCommit as the source. It builds the application using the previously mentioned buildspec in CodeBuild, and deploys the application using AWS CloudFormation.

Conclusion

In this post, you learned how to use the runtime-versions capability in CodeBuild to build a polyglot application. You also learned about automated UI testing using the built-in headless browsers in CodeBuild. Using the polyglot build capabilities of CodeBuild, you can efficiently build applications developed in multiple programming languages with a single build project, instead of creating and maintaining multiple build projects. And because all the dependent components are built as part of the same project, testing is easier, including integration testing between components.

Motion-controlled water fountain…for cats!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/motion-controlled-cat-water-fountain/

Tired of the constant trickle of your cat’s water fountain? Set up motion detection and put your cat in control.


Cats are fickle

My cat, Jimmy, loves drinking from running water. Or from the sink. Or from whatever glass I am currently using. Basically, my cat loves drinking out of anything that isn’t his water bowl…because like all cats, he’s fickle and lives to cause his humans as much aggravation as possible.

Here’s a photo of my gorgeous boy, because what cat owner doesn’t like showing off their cat at the slightest opportunity?

Jimmy’s getting better now, thanks to the introduction of a pet water fountain in the kitchen, and we’ve somehow tricked him into using it — but what I don’t like is how the constant trickle of water makes me want to pee all the time.

Thankfully, this motion-controlled water fountain from Hackster.io maker vladimirm is here to save the day by turning on the fountain only when his cat approaches it.

Motion-controlled pet water fountain

So how does it work? Vladimir explains:

When the PIR sensor detects movement, it sends a message to the radio dongle plugged to the Raspberry Pi, which sends the message to the MQTT server. On the other side, the MQTT message is processed by the Home Assistant, which then, using the automation, triggers the smart plug and starts the configured countdown.

The build uses an old Raspberry Pi 1 Model B and a BigClown Motion Detector Kit, alongside a TP-Link smart plug and the open-source Home Assistant platform. The Home Assistant smartphone app documents when the smart plug is activated, and for how long, which also means you can track when your pet is drinking and check that they’re getting enough water.
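
If you’re curious what the glue between the motion sensor and Home Assistant boils down to, here’s a hedged, minimal sketch of the publish step using paho-mqtt. The broker hostname and topic are placeholders, and Vladimir’s build uses the BigClown toolchain and radio dongle for this part rather than a script like this:

# Illustrative sketch only: publish a motion event to an MQTT broker, which a
# Home Assistant automation can then react to by switching the smart plug.
# The broker hostname and topic are placeholders.
import paho.mqtt.publish as publish

publish.single(
    "cat-fountain/motion",           # placeholder topic
    payload="detected",
    hostname="homeassistant.local",  # placeholder broker address
)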

Vladimir goes into far more detail in the project tutorial. Now go help your cat stay hydrated!

The post Motion-controlled water fountain…for cats! appeared first on Raspberry Pi.

Clap on, clap off with the Raspberry Pi Clapper 👏👏

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/clap-on-clap-off-raspberry-pi-clapper/

While many people use off-the-shelf automation setups for their electrical appliances, Ash Puckett’s Raspberry Pi Clapper pays homage to the king of infomercial classics.

Remember this?

The Clapper (1989)

Uploaded by Travis Doucette on 2013-06-03.

Build your own Raspberry Pi Clapper

Sometimes, the best Raspberry Pi projects don’t need thousands of lines of code and a makerspace full of tech to make an impact: Ash Puckett‘s Clapper uses only a Raspberry Pi and a USB microphone as a basis. After that, it’s up to you to integrate the device into whatever project you wish, from home lighting and security systems to entertainment consoles — really anything you can switch from one state to another, including a Raspberry Pi!

GitHub user nikhiljohn10’s clap detection script allows the USB mic to pick up the control clap. With the help of the RPi.GPIO and PyAudio libraries, Ash demonstrates that the Clapper works by turning on and off a red LED attached to the Pi.
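
If you’d like a feel for the idea before opening the script, here’s a hedged, minimal sketch: sample the USB microphone with PyAudio, treat a loud spike as a clap, and toggle a GPIO pin. The threshold, pin number, and audio settings are placeholders, and nikhiljohn10’s script is the one Ash actually uses:

# Illustrative sketch only: toggle an LED when a loud spike (a "clap") is heard.
# The threshold, GPIO pin, and audio settings are placeholders.
import audioop
import time

import pyaudio
import RPi.GPIO as GPIO

LED_PIN = 17            # placeholder BCM pin number
CLAP_THRESHOLD = 10000  # placeholder RMS level; tune for your microphone

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
led_on = False

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=44100,
                    input=True, frames_per_buffer=1024)

try:
    while True:
        data = stream.read(1024, exception_on_overflow=False)
        if audioop.rms(data, 2) > CLAP_THRESHOLD:  # 2 bytes per 16-bit sample
            led_on = not led_on
            GPIO.output(LED_PIN, led_on)
            time.sleep(0.5)  # simple debounce so one clap toggles once
finally:
    stream.stop_stream()
    stream.close()
    audio.terminate()
    GPIO.cleanup()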

You will find instructions for putting together the code and running it on your Pi on the project’s Howchoo page. Howchoo also hosts some of Ash’s other Raspberry Pi projects, including a music streaming device, a smart clock, and a Pi-powered calendar.

Try the Clapper

Why not give the Clapper a go, and let us know what you decide to use it for!

I, for one, will secretly set one up to mess with all the lights in the office — what could possibly go wrong?

The post Clap on, clap off with the Raspberry Pi Clapper 👏👏 appeared first on Raspberry Pi.

Creating an opportunistic IPSec mesh between EC2 instances

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/creating-an-opportunistic-ipsec-mesh-between-ec2-instances/

IPSec diagram

IPSec (IP Security) is a protocol for in-transit data protection between hosts. Configuration of site-to-site IPSec between multiple hosts can be an error-prone and intensive task. If you need to protect N EC2 instances, then you need a full mesh of N*(N-1) IPSec tunnels; protecting 20 instances, for example, already means maintaining 380 tunnel configurations. You must manually propagate every IP change to all instances, configure credentials and configuration changes, and integrate monitoring and metrics into the operation. The effort to keep the full-mesh parameters in sync is enormous.

Full mesh IPSec, known as any-to-any, builds an underlying network layer that protects application communication. Common use cases are:

  • You’re migrating legacy applications to AWS, and they don’t support encryption. Examples of protocols without encryption are File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) or Lightweight Directory Access Protocol (LDAP).
  • You’re offloading protection to IPSec to take advantage of fast Linux kernel encryption and automated certificate management, the use case we focus on in this solution.
  • You want to segregate duties between your application development and infrastructure security teams.
  • You want to protect container or application communication that leaves an EC2 instance.

In this post, I’ll show you how to build an opportunistic IPSec mesh that sets up dynamic IPSec tunnels between your Amazon Elastic Compute Cloud (EC2) instances. IPSec is based on Libreswan, an open-source project implementing opportunistic IPSec encryption (IKEv2 and IPSec) on a large scale.

Solution benefits and deliverable

The solution delivers the following benefits (versus manual site-to-site IPSec setup):

  • Automatic configuration of opportunistic IPSec upon EC2 launch.
  • Generation of instance certificates and weekly re-enrollment.
  • IPSec Monitoring metrics in Amazon CloudWatch for each EC2 instance.
  • Alarms for failures via CloudWatch and Amazon Simple Notification Service (Amazon SNS).
  • An initial generation of a CA root key if needed, including IAM Policies and two customer master keys (CMKs) that will protect the CA key and instance key.

Out of scope

This solution does not deliver IPSec protection between EC2 instances and hosts that are on-premises, or between EC2 instances and managed AWS components, like Elastic Load Balancing, Amazon Relational Database Service, or Amazon Kinesis. Your EC2 instances must have general IP connectivity with each other, which must be allowed by your NACLs and security groups. This solution cannot deliver extra connectivity like VPC peering or Transit VPC can.

Prerequisites

You’ll need the following resources to deploy the solution:

  • A trusted Unix/Linux/MacOS machine with AWS SDK for Python and OpenSSL
  • AWS admin rights in your AWS account (including API access)
  • AWS Systems Manager on EC2
  • Linux RedHat, Amazon Linux 2, or CentOS installed on the EC2 instances you want to configure
  • Internet access on the EC2 instances for downloading Linux packages and reaching AWS Systems Manager endpoint
  • The AWS services used by the solution, which are AWS Lambda, AWS Key Management Service (AWS KMS), AWS Identity and Access Management (IAM), AWS Systems Manager, Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), and Amazon SNS

Solution and performance costs

My solution does not require any additional charges to standard AWS services, since it uses well-established open source software. Involved AWS services are as follows:

  • AWS Lambda is used to issue the certificates. Per EC2 instance per week, I estimate two 30-second Lambda invocations with 256 MB of allocated memory. For 100 EC2 instances, the cost will be several cents. See AWS Lambda Pricing for details.
  • Certificates have no charge, since they’re issued by the Lambda function.
  • CloudWatch Events and Amazon S3 Storage usage are within the free tier policy.
  • AWS Systems Manager has no additional charge.
  • AWS EC2 is a standard AWS service on which you deploy your workload. There are no charges for IPSec encryption.
  • EC2 CPU performance decrease due to encryption is negligible since we use hardware encryption support of the Linux kernel. The IKE negotiation that is done by the OS in your CPU may add minimal CPU overhead depending on the number of EC2 instances involved.

Installation (one-time setup)

To get started, on a trusted Unix/Linux/MacOS machine that has admin access to your AWS account and AWS SDK for Python already installed, complete the following steps:

  1. Download the installation package from https://github.com/aws-quickstart/quickstart-ec2-ipsec-mesh.
  2. Edit the following files in the package to match your network setup:
    • config/private should contain all networks with mandatory IPSec protection, such as EC2s that should only be communicated with via IPSec. All of these hosts must have IPSec installed.
    • config/clear should contain any networks that do not need IPSec protection. For example, these might include Route 53 (DNS), Elastic Load Balancing, or Amazon Relational Database Service (Amazon RDS).
    • config/clear-or-private should contain networks with optional IPSec protection. These networks will start clear and attempt to add IPSec.
    • config/private-or-clear should also contain networks with optional IPSec protection. However, these networks will start with IPSec and fail back to clear.
  3. Execute ./aws_setup.py and carefully set and verify the parameters. Use -h to view help. If you don’t provide customized options, default values will be generated. The parameters are:
    • Region to install the solution in (default: your AWS Command Line Interface region)
    • Buckets for configuration, sources, published host certificates, and CA storage (default: random values following the pattern ipsec-{hostcerts|cacrypto|sources}-{stackname} are generated). If the buckets do not exist, they are created automatically.
    • Reuse of an existing CA? (default: no)
    • Leave an encrypted backup copy of the CA key? The password will be printed to stdout (default: no)
    • CloudFormation stack name (default: ipsec-{random string})
    • Restrict provisioning to a certain VPC (default: any)

     
    Here is an example output:

    
                ./aws_setup.py  -r ca-central-1 -p ipsec-host-v -c ipsec-crypto-v -s ipsec-source-v
                Provisioning IPsec-Mesh version 0.1
                
                Use --help for more options
                
                Arguments:
                ----------------------------
                Region:                       ca-central-1
                Vpc ID:                       any
                Hostcerts bucket:             ipsec-host-v
                CA crypto bucket:             ipsec-crypto-v
                Conf and sources bucket:      ipsec-source-v
                CA use existing:              no
                Leave CA key in local folder: no
                AWS stackname:                ipsec-efxqqfwy
                ---------------------------- 
                Do you want to proceed ? [yes|no]: yes
                The bucket ipsec-source-v already exists
                File config/clear uploaded in bucket ipsec-source-v
                File config/private uploaded in bucket ipsec-source-v
                File config/clear-or-private uploaded in bucket ipsec-source-v
                File config/private-or-clear uploaded in bucket ipsec-source-v
                File config/oe-cert.conf uploaded in bucket ipsec-source-v
                File sources/enroll_cert_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/generate_certifcate_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/ipsec_setup_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/cron.txt uploaded in bucket ipsec-source-v
                File sources/cronIPSecStats.sh uploaded in bucket ipsec-source-v
                File sources/ipsecSetup.yaml uploaded in bucket ipsec-source-v
                File sources/setup_ipsec.sh uploaded in bucket ipsec-source-v
                File README.md uploaded in bucket ipsec-source-v
                File aws_setup.py uploaded in bucket ipsec-source-v
                The bucket ipsec-host-v already exists
                Stack ipsec-efxqqfwy creation started. Waiting to finish (ca 3-5 min)
                Created CA CMK key arn:aws:kms:ca-central-1:123456789012:key/abcdefgh-1234-1234-1234-abcdefgh123456
                Certificate generation lambda arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy
                Generating RSA private key, 4096 bit long modulus
                .............................++
                .................................................................................................................................................................................................................................................................................................................................................................................++
                e is 65537 (0x10001)
                Certificate and key generated. Subject CN=ipsec.ca-central-1 Valid 10 years
                The bucket ipsec-crypto-v already exists
                Encrypted CA key uploaded in bucket ipsec-crypto-v
                CA cert uploaded in bucket ipsec-crypto-v
                CA cert and key remove from local folder
                Lambda functionarn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy updated
                Resource policy for CA CMK hardened - removed action kms:encrypt
                
                done :-)
            

Launching the EC2 Instance

Now that you’ve installed the solution, you can start launching EC2 instances. From the EC2 Launch Wizard, execute the following steps. The instructions assume that you’re using Red Hat Enterprise Linux, Amazon Linux 2, or CentOS.

Note: Steps or details that I don’t explicitly mention can be set to default (or according to your needs).

  1. Select the IAM role already configured by the solution, named with the pattern Ec2IPsec-{stackname}.
     
    Figure 1: Select the IAM Role

  2. (You can skip this step if you are using Amazon Linux 2.) Under Advanced Details, select User data as text and activate the AWS Systems Manager Agent (SSM Agent) by providing the following script (for 64-bit Red Hat and CentOS only):
    
        #!/bin/bash
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl start amazon-ssm-agent
        

     

    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

  3. Set the tag name to IPSec with the value todo. This is the identifier that triggers the installation and management of IPsec on the instance.
     
    Figure 3: Set the tag name to “IPSec” with the value “todo”

  4. On the security group configuration page, allow ESP (IP protocol 50) and IKE (UDP 500) for your network, for example 172.31.0.0/16. Enter these values as shown in the following screen; a CLI alternative is sketched after this step:
     
    Figure 4: Enter values on the “Configuration” page
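
As an alternative to the console, the same two rules can be added with the AWS CLI. This is a minimal sketch; the security group ID is a placeholder and the CIDR should match your own network:

        # Allow ESP (IP protocol 50) from the VPC range (security group ID is hypothetical)
        aws ec2 authorize-security-group-ingress \
          --group-id sg-0123456789abcdef0 \
          --ip-permissions IpProtocol=50,IpRanges='[{CidrIp=172.31.0.0/16}]'

        # Allow IKE (UDP 500) from the same range
        aws ec2 authorize-security-group-ingress \
          --group-id sg-0123456789abcdef0 \
          --protocol udp --port 500 --cidr 172.31.0.0/16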

After 1-2 minutes, the value of the IPSec instance tag will change to enabled, meaning the instance is successfully set up.
 

Figure 5: Look for the “enabled” value for the IPSec key
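
If you prefer the command line, you can watch for the tag change with the AWS CLI; the instance ID below is a placeholder:

        # Returns "enabled" once the setup has finished (instance ID is hypothetical)
        aws ec2 describe-tags \
          --filters "Name=resource-id,Values=i-0123456789abcdef0" "Name=key,Values=IPSec" \
          --query "Tags[0].Value" --output text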

So what’s happening in the background?

 

Figure 6: Architectural diagram

As illustrated in the solution architecture diagram, the following steps are executed automatically in the background by the solution:

  1. An EC2 launch triggers a CloudWatch event, which launches an IPSecSetup Lambda function.
  2. The IPSecSetup Lambda function checks whether the EC2 instance has the tag IPSec:todo. If the tag is present, the Lambda function requests a certificate by calling the GenerateCertificate Lambda function.
  3. The GenerateCertificate Lambda function downloads the encrypted CA certificate and key.
  4. The GenerateCertificate Lambda function decrypts the CA key with a customer master key (CMK).
  5. The GenerateCertificate Lambda function issues a host certificate to the EC2 instance. It encrypts the host certificate and key with a KMS-generated random secret in a PKCS12 structure. The secret is envelope-encrypted with a dedicated CMK (a sketch of this step follows the list).
  6. The GenerateCertificate Lambda function publishes the issued certificates to your dedicated bucket for documentation.
  7. The IPSecSetup Lambda function then runs the installation on the instance via SSM.
  8. The installation downloads the configuration and installs Python, the AWS SDK, libreswan, and curl if needed.
  9. The EC2 instance decrypts the host key with the dedicated CMK and installs it in the IPSec database.
  10. A weekly scheduled event triggers reenrollment of the certificates via the Reenrollcertificates Lambda function.
  11. The Reenrollcertificates Lambda function triggers the IPSecSetup Lambda function (call event type: execution). The IPSecSetup Lambda function renews the certificate only, leaving the rest of the configuration untouched.
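
To make step 5 more concrete, here is a minimal sketch of the envelope-encryption idea using the AWS CLI and OpenSSL. The key alias, file names, and use of jq are assumptions for illustration only, not the exact code of the GenerateCertificate Lambda function:

        # 1) Ask KMS for a data key: a plaintext copy for local use and a copy
        #    encrypted under the CMK for storage (key alias is hypothetical).
        aws kms generate-data-key \
          --key-id alias/ipsec-host-cmk \
          --key-spec AES_256 \
          --output json > datakey.json

        # 2) Use the plaintext key as the PKCS12 export password for the host
        #    certificate and private key (host.crt/host.key are placeholders).
        PASS=$(jq -r '.Plaintext' datakey.json)
        openssl pkcs12 -export -in host.crt -inkey host.key \
          -out host.p12 -passout pass:"$PASS"

        # 3) Keep only the envelope-encrypted secret; the EC2 instance later calls
        #    'aws kms decrypt' on it (allowed by its IAM role) to recover the password.
        jq -r '.CiphertextBlob' datakey.json > secret.enc.b64
        rm -f datakey.json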

Testing the connection on the EC2 instance

You can log in to the instance and ping one of the hosts in your network. This triggers the IPSec connection, and you should see successful replies. (The very first packet may be lost while the tunnel is negotiated, which is why the sample output below starts at icmp_seq=2.)


        $ ping 172.31.1.26
        
        PING 172.31.1.26 (172.31.1.26) 56(84) bytes of data.
        64 bytes from 172.31.1.26: icmp_seq=2 ttl=255 time=0.722 ms
        64 bytes from 172.31.1.26: icmp_seq=3 ttl=255 time=0.483 ms
        

To see a list of IPSec tunnels you can execute the following:


        sudo ipsec whack --trafficstatus
        

Here is an example of the execution:
 

Figure 7: Example execution

Changing your configuration or installing it on already running instances

All configuration is stored in the source bucket (default: ipsec-source prefix), in files that follow the libreswan standard format. If you need to change the configuration, follow these steps:

  1. Review and update the following files:
    1. oe-cert.conf, which is the configuration file for libreswan
    2. clear, private, private-or-clear, and clear-or-private, which should contain your network ranges.
  2. Change the tag of the IPSec instance back to IPSec:todo.
  3. Stop and start the instance (don’t use reboot). This retriggers the setup of the instance.
     
    Figure 8: Stop and start the instance

    1. As an alternative to step 3, if you prefer not to stop and start the instance, you can invoke the IPSecSetup Lambda function via a test event with a JSON payload in the following format (a CLI sketch follows this list):
      
                      { "detail" :  
                          { "instance-id": "YOUR_INSTANCE_ID" }
                      }
              

      A sample of test event creation in the Lambda Design window is shown below:
       

      Figure 9: Sample test event creation
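
The same invocation also works from the AWS CLI. This is a minimal sketch; the function name follows the {FunctionName}-{stackname} pattern from the example installation, and the instance ID is a placeholder:

        # Invoke the IPSecSetup Lambda with a synthetic event (names are hypothetical)
        # With AWS CLI v2, also pass: --cli-binary-format raw-in-base64-out
        aws lambda invoke \
          --function-name IPSecSetup-ipsec-efxqqfwy \
          --payload '{"detail": {"instance-id": "i-0123456789abcdef0"}}' \
          response.json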

Monitoring and alarms

The solution delivers IPSec/IKE metrics and raises SNS alarms in the case of errors. To monitor your IPSec environment, you can use Amazon CloudWatch. You can see metrics for active IPSec sessions, IKE/ESP errors, and connection shunts.
 

Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts

There are two SNS topics with alarms configured, for IPSec setup failure and certificate reenrollment failure. You will see an alarm and an SNS message. It’s very important that your administrator subscribes to these notifications so that you can react quickly; a sample subscription command follows the figure below. If you receive an alarm, please use the information in the “Troubleshooting” section of this post, below.
 

Figure 11: Alarms
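
To subscribe an administrator, you can use the SNS console or the AWS CLI. In this sketch the topic ARN and email address are placeholders; replace them with the values created by your stack:

        # Subscribe an email address to one of the solution's alarm topics
        aws sns subscribe \
          --topic-arn arn:aws:sns:ca-central-1:123456789012:ipsec-setup-failure \
          --protocol email \
          --notification-endpoint admin@example.com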

Troubleshooting

Below, I’ve listed some common errors and how to troubleshoot them:
 

The IPSec Tag doesn’t change to IPSec:enabled upon EC2 launch.

  1. Wait two minutes after the EC2 instance launches so that it becomes reachable by AWS Systems Manager.
  2. Check that the EC2 instance has the right role assigned for the SSM Agent. The role, named Ec2IPsec-{stackname}, is provisioned by the solution.
  3. Check that the SSM Agent is reachable via a NAT gateway, an Internet gateway, or a private SSM endpoint.
  4. For CentOS and Red Hat, check that you’ve installed the SSM Agent. See “Launching the EC2 instance,” above.
  5. Check the output of the SSM Agent command execution in the EC2 service.
  6. Check the IPSecSetup Lambda logs in CloudWatch for details.

The IPSec connection is lost after a few hours and can only be established from one host (in one direction).

  1. Check that your security groups allow the ESP protocol (IP protocol 50) and UDP 500. Security groups are stateful, so if only one side allows inbound IKE and ESP, the IPSec connection can be established from that direction only.
  2. Check that your network ACL allows UDP 500 and the ESP protocol.

The SNS Alarm on IPSec reenrollment is triggered, but everything seems to work fine.

  1. Certificates are valid for 30 days and rotated every week. If the rotation fails, you have three weeks to fix the problem.
  2. Check that the EC2 instances are reachable over AWS SSM. If reachable, trigger the certificate rotation Lambda again.
  3. See the IPSecSetup Lambda logs in CloudWatch for details.

DNS Route 53, RDS, and other managed services are not reachable.

  1. DNS, RDS, and other managed services do not support IPSec. You need to exclude them from encryption by listing them in the config/clear list, as illustrated below. For more details, see step 2 of Installation (one-time setup) in this post.
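
For illustration, a hypothetical config/clear file could look like the following, with one CIDR range per line; the values are placeholders that you must replace with the ranges of your own VPC:

        # Example config/clear contents (placeholder values)
        # VPC DNS resolver (VPC network address plus two)
        172.31.0.2/32
        # Subnets hosting Elastic Load Balancing and Amazon RDS endpoints
        172.31.48.0/20
        172.31.64.0/20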

Here are some additional general IPSec commands for troubleshooting:

You can stop IPSec by executing the following command on the instance:


        sudo ipsec stop 
        

If you want to stop IPSec on all instances, you can execute this command via AWS Systems Manager on all instances with the tag IPSec:enabled, as sketched below. Stopping encryption means that all traffic will be sent unencrypted.
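
A minimal sketch of that command, targeting every instance that carries the IPSec:enabled tag:

        # Stop IPSec on all tagged instances (their traffic will then flow unencrypted)
        aws ssm send-command \
          --document-name "AWS-RunShellScript" \
          --targets "Key=tag:IPSec,Values=enabled" \
          --parameters 'commands=["sudo ipsec stop"]'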

If you want to have a fail-open case, meaning on IKE(IPSec) failure send the data unencrypted, then configure your network in config/private-or-clear as described in step 2 of Installation (one-time setup).

You can debug IPSec issues using libreswan commands, for example:


        sudo ipsec status 
        
        sudo ipsec whack --debug 
        
        sudo ipsec barf 
        

Security

The CA key is encrypted using an Advanced Encryption Standard (AES) 256 CBC 128-byte secret and stored in a bucket with server-side encryption (SSE). The secret is envelope-encrypted with a CMK, following the AWS KMS pattern. The KMS resource policy ensures that only the certificate-issuing Lambda function can decrypt the secret. The encrypted secret for the CA key is set as an encrypted environment variable of the certificate-issuing Lambda function.

The IPSec host private key is generated by the certificate-issuing Lambda function. The private key and certificate are encrypted with AES 256 CBC (PKCS12) and protected with a 128-byte secret generated by KMS. The secret is envelope-encrypted with a user CMK. Only the EC2 instances with attached IPSec IAM policy can decrypt the secret and private key.

The issuing of the certificate is a fully synchronous call: one request and one corresponding response, without any polling or callbacks. The host private key is not stored in a database or an S3 bucket.

The issued certificates are valid for 30 days and are stored for auditing purposes in a certificates bucket without a private key.
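
For example, you can list the published certificates with the AWS CLI; the bucket name below is the one from the example installation and will differ in your account:

        # List issued host certificates kept for auditing
        aws s3 ls s3://ipsec-host-v/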

Alternate subject names and multiple interfaces or secondary IPs

The certificate subject name and AltSubjectName attribute contains the private Domain Name System (DNS) of the EC2 and all private IPs assigned to the instance (interfaces, primary, and secondary IPs).

The provided default libreswan configuration covers a single interface. You can adjust the configuration according to libreswan documentation for multiple interfaces, for example, to cover Amazon Elastic Container Service for Kubernetes (Amazon EKS).

Conclusion

With the solution in this blog post, you can automate the process of building an encrypted IPSec layer for your EC2 instances to protect your workloads. You don’t need to worry about configuring certificates, monitoring, or alerting. The solution uses a combination of AWS KMS, IAM, AWS Lambda, CloudWatch, and the libreswan implementation. If you need libreswan support, use the mailing list or GitHub. The AWS forums can give you more information on KMS and IAM. If you require a special enterprise enhancement, contact AWS Professional Services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is a senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and an M.S. in electrical engineering from Bochum University in Germany.

Eight(ish) Raspberry Pi projects for the summer

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-summer-projects/

The sun is actually shining here in Cambridge, and with it, summer-themed Raspberry Pi projects are sprouting like mushrooms across our UK-based community (even though mushrooms don’t like hot weather…). So we thought we’d gather some of our favourite Pi-powered projects perfect for the sun-drenched outdoors.

Air quality monitors and solar radiation

With the sun out in all its glory, we’re spending far more time outside than is usual for UK summer. To protect yourself and your adventurous loved ones, you might want to build a Raspberry Pi device to monitor solar radiation.

Raspberry Pi summer project

“Solar radiation is the radiation, or energy, we get from the sun,” explains project designer Uladzislau Bayouski. “Measurements for solar radiation are higher on clear, sunny day and usually low on cloudy days. When the sun is down, or there are heavy clouds blocking the sun, solar radiation is measured at zero.”

To measure more health-related environmental conditions, you could build this air quality monitor and keep an eye on local pollution.

Particulater air quality Oliver Crask Raspberry Pi summer project

Maker Oliver Crask describes the project:

Data is collected by the particulates sensor and is combined with readings of temperature, humidity, and air pressure. This data is then transferred to the cloud, where it is visualised on a dashboard.

If you’ve been building your own hackable weather station using our free guide, these are also great add-ons to integrate into that project.

Build Your Own weather station kit assembled Raspberry Pi summer project

Automatic pet and plant feeders

While we’re spending our days out in the sun, we need to ensure that our pets and plants are still getting all the attention they need.

This automatic chicken feeder by Instructables user Bertil Vandekerkhove uses a Raspberry Pi to remotely control the release of chicken feed. No more rushing to get home to feed your feathered friends!

Raspberry Pi summer project

And while we’re automating our homes, let us not forget the plants! iPlanty is an automated plant-watering system that will ensure your favourite plant babies get all the moisture they need while you’re away from your home or office.

Planty Project

An automated Plant watering solution that waters my plant every day at 8:30

Electromagnetic bike shed lock

If, like me, you live in constant fear that your beloved bike may be stolen, this electromagnetic bike shed lock is the solution you need.

Raspberry Pi summer project

The lock system allows for only one user per lock at any one time, meaning that your bike needs to be removed before anyone else can use their RFID card to access the shed.

Time-lapse cameras

With so much sunlight available, now is the perfect time to build a time-lapse camera for your garden or local beauty spot. Alex D’s Zero W time-lapse HAT allows for some glorious cinematic sliding that’s really impressed us.

Slider Test Sunset

Slider settings: -960 mm drive distance -400 steps -28 seconds interval Camera settings (Canon EOS 550D): – Magic Lantern auto ettr – max ISO 1600 – max Exposure 10 seconds

If you don’t think you can match Alex’s PCB milling skills, you can combine our free Raspberry Pi timelapse resource and Adafruit’s motorised camera slider for a similar project!

Infrared laser tag

Raspberry Pi summer project

While it’s sunny and warm, why not make this Raspberry Pi Zero W laser tag for the kids…

…and then lock them outside, and enjoy a Pimms and a sit-down in peace. We’re here for you, suffering summer holiday parents. We understand.

Self-weighing smart suitcase

“We’re all going on a summer holiday”, and pj_dc’s smart suitcase will not only help you keep track of your case’s location, it’ll also weigh your baggage.

Raspberry Pi summer project

Four 50kg load cells built into the base of the case allow for weight measurement of its contents, while a GPS breakout board and antenna let you track where it is.

Our free resources

While they’re not all summer-themed, our free Raspberry Pi, Code Club, and CoderDojo resources will keep you and your family occupied over the summer months whenever you’ve had a little too much of the great outdoors. From simple Scratch projects through to Python and digital making builds, we’ve got something for makers of all levels and tastes!

Getting started with Raspberry Pi summer projects

If you’re new to Raspberry Pi, begin with our Getting started guide. And if you’re looking for even more projects to try, our online community shares a sea of tutorials on Twitter every week.

The post Eight(ish) Raspberry Pi projects for the summer appeared first on Raspberry Pi.

timeShift(GrafanaBuzz, 1w) Issue 47

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/06/01/timeshiftgrafanabuzz-1w-issue-47/

Welcome to TimeShift

We cover a lot of ground this week with posts on general monitoring principles, home automation, how CERN uses open source projects in their particle acceleration work, and more. Have an article you’d like highlighted here? Get in touch.

We’re excited to be a sponsor of Monitorama PDX June 4-6. If you’re going, please be sure and say hello!

Latest Release: Grafana 5.1.3

This latest point release fixes a scrolling issue that was reported in Firefox.

MagPi 70: Home automation with Raspberry Pi

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-70-home-automation/

Hey folks, Rob here! It’s the last Thursday of the month, and that means it’s time for a brand-new The MagPi. Issue 70 is all about home automation using your favourite microcomputer, the Raspberry Pi.

Cover of The MagPi 70 — Raspberry Pi home automation and tech upcycling

Home automation in this month’s The MagPi!

Raspberry Pi home automation

We think home automation is an excellent use of the Raspberry Pi, hiding it around your house and letting it power your lights and doorbells and…fish tanks? We show you how to do all of that, and give you some excellent tips on how to add even more automation to your home in our ten-page cover feature.

Upcycle your life

Our other big feature this issue covers upcycling, the hot trend of taking old electronics and making them better than new with some custom code and a tactically placed Raspberry Pi. For this feature, we had a chat with Martin Mander, upcycler extraordinaire, to find out his top tips for hacking your old hardware.

Article on upcycling in The MagPi 70 — Raspberry Pi home automation and tech upcycling

Upcycling is a lot of fun

But wait, there’s more!

If for some reason you want even more content, you’re in luck! We have some fun tutorials for you to try, like creating a theremin and turning a Babbage into an IoT nanny cam. We also continue our quest to make a video game in C++. Our project showcase is headlined by the Teslonda on page 28, a Honda/Tesla car hybrid that is just wonderful.

Diddyborg V2 review in The MagPi 70 — Raspberry Pi home automation and tech upcycling

We review PiBorg’s latest robot

All this comes with our definitive reviews and the community section where we celebrate you, our amazing community! You’re all good beans

Teslonda article in The MagPi 70 — Raspberry Pi home automation and tech upcycling

An amazing, and practical, Raspberry Pi project

Get The MagPi 70

Issue 70 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days for a print copy. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

New subscription offer!

Want to support the Raspberry Pi Foundation and the magazine? We’ve launched a new way to subscribe to the print version of The MagPi: you can now take out a monthly £4 subscription to the magazine, effectively creating a rolling pre-order system that saves you money on each issue.

The MagPi subscription offer — Raspberry Pi home automation and tech upcycling

You can also take out a twelve-month print subscription and get a Pi Zero W plus case and adapter cables absolutely free! This offer does not currently have an end date.

That’s it for today! See you next month.

Animated GIF: a door slides open and Captain Picard emerges hesitantly

The post MagPi 70: Home automation with Raspberry Pi appeared first on Raspberry Pi.