Tag Archives: AWS CodeBuild

Fine-grained Continuous Delivery With CodePipeline and AWS Step Functions

Post Syndicated from Richard H Boyd original https://aws.amazon.com/blogs/devops/new-fine-grained-continuous-delivery-with-codepipeline-and-aws-stepfunctions/

Automating your software release process is an important step in adopting DevOps best practices. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline was modeled after the way that the retail website Amazon.com automated software releases, and many early decisions for CodePipeline were based on the lessons learned from operating a web application at that scale.

However, while most cross-cutting best practices apply to most releases, there are also business specific requirements that are driven by domain or regulatory requirements. CodePipeline attempts to strike a balance between enforcing best practices out-of-the-box and offering enough flexibility to cover as many use-cases as possible.

To support use cases that require fine-grained customization, today we are launching a new AWS CodePipeline action type for starting an AWS Step Functions state machine execution. Previously, accomplishing such a workflow required you to create custom integrations that marshaled data between CodePipeline and Step Functions. Now you can start either a Standard or Express Step Functions state machine during the execution of a pipeline.

With this integration, you can do the following:

  • Conditionally run an Amazon SageMaker hyper-parameter tuning job

  • Write and read values from Amazon DynamoDB, as an atomic transaction, to use in later stages of the pipeline

  • Run an Amazon Elastic Container Service (Amazon ECS) task until some arbitrary condition is satisfied, such as performing integration or load testing

Example Application Overview

In the following use case, you’re working on a machine learning application. This application contains both a machine learning model that your research team maintains and an inference engine that your engineering team maintains. When a new version of either the model or the engine is released, you want to release it as quickly as possible if the latency is reduced and the accuracy improves. If the latency becomes too high, you want the engineering team to review the results and decide on the approval status. If the accuracy drops below some threshold, you want the research team to review the results and decide on the approval status.

This example assumes that a pipeline already exists in CodePipeline and is configured to use a CodeCommit repository as the source and to run an AWS CodeBuild project in the build stage.

The following diagram illustrates the components built in this post and how they connect to existing infrastructure.

Architecture diagram for the CodePipeline and Step Functions integration

First, create a Lambda function that uses Amazon Simple Email Service (Amazon SES) to email either the research or engineering team with the results and the opportunity for them to review it. See the following code:

import json
import os
import boto3
import base64

def lambda_handler(event, context):
    # Render a minimal HTML email with PASS/FAIL links that point back to the
    # API Gateway endpoint, which relays the task token to Step Functions.
    email_contents = """
    <html>
    <body>
    <p><a href="{url_base}/{token}/success">PASS</a></p>
    <p><a href="{url_base}/{token}/fail">FAIL</a></p>
    </body>
    </html>
"""
    callback_base = os.environ['URL']
    # Base64-encode the task token so it can travel safely as a URL path segment.
    token = base64.b64encode(bytes(event["token"], "utf-8")).decode("utf-8")

    formatted_email = email_contents.format(url_base=callback_base, token=token)
    ses_client = boto3.client('ses')
    ses_client.send_email(
        Source='[email protected]',
        Destination={
            'ToAddresses': [event["team_alias"]]
        },
        Message={
            'Subject': {
                'Data': 'PLEASE REVIEW',
                'Charset': 'UTF-8'
            },
            'Body': {
                'Text': {
                    'Data': formatted_email,
                    'Charset': 'UTF-8'
                },
                'Html': {
                    'Data': formatted_email,
                    'Charset': 'UTF-8'
                }
            }
        },
        ReplyToAddresses=[
            '[email protected]',
        ]
    )
    return {}
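If you want to sanity-check this function before wiring it into the state machine, the following lines (appended to the same file) invoke the handler with a hand-built event. The event fields mirror the Payload the state machine passes in; the values shown are placeholders, and running it sends a real email through SES, so a verified sender and the URL environment variable are assumed to be configured.

if __name__ == "__main__":
    # Hypothetical values for a local smoke test; the state machine normally
    # supplies the team alias and the task token ($$.Task.Token).
    os.environ.setdefault("URL", "https://abc123.execute-api.us-west-2.amazonaws.com/Prod")
    sample_event = {
        "team_alias": "engineering@example.com",
        "token": "example-task-token"
    }
    lambda_handler(sample_event, None)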

To set up the Step Functions state machine to orchestrate the approval, use AWS CloudFormation with the following template. The Lambda function you just created is stored as app.py in the email_sender directory, which the CodeUri and Handler properties reference. See the following code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  NotifierFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: email_sender/
      Handler: app.lambda_handler
      Runtime: python3.7
      Timeout: 30
      Environment:
        Variables:
          URL: !Sub "https://${TaskTokenApi}.execute-api.${AWS::Region}.amazonaws.com/Prod"
      Policies:
      - Statement:
        - Sid: SendEmail
          Effect: Allow
          Action:
          - ses:SendEmail
          Resource: '*'

  MyStepFunctionsStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt SFnRole.Arn
      DefinitionString: !Sub |
        {
          "Comment": "A Hello World example of the Amazon States Language using Pass states",
          "StartAt": "ChoiceState",
          "States": {
            "ChoiceState": {
              "Type": "Choice",
              "Choices": [
                {
                  "Variable": "$.accuracypct",
                  "NumericLessThan": 96,
                  "Next": "ResearchApproval"
                },
                {
                  "Variable": "$.latencyMs",
                  "NumericGreaterThan": 80,
                  "Next": "EngineeringApproval"
                }
              ],
              "Default": "SuccessState"
            },
            "EngineeringApproval": {
                 "Type":"Task",
                 "Resource":"arn:aws:states:::lambda:invoke.waitForTaskToken",
                 "Parameters":{  
                    "FunctionName":"${NotifierFunction.Arn}",
                    "Payload":{
                      "latency.$":"$.latencyMs",
                      "team_alias":"[email protected]",
                      "token.$":"$$.Task.Token"
                    }
                 },
                 "Catch": [ {
                    "ErrorEquals": ["HandledError"],
                    "Next": "FailState"
                 } ],
              "Next": "SuccessState"
            },
            "ResearchApproval": {
                 "Type":"Task",
                 "Resource":"arn:aws:states:::lambda:invoke.waitForTaskToken",
                 "Parameters":{  
                    "FunctionName":"${NotifierFunction.Arn}",
                    "Payload":{  
                       "accuracy.$":"$.accuracypct",
                       "team_alias":"[email protected]",
                       "token.$":"$$.Task.Token"
                    }
                 },
                 "Catch": [ {
                    "ErrorEquals": ["HandledError"],
                    "Next": "FailState"
                 } ],
              "Next": "SuccessState"
            },
            "FailState": {
              "Type": "Fail",
              "Cause": "Invalid response.",
              "Error": "Failed Approval"
            },
            "SuccessState": {
              "Type": "Succeed"
            }
          }
        }

  TaskTokenApi:
    Type: AWS::ApiGateway::RestApi
    Properties: 
      Description: REST API that relays manual-approval responses to Step Functions
      Name: TokenHandler
  SuccessResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: !Ref TokenResource
      PathPart: "success"
      RestApiId: !Ref TaskTokenApi
  FailResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: !Ref TokenResource
      PathPart: "fail"
      RestApiId: !Ref TaskTokenApi
  TokenResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: !GetAtt TaskTokenApi.RootResourceId
      PathPart: "{token}"
      RestApiId: !Ref TaskTokenApi
  SuccessMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      HttpMethod: GET
      ResourceId: !Ref SuccessResource
      RestApiId: !Ref TaskTokenApi
      AuthorizationType: NONE
      MethodResponses:
        - ResponseParameters:
            method.response.header.Access-Control-Allow-Origin: true
          StatusCode: 200
      Integration:
        IntegrationHttpMethod: POST
        Type: AWS
        Credentials: !GetAtt APIGWRole.Arn
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:states:action/SendTaskSuccess"
        IntegrationResponses:
          - StatusCode: 200
            ResponseTemplates:
              application/json: |
                {}
          - StatusCode: 400
            ResponseTemplates:
              application/json: |
                {"uhoh": "Spaghetti O's"}
        RequestTemplates:
          application/json: |
              #set($token=$input.params('token'))
              {
                "taskToken": "$util.base64Decode($token)",
                "output": "{}"
              }
        PassthroughBehavior: NEVER
      OperationName: "TokenResponseSuccess"
  FailMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      HttpMethod: GET
      ResourceId: !Ref FailResource
      RestApiId: !Ref TaskTokenApi
      AuthorizationType: NONE
      MethodResponses:
        - ResponseParameters:
            method.response.header.Access-Control-Allow-Origin: true
          StatusCode: 200
      Integration:
        IntegrationHttpMethod: POST
        Type: AWS
        Credentials: !GetAtt APIGWRole.Arn
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:states:action/SendTaskFailure"
        IntegrationResponses:
          - StatusCode: 200
            ResponseTemplates:
              application/json: |
                {}
          - StatusCode: 400
            ResponseTemplates:
              application/json: |
                {"uhoh": "Spaghetti O's"}
        RequestTemplates:
          application/json: |
              #set($token=$input.params('token'))
              {
                 "cause": "Failed Manual Approval",
                 "error": "HandledError",
                 "output": "{}",
                 "taskToken": "$util.base64Decode($token)"
              }
        PassthroughBehavior: NEVER
      OperationName: "TokenResponseFail"

  APIDeployment:
    Type: AWS::ApiGateway::Deployment
    DependsOn:
      - FailMethod
      - SuccessMethod
    Properties:
      Description: "Prod Stage"
      RestApiId:
        Ref: TaskTokenApi
      StageName: Prod

  APIGWRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "apigateway.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: 
                 - 'states:SendTaskSuccess'
                 - 'states:SendTaskFailure'
                Resource: '*'
  SFnRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "states.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: 
                 - 'lambda:InvokeFunction'
                Resource: !GetAtt NotifierFunction.Arn

 

After you create the CloudFormation stack, you have a state machine, an Amazon API Gateway REST API, a Lambda function, and the roles each resource needs.

Your pipeline invokes the state machine with the load-test results, which contain the accuracy and latency statistics, and the state machine decides which team, if either, to notify of the results. If the results are within both thresholds, it returns a success status without notifying either team. If a team needs to be notified, Step Functions invokes the Lambda function and passes in the relevant metric, the team’s email address, and a task token. The Lambda function renders an email with pass and fail links so the team can respond to the review by choosing one. The REST API captures the response and sends it to Step Functions to continue the state machine execution.
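If you want to exercise the state machine outside of CodePipeline, a rough boto3 sketch such as the following starts an execution with the same input shape as the results.json file produced by the build stage. The state machine ARN shown is a placeholder; use the one created by your stack.

import json
import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    # Placeholder ARN -- replace with the state machine created by the stack above.
    stateMachineArn="arn:aws:states:us-west-2:111122223333:stateMachine:MyStepFunctionsStateMachine-EXAMPLE",
    input=json.dumps({"accuracypct": 100, "latencyMs": 225}),  # same shape as results.json
)
print(response["executionArn"])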

The following diagram illustrates the visual workflow of the approval process within the Step Functions state machine.

Step Functions state machine for approving code changes

 

After you create your state machine, Lambda function, and REST API, return to the CodePipeline console and add the Step Functions integration to your existing release pipeline. Complete the following steps:

  1. On the CodePipeline console, choose Pipelines.
  2. Choose your release pipeline.
  3. Choose Edit.
  4. Under the Edit:Build section, choose Add stage.
  5. Name your stage Release-Approval.
  6. Choose Save.
    You return to the edit view and can see the new stage at the end of your pipeline.
  7. In the Edit:Release-Approval section, choose Add action group.
  8. Add the Step Functions state machine invocation action to the action group. Use the following settings:
    1. For Action name, enter CheckForRequiredApprovals.
    2. For Action provider, choose AWS Step Functions.
    3. For Region, choose the Region where your state machine is located (this post uses US West (Oregon)).
    4. For Input artifacts, enter BuildOutput (the name you gave the output artifacts in the build stage).
    5. For State machine ARN, choose the state machine you just created.
    6. For Input type, choose File path. (This parameter tells CodePipeline to take the contents of a file and use it as the input for the state machine execution.)
    7. For Input, enter results.json (where you store the results of your load test in the build stage of the pipeline).
    8. For Variable namespace, enter StepFunctions. (This parameter tells CodePipeline to store the state machine ARN and execution ARN for this event in a variable namespace named StepFunctions.)
    9. For Output artifacts, enter ApprovalArtifacts. (This parameter tells CodePipeline to store the results of this execution in an artifact called ApprovalArtifacts.)
  9. Choose Done.
    You return to the edit view of the pipeline.
  10. Choose Save.
  11. Choose Release change.

When the pipeline execution reaches the approval stage, it invokes the Step Functions state machine with the results emitted from your build stage. This post hard-codes the load-test results to force an engineering approval by increasing the latency (latencyMs) above the threshold defined in the CloudFormation template (80ms). See the following code:

{
  "accuracypct": 100,
  "latencyMs": 225
}

When the state machine checks the latency and sees that it’s above 80 milliseconds, it invokes the Lambda function with the engineering email address. The engineering team receives a review request email similar to the following screenshot.

review email

If you choose PASS, you send a request to the API Gateway REST API with the Step Functions task token for the current execution, which passes the token to Step Functions with the SendTaskSuccess command. When you return to your pipeline, you can see that the approval was processed and your change is ready for production.

Approved code change with the Step Functions integration
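Behind the scenes, the PASS and FAIL links simply hand the decoded task token back to Step Functions through the API Gateway integrations defined earlier. If you ever need to respond from a script rather than the email, a minimal boto3 equivalent looks like this (the token value is a placeholder):

import boto3

sfn = boto3.client("stepfunctions")
task_token = "example-task-token"  # the token the Lambda function embedded in the email links

# Equivalent of the /success integration:
sfn.send_task_success(taskToken=task_token, output="{}")

# Equivalent of the /fail integration:
# sfn.send_task_failure(taskToken=task_token, error="HandledError", cause="Failed Manual Approval")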

Cleaning Up

When the engineering and research teams devise a solution that no longer mixes performance information from both teams into a single application, you can remove this integration by deleting the CloudFormation stack that you created and deleting the new CodePipeline stage that you added.

Conclusion

For more information about CodePipeline Actions and the Step Functions integration, see Working with Actions in CodePipeline.

Building a CI/CD pipeline for multi-region deployment with AWS CodePipeline

Post Syndicated from Akash Kumar original https://aws.amazon.com/blogs/devops/building-a-ci-cd-pipeline-for-multi-region-deployment-with-aws-codepipeline/

This post discusses the benefits of building an AWS CI/CD pipeline in AWS CodePipeline for multi-region deployment, and shows how to build one. The CI/CD pipeline triggers on application code changes pushed to your AWS CodeCommit repository, which automatically feeds into AWS CodeBuild for static and security analysis of the CloudFormation template. Another CodeBuild project builds the application to generate an AMI as output. AWS Lambda then copies the AMI to the other Regions. Finally, AWS CloudFormation cross-region actions are triggered to provision instances into the target Regions based on that AMI.

The solution is based on a single pipeline with cross-region actions, which provisions resources in the current Region and in other Regions. This approach lets you manage the complete CI/CD pipeline in one place, in one Region, and gives you a single point for monitoring and deploying changes. It also incurs less cost, because a single pipeline can deploy the application into multiple Regions.

As a security best practice, the solution also incorporates static and security analysis using cfn-lint and cfn-nag. You use these tools to scan CloudFormation templates for security vulnerabilities.

The following diagram illustrates the solution architecture.

Multi region AWS CodePipeline architecture

Prerequisites

Before getting started, you must complete the following prerequisites:

  • Create a repository in CodeCommit and provide access to your user
  • Copy the sample source code from GitHub into your repository
  • Create an Amazon S3 bucket in the current Region and each target Region for your artifact store

Creating a pipeline with AWS CloudFormation

You use a CloudFormation template for your CI/CD pipeline, which can perform the following actions:

  1. Use a CodeCommit repository as the source code repository
  2. Run static code analysis on the CloudFormation template to check against the resource specification, and block provisioning if this check fails
  3. Run security code analysis on the CloudFormation template to check against secure infrastructure rules, and block provisioning if this check fails
  4. Compile and unit test the application code to generate an AMI
  5. Copy the AMI into the target Regions for deployment
  6. Deploy into multiple Regions using the CloudFormation template; for example, us-east-1, us-east-2, and ap-south-1

You use a sample web application to run through your pipeline, which requires Java and Apache Maven for compilation and testing. Additionally, it uses Tomcat 8 for deployment.

The following table summarizes the resources that the CloudFormation template creates.

Resource Name | Type | Objective
CloudFormationServiceRole | AWS::IAM::Role | Service role for AWS CloudFormation
CodeBuildServiceRole | AWS::IAM::Role | Service role for CodeBuild
CodePipelineServiceRole | AWS::IAM::Role | Service role for CodePipeline
LambdaServiceRole | AWS::IAM::Role | Service role for the Lambda function
SecurityCodeAnalysisServiceRole | AWS::IAM::Role | Service role for security analysis of the provisioning CloudFormation template
StaticCodeAnalysisServiceRole | AWS::IAM::Role | Service role for static analysis of the provisioning CloudFormation template
StaticCodeAnalysisProject | AWS::CodeBuild::Project | CodeBuild project for static analysis of the provisioning CloudFormation template
SecurityCodeAnalysisProject | AWS::CodeBuild::Project | CodeBuild project for security analysis of the provisioning CloudFormation template
CodeBuildProject | AWS::CodeBuild::Project | CodeBuild project for compilation, testing, and AMI creation
CopyImage | AWS::Lambda::Function | Python Lambda function for copying AMIs into other Regions
AppPipeline | AWS::CodePipeline::Pipeline | CodePipeline pipeline for CI/CD

To start creating your pipeline, complete the following steps:

  • Launch the CloudFormation stack with the following link:
Launch button for CloudFormation

  • Choose Next.
  • For Specify details, provide the following values:
Parameter | Description
Stack name | Name of your stack
OtherRegion1 | Input the target Region 1 (other than the current Region) for deployment
OtherRegion2 | Input the target Region 2 (other than the current Region) for deployment
RepositoryBranch | Branch name of the repository
RepositoryName | Repository name of the project
S3BucketName | Input the S3 bucket name for the artifact store
S3BucketNameForOtherRegion1 | Create a bucket in target Region 1 and specify the name for the artifact store
S3BucketNameForOtherRegion2 | Create a bucket in target Region 2 and specify the name for the artifact store

Choose Next.

  • On the Review page, select I acknowledge that this template might cause AWS CloudFormation to create IAM resources.
  • Choose Create.
  • Wait for the CloudFormation stack status to change to CREATE_COMPLETE (this takes approximately 5–7 minutes).

When the stack is complete, your pipeline should be ready and running in the current Region.

  • To validate the pipeline, check the AMIs and EC2 instances running in the target Regions, and also refer to the AWS CodePipeline execution summary, as shown below.
AWS CodePipeline Execution Summary

We will walk you through the following steps for creating a multi-region deployment pipeline:

1. Using CodeCommit as your source code repository

The deployment workflow starts by placing the application code on the CodeCommit repository. When you add or update the source code in CodeCommit, the action generates a CloudWatch event, which triggers the pipeline to run.

2. Static code analysis of CloudFormation template to provision AWS resources

Historically, AWS CloudFormation linting was limited to the ValidateTemplate action in the service API. This action tells you if your template is well-formed JSON or YAML, but doesn’t help validate the actual resources you’ve defined.

You can use a linter such as the cfn-lint tool for static code analysis to improve your AWS CloudFormation development cycle. The tool validates the provisioning CloudFormation template properties and their values (mappings, joins, splits, conditions, and nesting of those functions inside each other) against the resource specification. This covers the most common underlying service constraints and helps encode some best practices.

The following rules cover underlying service constraints:

  • E2530 – Checks that Lambda functions have correctly configured memory sizes
  • E3025 – Checks that your RDS instances use correct instance types for the database engine
  • W2001 – Checks that each parameter is used at least once

You can also add this step as a pre-commit hook for your Git repository if you are using CodeCommit or GitHub.

You provision a CodeBuild project for static code analysis as the first stage in CodePipeline after the source stage. This helps in early detection of any linter issues.

3. Security code analysis of CloudFormation template to provision AWS resources

You can use Stelligent’s cfn_nag tool to perform additional validation of your template resources for security. The cfn_nag tool looks for patterns in CloudFormation templates that may indicate insecure infrastructure provisioning and validates them against AWS best practices. For example:

  • IAM rules that are too permissive (wildcards)
  • Security group rules that are too permissive (wildcards)
  • Access logs that aren’t enabled
  • Encryption that isn’t enabled
  • Password literals

You provision a CodeBuild project for security code analysis as the second step in CodePipeline. This helps detect any insecure infrastructure provisioning issues.
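To make the two analysis stages concrete, the checks amount to running the two tools against the provisioning template and failing the build on a non-zero exit code. The sketch below shows one way to do that from Python for illustration; the actual projects use buildspec commands, and the template path here is hypothetical.

import subprocess

TEMPLATE = "provisioning/template.yml"  # hypothetical path to the provisioning template

# cfn-lint validates the template against the resource specification.
subprocess.run(["cfn-lint", TEMPLATE], check=True)

# cfn_nag scans the template for insecure patterns; a finding fails the stage.
subprocess.run(["cfn_nag_scan", "--input-path", TEMPLATE], check=True)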

4. Compiling and testing application code and generating an AMI image

Because you use a Java-based application for this walkthrough, you use Amazon Corretto as your JVM. Corretto is a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto comes with long-term support that includes performance enhancements and security fixes.

You also use Apache Maven as a build automation tool to build the sample application, and the HashiCorp Packer tool to generate an AMI image for the application.

You provision a CodeBuild project for compilation, unit testing, AMI generation, and storing the AMI ImageId in Parameter Store, which the CloudFormation template uses in the next step of the pipeline.

5. Copying the AMI image into target Regions

You use a Lambda function to copy the AMI into each target Region so the CloudFormation template can use it to provision instances into that Region in the next step of the pipeline. The function also writes the copied AMI’s ImageId into the target Region’s Parameter Store.
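A minimal sketch of what such a Lambda function might do is shown below, assuming the parameter is named AMI_VERSION (as referenced in the cleanup section); the actual function in the sample repository may be structured differently.

import boto3

def copy_ami_to_region(source_image_id, source_region, target_region, parameter_name="AMI_VERSION"):
    """Copy an AMI into a target Region and record the new ImageId in that Region's Parameter Store."""
    ec2 = boto3.client("ec2", region_name=target_region)
    ssm = boto3.client("ssm", region_name=target_region)

    copied = ec2.copy_image(
        Name="app-ami-copy",            # hypothetical name for the copied AMI
        SourceImageId=source_image_id,
        SourceRegion=source_region,
    )
    ssm.put_parameter(
        Name=parameter_name,            # the cross-region CloudFormation template reads this value
        Value=copied["ImageId"],
        Type="String",
        Overwrite=True,
    )
    return copied["ImageId"]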

6. Deploying into multiple Regions with the CloudFormation template

You use the CloudFormation template as a cross-region action to provision AWS resources into a target Region. CloudFormation uses Parameter Store’s ImageId as reference and provisions the instances into the target Region.

Cleaning up

To avoid additional charges, you should delete the following AWS resources after you validate the pipeline:

  • The cross-region CloudFormation stack in the target and current Regions
  • The main CloudFormation stack in the current Region
  • The AMI you created in the target and current Regions
  • The Parameter Store AMI_VERSION in the target and current Regions

Conclusion

You have now created a multi-region deployment pipeline in CodePipeline without having to worry about the mechanics of creating and copying AMIs across Regions. The pipeline handles creating and copying the images in the background in each Region. You can now push source code changes to the CodeCommit repository in the primary Region, and the changes deploy automatically to the other Regions. Cross-region actions are very powerful and are not limited to deploy actions; you can also use them with build and test actions.

Using CodeBuild in Spinnaker for continuous integration

Post Syndicated from Muhammad Mansoor original https://aws.amazon.com/blogs/devops/using-codebuild-in-spinnaker-for-continuous-integration/

Continuous integration is a DevOps software development practice in which developers regularly merge their code changes into a central repository, then run automated builds and tests. Continuous integration (CI) most often refers to the build or integration stage of the software release process and entails both an automation component (such as a CI or build service) and a cultural component (such as learning to integrate frequently). This example configures AWS CodeBuild to provide CI capabilities in Spinnaker.

Overview of Concepts

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. Because CodeBuild is a managed service, you don’t need to provision any resources such as build servers; when you start a build, CodeBuild automatically allocates resources for you.
Spinnaker is an open-source tool built by Netflix for continuous integration/continuous deployment (CI/CD). The two core features of Spinnaker are application management and application deployment. Application management manages the state of your application, and application deployment is used to build continuous delivery workflows.
Spinnaker defines CI/CD workflows as pipelines. A pipeline consists of one or more stages. A stage defines part of the workflow. A pipeline is usually part of an application in Spinnaker. Multiple applications can be logically grouped together as a project.

Prerequisites

In order to configure CodeBuild in Spinnaker, you need the following:

  • An AWS account
  • A CodeBuild project in your AWS account
  • Spinnaker installed and running on an Amazon EC2 instance in AWS

Using CodeBuild in Spinnaker

This section walks you through the process of creating a new project, application, and pipeline and adding CodeBuild as one of the stages. Before you start using CodeBuild in Spinnaker, you need to enable support for CodeBuild.

Enable AWS in Spinnaker

Give the Amazon EC2 instance on which Spinnaker runs additional IAM permissions for CodeBuild projects via its EC2 instance profile.
When you install Spinnaker in AWS, you configure two roles:

  1. Spinnaker managing role: Spinnaker authenticates itself as this role. This role is assigned to the Amazon EC2 instance on which Spinnaker is running.
  2. Spinnaker managed role: Instead of giving permissions directly to Spinnaker, define policies in a managed role and enable trust between the roles so that the managing role can assume the managed role and perform the necessary AWS SDK API calls. The managed role requires additional permissions so that it can call CodeBuild via the AWS SDK. This is done by adding the following inline policy to the managed role in IAM:
    {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "codebuild:StopBuild",
              "codebuild:ListProjects",
              "codebuild:StartBuild",
              "codebuild:BatchGetBuilds"
            ],
            "Resource": "*"
          }
        ]
      }

Use SSH to connect to the Amazon EC2 instance on which you have installed Spinnaker.
Validate whether you can assume the role by running the following command:

aws sts get-caller-identity

The output should match the following:

{
    "Account": "111222333444",
    "UserId": "AAAAABBBBBCCCCCDDDDDD:i-01aa01aa01aa091aa0",
    "Arn": "arn:aws:sts::111222333444:assumed-role/Spinnaker-Managing-Role/i-01aa01aa01aa091aa0"
}

The above output indicates that the Amazon EC2 instance can assume the Spinnaker managing role. This assumed role contains the actual permissions to interact with CodeBuild.
List your CodeBuild projects by running the following command:

aws codebuild list-projects --region us-east-1 --output table

The output should list all of your CodeBuild Projects. An example output will look like the following:

--------------------------------
|         ListProjects         |
+------------------------------+
||          projects          ||
|+----------------------------+|
||  MyFirstProject            ||
||  EKSBuildProject           ||
||  ServelessBuildProject     ||
||  ................          ||
|+----------------------------+|

If you are not able to list the projects, or you get an authentication error, check the trust relationship between the managing and managed roles.

Spinnaker uses Halyard to configure, install, and update itself. To configure AWS as one of the providers in Spinnaker, use the Halyard Command Line Interface (CLI). Run the following commands in the instance on which you have installed Halyard.

  1. Start by defining your AWS account credentials in the terminal:
    export AWS_ACCOUNT=my-aws-account
    export AWS_ACCOUNT_ID=[YOUR_AWS_ACCOUNT_ID]
    export AWS_ROLE_NAME=role/Spinnaker-Managed-Role
    
  2. Add AWS as a cloud provider using the Halyard CLI:
    hal config provider aws account add ${AWS_ACCOUNT} \
      --account-id ${AWS_ACCOUNT_ID} \
      --assume-role ${AWS_ROLE_NAME} \
      --regions us-east-1
    
  3. Enable AWS as a cloud provider:
    hal config provider aws enable
  4. Add CodeBuild as a continuous integration (CI) provider:
    hal config ci codebuild account add ${AWS_ACCOUNT} \
    --account-id ${AWS_ACCOUNT_ID} \
    --assume-role ${AWS_ROLE_NAME} \
    --regions us-east-1
  5. Enable CodeBuild in Spinnaker:
    hal config ci codebuild enable
  6. Apply the new configuration and re-deploy Spinnaker:
    hal deploy apply

If you don’t have a CodeBuild project to use as a stage in Spinnaker, you can follow these instructions to create a new CodeBuild project. It must have a source provider local to the CodeBuild project: it should not be using source from a previous stage in AWS CodePipeline.
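If you prefer to script the project creation, a rough boto3 sketch is shown below; every name, location, and ARN in it is an example, and you still need an existing CodeBuild service role and a source repository that belongs to the project itself.

import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")

codebuild.create_project(
    name="SpinnakerDemoProject",  # example project name
    source={"type": "GITHUB", "location": "https://github.com/example/sample-app.git"},
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:4.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111222333444:role/CodeBuildServiceRole",  # existing service role
)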

Creating an application in Spinnaker

Start by logging in to your Spinnaker instance; then create a new application in Spinnaker.

  1. From the top navigation bar, choose Applications, then Create Application. Enter the name of the application and the owner email, and select aws from the list of Cloud Providers, as shown in the following screenshot.
  2. Once you have created a new application, Spinnaker takes you to the Infrastructure section of the application. From this screen, choose Pipelines, as shown in the following screenshot.
  3. Choose Configure a new pipeline to create a new pipeline and give it a name (such as My First Pipeline), then choose Create, as shown in the following screenshot.
  4. When prompted, enter a value as the Pipeline Name, as shown in the following screenshot.
  5. Once the pipeline is created, Spinnaker takes you to the newly created pipeline. From this new screen, choose Add Stage, as shown in the following screenshot.
  6. Select AWS CodeBuild from the Type drop-down and assign this stage a name by entering a value in the Stage Name field.
  7. Configure additional details:
    Basic Settings:
    • Account: Select the CodeBuild CI account that you configured.
    • Project Name: Select the CodeBuild Project that you want Spinnaker to trigger when this stage is executed.

    Source Configuration:

    • Source: (Optional) Select the source of the build to override the source artifact already defined in your CodeBuild project.
    • Source Version: (Optional) If a source version for the build is not specified, the artifact version is used. If the artifact doesn’t have a version, the latest version is used. See the CodeBuild reference for more information.
    • Buildspec: (Optional) If an inline buildspec definition is not specified, the buildspec configured in the CodeBuild project is used.
    • Secondary Sources: (Optional) Selecting the secondary sources of the build allows you to override the secondary source artifact already defined in your CodeBuild project. If not specified, secondary sources configured in CodeBuild project are used.

    Environment Configuration:

    • Image: (Optional) Select the image in which the build runs if you want to override the image defined in the CodeBuild project. If not specified, the image configured in the CodeBuild project is used.
  8. Choose Save Changes to save this stage, and then choose PIPELINES to go to the pipelines. You can see the pipeline you just created, as shown in the following screenshot.

Testing the pipeline

Congratulations! You have successfully integrated CodeBuild as one of the stages in Spinnaker. Let’s test this pipeline.

  1. On the same page on which you can see the list of the pipelines, choose Start Manual Execution and select the newly created pipeline, as shown in the following screenshot.
  2. Once you confirm, Spinnaker starts executing your pipeline. You can check the progress of the pipeline by selecting the pipeline, then choosing Execution Details, as shown in the following screenshot.
  3. Once the pipeline has finished executing, you can see that the status of the task is SUCCEEDED, as shown in the following screenshot. You can choose the Build Link and CloudWatch Logs links from this screen.

Congratulations! You have now successfully integrated (and executed) CodeBuild in Spinnaker.

Cleanup

If you created a new CodeBuild project, navigate to the CodeBuild section of the AWS Management Console and delete the project that you created.

Conclusion

In this post, we went through the concepts of Spinnaker and walked you through the process of using CodeBuild as a stage in a Spinnaker pipeline.

Integration is just one part of a well-defined CI/CD pipeline. In addition to CodeBuild, you can also use Spinnaker to deploy to Amazon EKS.

Happy building!

New – Building a Continuous Integration Workflow with Step Functions and AWS CodeBuild

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-building-a-continuous-integration-workflow-with-step-functions-and-aws-codebuild/

Automating your software build is an important step to adopt DevOps best practices. To help you with that, we built AWS CodeBuild, a fully managed continuous integration service that compiles source code, runs tests, and produces packages that are ready for deployment.

However, there are many possible customizations in our customers’ build processes, and we have seen developers spend time creating their own custom workflows to coordinate the different activities required by their software builds. For example, you may want to run some tests or skip them, or skip static analysis of your code when you need to deploy a quick fix. Depending on the results of your unit tests, you may want to take different actions or be notified via SNS.

To simplify that, today we are launching a new AWS Step Functions service integration with CodeBuild. Now, during the execution of a state machine, you can start or stop a build, get build report summaries, and delete past build execution records.

In this way, you can define your own workflow-driven build process, and trigger it manually or automatically.

With this integration, you can use the full capabilities of Step Functions to automate your software builds. For example, you can use a Parallel state to create parallel builds for independent components of the build. Starting from a list of all the branches in your code repository, you can use a Map state to run a set of steps (automating build, unit tests, and integration tests) for each branch. You can also leverage in the same workflow other Step Functions service integrations. For instance, you can send a message to an SQS queue to track your activities, or start a containerized application you just built using Amazon ECS and AWS Fargate.

Using Step Functions for a Workflow-Driven Build Process
I am working on a Java web application. To be sure that it works as I add new features, I wrote a few tests using JUnit Jupiter. I want those tests to run just after the build process, but not always, because tests can slow down quick iterations. When I run tests, I want to store and view the test reports using CodeBuild. At the end, I want to be notified via an SNS topic whether the tests ran, and if they were successful.

I created a repository in CodeCommit and I included two buildspec files for CodeBuild:

  • buildspec.yml is the default; it uses Apache Maven to run the build and the tests, and then stores the test results as reports.
version: 0.2
phases:
  build:
    commands:
      - mvn package
artifacts:
  files:
    - target/binary-converter-1.0-SNAPSHOT.jar
reports:
  SurefireReports:
    files:
      - '**/*'
    base-directory: 'target/surefire-reports'
  • buildspec-notests.yml runs only the build; no tests are executed.
version: 0.2
phases:
  build:
    commands:
      - mvn package -DskipTests
artifacts:
  files:
    - target/binary-converter-1.0-SNAPSHOT.jar

To set up the CodeBuild project and the Step Functions state machine to automate the build, I am using AWS CloudFormation with the following template:

AWSTemplateFormatVersion: 2010-09-09
Description: AWS Step Functions sample project for getting notified on AWS CodeBuild test report results
Resources:
  CodeBuildStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt [ CodeBuildExecutionRole, Arn ]
      DefinitionString:
        !Sub
          - |-
            {
              "Comment": "An example of using CodeBuild to run (or not run) tests, get test results and send a notification.",
              "StartAt": "Run Tests?",
              "States": {
                "Run Tests?": {
                  "Type": "Choice",
                  "Choices": [
                    {
                      "Variable": "$.tests",
                      "BooleanEquals": false,
                      "Next": "Trigger CodeBuild Build Without Tests"
                    }
                  ],
                  "Default": "Trigger CodeBuild Build With Tests"
                },
                "Trigger CodeBuild Build With Tests": {
                  "Type": "Task",
                  "Resource": "arn:${AWS::Partition}:states:::codebuild:startBuild.sync",
                  "Parameters": {
                    "ProjectName": "${projectName}"
                  },
                  "Next": "Get Test Results"
                },
                "Trigger CodeBuild Build Without Tests": {
                  "Type": "Task",
                  "Resource": "arn:${AWS::Partition}:states:::codebuild:startBuild.sync",
                  "Parameters": {
                    "ProjectName": "${projectName}",
                    "BuildspecOverride": "buildspec-notests.yml"
                  },
                  "Next": "Notify No Tests"
                },
                "Get Test Results": {
                  "Type": "Task",
                  "Resource": "arn:${AWS::Partition}:states:::codebuild:batchGetReports",
                  "Parameters": {
                    "ReportArns.$": "$.Build.ReportArns"
                  },
                  "Next": "All Tests Passed?"
                },
                "All Tests Passed?": {
                  "Type": "Choice",
                  "Choices": [
                    {
                      "Variable": "$.Reports[0].Status",
                      "StringEquals": "SUCCEEDED",
                      "Next": "Notify Success"
                    }
                  ],
                  "Default": "Notify Failure"
                },
                "Notify Success": {
                  "Type": "Task",
                  "Resource": "arn:${AWS::Partition}:states:::sns:publish",
                  "Parameters": {
                    "Message": "CodeBuild build tests succeeded",
                    "TopicArn": "${snsTopicArn}"
                  },
                  "End": true
                },
                "Notify Failure": {
                  "Type": "Task",
                  "Resource": "arn:${AWS::Partition}:states:::sns:publish",
                  "Parameters": {
                    "Message": "CodeBuild build tests failed",
                    "TopicArn": "${snsTopicArn}"
                  },
                  "End": true
                },
                "Notify No Tests": {
                  "Type": "Task",
                  "Resource": "arn:${AWS::Partition}:states:::sns:publish",
                  "Parameters": {
                    "Message": "CodeBuild build without tests",
                    "TopicArn": "${snsTopicArn}"
                  },
                  "End": true
                }
              }
            }
          - {snsTopicArn: !Ref SNSTopic, projectName: !Ref CodeBuildProject}
  SNSTopic:
    Type: AWS::SNS::Topic
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      ServiceRole: !Ref CodeBuildServiceRole
      Artifacts:
        Type: NO_ARTIFACTS
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:2.0
      Source:
        Type: CODECOMMIT
        Location: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/binary-converter
  CodeBuildExecutionRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: "sts:AssumeRole"
            Principal:
              Service: states.amazonaws.com
      Path: "/"
      Policies:
        - PolicyName: CodeBuildExecutionRolePolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - "sns:Publish"
                Resource:
                  - !Ref SNSTopic
              - Effect: Allow
                Action:
                  - "codebuild:StartBuild"
                  - "codebuild:StopBuild"
                  - "codebuild:BatchGetBuilds"
                  - "codebuild:BatchGetReports"
                Resource: "*"
              - Effect: Allow
                Action:
                  - "events:PutTargets"
                  - "events:PutRule"
                  - "events:DescribeRule"
                Resource:
                  - !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/StepFunctionsGetEventForCodeBuildStartBuildRule"
  CodeBuildServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: "sts:AssumeRole"
            Principal:
              Service: codebuild.amazonaws.com
      Path: /
      Policies:
        - PolicyName: CodeBuildServiceRolePolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                - "logs:CreateLogGroup"
                - "logs:CreateLogStream"
                - "logs:PutLogEvents"
                - "codebuild:CreateReportGroup"
                - "codebuild:CreateReport"
                - "codebuild:UpdateReport"
                - "codebuild:BatchPutTestCases"
                - "codecommit:GitPull"
                Resource: "*"
Outputs:
  StateMachineArn:
    Value: !Ref CodeBuildStateMachine
  ExecutionInput:
    Description: Sample input to StartExecution.
    Value:
      >
        {
          "tests": true
        }

When the CloudFormation stack has been created, there are two CodeBuild tasks in the state machine definition:

  • The first CodeBuild task uses a synchronous integration (startBuild.sync) to automatically wait for the build to terminate before progressing to the next step:
"Trigger CodeBuild Build With Tests": {
  "Type": "Task",
  "Resource": "arn:aws:states:::codebuild:startBuild.sync",
  "Parameters": {
    "ProjectName": "CodeBuildProject-HaVamwTeX8kM"
  },
  "Next": "Get Test Results"
}
  • The second CodeBuild task uses the BuildspecOverride parameter to override the default buildspec file used by the build with the one not running tests:
"Trigger CodeBuild Build Without Tests": {
  "Type": "Task",
  "Resource": "arn:aws:states:::codebuild:startBuild.sync",
  "Parameters": {
    "ProjectName": "CodeBuildProject-HaVamwTeX8kM",
    "BuildspecOverride": "buildspec-notests.yml"
  },
  "Next": "Notify No Tests"
},

The first step is a Choice state that looks at the input of the state machine execution to decide whether to run tests. For example, to run tests I can provide the following input:

{
  "tests": true
}
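The same input can also be supplied programmatically; a short boto3 sketch is below, using the StateMachineArn output of the stack (the ARN shown is a placeholder).

import json
import boto3

sfn = boto3.client("stepfunctions")

sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:111222333444:stateMachine:CodeBuildStateMachine-EXAMPLE",
    input=json.dumps({"tests": True}),
)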

This is the visual workflow of the execution running tests; all tests passed.

I change the value of "tests" to false, and start a new execution that takes a different branch.

This time the buildspec is not executing tests, and I get a notification that no tests were run.

When starting this workflow automatically after an activity on GitHub or CodeCommit, I could look into the last commit message for specific patterns and customize the build process accordingly. For example, I could skip tests if the [skip tests] string is part of the commit message. Similarly, in a production environment I could skip static code analysis, to have faster integration for urgent changes, if the [skip static analysis] string is included in the commit message.
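As a sketch of that idea, the following snippet reads the latest commit message from the binary-converter repository and sets the tests flag accordingly before starting an execution. The branch name and state machine ARN are assumptions for illustration.

import json
import boto3

codecommit = boto3.client("codecommit")
sfn = boto3.client("stepfunctions")

# Assumed branch name; the repository name matches the one used by the CodeBuild project.
branch = codecommit.get_branch(repositoryName="binary-converter", branchName="master")
commit = codecommit.get_commit(
    repositoryName="binary-converter",
    commitId=branch["branch"]["commitId"],
)

run_tests = "[skip tests]" not in commit["commit"]["message"]

sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:111222333444:stateMachine:CodeBuildStateMachine-EXAMPLE",
    input=json.dumps({"tests": run_tests}),
)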

Extending the Workflow for Containerized Applications
A great way to distribute applications to different environments is to package them as Docker images. In this way, I can also add a step to my build workflow and start the containerized application in an Amazon ECS task (running on AWS Fargate) for the Quality Assurance (QA) team.

First, I create an image repository in ECR and add permissions to the service role used by the CodeBuild project to upload to ECR, as described here.

Then, in the code repository, I follow this example to add:

  • A Dockerfile to prepare the Docker container with the software build, and start the application.
  • A buildspec-docker.yml file with the commands to create and upload the Docker image.

The final workflow automates all of these steps:

  1. Building the software from the source code.
  2. Creating the Docker image.
  3. Uploading the Docker image to ECR.
  4. Starting the QA environment on ECS and Fargate.
  5. Sending an SNS notification that the QA environment is ready.

The workflow and its steps can easily be customized based on your requirements. For example, with a few changes, you can adapt the buildspec file to push the image to Docker Hub.

Available Now
The CodeBuild service integration is available in all commercial and GovCloud regions where Step Functions and CodeBuild services are offered. For regional availability, please see the AWS Region Table. For more information, please look at the documentation.

As AWS Serverless Hero Gojko Adzic pointed out on the AWS DevOps Blog, CodeBuild can also be used to execute administrative tasks. The integration with Step Functions opens a whole set of new possibilities.

Let me know what you are going to use this new service integration for!

Danilo

Using AWS CodeDeploy and AWS CodePipeline to Deploy Applications to Amazon Lightsail

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/using-aws-codedeploy-and-aws-codepipeline-to-deploy-applications-to-amazon-lightsail/

This post is contributed by Mike Coleman | Developer Advocate for Lightsail | Twitter: @mikegcoleman

Introduction

Amazon Lightsail is the easiest way to get started in the cloud, allowing you to get your application running on your own virtual server in a matter of minutes. But what do you do if you want to update that running application?

In order to automate the process of both deploying and updating software, many developers are turning to automated workflows. AWS has a full complement of tools that allow you to build deployment pipelines to cover a wide array of use cases.

This blog post provides guidance on how to configure Lightsail to work with AWS CodePipeline and AWS CodeDeploy to automatically deploy (or update) an application every time you push a change to GitHub. Even though this tutorial provides detailed, step-by-step instructions, you may still want to read some of the docs (CodeDeploy and CodePipeline) if you’re unfamiliar with deployment pipelines.

After completing this walkthrough you’ll be on your way to implementing more complex pipelines that might include build and test steps.

 

Prerequisites

You need the following to complete this walkthrough:

  • A GitHub account in which to fork the demo code
  • git installed on your local machine, and a basic understanding on how to use it
  • An AWS account with sufficient privileges to create AWS Identity and Access Management (IAM) users, policies, and service roles, as well as to create resources with the following services: Amazon S3, CodePipeline, CodeDeploy, and Lightsail
  • The AWS CLI installed and configured on your local machine

 

Document Important Values

As you go through this tutorial, you’ll need to take note of a few key values. Open up a text editor of your choice, and copy and paste the template below into a new document. As you go through the guide, copy the values into your text document when instructed.

S3 Bucket Name:

Access Key ID:

Secret Key:

IAM User ARN:

Note: Both the Access key ID and Secret access key should be protected in the same manner you protect any sensitive username / password pair.

 

Solution Overview

The rest of this blog guides you through the process of setting up your deployment pipeline. You start by creating a service role for CodeDeploy, an Amazon S3 bucket, and an IAM user. After deploying these services, you create a Lightsail instance. You also install and configure the CodeDeploy agent, as well as registering the instance with CodeDeploy. Finally, you create an application in CodeDeploy, and configure CodePipeline to kick off a new deployment whenever you push changes to GitHub.

 

Create a service role

In AWS a service role is a role that an AWS service assumes to perform actions on your behalf. The policies that you attach to the service role determine which AWS resources the service can access and what it can do with those resources. Below you use AWS IAM to create a service role for CodeDeploy that has the necessary permissions to work with Lightsail.

  1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Roles, and then choose Create role.
  3. On the Create role page, choose AWS service, and from the Choose the service that will use this role list, choose CodeDeploy.
  4. Near the bottom of the screen under Select your use case, choose CodeDeploy and click Next: Permissions.
  5. On the Attached permissions policy page, the default permission policy (AWSCodeDeployRole) is displayed. You can click on the policy name if you’d like to review the details of the policy. Click Next: Tags, and then click Next: Review.
  6. On the Create Role page, in Role name, enter a name for the service role (for example, CodeDeployServiceRole).
  7. Click Create role.
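If you prefer to script this step, the console actions above roughly translate to the following boto3 calls; the role name is an example, and AWSCodeDeployRole is the managed policy the console attaches for the CodeDeploy use case.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the CodeDeploy service assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "codedeploy.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="CodeDeployServiceRole",  # example name
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)
iam.attach_role_policy(
    RoleName="CodeDeployServiceRole",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole",
)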

 

Create an S3 bucket

In this section, you create an Amazon S3 bucket to store the deployment artifact created by CodeDeploy. This artifact is a compressed file containing your source code files, and any scripts that need to run as part of the installation or update process.

  1. Sign in to the AWS Management Console, open the S3 console, and click Create bucket.
  2. Enter a name under Bucket name. The name must be unique across all of S3.
    Be sure to copy the S3 bucket name into your text document.
  3. Ensure Block all public access is checked.
  4. Click Create bucket.

 

Create IAM policy

An IAM policy is a set of rules that, when attached to an identity or resource, defines their permissions. For this use case, you want a policy that only allows the AWS CodeDeploy agent to read the S3 bucket you just created. In the next set of steps, you define the policy. Then, in the subsequent section, you apply that policy to an IAM user. That user is ultimately associated with the CodeDeploy agent running on your Lightsail instance.

  1. Sign in to the AWS Management Console and open the IAM console.
  2. In the navigation pane, choose Policies, and then choose Create policy.
  3. Click on the JSON tab.

Erase the content in the editor window, and paste in the code from below.

NOTE: Be sure to replace <S3 Bucket Name> with the name of the S3 bucket you created in the previous step.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::<S3 Bucket Name>/*"
      ]
    }
  ]
}

JSON editor revised

  4. Click Review policy.
  5. Enter CodeDeployS3BucketPolicy for the policy name.
  6. Click Create policy.

Create an IAM user

Because you cannot assign an IAM role to a Lightsail instance, you need to create your IAM user with the appropriate permissions. In this case the user will need to be able to list the contents of the S3 bucket you just created.

  1. Stay in the IAM console.
  2. In the navigation pane, choose Users, and then choose Add user.
  3. Enter LightSailCodeDeployUser for the User name and click Programmatic access under Select AWS access type (you use programmatic access since this user account will never need to log in to the console). Click Next: permissions.
  4. Click Attach existing policies directly. Enter CodeDeployS3BucketPolicy in the search box, and check the box next to the CodeDeployS3BucketPolicy policy.
  5. Click Next: Tags. Click Next: Review. Click Create user.
  6. Copy the Access key ID and Secret access key into your text document. You will need to click Show to display the secret access key. Note: If you do not copy these values now, you cannot go back and retrieve them from the console. You will need to create a new set of credentials.


  7. Click Close.
  8. Click on the user you just created and copy the User ARN into your document.
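For reference, a CLI sketch of the same user setup looks like the following (replace the account ID in the policy ARN with your own):

# Create the user, attach the bucket policy, and generate programmatic credentials.
aws iam create-user --user-name LightSailCodeDeployUser

aws iam attach-user-policy --user-name LightSailCodeDeployUser --policy-arn arn:aws:iam::<Account ID>:policy/CodeDeployS3BucketPolicy

# The output of this command contains the Access key ID and Secret access key.
aws iam create-access-key --user-name LightSailCodeDeployUser

# The user ARN you need for the agent configuration:
aws iam get-user --user-name LightSailCodeDeployUser --query User.Arn --output text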

At this point you have created an S3 bucket where CodeDeploy can store your build artifact, as well as the IAM components you need (a service role, an IAM policy, and an IAM user) to configure the CodeDeploy agent. In the next step, you deploy a Lightsail instance with the CodeDeploy agent and register that instance with CodeDeploy.

Create a Lightsail instance and install the CodeDeploy agent

In this section you create the Lightsail instance where you want your code to run. In order for the instance to work with CodeDeploy you must install the CodeDeploy agent. The agent installation is done by providing a startup script that runs when the instance is first created.

  1. Log in to the AWS Management Console, and navigate to the Lightsail home page.
  2. Click Create instance.
  3. Ensure that you’re creating your instance in the correct AWS Region.
  4. Under Pick your instance image click on Linux/Unix. Click on OS Only. Select Amazon Linux.
  5. Scroll down and click + Add launch script. In the code below, paste in your Access key ID, Secret access key, and IAM User ARN from your text document. Also replace <Desired Region> with the Region that you deployed the instance into (e.g. us-west-2).

After you edit the code below, paste it into the launch script edit window. The configuration below allows the CodeDeploy agent to run with the permissions you assigned to the IAM user earlier. These permissions allow the CodeDeploy agent to download the deployment artifact created by the CodeDeploy service from the S3 bucket where it is stored. The agent then uses the information in the artifact to deploy or update your application.

mkdir /etc/codedeploy-agent/
mkdir /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: <Access Key ID>
aws_secret_access_key: <Secret Access Key>
iam_user_arn: <IAM User ARN>
region: <Desired Region>
EOT
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto

     6. Enter codedeploy for the instance name under Identify your instance. Click Create Instance.

Verify the CodeDeploy agent

Wait about 5-10 minutes for the instance to boot up and run the startup script.

In this section you verify that the CodeDeploy agent is up and running, register your instance with CodeDeploy, and, finally, tag the instance.

Note: You will be using the command line for both your Lightsail instance and on your local machine. Pay careful attention to the instructions to ensure you’re issuing the commands on the right command line.

  1. Start an SSH session by clicking on the terminal icon next to the name of your instance.
  2. On the command line of the Lightsail terminal session enter the command below to verify the CodeDeploy agent is running.

sudo service codedeploy-agent status

You should see a response similar to the one below (the PID will be different):

The AWS CodeDeploy agent is running as PID 2783

      3. Enter the command below using the AWS CLI in a terminal session on your local machine to register your Lightsail instance with CodeDeploy.

NOTE: Replace <IAM User ARN> with the value in your document and the <Desired Region> with the appropriate Region.
NOTE: If you did not name your Lightsail instance codedeploy you will need to adjust the --instance-name parameter accordingly.
NOTE: The command does not provide any output.

aws deploy register-on-premises-instance --instance-name codedeploy --iam-user-arn <IAM User ARN> --region <Desired Region>

  4. Enter the following command using the AWS CLI in a terminal session on your local machine to tag your Lightsail instance in CodeDeploy. The tag will be used by CodeDeploy to know where to install your code.
    NOTE: If you did not name your Lightsail instance codedeploy you will need to adjust the --instance-name parameter accordingly.
    NOTE: Replace <Desired Region> with the appropriate Region.
    NOTE: The command does not provide any output.

aws deploy add-tags-to-on-premises-instances --instance-names codedeploy --tags Key=Name,Value=CodeDeployLightsailDemo --region <Desired Region>

      5. Enter the command below using the AWS CLI in a terminal session on your local machine to verify your machine was successfully registered:
          NOTE: Replace <Desired Region> with the appropriate Region.

aws deploy list-on-premises-instances --region <Desired Region>

You should see output similar to:

{
    "instanceNames": [
        "codedeploy"
    ]
}

At this point you are ready to set up your code deployment using CodePipeline and CodeDeploy.

Setup the application in CodeDeploy

  1. Navigate to the CodeDeploy console, make sure you’re in the correct Region, and click Create application.
  2. Enter CodeDeployLightsailDemo for the Application name and select EC2/On-premises under Compute platform. Click Create application.
  3. In the Deployment groups section, click Create deployment group.
  4. Enter CodeDeployLightsailDemoDeploymentGroup for the Deployment group name.
  5. Click in the text box for Enter a service role and select the service role you created earlier (CodeDeployServiceRole).
  6. Under Environment configuration check the box for On-premises instances. Under Key enter Name and under Value enter CodeDeployLightsailDemo.
  7. Under Load balancer uncheck Enable load balancing.
  8. Click Create deployment group.

 

Fork the GitHub Repo

In this section, you connect your GitHub account to CodePipeline so that whenever you push a change to GitHub your new code will automatically be deployed. For this tutorial, I placed a small demo application in a GitHub repository. These next steps guide you through forking that code into your own GitHub account.

  1. Sign in to GitHub.
  2. Navigate to the demo repository: http://github.com/mikegcoleman/codedeploygithubdemo
  3. To the right of the repository name at the top, click Fork.
  4. Click on the account that you want to fork the repository into.
    After a few seconds the fork process completes, and you are redirected to the new repo in your account.

Setup CodePipeline

As the name implies, CodePipeline allows you to create an automated set of steps for your application deployment, such as build and test processes that must happen before a final deployment step. In this example you build a very simple pipeline that redeploys your application when a change is pushed to the associated GitHub repository.

  1. Navigate to the CodePipeline console, ensure you’re in the correct Region, and click Create pipeline.
  2. Enter CodeDeployLightsailDemoPipeline for the Pipeline name.
  3. Click on Advanced Settings. Under Artifact store click the radio button next to Custom Location. Click into the Bucket text box and select the S3 bucket you created earlier. Click Next.
  4. From the Source provider drop down choose GitHub. Click Connect to GitHub and follow any prompts to authorize CodePipeline to access your GitHub account.
    1. Note: If you’ve connected GitHub previously there will not be any additional prompts.
  5. Click in the Repository text box and select the repository you forked earlier. The name should be <your github username>/codedeploygithubdemo.
  6. Click in the Branch box and choose master. Click Next. Since there isn’t a build stage click Skip build stage and confirm by clicking Skip.
  7. Choose AWS CodeDeploy from the Deploy provider drop down. Ensure the appropriate Region is selected. Choose CodeDeployLightsailDemo from the Application name list. Choose CodeDeployLightsailDemoDeploymentGroup from the Deployment group list.
    1. Click Next.
    2. Click Create pipeline.
  8. You’ll be taken to the details page for your pipeline, and can watch the status of the pipeline update. Once the Deploy step has a status of succeeded feel free to move on to the next section.

 

Test and Update the Application

In this final section you verify that the application deployed, and then you make an update to the application to kick off a new deployment. Finally, you verify that the update was successfully pushed to your server.

  1. Navigate in your web browser to the IP address of your Lightsail instance. You can find the IP address on the card for your instance on the Lightsail home page. You should see a simple webpage displayed.
  2. Move to the command line of your local machine, and clone the GitHub repository, being sure to insert your GitHub username.
    NOTE: If you are using SSH to authenticate to GitHub, adjust the git command accordingly.

    git clone https://github.com/<your github username>/codedeploygithubdemo

    You should see output similar to the following:

    Cloning into 'codedeploygithubdemo'...
    remote: Enumerating objects: 49, done.
    remote: Counting objects: 100% (49/49), done.
    remote: Compressing objects: 100% (33/33), done.
    remote: Total 49 (delta 25), reused 36 (delta 12), pack-reused 0
    Unpacking objects: 100% (49/49), done.

  3. Change into the directory with the website code:

    cd codedeploygithubdemo

  4. Using an editor of your choice, edit the index.html file by changing the background color to purple:

    background-color: purple;

  5. Push the changes to GitHub by issuing each of the following commands one at a time:

    git add index.html
    git commit -m "new background color"
    git push origin master
  6. Navigate back to the CodePipeline console and click on the name of your pipeline. You should see that the change from GitHub has been picked up and the pipeline is deploying your website. Once the pipeline has successfully completed move to the next step.
  7. Reload the demo website to see that the background color has changed.
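If the page doesn’t update, you can check the status of the most recent deployment from the CLI before digging further (replace <Desired Region> as before):

# List recent deployments for the demo application and deployment group.
aws deploy list-deployments --application-name CodeDeployLightsailDemo --deployment-group-name CodeDeployLightsailDemoDeploymentGroup --region <Desired Region>

# Inspect a specific deployment by ID from the list above.
aws deploy get-deployment --deployment-id <Deployment ID> --region <Desired Region>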

Conclusion

Congratulations on finishing this tutorial. You should now understand how to automate the deployment and updating of an application running on Lightsail using CodeDeploy and CodePipeline. As a next step, you might want to deploy a more complex application to Lightsail, including adding a build step. You can find information on how to do that in the AWS CodeBuild documentation.

 

 

 

Using AWS CodeBuild to execute administrative tasks

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/devops/using-aws-codebuild-to-execute-administrative-tasks/

This article is a guest post from AWS Serverless Hero Gojko Adzic.

At MindMup, we started using AWS CodeBuild to quickly lift and shift support tasks to the cloud. MindMup is a collaborative mind-mapping tool, used by millions of teachers and students to collaborate on assignments, structure ideas, and organize and navigate complex information. Still, the team behind the product consists of just two people, and we’re both responsible for everything from sales and product management to programming and customer support. One of the key reasons why such a tiny team can support a large group of users is that we tend to automate all recurring tasks in order to free up our time for more productive work.

Administrative support tasks often start as ad-hoc command line scripts, with manual intervention to resolve exceptions. As the scripts stabilize, humans can be less involved, so teams look for ways of scheduling and automating job executions. For infrastructure deployed to AWS, this also means moving away from running scripts from on-premises developers or operations computers to running in the cloud. With utilization-based pricing and on-demand capacity, AWS Lambda and AWS Fargate are the two obvious choices for running such tasks in AWS. There is a third option, often overlooked: CodeBuild. Although CodeBuild is designed for a completely different purpose, it offers some compelling features that make it very easy to set up and run periodic support jobs, especially as a first easy step towards a more systematic solution.

Solution overview

CodeBuild is, as the name suggests, a managed service for executing typical software build jobs. In some ways, such as each job having associated IAM permissions, CodeBuild is similar to Lambda and Fargate. One of the fundamental differences between CodeBuild jobs and Lambda functions or Fargate tasks is the location of the executable definition of the job. The executable definition of a Lambda function is in a ZIP archive deployed to Lambda. For Fargate, the executable definition is in a Docker container image, deployed in a task with Amazon ECS or in a Kubernetes pod with Amazon EKS. Both services require an explicit deployment to update the executable definition of a task. For CodeBuild jobs, the executable definition is not deployed to an AWS service. Instead, it is in a source code control system that you can manage locally or using a service such as GitHub or AWS CodeCommit.

Sitting alongside the rest of the source code, each CodeBuild task has an entry-point configuration file, by convention called a buildspec.yml. The buildspec.yml file lists the programming language runtimes required by the job, and the steps to execute before, during and after the build job. For example, the following buildspec.yml sets up a build environment for JavaScript with Node.js 12, installs dependencies, runs tests, and then produces a deployment package using webpack.

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install
  build:
    commands:
      - npm test
      - npm run webpack

Usually, the buildspec.yml file involves some variant of installing dependencies, compiling code and running tests, then packaging and versioning artifacts. But the steps of a buildspec.yml file are actually just shell commands, so CodeBuild doesn’t necessarily need to run tasks related to compiling or packaging. It can execute any sequence of Unix commands, scripts, or binaries. This makes CodeBuild a uniquely compelling choice for the transition from running shell scripts on an operations machine to running a shell script in the cloud.

Comparing CodeBuild and Lambda for administrative tasks

The major advantage of CodeBuild over Lambda functions for support jobs is that the scripts can be significantly more flexible. Moving from shell scripts to Lambda functions usually means rewriting the task in a language such as JavaScript or Python. You can execute a shell script from a Lambda function when using Amazon Linux 1 instances, or even use a Bash custom runtime, but when using CodeBuild, you can execute the same shell script without changes.

Lambda functions usually run only in a single language. Support tasks often perform a chain of actions, and different steps might require utilities written in different languages. Running such varied tasks with a Lambda function would require constructing a custom Lambda runtime, or splitting steps into multiple functions with different runtimes, and then somehow coordinating and passing data between them. AWS Step Functions can be used to coordinate the workflow, but most support tasks are a sequence of steps, to be executed in order if the previous one succeeds. With CodeBuild, you can configure the task to include all required runtimes.

Support tasks often need to transform the outputs of one tool and pass it into a different tool. For example, select rows from a database containing expired accounts, then filter out only the user emails, separate the data with commas, and send to an automated mailer with a template. Tools such as grep, awk, and sed become invaluable for such transformations. However, they aren’t available on new Lambda runtimes.
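As a purely illustrative sketch of that kind of chain (the database table, columns, and notify-expired.sh script are hypothetical), a CodeBuild step could shell out to something like this:

#!/usr/bin/env bash
# Hypothetical example: collect expired-account emails as a comma-separated list
# and hand them to an existing mailer script. Connection details are assumed to
# come from environment variables such as PGHOST, PGUSER, and PGPASSWORD.
set -euo pipefail

psql --no-align --tuples-only \
  --command "SELECT email FROM accounts WHERE expires_at < now()" \
  | grep -v '^$' \
  | paste -sd, - \
  > expired-emails.csv

# Pass the list to whatever templated mailer the team already uses.
./notify-expired.sh "$(cat expired-emails.csv)"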

Lambda runtimes based on Amazon Linux 2 bundle only the absolutely minimal operating system packages. Even the basic command line Linux utilities, such as which, are not packaged with the recent Lambda runtimes. On the other hand, CodeBuild runs tasks in a full-blown Linux environment. Executing support tasks through CodeBuild means that you can pipe results into all the standard Unix tools, without having to use half-baked replacements written in a scripting language.

For applications running in the AWS ecosystem, support tasks often need to communicate with AWS services or resources. Standard CodeBuild environments also come with the aws command line tools, so you can use them without any additional setup. This becomes especially important for moving data from and to Amazon S3, where command line tools have operations for batch uploads or downloads or recursive directory synchronization. Those operations are not directly available through the programming language SDK libraries.
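For example, a support job can mirror an entire prefix between buckets with a single bundled CLI command (the bucket names here are placeholders):

# Recursively synchronize a reports prefix between two buckets,
# removing files in the destination that no longer exist in the source.
aws s3 sync s3://example-source-bucket/reports/ s3://example-archive-bucket/reports/ --delete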

It is, of course, possible to install additional binaries to Lambda functions by building them for the right Linux environment. Because the standard shared system libraries are also not in the recent Lambda runtimes, compiling additional tools is akin to building a Linux distribution from scratch. With CodeBuild, most standard tools are included already, and you can add additional tools to the system by using an operating system package manager (apt-get or yum).

CodeBuild execution environments can also be more flexible in terms of execution time and performance constraints. Lambda tasks are currently limited to fifteen minutes. The only performance setting you can influence is the memory size, which proportionally impacts the CPU power. The highest setting is currently 3GB memory, which assigns two virtual cores. CodeBuild allows you to configure tasks which can run for up to 8 hours. You can also explicitly select a compute type, including using GPU processors and going all the way up to 255 GB memory or 72 virtual CPU cores. This makes CodeBuild an interesting choice for tasks that need to potentially run longer than fifteen minutes, that are very computationally intensive, or that need a lot of working memory.

On the other hand, compared to Lambda functions, CodeBuild jobs start significantly slower and running them in parallel is not as easy or convenient. For example, by default you can only run up to 60 CodeBuild tasks in parallel, but this is a soft limit that you can increase. However, support tasks are mostly batch jobs by nature, so saving a few seconds or being able to execute thousands of such tasks in parallel is not usually important.

Comparing CodeBuild and Amazon EC2/Fargate for administrative tasks

Most of the limitations of Lambda functions for admin jobs could be solved by running a virtual machine through Amazon EC2. In fact, running tasks on Amazon EC2 was the usual way of lifting support tasks from the operations computers and moving them into the cloud until Lambda became available. However, due to how Amazon EC2 instances are billed, teams often bundled all the operations tasks on a single Amazon EC2 instance. That instance needed a superset of all the security privileges required by the various tasks, opening potential security risks. That’s where Fargate can help. Fargate runs container-based tasks on demand, offering utilization-based billing and removing many restrictions of Lambda, such as the 15-minute runtime and reduced operating system environment, also allowing you to choose execution environments more flexibly.

This means that, compared to Fargate tasks, CodeBuild execution is more or less comparable in terms of what you can run and how much power you can assign to your tasks. Both can use a custom Docker container, and both run a full-blown operating system with all the standard binaries. They are also similar in terms of start-up time and parallelization. However, setting up a CodeBuild job and updating it later is much easier than setting up Fargate tasks or pods.

With Fargate, you need to provide a custom Docker container with the right entry point. CodeBuild lets you use custom containers or choose standard images provided by AWS, including Ubuntu or AWS Linux instances. Likewise, configuring a Fargate task involves deploying in an Amazon VPC, and if the task needs to access other AWS services, setting up a NAT gateway. CodeBuild tasks have network access by default, and can be deployed in a VPC if required.

Updating support scripts can also be easier with CodeBuild than with Fargate. Deploying a new version of a task into Fargate involves building a new Docker container and uploading it to a container manager such as Amazon ECS or Amazon EKS. Deploying a new version of a CodeBuild job involves committing to the version control system, without the need to set up a CI/CD pipeline. This makes CodeBuild a compelling way of setting up support tasks, especially for larger organizations with strict access rules. Support people can update tasks definitions by having access to the source code control system, without the need to get access to production resources on AWS.

Fargate environments are transient, similar to Lambda functions. If you want to preserve some files between job runs (for example, compiled task binaries or installed dependencies), you would have to manage that manually with Fargate. CodeBuild supports artifact caching out of the box, so it’s significantly easier to preserve data files or installed dependencies between runs.

Potential downsides

Although running support tasks directly from the source code repository is one of the biggest advantages of CodeBuild over Fargate or Lambda, it can also be a major drawback. Ensuring that the scripts are always in a stable condition requires discipline regarding committing to the trunk. Without such discipline, untested or unstable code might be used for admin tasks by mistake. A potential workaround for teams without good trunk commit discipline is to use a specific branch for CodeBuild tasks, and then merge code into that branch once it is ready to be released.

Using support scripts directly from a source code repository makes it more complicated to synchronize versions with other deployed software. If you need the support scripts to track the exact version of code that was deployed to other services, it’s probably safer and easier to use Lambda functions or Fargate containers with an explicit deployment step.

Executing support tasks through CodeBuild

CodeBuild jobs take a bit more setup than Lambda functions, but significantly less than Fargate tasks. Below is an example of a CodeBuild job set up through AWS CloudFormation.

Architecture diagram for the CodeBuild being used for administrative tasks

Here are a few things to note:

  • You can add the required IAM permissions for the task into the Policies section of the CodeBuildRole resource.
  • The Environment section of the CodeBuildProject resource is where you can define the container image, choose the virtual hardware or set up environment variables to configure the task.
  • Environment variables are directly available for the shell commands listed in the buildspec.yml file, so this trick allows you to easily parameterize jobs to use resources from the same AWS CloudFormation template.
  • The Location and BuildSpec properties in the Source section define the source code repository, and the path of the buildspec.yml file within the repository.
Resources:
  CodeBuildRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "codebuild.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: AllowLogs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:*'
                Resource: '*'

  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref JobName
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: NO_ARTIFACTS
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED
      Cache:
        Type: NO_CACHE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:3.0
        EnvironmentVariables:
          - Name: SYSTEM_BUCKET 
            Value: !Ref SystemBucketName
      Source:
        Type: GITHUB
        Location: !Ref GithubRepository 
        GitCloneDepth: 1
        BuildSpec: !Ref BuildSpecPath 
        ReportBuildStatus: False
        InsecureSsl: False
      TimeoutInMinutes: !Ref TimeoutInMinutes

CodeBuild jobs usually run after changes to source code files. Support tasks usually need to run on a periodic schedule. The previous snippet did not define the Triggers property for the CodeBuild job, so it will not track source code changes or run automatically. Instead, you can set up an Amazon CloudWatch Event rule (or optionally use Amazon EventBridge, which provides more sophisticated rules) that will periodically trigger the CodeBuild job. Here is how to do that with AWS CloudFormation:

  RunCodeBuildJobRole:
    Condition: ScheduleRuns
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: StartTask 
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'codebuild:StartBuild'
                Resource:
                  - !GetAtt CodeBuildProject.Arn

  RunCodeBuildJobRoleRule:
    Condition: ScheduleRuns
    Type: AWS::Events::Rule
    Properties:
      Name: !Sub '${JobName}-scheduler'
      Description: Periodically runs codebuild job to archive defunct accounts
      ScheduleExpression: !Ref ScheduleRate
      State: ENABLED
      Targets:
        - Arn: !GetAtt CodeBuildProject.Arn
          Id: CodeBuildProject
          RoleArn: !GetAtt RunCodeBuildJobRole.Arn

Note the ScheduleExpression property of the RunCodeBuildJobRoleRule resource. You can use any supported CloudWatch schedule expression there to set up when or how frequently your job runs.
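For instance, the ScheduleRate parameter could be given a cron or rate expression such as the following (the values are only examples):

# Run the job every night at 03:00 UTC...
ScheduleRate="cron(0 3 * * ? *)"
# ...or simply every six hours.
ScheduleRate="rate(6 hours)"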

Observability and audit logs

If a support job fails for any reason, people need to know. Luckily, CodeBuild already integrates nicely with CloudWatch to report job statuses, so you can set up another CloudWatch Event rule that tracks failures and alerts someone about it. To make notifications flexible, you can send them to an Amazon SNS topic. You can then subscribe for email notifications or forward those alerts somewhere else easily. The following wires up notifications with an AWS CloudFormation template.

  SnsPublishRole:
    Condition: CreateSNSNotifications
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: AllowLogs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'SNS:Publish'
                Resource:
                  - !Ref SnsTopicArn

  CodeBuildNotificationRule:
    Condition: CreateSNSNotifications
    Type: AWS::Events::Rule
    Properties:
      Name: !Sub '${JobName}-fail-notification'
      Description: Notify about codebuild project failures
      RoleArn: !GetAtt SnsPublishRole.Arn
      EventPattern:
        source:
          - "aws.codebuild"
        detail-type:
          - "CodeBuild Build State Change"
        detail:
          build-status:
            - "FAILED"
            - "STOPPED"
          project-name:
            - !Ref CodeBuildProject
      State: ENABLED
      Targets:
        - Arn: !Ref SnsTopicArn
          Id: NotificationTopic

Another option to keep the execution of your tasks under control is to generate a report using the test report functionality introduced a few months ago, specifying in the buildspec.yml file the location of the files that store the results you want to include in your report.

Testing administrative tasks

Note the build-status list inside the CodeBuildNotificationRule resource. This defines a list of statuses about which you want to publish alerts. In the previous snippet, the list does not include successful runs. That’s because it’s usually not necessary to take any action when a support job runs successfully. However, during initial testing you may want to add IN_PROGRESS (notify when a task starts) and SUCCEEDED (notify when the job ends without an error).

Finally, one of the biggest challenges when moving scripts from an operations machine to running in CodeBuild is to create the right IAM policies. Command-line users on operations machines usually have a wide set of privileges, and identifying the minimum required for a specific job usually involves starting small, then iterating over failed attempts and opening up required operations. Running that process directly through CodeBuild can be quite slow. Instead, I suggest setting up a separate IAM policy for the job, then assigning it both to the role for the CodeBuild task, and to a command-line role or a command-line user. You can then iterate quickly directly on the command line and identify all required IAM operations, then remove the additional command-line user when done.
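A minimal sketch of that approach with the CLI might look like the following (the policy ARN, role name, and user name are placeholders):

# Attach the same customer managed policy to the CodeBuild service role and to a
# command-line user used for local iteration.
POLICY_ARN="arn:aws:iam::<Account ID>:policy/SupportJobPolicy"

aws iam attach-role-policy --role-name support-job-codebuild-role --policy-arn "$POLICY_ARN"
aws iam attach-user-policy --user-name support-job-dev-user --policy-arn "$POLICY_ARN"

# Once the policy is final, detach it from the command-line user again.
aws iam detach-user-policy --user-name support-job-dev-user --policy-arn "$POLICY_ARN"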

Conclusion

The next time you need to move a support task to the cloud, and you need a rich execution environment, consider using CodeBuild, at least as the initial step towards a more systematic solution. It will allow you to quickly get a script up and running with all the benefits of IAM isolation, scheduled execution, and reliable notifications.

Gojko is author of the Running Serverless book and interactive course. He is currently working on Video Puppet, a tool for editing videos as easily as editing text. You can reach out to him on Twitter.

Enhancing automated database continuous integration with AWS CodeBuild and Amazon RDS Database Snapshot

Post Syndicated from bobyeh original https://aws.amazon.com/blogs/devops/enhancing-automated-database-continuous-integration-with-aws-codebuild-and-amazon-rds-database-snapshot/

In major integration merges, it’s sometimes necessary to verify the changes against existing online data. Inspecting the changes with a cloned database gives us confidence to deploy to the production database. This post demonstrates how to use AWS CodeBuild and Amazon RDS database snapshots to verify your code revisions in both the application layer and the underlying data layer, ensuring that your existing data works seamlessly with your revised code.

Making code revisions using continuous integration requires running periodic verification to ensure that your new deliverable works functionally and reliably. It’s easy to focus attention solely on the surface level changes made to the application layer. However, it’s important to remember to inspect the changes made to the underlying data layer too.

From the application layer, users modify the data model for different reasons. Any data model definition change in the application layer maps to a schema change in the database. For those services backed with a relational database (RDBMS), a user might perform data definition language (DDL) operations directly toward a database schema or rely on an object-relational mapping (ORM) library to migrate the schema to fit the application revision. These schema changes (CREATE, DROP, ALTER, TRUNCATE, etc.) can be very critical, especially for those services serving real customers.

Performing proper verification and simulation for these changes mitigates the risk of bringing down services. After the changes are applied, fundamental operation testing (CRUD – CREATE, READ, UPDATE, DELETE) toward data models is mandatory; this leads to data manipulation language (DML) operations (INSERT, SELECT, UPDATE, DELETE, etc.). After all the necessary steps, a user can move on to the deployment stage.

About this page

  • Time to read: 6 minutes
  • Time to complete: 30 minutes
  • Cost to complete (estimated): Less than $1 for a 1-GB database snapshot and restored instance
  • Learning level: Advanced (300)
  • Services used: AWS CodeBuild, IAM, RDS

Solution overview

This example uses a buildspec file in CodeBuild. Set up a build project that points to a source control repository containing that buildspec file. The CodeBuild runtime environment restores the database server from an RDS snapshot. As an example, we restore the snapshot to an Amazon Aurora cluster through the AWS Command Line Interface (AWS CLI). After the database is restored, the build process runs your integration process, which is mocked in the buildspec definition. After the verification stage, CodeBuild drops the restored database.

 

Architecture diagram showing an overview of how we use CodeBuild to restore a database snapshot to verify and validate the new database schema change.

Prerequisites

The following components are required to implement this example:

Walkthrough

Follow these steps to execute the solution.

Prepare your build specification file

Before you begin, prepare your CodeBuild Build Specification file with following information:

  • db-cluster-identifier-prefix
  • db-snapshot-identifier
  • region-ID
  • account-ID
  • vpc-security-group-id

The db-cluster-identifier-prefix is used to name the temporary database, with a timestamp appended. Make sure that this value does not overlap with any other databases. The db-snapshot-identifier points to the snapshot you want to run with your application. region-ID and account-ID describe the account and Region in which you are running. The vpc-security-group-id indicates the security group you use for the CodeBuild environment and the temporary database.

YAML
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - pip3 install awscli --upgrade --user
      - export DATE=`date +%Y%m%d%H%M`
      - export DBIDENTIFIER=db-cluster-identifier-prefix-$DATE
      - echo $DBIDENTIFIER
      - aws rds restore-db-cluster-from-snapshot --snapshot-identifier arn:aws:rds:region-ID:account-ID:cluster-snapshot:db-snapshot-identifier --vpc-security-group-ids vpc-security-group-id --db-cluster-identifier $DBIDENTIFIER --engine aurora
      - while [ $(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER | grep -c available) -eq 0 ]; do echo "sleep 60s"; sleep 60; done
      - echo "Temp db ready"
      - export ENDPOINT=$(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER | grep "\"Endpoint\"" | grep -v "\-ro\-" | awk -F '\"' '{print $4}')
      - echo $ENDPOINT
  build:
    commands:
      - echo Build started on `date`
      - echo proceed db connection to $ENDPOINT
      - echo proceed db migrate update, DDL proceed here
      - echo proceed application test, CRUD test run here
  post_build:
    commands:
      - echo Build completed on `date`
      - echo $DBIDENTIFIER
      - aws rds delete-db-cluster --db-cluster-identifier $DBIDENTIFIER --skip-final-snapshot &

 

After you finish editing the file, name it buildspec.yml. Save it in the root directory with which you plan to build, then commit the file into your code repository.

  1. Open the CodeBuild console.
  2. Choose Create build project.
  3. In Project Configuration, enter the name and description for the build project.
  4. In Source, select the source provider for your code repository.
  5. In Environment image, choose Managed image, Ubuntu, and the latest runtime version.
  6. Choose the appropriate service role for your project.
  7. In the Additional configuration menu, select the VPC with your Amazon RDS database snapshots, as shown in the following screenshot, and then select Validate VPC Settings. For more information, see Use CodeBuild with Amazon Virtual Private Cloud.
  8. In Security Groups, select the security group needed for the CodeBuild environment to access your temporary database.
  9. In Build Specifications, select Use a buildspec file.

CodeBuild Project Additional Configuration - VPC

Grant permission for the build project

Follow these steps to grant permission.

  1. Navigate to the AWS Management Console Policies.
  2. Choose Create a policy and select the JSON tab. To give CodeBuild access to the Amazon RDS resources in the pre_build stage, you must grant RestoreDBClusterFromSnapshot and DeleteDBCluster. Follow the least privilege guideline and limit the DeleteDBCluster action to “arn:aws:rds:*:*:cluster:db-cluster-identifier-*”.
  3. Copy the following code and paste it into your policy:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "rds:RestoreDBClusterFromSnapshot",
          "Resource": "*"
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": "rds:DeleteDBCluster*",
          "Resource": "arn:aws:rds:*:*:cluster:db-cluster-identifier-*"
        }
      ]
    }
  4. Choose Review Policy.
  5. Enter a Name and Description for this policy, then choose Create Policy.
  6. After the policy is ready, attach it to your CodeBuild service role, as shown in the following screenshot.

Attach created policy to IAM role

Use database snapshot restore to launch the build process

  1. Navigate back to CodeBuild and locate the project you just created.
  2. Set an appropriate timeout and make sure the project points to the correct branch of your repository.
  3. Choose Start Build.
  4. Open the Build Log to view the database cluster being restored from your snapshot in the pre_build stage, as shown in the following screenshot.
  5. In the build stage, use $ENDPOINT to point your application to this temporary database, as shown in the following screenshot.
  6. In the post_build stage, delete the cluster, as shown in the following screenshot.

Test your database schema change

After you set up this pipeline, you can begin to test your database schema change within your application code. This example defines several steps in the Build Specifications file to migrate the schema and run with the latest application code. In this example, you can verify that all the modifications fit from the application to the database.

YAML
build:
  commands:
    - echo Build started on `date`
    - echo proceed db connection to $ENDPOINT
    # run a script to apply your latest schema change
    - echo proceed db migrate update
    # start the latest code, and run your own testing
    - echo proceed application test

After validation

After validating the database schema change in the above steps, choose a deployment strategy for production that aligns with your business goals.

Cleaning up

To avoid incurring future charges, delete the resources by following these steps:

  1. Open the CodeBuild console.
  2. Click the project you created for this test.
  3. Click Delete build project and enter delete to confirm deletion.
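If a build was ever stopped before its post_build phase, it is also worth confirming that no temporary clusters were left behind; a quick check from the CLI, using the prefix you chose earlier, might look like this:

# List any restored clusters whose identifier starts with your chosen prefix.
aws rds describe-db-clusters --query "DBClusters[?starts_with(DBClusterIdentifier, 'db-cluster-identifier-prefix')].DBClusterIdentifier" --output text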

Conclusion

In this post, you created a mechanism to set up a temporary database and limit access to the build runtime. The temporary database stands alone and isolated. This mechanism can be applied to secure permission control for the database snapshot and to avoid breaking any existing environment. The approach applies to all available RDS engine options, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. It provides options, without impacting any existing environments, for critical events triggered by major changes in the production database schema or data format changes required by business decisions.

 

Automating your API testing with AWS CodeBuild, AWS CodePipeline, and Postman

Post Syndicated from Juan Lamadrid original https://aws.amazon.com/blogs/devops/automating-your-api-testing-with-aws-codebuild-aws-codepipeline-and-postman/

Today, enterprises of all shapes and sizes are engaged in some form of digital transformation. Many recognize that successful digital transformation requires continuous evolution powered by a robust API strategy. APIs enable the creation of new products, improvement of the customer experience, transformation of business processes, and ultimately, the agility needed to create sustainable business value. Hence, it stands to reason that adopting DevOps best practices such as Continuous Integration into your API development lifecycle helps improve the quality of your APIs and accelerate your API strategy.

In this post, we highlight how to automate API testing using serverless technologies, including AWS CodePipeline and AWS CodeBuild, along with Postman. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready to deploy without the need to provision, manage, and scale your own build servers.

We take advantage of a new feature in CodeBuild called Reports that allows us to view the reports generated by functional or integration tests. We keep an eye on valuable metrics such as Pass Rate %, Test Run Duration, and the number of Passed versus Failed/Error test cases.

Postman is an industry-recognized tool used for API development that makes it easy to both develop and test your APIs. Postman also includes command-line integration with its command-line Collection Runner, Newman. Newman can easily be integrated with your continuous integration servers and build systems. Our CodePipeline pipeline uses CodeBuild to invoke the Newman command line interface and execute tests created with Postman. We cover the steps in detail below.

Solution Overview

In this post, we demonstrate how to automate the deployment and testing of the Pet Store API that is available as a sample API with API Gateway. This is a simple API that integrates via HTTP proxy to a demo Pet Store API. The API contains endpoints to list pets, get a pet by specific id, and add a pet.

The following diagram depicts the architecture of this simple Pet Store demo API.

Simple PetStore API Architecture

 

 

The following diagram depicts the AWS CodePipeline pipeline architecture we use to test the PetStore API.

 

The AWS CodePipline pipeline architecture we use to test our API.

After execution of this pipeline, you have a fully operational API that has been tested for specific functional requirements. These test cases and their results are visible in the Reports section of the CodeBuild console.

Building the PetStore API Pipeline

To get started, follow these steps:

Step 1. Fork the Github repository

Log into your GitHub account and fork the following repository: https://github.com/aws-samples/aws-codepipeline-codebuild-with-postman

Step 2. Clone the forked repository

Clone the forked repository into your local development environment.
git clone https://github.com/<YOUR_GITHUB_USERNAME>/aws-codepipeline-codebuild-with-postman

Step 3. Create an Amazon S3 bucket

This bucket contains resources related to this project. We refer to this bucket as the project’s root bucket.
Using the AWS CLI: aws s3 mb s3://<REPLACE_ME_WITH_UNIQUE_BUCKET_NAME>

Step 4. Edit the buildspec file

The buildspec file petstore-api-buildspec.yml contains the instructions to package the resources defined in your SAM template, petstore-api.yaml. This build spec is executed by CodeBuild within the build stage (BuildPetStoreAPI) of the pipeline.

1. Replace the following text REPLACE_ME_WITH_UNIQUE_BUCKET_NAME in the petstore-api-buildspec.yml with the bucket name created above in step 3.

2. Commit this change to your repository.

Step 5. Store Postman collection and environment files in S3

1. Navigate to the directory 02postman

For this project we included a Postman collection file, PetStoreAPI.postman_collection.json, that validates the PetStore API’s functionality. You can import the collection and environment file into Postman using the instructions here to see the tests associated with each API endpoint.

The following screenshot is an example specific to testing a GET request to the /pets endpoint (1). We make sure the GET request returns a JSON array (2) along with the inclusion of a Content-Type header (3) and a response time of less than 200ms (4). In the Test results tab (5), you can see we passed these tests when calling the API.

Postman screenshot showing tests for specific endpoint.

2. Save the Postman collection file in S3 using the AWS CLI

aws s3 cp PetStoreAPI.postman_collection.json \
s3://<REPLACE_ME_WITH_UNIQUE_BUCKET_NAME>/postman-env-files/PetStoreAPI.postman_collection.json

3. Save the Postman environment file to S3 using the AWS CLI

aws s3 cp PetStoreAPIEnvironment.postman_environment.json \
s3://<REPLACE_ME_WITH_UNIQUE_BUCKET_NAME>/postman-env-files/PetStoreAPIEnvironment.postman_environment.json

Step 6. Create the PetStore API pipeline

We now create the AWS CodePipeline PetStoreAPI pipeline that will both deploy and test our API. We use AWS CloudFormation template (petstore-api-pipeline.yaml) to define the pipeline and required stages, as noted in our pipeline architecture diagram.

1. Navigate back to the project’s root directory.

To launch this template, you need to fill in a few parameters:

  • BucketRoot: the unique bucket you created above
  • GitHubBranch: master
  • GitHubRepositoryName: aws-codepipeline-codebuild-with-postman
  • GitHubToken: your GitHub personal access token (you can create your token here; for scopes, check repo and admin:repo_hook)
  • GitHubUser: your GitHub username

2. Use the AWS CLI to deploy the AWS CloudFormation template as follows

aws cloudformation create-stack --stack-name petstore-api-pipeline \
--template-body file://./petstore-api-pipeline.yaml \
--parameters \
ParameterKey=BucketRoot,ParameterValue=<REPLACE_ME_WITH_UNIQUE_BUCKET_NAME> \
ParameterKey=GitHubBranch,ParameterValue=<REPLACE_ME_GITHUB_BRANCH> \
ParameterKey=GitHubRepositoryName,ParameterValue=<REPLACE_ME_GITHUB_REPO> \
ParameterKey=GitHubToken,ParameterValue=<REPLACE_ME_GITHUB_TOKEN> \
ParameterKey=GitHubUser,ParameterValue=<REPLACE_ME_GITHUB_USERNAME> \
--capabilities CAPABILITY_NAMED_IAM

This command creates a CodePipeline pipeline and required stages to deploy and test our API using CodeBuild and Newman. Open the CodePipeline console to watch your pipeline execute and monitor the different stages, as shown in the following screenshot.

PetStore API AWS CodePipeline
The last stage of the pipeline uses CodeBuild and Newman to execute the tests created with Postman. You should now have a fully functional API visible in the Amazon API Gateway console.
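As a quick manual check, you can also call the deployed API directly; the invoke URL below is a placeholder, so copy the real one from the API Gateway console or the petstore-api-stack outputs:

# Hit the /pets endpoint of the deployed API. The invoke URL is a placeholder.
API_URL="https://<api-id>.execute-api.<region>.amazonaws.com/<stage>"
curl -s "${API_URL}/pets"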

Review AWS CodeBuild configuration

For this pipeline, we use CodeBuild to both deploy our API in the build stage and to test our API in the test stage of the pipeline. For the deploy stage, CodeBuild uses AWS Serverless Application Model (SAM) to build and deploy our API. We focus on the test stage and how we use CodeBuild to run functional tests against our API.

Take a look at the buildspec file (postman-newman-buildspec.yml) that CodeBuild uses to execute the test. Recall that our goal for this stage is to execute functional tests that we created earlier using Postman and to visualize these test results in CodeBuild Reports:

version: 0.2

env:
  variables:
    key: "S3_BUCKET"

phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - npm install -g newman
      - yum install -y jq

  pre_build:
    commands:
      - aws s3 cp "s3://${S3_BUCKET}/postman-env-files/PetStoreAPIEnvironment.postman_environment.json" ./02postman/
      - aws s3 cp "s3://${S3_BUCKET}/postman-env-files/PetStoreAPI.postman_collection.json" ./02postman/
      - cd ./02postman
      - ./update-postman-env-file.sh

  build:
    commands:
      - echo Build started on `date` from dir `pwd`
      - newman run PetStoreAPI.postman_collection.json --environment PetStoreAPIEnvironment.postman_environment.json -r junit

reports:
  JUnitReports: # CodeBuild will create a report group based on this name.
    files: #Store all of the files
      - '**/*'
    base-directory: '02postman/newman' # Location of the reports

 

In the install phase, we install the required Newman library. Recall this is the library that uses Postman collection and environment files to execute tests from the CLI. We also install the jq library that allows you to query JSON.

In the pre_build phase, we execute commands that set up our test environment. In this case, we need to grab the Postman collection and environment files from Amazon S3. Then we use a shell script, update-postman-env-file.sh, to update the Postman environment file with the API Gateway URL for the API created in the build stage. Let’s take a look at the shell script executed by CodeBuild:

#!/usr/bin/env bash

#This shell script updates Postman environment file with the API Gateway URL created
# via the api gateway deployment

echo "Running update-postman-env-file.sh"

api_gateway_url=`aws cloudformation describe-stacks \
  --stack-name petstore-api-stack \
  --query "Stacks[0].Outputs[*].{OutputValueValue:OutputValue}" --output text`

echo "API Gateway URL:" ${api_gateway_url}

jq -e --arg apigwurl "$api_gateway_url" '(.values[] | select(.key=="apigw-root") | .value) = $apigwurl' \
  PetStoreAPIEnvironment.postman_environment.json > PetStoreAPIEnvironment.postman_environment.json.tmp \
  && cp PetStoreAPIEnvironment.postman_environment.json.tmp PetStoreAPIEnvironment.postman_environment.json \
  && rm PetStoreAPIEnvironment.postman_environment.json.tmp

echo "Updated PetStoreAPIEnvironment.postman_environment.json"

cat PetStoreAPIEnvironment.postman_environment.json

 

This shell script wraps AWS API commands to get the required API Gateway URL from the AWS CloudFormation stack output and uses this value to update the Postman environment file. Notice how we also use the jq library installed earlier.

Once this is done, we move on to the build phase in our postman-newman-buildspec.yml. Note in the commands section how we execute the Newman command line runner by passing the required Postman collection and environment files. Also, notice how we specify to Newman that we want these reports in JUnit style output. This is very important, as this allows CodeBuild Reports to consume and visualize this output.

Once our test run is complete, we specify in our buildspec file where our test results JUnit files are available. This allows CodeBuild Reports to consume our JUnit test results for visualization.

You can accomplish all of this without having to provision, manage, and scale your own build servers.

Working with CodeBuild’s test reporting feature

CodeBuild announced a new reporting feature that allows you to view the reports generated by functional or integration tests. You can use your test reports to view trends and pass and failure rates to help you optimize builds. The test file format can be JUnit XML or Cucumber JSON. You can create your test cases with any test framework that can create files in one of those formats (for example, the Surefire JUnit plugin, TestNG, or Cucumber).

Using this feature, you can see the history of your test runs and see duration for the entire report, as shown in the following screenshots:

 

test run trends

 

test run summary information

It also provides details for individual test cases within a report, as shown in the following screenshot.

 

details for individual test cases within a report

 

 

You can select any individual test case to see its details. The following screenshot shows details of a failed test case.

details of a failed test case

 

 

Please note that at the time of this publication, the CodeBuild reporting feature is in preview.

Cleanup

After the tests are completed, we recommend the following steps to clean up the resources created in this post and avoid any charges.

1. Delete the AWS CloudFormation stack petstore-api-stack to delete the PetStore API deployed by the pipeline stack

2. Delete the pipeline artifact bucket created by the petstore-api-pipeline stack.

This is the bucket referred to as CodePipelineArtifactBucket in the resources tab of the petstore-api-pipeline stack and begins with the name: petstore-api-pipeline-codepipeline-artifact-bucket. This bucket needs to be deleted in order to delete the pipeline stack.

3. Delete the AWS CloudFormation stack petstore-api-pipeline to delete the AWS CodePipeline pipeline that builds and deploys the PetStore API.

Conclusion

Continuous Integration is a DevOps best practice that helps improve software quality. In this blog post, we showed how you can use AWS Services such as CodeBuild and CodePipeline with Postman, a powerful API testing and development tool, to easily adopt Continuous Integration and DevOps best practices into your API development process.

Testing and creating CI/CD pipelines for AWS Step Functions

Post Syndicated from Matt Noyce original https://aws.amazon.com/blogs/devops/testing-and-creating-ci-cd-pipelines-for-aws-step-functions-using-aws-codepipeline-and-aws-codebuild/

AWS Step Functions allow users to easily create workflows that are highly available, serverless, and intuitive. Step Functions natively integrate with a variety of AWS services including, but not limited to, AWS Lambda, AWS Batch, AWS Fargate, and Amazon SageMaker. It offers the ability to natively add error handling, retry logic, and complex branching, all through an easy-to-use JSON-based language known as the Amazon States Language.

AWS CodePipeline is a fully managed Continuous Delivery System that allows for easy and highly configurable methods for automating release pipelines. CodePipeline allows the end-user the ability to build, test, and deploy their most critical applications and infrastructure in a reliable and repeatable manner.

AWS CodeCommit is a fully managed and secure source control repository service. It eliminates the need to support and scale infrastructure to support highly available and critical code repository systems.

This blog post demonstrates how to create a CI/CD pipeline to comprehensively test an AWS Step Function state machine from start to finish using CodeCommit, AWS CodeBuild, CodePipeline, and Python.

CI/CD pipeline steps

The pipeline contains the following steps, as shown in the following diagram.

CI/CD pipeline steps

  1. Pull the source code from source control.
  2. Lint any configuration files.
  3. Run unit tests against the AWS Lambda functions in codebase.
  4. Deploy the test pipeline.
  5. Run end-to-end tests against the test pipeline.
  6. Clean up test state machine and test infrastructure.
  7. Send approval to approvers.
  8. Deploy to Production.

Prerequisites

In order to get started building this CI/CD pipeline there are a few prerequisites that must be met:

  1. Create or use an existing AWS account (instructions on creating an account can be found here).
  2. Define or use the example AWS Step Function states language definition (found below).
  3. Write the appropriate unit tests for your Lambda functions.
  4. Determine end-to-end tests to be run against AWS Step Function state machine.

The CodePipeline project

The following screenshot depicts what the CodePipeline project looks like, including the set of stages run in order to securely, reliably, and confidently deploy the AWS Step Function state machine to Production.

CodePipeline project

Creating a CodeCommit repository

To begin, navigate to the AWS console to create a new CodeCommit repository for your state machine.

CodeCommit repository

In this example, the repository is named CalculationStateMachine, as it contains the contents of the state machine definition, Python tests, and CodeBuild configurations.

CodeCommit structure

Breakdown of repository structure

In the CodeCommit repository above we have the following folder structure:

  1. config – this is where all of the Buildspec files will live for our AWS CodeBuild jobs.
  2. lambdas – this is where we will store all of our AWS Lambda functions.
  3. tests – this is the top-level folder for unit and end-to-end tests. It contains two sub-folders (unit and e2e).
  4. cloudformation – this is where we will add any extra CloudFormation templates.

Defining the state machine

Inside of the CodeCommit repository, create a State Machine Definition file called sm_def.json that defines the state machine in Amazon States Language.

This example creates a state machine that invokes a collection of Lambda functions to perform calculations on the given input values. Take note that it also performs a check against a specific value and, through the use of a Choice state, either continues the execution or exits it.

sm_def.json file:

{
  "Comment": "CalulationStateMachine",
  "StartAt": "CleanInput",
  "States": {
    "CleanInput": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "CleanInput",
        "Payload": {
          "input.$": "$"
        }
      },
      "Next": "Multiply"
    },
    "Multiply": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Multiply",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "Next": "Choice"
    },
    "Choice": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.Payload.result",
          "NumericGreaterThanEquals": 20,
          "Next": "Subtract"
        }
      ],
      "Default": "Notify"
    },
    "Subtract": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Subtract",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "Next": "Add"
    },
    "Notify": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:657860672583:CalculateNotify",
        "Message.$": "$$",
        "Subject": "Failed Test"
      },
      "End": true
    },
    "Add": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Add",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "Next": "Divide"
    },
    "Divide": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "Divide",
        "Payload": {
          "input.$": "$.Payload"
        }
      },
      "End": true
    }
  }
}

This will yield the following AWS Step Function state machine after the pipeline completes:

State machine

CodeBuild Spec files

The CI/CD pipeline uses a collection of CodeBuild BuildSpec files chained together through CodePipeline. The following sections demonstrate what these BuildSpec files look like and how they can be used to chain together and build a full CI/CD pipeline.

AWS States Language linter

In order to determine whether or not the State Machine Definition is valid, include a stage in your CodePipeline configuration to evaluate it. Through the use of a Ruby Gem called statelint, you can verify the validity of your state machine definition as follows:

lint_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      ruby: 2.6
    commands:
      - yum -y install rubygems
      - gem install statelint

  build:
    commands:
      - statelint sm_def.json

If your configuration is valid, you do not see any output messages. If the configuration is invalid, you receive a message telling you that the definition is invalid and the pipeline terminates.

Lambda unit testing

In order to test your Lambda function code, you need to evaluate whether or not it passes a set of tests. You can test each individual Lambda function deployed and used inside of the state machine. You can feed various inputs into your Lambda functions and assert that the output is what you expect it to be. In this case, you use Python pytest to kick off tests and validate results.
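
For example, a unit test for the Multiply function could look like the following sketch. The module path, handler name, and payload shape are assumptions here, so adjust them to match your Lambda code:

# tests/unit/test_multiply.py
import pytest

# Hypothetical import path for the Multiply Lambda handler
from lambdas.multiply import lambda_handler


def test_multiply_returns_a_result():
    # Sample payload; the exact shape depends on your handler implementation
    event = {"input": {"value": 6}}
    response = lambda_handler(event, None)
    assert "result" in response


def test_multiply_fails_on_empty_event():
    with pytest.raises(KeyError):
        lambda_handler({}, None)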

unit_test_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip3 install -r tests/requirements.txt

  build:
    commands:
      - pytest -s -vvv tests/unit/ --junitxml=reports/unit.xml

reports:
  StateMachineUnitTestReports:
    files:
      - "**/*"
    base-directory: "reports"

Take note that the CodeCommit repository includes a directory called tests/unit, which contains a collection of unit tests that are run against your Lambda function code. Another very important part of this BuildSpec file is the reports section, which generates reports and metrics about the results, trends, and overall success of your tests.

CodeBuild test reports

After running the unit tests, you are able to see reports about the results of the run. Take note of the reports section of the BuildSpec file, along with the --junitxml=reports/unit.xml flag passed to the pytest command. This generates a set of reports that can be visualized in CodeBuild.

Navigate to the specific CodeBuild project you want to examine and click on the specific execution of interest. There is a tab called Reports, as seen in the following screenshot:

Test reports

Select the specific report of interest to see a breakdown of the tests that have run, as shown in the following screenshot:

Test visualization

With Report Groups, you can also view an aggregated list of tests that have run over time. This report includes various features such as the number of average test cases that have run, average duration, and the overall pass rate, as shown in the following screenshot:

Report groups

The AWS CloudFormation template step

The following BuildSpec file is used to generate an AWS CloudFormation template that injects the State Machine Definition into AWS CloudFormation.

template_sm_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      python: 3.8

  build:
    commands:
      - python template_statemachine_cf.py

The following Python script templates the AWS CloudFormation needed to deploy the State Machine Definition from the sm_def.json file in your repository:

template_statemachine_cf.py file:

import sys
import json

def read_sm_def(
    sm_def_file: str
) -> str:
    """
    Reads the state machine definition from a file and returns it as a JSON string.

    Parameters:
        sm_def_file (str) = the name of the state machine definition file.

    Returns:
        sm_def (str) = the state machine definition as a JSON string.
    """

    try:
        with open(f"{sm_def_file}", "r") as f:
            return f.read()
    except IOError as e:
        print("Path does not exist!")
        print(e)
        sys.exit(1)

def template_state_machine(
    sm_def: str
) -> dict:
    """
    Templates out the CloudFormation for creating a state machine.

    Parameters:
        sm_def (str) = the state machine definition as an Amazon States Language JSON string.

    Returns:
        templated_cf (dict) = a dictionary representation of the CloudFormation template.
    """
    
    templated_cf = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Creates the Step Function State Machine and associated IAM roles and policies",
        "Parameters": {
            "StateMachineName": {
                "Description": "The name of the State Machine",
                "Type": "String"
            }
        },
        "Resources": {
            "StateMachineLambdaRole": {
                "Type": "AWS::IAM::Role",
                "Properties": {
                    "AssumeRolePolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [
                            {
                                "Effect": "Allow",
                                "Principal": {
                                    "Service": "states.amazonaws.com"
                                },
                                "Action": "sts:AssumeRole"
                            }
                        ]
                    },
                    "Policies": [
                        {
                            "PolicyName": {
                                "Fn::Sub": "States-Lambda-Execution-${AWS::StackName}-Policy"
                            },
                            "PolicyDocument": {
                                "Version": "2012-10-17",
                                "Statement": [
                                    {
                                        "Effect": "Allow",
                                        "Action": [
                                            "logs:CreateLogStream",
                                            "logs:CreateLogGroup",
                                            "logs:PutLogEvents",
                                            "sns:*"             
                                        ],
                                        "Resource": "*"
                                    },
                                    {
                                        "Effect": "Allow",
                                        "Action": [
                                            "lambda:InvokeFunction"
                                        ],
                                        "Resource": "*"
                                    }
                                ]
                            }
                        }
                    ]
                }
            },
            "StateMachine": {
                "Type": "AWS::StepFunctions::StateMachine",
                "Properties": {
                    "DefinitionString": sm_def,
                    "RoleArn": {
                        "Fn::GetAtt": [
                            "StateMachineLambdaRole",
                            "Arn"
                        ]
                    },
                    "StateMachineName": {
                        "Ref": "StateMachineName"
                    }
                }
            }
        }
    }

    return templated_cf


sm_def_dict = read_sm_def(
    sm_def_file='sm_def.json'
)

print(sm_def_dict)

cfm_sm_def = template_state_machine(
    sm_def=sm_def_dict
)

with open("sm_cfm.json", "w") as f:
    f.write(json.dumps(cfm_sm_def))

Deploying the test pipeline

In order to verify the full functionality of the entire state machine, you should stand it up so that it can be tested appropriately. The test stack is an exact replica of what you will deploy to Production: a completely separate stack from the actual production stack, which is deployed only after passing the end-to-end tests and approvals. You can take advantage of the AWS CloudFormation action supported by CodePipeline. Please take note of the configuration in the following screenshot, which shows how to configure this step in the AWS console:

Deploy test pipeline

End-to-end testing

In order to validate that the entire state machine works and executes without issues given any specific changes, feed it some sample inputs and make assertions on specific output values. If the specific assertions pass and you get the output that you expect to receive, you can proceed to the manual approval phase.

e2e_tests_buildspec.yaml file:

version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip3 install -r tests/requirements.txt

  build:
    commands:
      - pytest -s -vvv tests/e2e/ --junitxml=reports/e2e.xml

reports:
  StateMachineReports:
    files:
      - "**/*"
    base-directory: "reports"
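
An end-to-end test under tests/e2e/ might look like the following sketch, which starts an execution of the test state machine with boto3 and asserts on the result. The state machine ARN, input, and expected output are placeholders:

# tests/e2e/test_state_machine.py
import json
import time

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN of the test state machine deployed earlier in the pipeline
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:CalculationStateMachine-test"


def test_state_machine_end_to_end():
    execution = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"value": 10}),  # sample input
    )

    # Poll until the execution finishes
    status = "RUNNING"
    while status == "RUNNING":
        time.sleep(5)
        result = sfn.describe_execution(executionArn=execution["executionArn"])
        status = result["status"]

    assert status == "SUCCEEDED"
    output = json.loads(result["output"])
    # Assert on whatever output shape your state machine produces
    assert output is not None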

Manual approval (SNS topic notification)

In order to proceed in the CI/CD pipeline, there should be a formal approval phase before moving forward with a deployment to Production. Using the Manual Approval stage in AWS CodePipeline, you can configure the pipeline to halt and send a message to an Amazon SNS topic before moving on further. The SNS topic can have a variety of subscribers, but in this case, subscribe an approver email address to the topic so that they can be notified whenever an approval is requested. Once the approver approves the pipeline to move to Production, the pipeline proceeds with deploying the production version of the Step Functions state machine.

This Manual Approval stage can be configured in the AWS console using a configuration similar to the following:

Manual approval
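
An approver can respond in the CodePipeline console, or programmatically with a call like the following sketch; the pipeline, stage, and action names are placeholders:

import boto3

codepipeline = boto3.client("codepipeline")

PIPELINE = "CalculationStateMachinePipeline"  # example pipeline name
STAGE = "Approval"                            # example stage name

state = codepipeline.get_pipeline_state(name=PIPELINE)
for stage in state["stageStates"]:
    if stage["stageName"] != STAGE:
        continue
    for action in stage["actionStates"]:
        token = action.get("latestExecution", {}).get("token")
        if not token:
            continue
        codepipeline.put_approval_result(
            pipelineName=PIPELINE,
            stageName=STAGE,
            actionName=action["actionName"],
            result={"summary": "Reviewed end-to-end test results", "status": "Approved"},
            token=token,
        )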

Deploying to Production

After the linting, unit testing, end-to-end testing, and Manual Approval phases have passed, you can move on to deploying the Step Functions state machine to Production. This phase is similar to the Deploy Test Stage phase, except that the name of your AWS CloudFormation stack is different. In this case, you again take advantage of the AWS CloudFormation action in CodePipeline:

Deploy to production

After this stage completes successfully, your pipeline execution is complete.

Cleanup

After validating that the test state machine and Lambda functions work, include a CloudFormation step that tears down the existing test infrastructure (as it is no longer needed). This can be configured as a new CodePipeline step similar to the following configuration:

CloudFormation Template for cleaning up resources

Conclusion

You have linted and validated your AWS States Language definition, unit tested your Lambda function code, deployed a test AWS state machine, run end-to-end tests, received Manual Approval to deploy to Production, and deployed to Production. This gives you and your team confidence that any changes made to your state machine and surrounding Lambda function code perform correctly in Production.

 

About the Author

matt noyce profile photo

 

Matt Noyce is a Cloud Application Architect in Professional Services at Amazon Web Services.
He works with customers to architect, design, automate, and build solutions on AWS
for their business needs.

Identifying and resolving security code vulnerabilities using Snyk in AWS CI/CD Pipeline

Post Syndicated from Jay Yeras original https://aws.amazon.com/blogs/devops/identifying-and-resolving-vulnerabilities-in-your-code/

The majority of companies have embraced open-source software (OSS) at an accelerated rate, even when building proprietary applications. Some of the obvious benefits of this shift include transparency, cost, flexibility, and a faster time to market. Snyk's unique combination of developer-first tooling and best-in-class security depth enables businesses to easily build security into their continuous development process.

Even for teams building proprietary code, use of open-source packages and libraries is a necessity. In reality, a developer's own code is often a small core within the app, and the rest is open-source software. While relying on third-party elements has obvious benefits, it also presents numerous complexities. Inadvertently introducing vulnerabilities into your codebase through repositories that are maintained in a distributed fashion and with widely varying levels of security expertise is common, and it opens up applications to effective attacks downstream.

There are three common barriers to truly effective open-source security:

  1. The security task remains in the realm of security and compliance, often perpetuating the siloed structure that DevOps strives to eliminate and slowing down release pace.
  2. Current practice may offer automated scanning of repositories, but the remediation advice it provides is manual and often un-actionable.
  3. The data generated often focuses solely on public sources, without unique and timely insights.

Developer-led application security

This blog post demonstrates techniques to improve your application security posture using Snyk tools to seamlessly integrate within the developer workflow using AWS services such as Amazon ECR, AWS Lambda, AWS CodePipeline, and AWS CodeBuild. Snyk is a SaaS offering that organizations use to find, fix, prevent, and monitor open source dependencies. Snyk is a developer-first platform that can be easily integrated into the Software Development Lifecycle (SDLC). The examples presented in this post enable you to actively scan code checked into source code management, container images, and serverless, creating a highly efficient and effective method of managing the risk inherent to open source dependencies.

Prerequisites

The examples provided in this post assume that you already have an AWS account and that your account has the ability to create new IAM roles and scope other IAM permissions. You can use your integrated development environment (IDE) of choice. The examples reference AWS Cloud9 cloud-based IDE. An AWS Quick Start for Cloud9 is available to quickly deploy to either a new or existing Amazon VPC and offers expandable Amazon EBS volume size.

Sample code and AWS CloudFormation templates are available to simplify provisioning the various services you need to configure this integration. You can fork or clone those resources. You also need a working knowledge of git and how to fork or clone within your source provider to complete these tasks.

cd ~/environment && \ 
git clone https://github.com/aws-samples/aws-modernization-with-snyk.git modernization-workshop 
cd modernization-workshop 
git submodule init 
git submodule update

Configure your CI/CD pipeline

The workflow for this example consists of a continuous integration and continuous delivery pipeline leveraging AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Amazon ECR, and AWS Fargate, as shown in the following screenshot.

CI/CD Pipeline

For simplicity, AWS CloudFormation templates are available in the sample repo for services.yaml, pipeline.yaml, and ecs-fargate.yaml, which deploy all services necessary for this example.

Launch AWS CloudFormation templates

A detailed step-by-step guide can be found in the self-paced workshop, but if you are familiar with AWS CloudFormation, you can launch the templates in three steps. From your Cloud9 IDE terminal, change directory to the location of the sample templates and complete the following three steps.

1) Launch basic services

aws cloudformation create-stack --stack-name WorkshopServices --template-body file://services.yaml \
--capabilities CAPABILITY_NAMED_IAM
until [[ `aws cloudformation describe-stacks --stack-name "WorkshopServices" --query "Stacks[0].[StackStatus]" --output text` == "CREATE_COMPLETE" ]]; do echo "The stack is NOT in a state of CREATE_COMPLETE at `date`"; sleep 30; done && echo "The Stack is built at `date` - Please proceed"

2) Launch Fargate:

aws cloudformation create-stack --stack-name WorkshopECS --template-body file://ecs-fargate.yaml \
--capabilities CAPABILITY_NAMED_IAM
until [[ `aws cloudformation describe-stacks --stack-name "WorkshopECS" --query "Stacks[0].[StackStatus]" --output text` == "CREATE_COMPLETE" ]]; do echo "The stack is NOT in a state of CREATE_COMPLETE at `date`"; sleep 30; done && echo "The Stack is built at `date` - Please proceed"

3) Launch the pipeline:

aws cloudformation create-stack --stack-name WorkshopPipeline --template-body file://pipeline.yaml \
--capabilities CAPABILITY_NAMED_IAM
until [[ `aws cloudformation describe-stacks --stack-name "WorkshopPipeline" --query "Stacks[0].[StackStatus]" --output text` == "CREATE_COMPLETE" ]]; do echo "The stack is NOT in a state of CREATE_COMPLETE at `date`"; sleep 30; done && echo "The Stack is built at `date` - Please proceed"

Improving your security posture

You need to sign up for a free account with Snyk. You may use your Google, Bitbucket, or Github credentials to sign up. Snyk utilizes these services for authentication and does not store your password. Once signed up, navigate to your name and select Account Settings. Under API Token, choose Show, which will reveal the token to copy, and copy this value. It will be unique for each user.

Save your Snyk token to Parameter Store

Run the following command, replacing abc123 with your unique token. This stores the token in AWS Systems Manager Parameter Store as a SecureString.

aws ssm put-parameter --name "snykAuthToken" --value "abc123" --type SecureString

Set up application scanning

Next, you need to insert testing with Snyk after Maven builds the application. The simplest method is to insert commands to download, authorize, and run Snyk after Maven has built the application/dependency tree.

The sample Dockerfile sets an environment variable from a value passed to the docker build command, which contains the token for Snyk. By using an environment variable, Snyk automatically detects the token when used.

#~~~~~~~SNYK Variable~~~~~~~~~~~~
# Declare Snyktoken as a build-arg
ARG snyk_auth_token
# Set the SNYK_TOKEN environment variable
ENV SNYK_TOKEN=${snyk_auth_token}
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Download Snyk, and run a test, looking for medium to high severity issues. If the build succeeds, post the results to Snyk for monitoring and reporting. If a new vulnerability is found, you are notified.

# package the application
RUN mvn package -Dmaven.test.skip=true

#~~~~~~~SNYK test~~~~~~~~~~~~
# download, configure and run snyk. Break build if vulns present, post results to `https://snyk.io/`
RUN curl -Lo ./snyk "https://github.com/snyk/snyk/releases/download/v1.210.0/snyk-linux"
RUN chmod -R +x ./snyk
#Auth set through environment variable
RUN ./snyk test --severity-threshold=medium
RUN ./snyk monitor

Set up docker scanning

Later in the build process, a docker image is created. Analyze it for vulnerabilities in buildspec.yml. First, pull the Snyk token snykAuthToken from the parameter store.

env:
  parameter-store:
    SNYK_AUTH_TOKEN: "snykAuthToken"

Next, in the pre_build phase, install Snyk.

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=$(aws ecr describe-repositories --repository-name petstore_frontend --query=repositories[0].repositoryUri --output=text)
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      - PWD=$(pwd)
      - PWDUTILS=$(pwd)
      - curl -Lo ./snyk "https://github.com/snyk/snyk/releases/download/v1.210.0/snyk-linux"
      - chmod -R +x ./snyk

Next, in the build phase, pass the token to the docker build command, where it is retrieved in the Dockerfile code you set up to test the application.

build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - cd modules/containerize-application
      - docker build --build-arg snyk_auth_token=$SNYK_AUTH_TOKEN -t $REPOSITORY_URI:latest .

You can further extend the build phase to authorize the Snyk instance for testing the Docker image that’s produced. If it passes, you can pass the results to Snyk for monitoring and reporting.

build:
    commands:
      - $PWDUTILS/snyk auth $SNYK_AUTH_TOKEN
      - $PWDUTILS/snyk test --docker $REPOSITORY_URI:latest
      - $PWDUTILS/snyk monitor --docker $REPOSITORY_URI:latest
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG

For reference, a sample buildspec.yaml configured with Snyk is available in the sample repo. You can either copy this file and overwrite your existing buildspec.yaml or open an editor and replace the contents.

Testing the application

Now that services have been provisioned and Snyk tools have been integrated into your CI/CD pipeline, any new git commit triggers a fresh build and application scanning with Snyk detects vulnerabilities in your code.

In the CodeBuild console, you can look at your build history to see why your build failed, identify security vulnerabilities, and pinpoint how to fix them.

Testing /usr/src/app...
✗ Medium severity vulnerability found in org.primefaces:primefaces
Description: Cross-site Scripting (XSS)
Info: https://snyk.io/vuln/SNYK-JAVA-ORGPRIMEFACES-31642
Introduced through: org.primefaces:[email protected]
From: org.primefaces:[email protected]
Remediation:
Upgrade direct dependency org.primefaces:[email protected] to org.primefaces:[email protected] (triggers upgrades to org.primefaces:[email protected])
✗ Medium severity vulnerability found in org.primefaces:primefaces
Description: Cross-site Scripting (XSS)
Info: https://snyk.io/vuln/SNYK-JAVA-ORGPRIMEFACES-31643
Introduced through: org.primefaces:[email protected]
From: org.primefaces:[email protected]
Remediation:
Upgrade direct dependency org.primefaces:[email protected] to org.primefaces:[email protected] (triggers upgrades to org.primefaces:[email protected])
Organisation: sample-integrations
Package manager: maven
Target file: pom.xml
Open source: no
Project path: /usr/src/app
Tested 37 dependencies for known vulnerabilities, found 2 vulnerabilities, 2 vulnerable paths.
The command '/bin/sh -c ./snyk test' returned a non-zero code: 1
[Container] 2020/02/14 03:46:22 Command did not exit successfully docker build --build-arg snyk_auth_token=$SNYK_AUTH_TOKEN -t $REPOSITORY_URI:latest . exit status 1
[Container] 2020/02/14 03:46:22 Phase complete: BUILD Success: false
[Container] 2020/02/14 03:46:22 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build --build-arg snyk_auth_token=$SNYK_AUTH_TOKEN -t $REPOSITORY_URI:latest .. Reason: exit status 1

Remediation

Once you remediate your vulnerabilities and check in your code, another build is triggered and an additional scan is performed by Snyk. This time, you should see the build pass with a status of Succeeded.

You can also drill down into the CodeBuild logs and see that Snyk successfully scanned the Docker Image and found no package dependency issues with your Docker container!

[Container] 2020/02/14 03:54:14 Running command $PWDUTILS/snyk test --docker $REPOSITORY_URI:latest
Testing 300326902600.dkr.ecr.us-west-2.amazonaws.com/petstore_frontend:latest...
Organisation: sample-integrations
Package manager: rpm
Docker image: 300326902600.dkr.ecr.us-west-2.amazonaws.com/petstore_frontend:latest
✓ Tested 190 dependencies for known vulnerabilities, no vulnerable paths found.

Reporting

Snyk provides detailed reports for your imported projects. You can navigate to Projects and choose View Report to set the frequency with which the project is checked for vulnerabilities. You can also choose View Report and then the Dependencies tab to see which libraries were used. Snyk offers a comprehensive database and remediation guidance for known vulnerabilities in their Vulnerability DB. Specifics on potential vulnerabilities that may exist in your code would be contingent on the particular open source dependencies used with your application.

Cleaning up

Remember to delete any resources you may have created in order to avoid additional costs. If you used the AWS CloudFormation templates provided here, you can safely remove them by deleting those stacks from the AWS CloudFormation Console.

Conclusion

In this post, you learned how to leverage various AWS services to build a fully automated CI/CD pipeline and cloud IDE development environment. You also learned how to utilize Snyk to seamlessly integrate with AWS and secure your open-source dependencies and container images. If you are interested in learning more about DevSecOps with Snyk and AWS, then I invite you to check out this workshop and watch this video.

 

About the Author

Author Photo

 

Jay is a Senior Partner Solutions Architect at AWS bringing over 20 years of experience in various technical roles. He holds a Master of Science degree in Computer Information Systems and is a subject matter expert and thought leader for strategic initiatives that help customers embrace a DevOps culture.

 

 

NextGen Healthcare: Build and Deployment Pipelines with AWS

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/nextgen-healthcare-build-and-deployment-pipelines-with-aws/

Owen Zacharias, Vice President of Application Delivery at NextGen Healthcare, explains to AWS Solutions Architect Andrea Sabet how his company developed a series of build and deployment pipelines using native AWS services in the highly regulated healthcare sector.

Learn how native AWS services can be used to build and deploy infrastructure and application code.

Discover how AWS resources can be rapidly created and updated as part of a CI/CD pipeline while ensuring HIPAA compliance through approved/vetted AWS Identity and Access Management (IAM) roles that AWS CloudFormation is permitted to assume.

February’s AWS Architecture Monthly magazine is all about healthcare. Check it out on Kindle Newsstand, download the PDF, or see it on Flipboard.

Check out more of the This Is My Architecture video series.

Building Windows containers with AWS CodePipeline and custom actions

Post Syndicated from Dmitry Kolomiets original https://aws.amazon.com/blogs/devops/building-windows-containers-with-aws-codepipeline-and-custom-actions/

Dmitry Kolomiets, DevOps Consultant, Professional Services

AWS CodePipeline and AWS CodeBuild are the primary AWS services for building CI/CD pipelines. AWS CodeBuild supports a wide range of build scenarios thanks to various built-in Docker images. It also allows you to bring in your own custom image in order to use different tools and environment configurations. However, there are some limitations in using custom images.

Considerations for custom Docker images:

  • AWS CodeBuild has to download a new copy of the Docker image for each build job, which may take a long time for large Docker images.
  • AWS CodeBuild provides a limited set of instance types to run the builds. You might have to use a custom image if the build job requires higher memory, CPU, graphical subsystems, or any other functionality that is not part of the out-of-the-box provided Docker image.

Windows-specific limitations

  • AWS CodeBuild supports Windows builds only in a limited number of AWS regions at this time.
  • AWS CodeBuild executes Windows Server containers using Windows Server 2016 hosts, which means that build containers are huge—it is not uncommon to have an image size of 15 GB or more (with .NET Framework SDK installed). Windows Server 2019 containers, which are almost half as small, cannot be used due to host-container mismatch.
  • AWS CodeBuild runs build jobs inside Docker containers. You should enable privileged mode in order to build and publish Linux Docker images as part of your build job. However, Docker-in-Docker (DinD) is not supported on Windows and, therefore, AWS CodeBuild cannot be used to build Windows Server container images.

The last point is the critical one for microservice-type applications based on Microsoft stacks (.NET Framework, Web API, IIS). The usual workflow for these applications is to build a Docker image, push it to Amazon ECR, and update the Amazon ECS / EKS cluster deployment.

Here is what I cover in this post:

  • How to address the limitations stated above by implementing AWS CodePipeline custom actions (applicable for both Linux and Windows environments).
  • How to use the created custom action to define a CI/CD pipeline for Windows Server containers.

CodePipeline custom actions

By using Amazon EC2 instances, you can address the limitations with Windows Server containers and enable Windows build jobs in the regions where AWS CodeBuild does not provide native Windows build environments. To accommodate the specific needs of a build job, you can pick one of the many Amazon EC2 instance types available.

The downside of this approach is the additional management burden: neither AWS CodeBuild nor AWS CodePipeline supports Amazon EC2 instances directly. There are ways to set up a Jenkins build cluster on AWS and integrate it with CodeBuild and CodeDeploy, but these options are too "heavy" for the simple task of building a Docker image.

There is a different way to tackle this problem: AWS CodePipeline provides APIs that allow you to extend a build action through custom actions. This example demonstrates how to add a custom action to offload a build job to an Amazon EC2 instance.

Here is the generic sequence of steps that the custom action performs:

  • Acquire EC2 instance (see the Notes on Amazon EC2 build instances section).
  • Download AWS CodePipeline artifacts from Amazon S3.
  • Execute the build command and capture any errors.
  • Upload output artifacts to be consumed by subsequent AWS CodePipeline actions.
  • Update the status of the action in AWS CodePipeline.
  • Release the Amazon EC2 instance.

Notice that most of these steps are the same regardless of the actual build job being executed. However, the following parameters will differ between CI/CD pipelines and, therefore, have to be configurable:

  • Instance type (t2.micro, t3.2xlarge, etc.)
  • AMI (builds could have different prerequisites in terms of OS configuration, software installed, Docker images downloaded, etc.)
  • Build command line(s) to execute (MSBuild script, bash, Docker, etc.)
  • Build job timeout

Serverless custom action architecture

CodePipeline custom build action can be implemented as an agent component installed on an Amazon EC2 instance. The agent polls CodePipeline for build jobs and executes them on the Amazon EC2 instance. There is an example of such an agent on GitHub, but this approach requires installation and configuration of the agent on all Amazon EC2 instances that carry out the build jobs.

Instead, I want to introduce an architecture that enables any Amazon EC2 instance to be a build agent without additional software and configuration required. The architecture diagram looks as follows:

Serverless custom action architecture

There are multiple components involved:

  1. An Amazon CloudWatch Event triggers an AWS Lambda function when a custom CodePipeline action is to be executed.
  2. The Lambda function retrieves the action’s build properties (AMI, instance type, etc.) from CodePipeline, along with location of the input artifacts in the Amazon S3 bucket.
  3. The Lambda function starts a Step Functions state machine that carries out the build job execution, passing all the gathered information as input payload.
  4. The Step Functions flow acquires an Amazon EC2 instance according to the provided properties, waits until the instance is up and running, and starts an AWS Systems Manager command. The Step Functions flow is also responsible for handling all the errors during build job execution and releasing the Amazon EC2 instance once the Systems Manager command execution is complete.
  5. The Systems Manager command runs on an Amazon EC2 instance, downloads CodePipeline input artifacts from the Amazon S3 bucket, unzips them, executes the build script, and uploads any output artifacts to the CodePipeline-provided Amazon S3 bucket.
  6. The polling Lambda function updates the state of the custom action in CodePipeline once it detects that the Step Functions flow is complete.

The whole architecture is serverless and requires no maintenance in terms of software installed on Amazon EC2 instances thanks to the Systems Manager command, which is essential for this solution. All the code, AWS CloudFormation templates, and installation instructions are available on the GitHub project. The following sections provide further details on the mentioned components.

Custom Build Action

The custom action type is defined as an AWS::CodePipeline::CustomActionType resource as follows:

  Ec2BuildActionType: 
    Type: AWS::CodePipeline::CustomActionType
    Properties: 
      Category: !Ref CustomActionProviderCategory
      Provider: !Ref CustomActionProviderName
      Version: !Ref CustomActionProviderVersion
      ConfigurationProperties: 
        - Name: ImageId 
          Description: AMI to use for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: InstanceType
          Description: Instance type for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: Command
          Description: Command(s) to execute.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String 
        - Name: WorkingDirectory 
          Description: Working directory for the command to execute.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
        - Name: OutputArtifactPath 
          Description: Path of the file(-s) or directory(-es) to use as custom action output artifact.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
      InputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0
      OutputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0 
      Settings: 
        EntityUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/systems-manager/documents/${RunBuildJobOnEc2Instance}"
        ExecutionUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/states/home#/executions/details/{ExternalExecutionId}"

The custom action type is uniquely identified by Category, Provider name, and Version.

Category defines the stage of the pipeline in which the custom action can be used, such as build, test, or deploy. Check the AWS documentation for the full list of allowed values.

Provider name and Version are the values used to identify the custom action type in the CodePipeline console or AWS CloudFormation templates. Once the custom action type is installed, you can add it to the pipeline, as shown in the following screenshot:

Adding custom action to the pipeline

The custom action type also defines a list of user-configurable properties—these are the properties identified above as specific for different CI/CD pipelines:

  • AMI Image ID
  • Instance Type
  • Command
  • Working Directory
  • Output artifacts

The properties are configurable in the CodePipeline console, as shown in the following screenshot:

Custom action properties

Note the last two settings in the Custom Action Type AWS CloudFormation definition: EntityUrlTemplate and ExecutionUrlTemplate.

EntityUrlTemplate defines the link to the AWS Systems Manager document that carries over the build actions. The link is visible in AWS CodePipeline console as shown in the following screenshot:

Custom action's EntityUrlTemplate link

ExecutionUrlTemplate defines the link to additional information related to a specific execution of the custom action. The link is also visible in the CodePipeline console, as shown in the following screenshot:

Custom action's ExecutionUrlTemplate link

This URL is defined as a link to the Step Functions execution details page, which provides high-level information about the custom build step execution, as shown in the following screenshot:

Custom build step execution

This page is a convenient visual representation of the custom action execution flow and may be useful for troubleshooting purposes as it gives an immediate access to error messages and logs.

The polling Lambda function

The Lambda function polls CodePipeline for custom actions when it is triggered by the following CloudWatch event:

  source: 
    - "aws.codepipeline"
  detail-type: 
    - "CodePipeline Action Execution State Change"
  detail: 
    state: 
      - "STARTED"

The event is triggered for every CodePipeline action started, so the Lambda function should verify if, indeed, there is a custom action to be processed.

The rest of the Lambda function is straightforward and relies on the following APIs to retrieve or update CodePipeline actions and to manage Step Functions state machine executions:

CodePipeline API

AWS Step Functions API

You can find the complete source of the Lambda function on GitHub.
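
For orientation, the following is a minimal sketch (not the GitHub implementation) of how the polling function might pick up a pending custom action job and hand it to Step Functions; the action type values and state machine ARN are illustrative:

import json

import boto3

codepipeline = boto3.client("codepipeline")
stepfunctions = boto3.client("stepfunctions")

# Must match the custom action type registered by the CloudFormation template
ACTION_TYPE_ID = {"category": "Build", "owner": "Custom", "provider": "EC2", "version": "1"}
STATE_MACHINE_ARN = "arn:aws:states:eu-west-1:123456789012:stateMachine:Ec2BuildJob"  # placeholder


def handler(event, context):
    jobs = codepipeline.poll_for_jobs(actionTypeId=ACTION_TYPE_ID, maxBatchSize=1).get("jobs", [])
    for job in jobs:
        # Claim the job so no other worker processes it
        codepipeline.acknowledge_job(jobId=job["id"], nonce=job["nonce"])
        # job["data"] carries the action configuration (AMI, instance type, command)
        # and the S3 locations of the input/output artifacts
        stepfunctions.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"jobId": job["id"], "data": job["data"]}, default=str),
        )
    # A later invocation reports the result back with put_job_success_result
    # or put_job_failure_result once the Step Functions execution completes.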

Step Functions state machine

The following diagram shows the complete Step Functions state machine. There are three main blocks on the diagram:

  • Acquiring an Amazon EC2 instance and waiting while the instance is registered with Systems Manager
  • Running a Systems Manager command on the instance
  • Releasing the Amazon EC2 instance

Note that it is necessary to release the Amazon EC2 instance in case of error or exception during Systems Manager command execution, relying on Fallback States to guarantee that.

You can find the complete definition of the Step Function state machine on GitHub.

Step Functions state machine

Systems Manager Document

The AWS Systems Manager Run Command does all the magic. The Systems Manager agent is pre-installed on AWS Windows and Linux AMIs, so no additional software is required. The Systems Manager run command executes the following steps to carry out the build job:

  1. Download input artifacts from Amazon S3.
  2. Unzip artifacts in the working folder.
  3. Run the command.
  4. Upload output artifacts to Amazon S3, if any; this makes them available for the following CodePipeline stages.

The preceding steps are operating-system agnostic, and both Linux and Windows instances are supported. The following code snippet shows the Windows-specific steps.

You can find the complete definition of the Systems Manager document on GitHub.

mainSteps:
  - name: win_enable_docker
    action: aws:configureDocker
    inputs:
      action: Install

  # Windows steps
  - name: windows_script
    precondition:
      StringEquals: [platformType, Windows]
    action: aws:runPowerShellScript
    inputs:
      runCommand:
        # Ensure that if a command fails the script does not proceed to the following commands
        - "$ErrorActionPreference = \"Stop\""

        - "$jobDirectory = \"{{ workingDirectory }}\""
        # Create temporary folder for build artifacts, if not provided
        - "if ([string]::IsNullOrEmpty($jobDirectory)) {"
        - "    $parent = [System.IO.Path]::GetTempPath()"
        - "    [string] $name = [System.Guid]::NewGuid()"
        - "    $jobDirectory = (Join-Path $parent $name)"
        - "    New-Item -ItemType Directory -Path $jobDirectory"
                # Set current location to the new folder
        - "    Set-Location -Path $jobDirectory"
        - "}"

        # Download/unzip input artifact
        - "Read-S3Object -BucketName {{ inputBucketName }} -Key {{ inputObjectKey }} -File artifact.zip"
        - "Expand-Archive -Path artifact.zip -DestinationPath ."

        # Run the build commands
        - "$directory = Convert-Path ."
        - "$env:PATH += \";$directory\""
        - "{{ commands }}"
        # We need to check exit code explicitly here
        - "if (-not ($?)) { exit $LASTEXITCODE }"

        # Compress output artifacts, if specified
        - "$outputArtifactPath  = \"{{ outputArtifactPath }}\""
        - "if ($outputArtifactPath) {"
        - "    Compress-Archive -Path $outputArtifactPath -DestinationPath output-artifact.zip"
                # Upload compressed artifact to S3
        - "    $bucketName = \"{{ outputBucketName }}\""
        - "    $objectKey = \"{{ outputObjectKey }}\""
        - "    if ($bucketName -and $objectKey) {"
                    # Don't forget to encrypt the artifact - CodePipeline bucket has a policy to enforce this
        - "        Write-S3Object -BucketName $bucketName -Key $objectKey -File output-artifact.zip -ServerSideEncryption aws:kms"
        - "    }"
        - "}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"

CI/CD pipeline for Windows Server containers

Once you have a custom action that offloads the build job to the Amazon EC2 instance, you may approach the problem stated at the beginning of this blog post: how to build and publish Windows Server containers on AWS.

With the custom action installed, the solution is quite straightforward. To build a Windows Server container image, you need to provide the value for Windows Server with Containers AMI, the instance type to use, and the command line to execute, as shown in the following screenshot:

Windows Server container custom action properties

This example executes the Docker build command on a Windows instance with the specified AMI and instance type, using the provided source artifact. In real life, you may want to keep the build script along with the source code and push the built image to a container registry. The following is a PowerShell script example that not only produces a Docker image but also pushes it to AWS ECR:

# Authenticate with ECR
Invoke-Expression -Command (Get-ECRLoginCommand).Command

# Build and push the image
docker build -t <ecr-repository-url>:latest .
docker push <ecr-repository-url>:latest

return $LASTEXITCODE

You can find a complete example of the pipeline that produces the Windows Server container image and pushes it to Amazon ECR on GitHub.

Notes on Amazon EC2 build instances

There are a few ways to get Amazon EC2 instances for custom build actions. Let’s take a look at a couple of them below.

Start new EC2 instance per job and terminate it at the end

This is a reasonable default strategy that is implemented in this GitHub project. Each time the pipeline needs to process a custom action, you start a new Amazon EC2 instance, carry out the build job, and terminate the instance afterwards.

This approach is easy to implement. It works well for scenarios in which you don’t have many builds and/or builds take some time to complete (tens of minutes). In this case, the time required to provision an instance is amortized. Conversely, if the builds are fast, instance provisioning time could be actually longer than the time required to carry out the build job.

Use a pool of running Amazon EC2 instances

There are cases when it is required to keep builder instances “warm”, either due to complex initialization or merely to reduce the build duration. To support this scenario, you could maintain a pool of always-running instances. The “acquisition” phase takes a warm instance from the pool and the “release” phase returns it back without terminating or stopping the instance. A DynamoDB table can be used as a registry to keep track of “busy” instances and provide waiting or scaling capabilities to handle high demand.

This approach works well for scenarios in which there are many builds and demand is predictable (e.g. during work hours).
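
The acquisition and release phases against such a registry might look like the following sketch, assuming a DynamoDB table named build-instance-pool (an illustrative name) with instance_id as the partition key and a state attribute of either idle or busy:

import boto3
from botocore.exceptions import ClientError

pool = boto3.resource("dynamodb").Table("build-instance-pool")


def acquire_instance(candidate_ids):
    """Atomically claim the first idle instance; return its ID or None if all are busy."""
    for instance_id in candidate_ids:
        try:
            pool.update_item(
                Key={"instance_id": instance_id},
                UpdateExpression="SET #s = :busy",
                ConditionExpression="#s = :idle",
                ExpressionAttributeNames={"#s": "state"},
                ExpressionAttributeValues={":busy": "busy", ":idle": "idle"},
            )
            return instance_id
        except ClientError as error:
            if error.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
    return None


def release_instance(instance_id):
    """Return an instance to the pool without stopping it."""
    pool.update_item(
        Key={"instance_id": instance_id},
        UpdateExpression="SET #s = :idle",
        ExpressionAttributeNames={"#s": "state"},
        ExpressionAttributeValues={":idle": "idle"},
    )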

Use a pool of stopped Amazon EC2 instances

This is an interesting approach, especially for Windows builds. All AWS Windows AMIs are generalized using a sysprep tool. The important implication of this is that the first start time for Windows EC2 instances is quite long: it could easily take more than 5 minutes. This is generally unacceptable for short-living build jobs (if your build takes just a minute, it is annoying to wait 5 minutes to start the instance).

Interestingly, once the Windows instance is initialized, subsequent starts take less than a minute. To utilize this, you could create a pool of initialized and stopped Amazon EC2 instances. In this case, for the acquisition phase, you start the instance, and when you need to release it, you stop or hibernate it.

This approach provides substantial improvements in terms of build start-up time.

The downside is that you reuse the same Amazon EC2 instance between builds; it is not a completely clean environment. Build jobs have to be designed to expect the presence of artifacts from previous executions on the build instance.
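
For the stopped-instance pool, the acquisition and release phases could be as simple as the following sketch (the instance IDs would come from your own registry, such as the DynamoDB table above):

import boto3

ec2 = boto3.client("ec2")


def acquire(instance_id):
    """Start a pre-initialized, stopped instance and wait until it is running."""
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])


def release(instance_id, hibernate=False):
    """Stop (or hibernate) the instance so it can be reused by the next build."""
    ec2.stop_instances(InstanceIds=[instance_id], Hibernate=hibernate)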

Using an Amazon EC2 fleet with spot instances

Another variation of the previous strategies is to use Amazon EC2 Fleet to make use of cost-efficient spot instances for your build jobs.

Amazon EC2 Fleet makes it possible to combine on-demand instances with spot instances to deliver cost-efficient solution for your build jobs. On-demand instances can provide the minimum required capacity and spot instances provide a cost-efficient way to improve performance of your build fleet.

Note that since spot instances could be terminated at any time, the Step Functions workflow has to support Amazon EC2 instance termination and restart the build on a different instance transparently for CodePipeline.

Limits and Cost

The following are a few final thoughts.

Custom action timeouts

The default maximum execution time for CodePipeline custom actions is one hour. If your build jobs require more than an hour, you need to request a limit increase for custom actions.

Cost of running EC2 build instances

Custom Amazon EC2 instances could be even more cost effective than CodeBuild for many scenarios. However, it is difficult to compare the total cost of ownership of a custom-built fleet with CodeBuild. CodeBuild is a fully managed build service and you pay for each minute of using the service. In contrast, with Amazon EC2 instances you pay for the instance either per hour or per second (depending on instance type and operating system), EBS volumes, Lambda, and Step Functions. Please use the AWS Simple Monthly Calculator to get the total cost of your projected build solution.

Cleanup

If you are running the above steps as part of a workshop or for testing, delete the resources afterwards to avoid incurring further charges. All resources are deployed as part of an AWS CloudFormation stack, so go to the AWS CloudFormation console, select the specific stack, and choose Delete to remove it.

Conclusion

The CodePipeline custom action is a simple way to utilize Amazon EC2 instances for your build jobs and address a number of CodePipeline limitations.

The CodePipeline custom action with a simple Start/Terminate instance strategy is available on GitHub as an AWS CloudFormation stack. You could import the stack to your account and start using the custom action in your pipelines right away.

An example of the pipeline that produces Windows Server containers and pushes them to Amazon ECR can also be found on GitHub.

I invite you to clone the repositories to play with the custom action, and to make any changes to the action definition, Lambda functions, or Step Functions flow.

Feel free to ask any questions or comments below, or file issues or PRs on GitHub to continue the discussion.

Build ARM-based applications using CodeBuild

Post Syndicated from Eddie Moser original https://aws.amazon.com/blogs/devops/build-arm-based-applications-using-codebuild/

AWS CodeBuild has announced support for ARM-based workloads, which allows you to build and test your software updates natively, without needing to emulate or cross-compile. ARM is a quickly growing platform for application development, and relying on emulation or cross-compilation costs you both time and reliability. A native approach is faster and more dependable: enter ARM-based workload support.

In this post, you will learn how to build a sample Java application with an ARM-based Docker image and then upload the resulting artifact to an S3 bucket.

Prerequisites

A new repository in CodeCommit with the code from the sample Java application linked above has already been created. A working knowledge of git and how to fork or clone within your source provider is a prerequisite.

Configuration Steps

Working with our source code:

  1. Fork or clone the repo and upload/push the code to your source provider of choice. As of this writing, CodeBuild supports the following Source Providers: S3, CodeCommit, BitBucket, GitHub, and GitHub Enterprise.
  2. Go to your IDE of choice* and within your repo/source create a new file named buildspec.yml and copy in the following code. *In this post, the AWS Cloud9 IDE will be referenced when discussing edits. (buildspec.yml reference page)
    version: 0.2
    phases:
        install:
            runtime-versions:
               java: corretto8
               
        build:
            commands:
                - echo Starting Java build at `date`
                - mvn package
                
            finally:
                - echo Finished build of Java Sample at `date`
                
    artifacts:
        files:
            - 'target/aws-java-sample-1.0.jar'

    For this sample, specify that your container runs an Amazon Corretto 8 Java environment. An artifact file will also be output, which will be sent to an S3 bucket later in the process.

    The two phases are:

    1. Install – Since version 0.2 is being used, the install phase is required to specify the runtime-versions.
    2. Build – This phase is where the commands used to build the software will be passed.
  3. Once the buildspec.yml file has been added and saved, commit your changes and push your code to your source repository (for example, with git push).

 

Creating your CodeBuild Project:

Now that you have created your source code, it’s time to create your CodeBuild project. For this post, the AWS Management Console will be used, though other tools such as the AWS Cloud Development Kit (CDK), AWS CloudFormation, or the AWS CLI can also be leveraged.

Creating your artifact destination:

The first thing you are going to do is create where your artifact will be stored. For this blog, you are going to put your artifact into an S3 bucket.

1) In the console search bar type ‘S3.’

2) Select ‘S3’ to go to the S3 Console

image displaying how to search for the S3 service in the AWS Management Console

3) Select ‘Create bucket’ from the top left of the console.

image showing where to click to create your S3 Bucket

 

4) Type in your bucket name and Region and click ‘Next.’ For this blog, you are going to use the name “mydemobuildbucket.” It is important to note that the name must be all lowercase and globally unique.

image showing the fields in the name and region of the S3 bucket creation.

5) Leave the defaults as is for the configuration page and select ‘Next.’

image displaying to leave the configuration options as they are and click next.

6) Under the permissions tab, choose ‘Block all public access’ for the bucket, then click ‘Next.’ This will help keep your artifact secure.

image displaying to select block all public access and select next. this is important to secure our bucket.

7) The Review pane is where you can verify all of your settings. Once you have confirmed that all settings are correct, click ‘Create bucket’ to finish.

image showing the summary of the bucket creation process and to click create bucket.

You should see your S3 Bucket. With your bucket, you can now create your build project.
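
If you prefer scripting over the console, the following is a minimal boto3 sketch that creates an equivalent bucket and blocks public access; the bucket name and Region are examples:

import boto3

BUCKET = "mydemobuildbucket"
REGION = "us-west-2"

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket in the chosen Region
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Block all public access, matching the console setting above
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)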

 

Create CodeBuild Project:

If you have worked with CodeBuild in the past, most of this will look familiar, however, as part of the ARM release, a few new options have been added within the Build Environment section that will let you build and test any of your ARM-based applications.

1) Go to the console search and type ‘CodeBuild’ and select the service.

image showing how to search for the CodeBuild service in the AWS Management Console

2) In the top right corner click ‘Create build project.’

Image showing where to click to create a build project

3) Enter a name for your project. (‘myDemoBuild’ will be used as the default name in this post.) Descriptions are optional but can be useful.

image showing entering our project name myDemoBuild and an optional description

4) Select your source provider. I am going to use CodeCommit and select my repo and the branch where my new code is located.

image shows selecting our Source provider, Repository name, reference type, and branch name.

5) This is where the differences I mentioned earlier come in. We are now going to set up our Build Environment. For ARM support, we must select the following options:

  1. Operating System: Amazon Linux 2 (At the time of publishing, ARM builds support only the Amazon Linux 2 operating system.)
  2. Runtime(s): Standard
  3. Image: amazonlinux-aarch64-standard

You can either create a new service role or select an existing service role if you have previously created one. I am going to create a new role called myDemoRole. The system will automatically create the required permissions to allow CodeBuild to access resources based on your project settings. In a production environment, I recommend creating a service role that follows the principle of least privilege instead of relying on a broadly scoped, automatically created role.

image showing the environment settings for CodeBuild.

6) Configure buildspec settings. I am going to select ‘Use a buildspec file’ and leave the name blank, as it defaults to buildspec.yml. However, you can specify a different name if you have multiple buildspec files for use with different environments, e.g., buildspec-prod.yml, buildspec-staging.yml, buildspec-dev.yml.

Image showing us select to use a buildspec file.

7) Configure your artifact settings. I am going to upload my artifact to the S3 bucket we created earlier. I have named the file artifact.zip for simplicity, but any name can be used. I have chosen to zip the artifact file; however, that is not required.

image showing the artifact configuration for our CodeBuild project.

8) Configure logging. I am going to enable CloudWatch logging so that the build logs are uploaded to the logging service.

image showing configuring logs which is an optional step.

9) Select ‘Create build project.’

image showing the review panel where you can review all of the settings for your CodeBuild Project.
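If you would rather create the project from the AWS CLI, the sketch below mirrors the console settings above. The clone URL, service role ARN, image tag, and compute type are assumptions for illustration; check the CodeBuild console or documentation for the ARM image and compute types available in your Region.

# Create the ARM build project (values in angle brackets are placeholders)
aws codebuild create-project \
--name myDemoBuild \
--source type=CODECOMMIT,location=<your-codecommit-clone-url> \
--artifacts type=S3,location=mydemobuildbucket,name=artifact.zip,packaging=ZIP \
--environment type=ARM_CONTAINER,image=aws/codebuild/amazonlinux2-aarch64-standard:1.0,computeType=BUILD_GENERAL1_LARGE \
--service-role <your-service-role-arn>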

Run the build:

Now that your application is created, your S3 bucket has been built, and you’ve created your build project, it is time to run your build. If everything is successful, your artifact file should be stored in your S3 bucket.

1) After you have created your build project, you should now be on the build page, which allows you to run, edit, or delete your build project. Select ‘Start build’ in the top right part of the page.

image showing how to start the build you just created.

2) You can review or override the build settings that you configured when setting up the project. Once you have verified settings, click ‘Start build’ in the top right.

image showing the verify run options and to kick off the build.

3) Once your build has been started, it will run through the steps of your buildspec file (this can take a while depending on your application). Once complete, the status should show “Succeeded.”

image showing where to check for build status.

If for any reason your build did not succeed, look in the phase details to find the error. Validate that all of your settings are correct and use the documentation to help you troubleshoot any issues.

4) Now that your build has been successful, verify that your artifact is in the S3 bucket you created.

image showing where you will find the Artifact in the S3 Bucket.
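The same run-and-verify flow can also be scripted with the AWS CLI; this is a hedged sketch using the project and bucket names from this walkthrough, with the build ID as a placeholder.

# Start a build and note the build ID in the response
aws codebuild start-build --project-name myDemoBuild

# Check the build status using the returned build ID
aws codebuild batch-get-builds --ids <build-id> --query 'builds[0].buildStatus'

# Confirm the artifact landed in the bucket
aws s3 ls s3://mydemobuildbucket/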

 

Clean Up

Reminder: if you created any resources just for testing purposes, you should delete them to avoid incurring additional costs.

Make sure to check the following when cleaning up (a sample CLI sketch follows the list):

  • S3 bucket
  • CodeBuild Project
  • CodeCommit repository
  • Cloud9 Environment
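A sample CLI cleanup sketch follows; the repository name and Cloud9 environment ID are placeholders, so substitute your own values and only delete resources you created for this walkthrough.

# Empty and remove the artifact bucket
aws s3 rb s3://mydemobuildbucket --force

# Delete the CodeBuild project
aws codebuild delete-project --name myDemoBuild

# Delete the CodeCommit repository
aws codecommit delete-repository --repository-name <your-repo-name>

# Delete the Cloud9 environment
aws cloud9 delete-environment --environment-id <your-environment-id>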

Conclusion

We have walked through the process of using CodeBuild to build a sample Java application in the new ARM environment. Now that you have built your ARM-based artifact, you can download it for any local use or get started developing your own ARM-based applications using AWS Developer Tools.

 

Receive AWS Developer Tools Notifications over Slack using AWS Chatbot

Post Syndicated from Anushri Anwekar original https://aws.amazon.com/blogs/devops/receive-aws-developer-tools-notifications-over-slack-using-aws-chatbot/

Developers often use Slack to communicate with each other about their code. With AWS Chatbot, you can configure notifications for developer tools resources such as repositories, build projects, deployment applications, and pipelines so that users in Slack channels are automatically notified about important events. When a deployment fails, a build succeeds, or a pull request is created, developers get notifications where they’re most likely to see and react to them.

The AWS services that currently support these notifications are AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline.

In this post, I walk you through the high-level steps for creating a notification that alerts users in a Slack channel every time a pull request is created in a CodeCommit repository.

Solution overview

You can create both the notification rule that listens for the required events and the Amazon SNS topic used for notifications on the same web page. You can then configure AWS Chatbot so that notifications sent to that Amazon SNS topic appear in a Slack channel.

To set up notifications, follow this process, as shown in the following diagram:

  1. Create a notification rule for a repository. This includes creating an Amazon SNS topic to use for notifications.
  2. Configure AWS Chatbot to send notifications from that Amazon SNS topic to a Slack channel.
  3. Test it out and enjoy receiving notifications in your team’s Slack channel.

This diagram describes the notification workflow and how impacted services are connected.

Prerequisites

To follow along with this example, you need an AWS account, an IAM user or role with administrative access, a CodeCommit repository, and a Slack channel.

Configuration steps

Step 1: Create notification rule in CodeCommit

Follow these steps to create a notification rule in CodeCommit:

1. Select the repository in CodeCommit about which you want to be notified. In the following screenshot, I have selected a repository called Hello-Dublin.

Screen-shot of the repository view

2. Choose Notify, then Create notification rule.

Screen-shot of how to select the option to create a notification rule

3. Provide a name for your notification rule. I suggest leaving the default Detail Type as Full. By selecting Full, you get extra information beyond what is present in the resource events. Also, you get updated information about your selected event types whenever new information is added about them.

  • For example, if you want to receive notifications whenever a comment is made on a pull request, select Basic, and your notification informs you that a comment has been made.
  • If you select Full, the notification also specifies the exact comment that was made. If the notification feature is enhanced and extra information is added to be a part of the notification, you start receiving the new information without modifying your existing notification rule.

4. In Event types, in Pull request, select Created.

5. In Targets, choose Create SNS topic. This automatically sets up a new Amazon SNS topic to use for notifications, applying a policy that allows notification events to be sent to it.

6. Finish creating the rule. Keep a note of the Amazon SNS ARN, as you need this information to configure Slack integration in the next step.

For complete step-by-step instructions for creating a notification rule, see Create a Notification Rule.
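If you prefer the CLI, the same rule can be sketched roughly as follows. The repository and topic ARNs are placeholders, and the event type ID shown is an assumption; you can list the valid IDs with aws codestar-notifications list-event-types.

# Create a notification rule that publishes pull request creation events to an SNS topic
aws codestar-notifications create-notification-rule \
--name my-pr-created-rule \
--resource arn:aws:codecommit:us-east-1:<account-id>:Hello-Dublin \
--detail-type FULL \
--event-type-ids codecommit-repository-pull-request-created \
--targets TargetType=SNS,TargetAddress=arn:aws:sns:us-east-1:<account-id>:<topic-name>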

Step 2: Integrate your Amazon SNS topic with AWS Chatbot

Follow these steps to integrate your Amazon SNS topic with AWS Chatbot.

1. Open up your Slack channel. You need information about it as well as your notification rule to complete integration.

2. Open the AWS Chatbot console and choose Try the AWS Chatbot beta.

3. Choose Configure new client, then Slack, then Configure.

4. AWS Chatbot asks for permission to access your Slack workspace, as seen in the following screenshot. Once you give permission, you are asked to configure your Slack channel.

Screen-shot of a prompt about AWS Chatbot requesting permission to access the notifications Slack workspace

Step 3: Test the notification

In your repository, create a pull request. In this example, I named the pull request This is a new pull request. Watch as a notification about that event appears in your Slack channel, as seen in the following screenshot.

Example of a notification received on a Slack channel when a new pull request is created
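You can also trigger the test from the CLI; this sketch assumes a repository named Hello-Dublin and existing source and destination branches, so adjust the names to match your setup.

# Create a pull request to trigger the notification
aws codecommit create-pull-request \
--title "This is a new pull request" \
--targets repositoryName=Hello-Dublin,sourceReference=<feature-branch>,destinationReference=master \
--region us-east-1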

Step 4: Clean-up

If you created the notification rule just for testing purposes, you should delete the Amazon SNS topic to avoid any further charges.

Conclusion

And that’s it! You can use notifications to help developers stay informed about the key events happening in their software development life cycle. You can set up notification rules for build projects, deployment applications, pipelines, and repositories, and stay informed about key events such as pull request creation, comments made on your code or commits, build state/phase change, deployment project status change, manual pipeline approvals, or pipeline execution status change. For more information, see the notifications documentation.

ICYMI: Serverless Q4 2019

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/icymi-serverless-q4-2019/

Welcome to the eighth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

The three months comprising the fourth quarter of 2019

AWS re:Invent

AWS re:Invent 2019

re:Invent 2019 dominated the fourth quarter at AWS. The serverless team presented a number of talks, workshops, and builder sessions to help customers increase their skills and deliver value more rapidly to their own customers.

Serverless talks from re:Invent 2019

Chris Munns presenting 'Building microservices with AWS Lambda' at re:Invent 2019

We presented dozens of sessions showing how customers can improve their architecture and agility with serverless. Here are some of the most popular.

Videos

Decks

You can also find decks for many of the serverless presentations and other re:Invent presentations on our AWS Events Content.

AWS Lambda

For developers needing greater control over the performance of their serverless applications at any scale, AWS Lambda announced Provisioned Concurrency at re:Invent. This feature enables Lambda functions to execute with consistent start-up latency, making them ideal for building latency-sensitive applications.

As shown in the following graph, provisioned concurrency reduces tail latency, directly impacting response times and providing a more responsive end user experience.

Graph showing performance enhancements with AWS Lambda Provisioned Concurrency

Lambda rolled out enhanced VPC networking to 14 additional Regions around the world. This change brings dramatic improvements to startup performance for Lambda functions running in VPCs due to more efficient usage of elastic network interfaces.

Illustration of AWS Lambda VPC to VPC NAT

New VPC to VPC NAT for Lambda functions

Lambda now supports three additional runtimes: Node.js 12, Java 11, and Python 3.8. Each of these new runtimes has new version-specific features and benefits, which are covered in the linked release posts. Like the Node.js 10 runtime, these new runtimes are all based on an Amazon Linux 2 execution environment.

Lambda released a number of controls for both stream and async-based invocations:

  • You can now configure error handling for Lambda functions consuming events from Amazon Kinesis Data Streams or Amazon DynamoDB Streams. It’s now possible to limit the retry count, limit the age of records being retried, configure a failure destination, or split a batch to isolate a problem record. These capabilities help you deal with potential “poison pill” records that would previously cause streams to pause in processing.
  • For asynchronous Lambda invocations, you can now set the maximum event age and retry attempts on the event. If either configured condition is met, the event can be routed to a dead letter queue (DLQ), Lambda destination, or it can be discarded.

AWS Lambda Destinations is a new feature that allows developers to designate an asynchronous target for Lambda function invocation results. You can set separate destinations for success and failure. This unlocks new patterns for distributed event-based applications and can replace custom code previously used to manage routing results.

Illustration depicting AWS Lambda Destinations with success and failure configurations

Lambda Destinations
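As a rough sketch of how the stream retry controls, asynchronous invocation controls, and Destinations described above are configured, the following CLI calls use placeholder UUIDs, function names, and ARNs; substitute your own values.

# Stream controls: cap retries and record age, split failing batches, and send failures to a queue
aws lambda update-event-source-mapping \
--uuid <event-source-mapping-uuid> \
--maximum-retry-attempts 2 \
--maximum-record-age-in-seconds 3600 \
--bisect-batch-on-function-error \
--destination-config '{"OnFailure":{"Destination":"<failure-queue-arn>"}}'

# Async controls and Destinations: cap event age and retries, and route success/failure results
aws lambda put-function-event-invoke-config \
--function-name <function-name> \
--maximum-event-age-in-seconds 3600 \
--maximum-retry-attempts 1 \
--destination-config '{"OnSuccess":{"Destination":"<success-topic-arn>"},"OnFailure":{"Destination":"<failure-queue-arn>"}}'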

Lambda also now supports setting a Parallelization Factor, which allows you to set multiple Lambda invocations per shard for Kinesis Data Streams and DynamoDB Streams. This enables faster processing without the need to increase your shard count, while still guaranteeing the order of records processed.

Illustration of multiple AWS Lambda invocations per Kinesis Data Streams shard

Lambda Parallelization Factor diagram
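A minimal sketch of setting the Parallelization Factor on an existing Kinesis or DynamoDB stream event source mapping (the UUID is a placeholder):

# Process up to five concurrent batches per shard
aws lambda update-event-source-mapping \
--uuid <event-source-mapping-uuid> \
--parallelization-factor 5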

Lambda introduced Amazon SQS FIFO queues as an event source. “First in, first out” (FIFO) queues guarantee the order of record processing, unlike standard queues. FIFO queues support message batching via a MessageGroupID attribute, which enables parallel Lambda consumers of a single FIFO queue and high throughput of record processing by Lambda.
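Subscribing a function to a FIFO queue is a standard event source mapping; the following sketch uses placeholder names and a placeholder queue ARN.

# Create an event source mapping from a FIFO queue to a Lambda function
aws lambda create-event-source-mapping \
--function-name <function-name> \
--event-source-arn arn:aws:sqs:us-east-1:<account-id>:<queue-name>.fifo \
--batch-size 10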

Lambda now supports Environment Variables in the AWS China (Beijing) Region and the AWS China (Ningxia) Region.

You can now view percentile statistics for the duration metric of your Lambda functions. Percentile statistics show the relative standing of a value in a dataset, and are useful when applied to metrics that exhibit large variances. They can help you understand the distribution of a metric, discover outliers, and find hard-to-spot situations that affect customer experience for a subset of your users.

Amazon API Gateway

Screen capture of creating an Amazon API Gateway HTTP API in the AWS Management Console

Amazon API Gateway announced the preview of HTTP APIs. In addition to significant performance improvements, most customers see an average cost savings of 70% when compared with API Gateway REST APIs. With HTTP APIs, you can create an API in four simple steps. Once the API is created, additional configuration for CORS and JWT authorizers can be added.

AWS SAM CLI

Screen capture of the new 'sam deploy' process in a terminal window

The AWS SAM CLI team simplified the bucket management and deployment process in the SAM CLI. You no longer need to manage a bucket for deployment artifacts – SAM CLI handles this for you. The deployment process has also been streamlined from multiple flagged commands to a single command, sam deploy.

AWS Step Functions

One powerful feature of AWS Step Functions is its ability to integrate directly with AWS services without you needing to write complicated application code. In Q4, Step Functions expanded its integration with Amazon SageMaker to simplify machine learning workflows. Step Functions also added a new integration with Amazon EMR, making EMR big data processing workflows faster to build and easier to monitor.

Screen capture of an AWS Step Functions step with Amazon EMR

Step Functions step with EMR

Step Functions now provides the ability to track state transition usage by integrating with AWS Budgets, allowing you to monitor trends and react to usage on your AWS account.

You can now view CloudWatch Metrics for Step Functions at a one-minute frequency. This makes it easier to set up detailed monitoring for your workflows. You can use one-minute metrics to set up CloudWatch Alarms based on your Step Functions API usage, Lambda functions, service integrations, and execution details.

Step Functions now supports higher throughput workflows, making it easier to coordinate applications with high event rates. This increases the limits to 1,500 state transitions per second and a default start rate of 300 state machine executions per second in US East (N. Virginia), US West (Oregon), and Europe (Ireland). Click the above link to learn more about the limit increases in other Regions.

Screen capture of choosing Express Workflows in the AWS Management Console

Step Functions released AWS Step Functions Express Workflows. With the ability to support event rates greater than 100,000 per second, this feature is designed for high-performance workloads at a reduced cost.

Amazon EventBridge

Illustration of the Amazon EventBridge schema registry and discovery service

Amazon EventBridge announced the preview of the Amazon EventBridge schema registry and discovery service. This service allows developers to automate discovery and cataloging event schemas for use in their applications. Additionally, once a schema is stored in the registry, you can generate and download a code binding that represents the schema as an object in your code.

Amazon SNS

Amazon SNS now supports the use of dead letter queues (DLQ) to help capture unhandled events. By enabling a DLQ, you can catch events that are not processed and re-submit them or analyze to locate processing issues.

Amazon CloudWatch

Amazon CloudWatch announced Amazon CloudWatch ServiceLens to provide a “single pane of glass” to observe health, performance, and availability of your application.

Screenshot of Amazon CloudWatch ServiceLens in the AWS Management Console

CloudWatch ServiceLens

CloudWatch also announced a preview of a capability called Synthetics. CloudWatch Synthetics allows you to test your application endpoints and URLs using configurable scripts that mimic what a real customer would do. This enables the outside-in view of your customers’ experiences, and your service’s availability from their point of view.

CloudWatch introduced Embedded Metric Format, which helps you ingest complex high-cardinality application data as logs and easily generate actionable metrics. You can publish these metrics from your Lambda function by using the PutLogEvents API or using an open source library for Node.js or Python applications.

Finally, CloudWatch announced a preview of Contributor Insights, a capability to identify who or what is impacting your system or application performance by identifying outliers or patterns in log data.

AWS X-Ray

AWS X-Ray announced trace maps, which enable you to map the end-to-end path of a single request. Identifiers show issues and how they affect other services in the request’s path. These can help you to identify and isolate service points that are causing degradation or failures.

X-Ray also announced support for Amazon CloudWatch Synthetics, currently in preview. CloudWatch Synthetics on X-Ray support tracing canary scripts throughout the application, providing metrics on performance or application issues.

Screen capture of AWS X-Ray Service map in the AWS Management Console

X-Ray Service map with CloudWatch Synthetics

Amazon DynamoDB

Amazon DynamoDB announced support for customer-managed customer master keys (CMKs) to encrypt data in DynamoDB. This allows you to bring your own key (BYOK), giving you full control over how you encrypt and manage the security of your DynamoDB data.

It is now possible to add global replicas to existing DynamoDB tables to provide enhanced availability across the globe.

Another new DynamoDB capability to identify frequently accessed keys and database traffic trends is currently in preview. With this, you can now more easily identify “hot keys” and understand usage of your DynamoDB tables.

Screen capture of Amazon CloudWatch Contributor Insights for DynamoDB in the AWS Management Console

CloudWatch Contributor Insights for DynamoDB

DynamoDB also released adaptive capacity. Adaptive capacity helps you handle imbalanced workloads by automatically isolating frequently accessed items and shifting data across partitions to rebalance them. This helps reduce cost by enabling you to provision throughput for a more balanced workload instead of over provisioning for uneven data access patterns.

Amazon RDS

Amazon Relational Database Services (RDS) announced a preview of Amazon RDS Proxy to help developers manage RDS connection strings for serverless applications.

Illustration of Amazon RDS Proxy

The RDS Proxy maintains a pool of established connections to your RDS database instances. This pool enables you to support a large number of application connections so your application can scale without compromising performance. It also increases security by enabling IAM authentication for database access and enabling you to centrally manage database credentials using AWS Secrets Manager.

AWS Serverless Application Repository

The AWS Serverless Application Repository (SAR) now offers Verified Author badges. These badges enable consumers to quickly and reliably know who you are. The badge appears next to your name in the SAR and links to your GitHub profile.

Screen capture of SAR Verified developer badge in the AWS Management Console

SAR Verified developer badges

AWS Developer Tools

AWS CodeCommit launched the ability for you to enforce rule workflows for pull requests, making it easier to ensure that code has passed specific rule requirements. You can now create an approval rule specifically for a pull request, or create approval rule templates to be applied to all future pull requests in a repository.

AWS CodeBuild added beta support for test reporting. With test reporting, you can now view the detailed results, trends, and history for tests executed on CodeBuild for any framework that supports the JUnit XML or Cucumber JSON test format.

Screen capture of AWS CodeBuild

CodeBuild test trends in the AWS Management Console

Amazon CodeGuru

AWS announced a preview of Amazon CodeGuru at re:Invent 2019. CodeGuru is a machine learning based service that makes code reviews more effective and aids developers in writing code that is more secure, performant, and consistent.

AWS Amplify and AWS AppSync

AWS Amplify added iOS and Android as supported platforms. Now developers can build iOS and Android applications using the Amplify Framework with the same category-based programming model that they use for JavaScript apps.

Screen capture of 'amplify init' for an iOS application in a terminal window

The Amplify team has also improved offline data access and synchronization by announcing Amplify DataStore. Developers can now create applications that allow users to continue to access and modify data, without an internet connection. Upon connection, the data synchronizes transparently with the cloud.

For a summary of Amplify and AppSync announcements before re:Invent, read: “A round up of the recent pre-re:Invent 2019 AWS Amplify Launches”.

Illustration of AWS AppSync integrations with other AWS services

Q4 serverless content

Blog posts

October

November

December

Tech talks

We hold several AWS Online Tech Talks covering serverless topics throughout the year. These are listed in the Serverless section of the AWS Online Tech Talks page.

Here are the ones from Q4:

Twitch

October

There are also a number of other helpful video series covering Serverless available on the AWS Twitch Channel.

AWS Serverless Heroes

We are excited to welcome some new AWS Serverless Heroes to help grow the serverless community. We look forward to some amazing content to help you with your serverless journey.

AWS Serverless Application Repository (SAR) Apps

In this edition of ICYMI, we are introducing a section devoted to SAR apps written by the AWS Serverless Developer Advocacy team. You can run these applications and review their source code to learn more about serverless and to see examples of suggested practices.

Still looking for more?

The Serverless landing page has much more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. We’re also kicking off a fresh series of Tech Talks in 2020 with new content providing greater detail on everything new coming out of AWS for serverless application developers.

Throughout 2020, the AWS Serverless Developer Advocates are crossing the globe to tell you more about serverless, and to hear more about what you need. Follow this blog to keep up on new launches and announcements, best practices, and examples of serverless applications in action.

You can also follow all of us on Twitter to see latest news, follow conversations, and interact with the team.

Chris Munns: @chrismunns
Eric Johnson: @edjgeek
James Beswick: @jbesw
Moheeb Zara: @virgilvox
Ben Smith: @benjamin_l_s
Rob Sutter: @rts_rob
Julian Wood: @julian_wood

Happy coding!

Multi-branch CodePipeline strategy with event-driven architecture

Post Syndicated from Henrique Bueno original https://aws.amazon.com/blogs/devops/multi-branch-codepipeline-strategy-with-event-driven-architecture/

Henrique Bueno, DevOps Consultant, Professional Services

This blog post presents a solution for automated pipelines creation in AWS CodePipeline when a new branch is created in an AWS CodeCommit repository. A use case for this solution is when a GitFlow approach using CodePipeline is required. The strategy presented here is used by AWS customers to enable the use of GitFlow using only AWS tools.

CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.

CodeCommit is a fully managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.

GitFlow is a branching model designed around the project release. This provides a robust framework for managing larger projects. Gitflow is ideally suited for projects that have a scheduled release cycle.

Applicability

When using CodePipeline to orchestrate pipelines and CodeCommit as a code source, in addition to setting a repository, you must also set which branch will trigger the pipeline. This configuration works perfectly for the trunk-based strategy, in which you have only one main branch and all the developers interact with this single branch. However, when you need to work with a multi-branching strategy like GitFlow, the requirement to set a pipeline for each branch brings additional challenges.

It’s important to note that trunk-based is, by far, the best strategy for taking full advantage of a DevOps approach; this is the branching strategy that AWS recommends to its customers. On the other hand, many customers like to work with multiple branches and believe it justifies the effort and complexity in dealing with branching merges. This solution is for these customers.

Solution Overview

One of the great benefits of working with Infrastructure as Code is the ability to create multiple identical environments through a single template. This example uses AWS CloudFormation templates to provision pipelines and other necessary resources, as shown in the following diagram.

Multi-branch CodePipeline strategy with event-driven architecture

Multi-branch CodePipeline strategy with event-driven architecture

The template is hosted in an Amazon S3 bucket. An AWS Lambda function deploys a new AWS CloudFormation stack based on this template. This Lambda function is triggered by an Amazon CloudWatch Events rule that watches for events in the CodeCommit repositories.

The CreatePipeline Events rule

The AWS CloudFormation snippet that creates the Events rule follows. The Events rule monitors branch create and delete events in all repositories, triggering the CreatePipeline Lambda function.

#----------------------------------------------------------------------#
# EventRule to trigger LambdaPipeline lambda
#----------------------------------------------------------------------#
  CreatePipelineRule:
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - 'CodeCommit Repository State Change'
        detail:
          event:
              - referenceDeleted
              - referenceCreated
          referenceType:
            - branch
      State: ENABLED
      Targets: 
      - Arn: !GetAtt CreatePipeline.Arn
        Id: CreatePipeline

 

The CreatePipeline Lambda function

The Lambda function receives the event details, parses the variables, and executes the appropriate actions. If the event is referenceCreated, then the stack is created; otherwise the stack is deleted. The name of the stack that is created or deleted is the repository name joined with the new branch name. This is a very simple function.

#----------------------------------------------------------------------#
# Lambda for Stack Creation
#----------------------------------------------------------------------#
import boto3
def lambda_handler(event, context):
    Region = event['region']
    Account = event['account']
    RepositoryName = event['detail']['repositoryName']
    NewBranch = event['detail']['referenceName']
    Event = event['detail']['event']
    if NewBranch == "master":
        quit()
    if Event == "referenceCreated":
        cf_client = boto3.client('cloudformation')
        cf_client.create_stack(
            StackName=f'Pipeline-{RepositoryName}-{NewBranch}',
            TemplateURL=f'https://s3.amazonaws.com/{Account}-templates/TemplatePipeline.yaml',
            Parameters=[
                {
                    'ParameterKey': 'RepositoryName',
                    'ParameterValue': RepositoryName,
                    'UsePreviousValue': False
                },
                {
                    'ParameterKey': 'BranchName',
                    'ParameterValue': NewBranch,
                    'UsePreviousValue': False
                }
            ],
            OnFailure='ROLLBACK',
            Capabilities=['CAPABILITY_NAMED_IAM']
        )
    else:
        cf_client = boto3.client('cloudformation')
        cf_client.delete_stack(
            StackName=f'Pipeline-{RepositoryName}-{NewBranch}'
        )

The logic for creating only the CI or the CI+CD is on the AWS CloudFormation template. The Conditions section of AWS CloudFormation analyzes the new branch name.

Conditions: 
    BranchMaster: !Equals [ !Ref BranchName, "master" ]
    BranchDevelop: !Equals [ !Ref BranchName, "develop"]
    Setup: !Equals [ !Ref Setup, true ]

  • If the new branch is named master, then a stack will be created containing CI+CD pipelines, with deploy stages in the homologation and production environments.
  • If the new branch is named develop, then a stack will be created containing CI+CD pipelines, with a deploy stage in the Dev environment.
  • If the new branch has any other name, then the stack will be created with only a CI pipeline.

Since the purpose of this blog post is to present only a sample of automated pipeline creation, the pipelines used here are examples only: they don’t deploy to any environment.

 

Applicability

This event-driven strategy permits pipelines to be created or deleted along with the branches. Since the entire environment is created using Infrastructure as Code and the template is the same for all pipelines, there is no possibility of different configuration issues between environments and pipeline stages.

A GitFlow simulation could resemble that shown in the following diagram:

GitFlow Diagram

GitFlow Diagram

  1. First, a CodeCommit repository is created, along with the master branch and its respective pipeline (CI+CD).
  2. The developer creates a branch called develop based on the master branch. The pipeline (CI+CD at Dev) is automatically created.
  3. The developer creates a feature-branch called feature-a based on the develop branch. The CI pipeline for this branch is automatically created.
  4. The developer creates a Pull Request from the feature-a branch to the develop branch. As soon as the Pull Request is accepted and merged and the feature-a branch is deleted, its pipeline is automatically deleted.

The same process can be followed for the release branch and hotfix branch. Once the branch is created, a new pipeline is created for it which follows its branch lifecycle.

Pipelines Stacks

 

Implementation

Before you start, make sure that the AWS CLI is installed and configured on your machine by following these steps:

  1. Clone the repository.
  2. Create the prerequisites stack.
  3. Copy the AWS CloudFormation template to Amazon S3.
  4. Copy the seed.zip file to the Amazon S3 bucket.
  5. Create the first repository and its pipeline.
  6. Create the develop branch.
  7. Create the first feature branch.
  8. Create the first Pull Request.
  9. Execute the Pull Request approval.
  10. Cleanup.

 

1. Clone the repository

Clone the repository with the sample code.

The main files are:

  • Setup.yaml: an AWS CloudFormation template for creating pipeline prerequisites.
  • TemplatePipeline.yaml: an AWS CloudFormation template for pipeline creation.
  • seed/buildspec/CIAction.yaml: a configuration file for an AWS CodeBuild project at the CI stage.
  • seed/buildspec/CDAction.yaml: a configuration file for a CodeBuild project at the CD stage.
# Command to clone the repository
git clone https://github.com/aws-samples/aws-codepipeline-multi-branch-strategy.git
cd aws-codepipeline-multi-branch-strategy

 

2. Create the prerequisite stack

The Setup stack creates the resources that are prerequisites for pipeline creation, as shown in the following chart.

These resources are created only once and they fit all the pipelines created in this example.

Resources

# Command to create Setup stack
aws cloudformation deploy --stack-name Setup-Pipeline \
--template-file Setup.yaml --region us-east-1 --capabilities CAPABILITY_NAMED_IAM

 

3. Copy the AWS CloudFormation template to Amazon S3

For the Lambda function to deploy a new pipeline stack, it needs to get the AWS CloudFormation template from somewhere. To enable it to do so, you need to save the template inside the Amazon S3 bucket that you created in the Setup stack.

# Command that copies the template to the S3 bucket
aws s3 cp TemplatePipeline.yaml s3://"$(aws sts get-caller-identity --query Account --output text)"-templates/ --acl private

 

4. Copy the seed.zip file to the Amazon S3 bucket

CodeCommit permits you to populate a repository at the moment of its creation as a first commit. The content of this first commit can be saved in a .zip file in an Amazon S3 bucket. Use this CodeCommit option to populate your repository with BuildSpec files for CodeBuild.

# Command to create zip file with the Buildspec folder content.
zip -r seed.zip buildspec

# Command that copies the seed.zip file to the S3 bucket.
aws s3 cp seed.zip s3://"$(aws sts get-caller-identity --query Account --output text)"-templates/ --acl private

 

5. Create the first repository and its pipeline

Now that the Setup stack is created and the seed file is stored in an Amazon S3 bucket, create the first CodeCommit repository. Every time that you want to create a new repository, execute the command below to create a new stack.

# Command to create the stack with the CodeCommit repository,
# CodeBuild Projects and the Pipeline for the master branch.
# Note: Change "myapp" by the name you want.

RepoName="myapp"
aws cloudformation deploy --stack-name Repo-$RepoName --template-file TemplatePipeline.yaml \
--parameter-overrides RepositoryName=$RepoName Setup=true \
--region us-east-1 --capabilities CAPABILITY_NAMED_IAM

When the stack is created, in addition to the CodeCommit repository, the CodeBuild projects and the master branch pipeline are also created. By default, a CodeCommit repository is created empty, with no branch. When the repository is populated with the seed.zip file, the master branch is created.

Resources

Access the CodeCommit repository to see the seed files at the master branch. Access the CodePipeline console to see that there’s a new pipeline with the same name as the repository. This pipeline contains the CI+CD stages (homolog and prod).

 

6. Create the develop branch

To simulate a real development scenario, create a new branch called develop based on the master branch. In the GitFlow concept these two (master and develop) branches are fixed and never deleted.

When this new branch is created, the Events rule identifies that there’s a change on this repository and triggers the CreatePipeline Lambda function to create a new pipeline for this branch. Access the CodePipeline console to see that there’s a new pipeline with the name of the repository plus the branch name. This pipeline contains the CI+CD stages (Dev).

# Configure Git Credentials using AWS CLI Credential Helper
mkdir myapp
cd myapp
git config --global credential.helper '!aws codecommit credential-helper [email protected]'
git config --global credential.UseHttpPath true

# Clone the CodeCommit repository
# You can get the URL in the CodeCommit Console
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp .

# Create the develop branch
# For more details: https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-branch.html
git checkout -b develop
git push origin develop

 

7. Create the first feature branch

Now that there are two main and fixed branches (master and develop), you can create a feature branch. In the GitFlow concept, feature branches have a short lifetime and are frequently merged to the develop branch. This type of branch only exists during the development period. When the feature development is finished, it is merged to the develop branch and the feature branch is deleted.

# Create the feature-branch branch
# make sure that you are at develop branch
git checkout -b feature-abc
git push origin feature-abc

Just as with the develop branch, when this new branch is created, the Events rule triggers the CreatePipeline Lambda function to create a new pipeline for this branch. Access the CodePipeline console to see that there’s a new pipeline with the name of the repository plus the branch name. This pipeline contains only the CI stage, without a CD stage.

Pipelines-2

 

8. Create the first Pull Request

It’s possible to simulate the end of a feature development, when the branch is ready to be merged with the develop branch. Keep in mind that the feature branch has a short lifecycle:  when the merge is done, the feature branch is deleted along with its pipeline.

# Create the Pull Request
aws codecommit create-pull-request --title "My Pull Request" \
--description "Please review these changes by Tuesday" \
--targets repositoryName=$RepoName,sourceReference=feature-abc,destinationReference=develop \
--region us-east-1

 

9. Execute the Pull Request approval

To merge the feature branch to the develop branch, the Pull Request needs to be approved. In a real scenario, a teammate should do a peer review before approval.

# Accept the Pull Request
# You can get the Pull-Request-ID in the json output of the create-pull-request command
aws codecommit merge-pull-request-by-fast-forward --pull-request-id <PULL_REQUEST_ID_FROM_PREVIOUS_COMMAND> \
--repository-name $RepoName --region us-east-1

# Delete the feature-branch
aws codecommit delete-branch --repository-name $RepoName --branch-name feature-abc --region us-east-1

The new code is integrated with the develop branch. If there’s a conflict, it needs to be resolved. After that, the feature branch is deleted together with its pipeline. The Events rule triggers the CreatePipeline Lambda function to delete the pipeline for that branch. Access the CodePipeline console to see that the pipeline for the feature branch is deleted.

Pipelines

 

10. Cleanup

To remove the resources created as part of this blog post, follow these steps:

Delete Pipeline Stacks

# Delete all the pipeline Stacks created by CreatePipeline Lambda
Pipelines=$(aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --region us-east-1 --query 'StackSummaries[? starts_with(StackName, `Pipeline`) == `true`].[StackName]' --output text)
while read -r Pipeline rest; do aws cloudformation delete-stack --stack-name "$Pipeline" --region us-east-1 ; done <<< "$Pipelines"

Delete Repository Stacks

# Delete all the repository stacks created in Step 5.
Repos=$(aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --region us-east-1 --query 'StackSummaries[? starts_with(StackName, `Repo-`) == `true`].[StackName]' --output text)
while read -r Repo rest; do aws cloudformation delete-stack --stack-name "$Repo" --region us-east-1 ; done <<< "$Repos"

Delete Setup-Pipeline Stack

# Cleaning Bucket before Stack deletion 
aws s3 rm s3://"$(aws sts get-caller-identity --query Account --output text)"-templates --recursive

# Delete Setup Stack
aws cloudformation delete-stack --stack-name Setup-Pipeline --region us-east-1 

 

Conclusion

This blog post discussed how you can work with event-driven strategy and Infrastructure as Code to implement a multi-branch pipeline flow using CodePipeline. It demonstrated how an Events rule and Lambda function can be used to fully orchestrate the creation and deletion of pipelines.

 

About the Author

Henrique Bueno

Henrique Bueno

Henrique is a DevOps Consultant in the Brazilian Professional Services Team at Amazon Web Services. He has helped AWS customers design, build and deploy cloud native applications by following the Twelve-Factor App methodology.

Integrating SonarCloud with AWS CodePipeline using AWS CodeBuild

Post Syndicated from Karthik Thirugnanasambandam original https://aws.amazon.com/blogs/devops/integrating-sonarcloud-with-aws-codepipeline-using-aws-codebuild/

In most development processes, common challenges include the quality of released code and the efficiency of the code review process. There are multiple tools providing insights into code quality which can easily be integrated into the daily routine of the development team. One such tool is SonarCloud, a code analysis as a service provided by SonarQube. SonarCloud provides a defined process to enforce code control on three levels (syntax, code standards, and structure) before the code reaches the testing stage, which addresses these challenges and helps developers release high-quality code every time.

In this blog post, we will demonstrate how SonarCloud can be integrated with AWS CodePipeline using AWS CodeBuild.

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can easily integrate AWS CodePipeline with third-party services such as GitHub or with your own custom plugin.

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.

Prerequisites:

  1. A GitHub account credential to log in to SonarCloud. We assume you have a fair understanding of SonarCloud.
  2. An AWS account and console access. We assume you have a sample project to integrate, in either a GitHub or AWS CodeCommit repository.
  3. For more information on CodeBuild, refer to the getting started documentation.

High level architecture

Here, we are going to use a simple three-stage CodePipeline setup to demonstrate the integration with SonarCloud. For the source stage, we will use a sample project stored in AWS CodeCommit. For the review stage, we will use an AWS CodeBuild project to integrate with SonarCloud and perform a code quality check. For the final build stage, we will use another AWS CodeBuild project and push the built artifact to an S3 bucket.

Connect your repository with SonarCloud

First, connect your repository with SonarCloud by following these steps:

  1. Sign in to GitHub through the SonarCloud site using your GitHub credentials, as shown in the following screenshot.

SonarCloud Login screen

2. Choose Create a new project in the SonarCloud portal, as shown in the following screenshot.

Welcome screen SonarCloud

 

3. Choose Choose an organization in GitHub, as shown in the following screenshot.

Analyze projects on SonarCloud

4. Choose Install after selecting the required repositories, as shown in the following screenshot.

Install Sonar plugin

5. Your GitHub repository is now synchronized with SonarCloud. The GitHub repository in this example has a Java project. Bind the GitHub branch and choose Create Organization, as shown in the following screenshot.
choose plan for sonarcloud

6. To generate a token, go to User > My Account > Security. Your existing tokens are listed here, each with a Revoke button. Enter a new token name and click Generate. Store the token for the succeeding steps.

 

security token for Sonarcloud access

7. Select Analyze new project.

new project setup on SonarCloud

8. Select Set up manually. Add a new Project key and click Set up.

Analyze project setup on SonarCloud

Note: We will use the Project key, Organization and token in the next step to configure CodeBuild.

Configure Secrets Manager

We will use AWS Secrets Manager to store the SonarCloud login credentials. By using Secrets Manager, we can provide controlled access to the credentials from CodeBuild.

1. Visit the AWS Secrets Manager console to set up the SonarCloud login credentials.

2. Select Store a new secret, and choose Other types of secret.

3. Enter the secret keys and values as shown below, based on your Organization, Project, and token.

4. Enter the secret name. In this case, we will use “prod/sonar” and save with the default settings.

AWS Secrets Manager setup
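If you prefer to create the secret from the CLI, a minimal sketch follows. The key names match the references used in the buildspec later in this post; the values shown are placeholders for your own token, organization, and project key.

# Store the SonarCloud settings as a single secret named prod/sonar
aws secretsmanager create-secret \
--name prod/sonar \
--secret-string '{"sonartoken":"<your-token>","HOST":"https://sonarcloud.io","Organization":"<your-organization>","Project":"<your-project-key>"}'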

Configuring AWS CodeBuild

A buildspec.yml file is a collection of build commands and related settings in YAML format that CodeBuild uses to run a build. To understand buildspec.yml file specification, refer to the Build Specification Reference for CodeBuild.

Create a CodeBuild project with a name such as CodeReview for integrating with SonarCloud.

For the CodeBuild environment, use an AWS managed image with the Ubuntu operating system and the Standard runtime, with the image “aws/codebuild/standard:3.0”.

The buildspec.yml file in CodeBuild is structured as follows:

version: 0.2
env:
  secrets-manager:
    LOGIN: prod/sonar:sonartoken
    HOST: prod/sonar:HOST
    Organization: prod/sonar:Organization
    Project: prod/sonar:Project
phases:
  install:
    runtime-versions:
      java: openjdk8
  pre_build:
    commands:
      - apt-get update
      - apt-get install -y jq
      - wget http://www-eu.apache.org/dist/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz
      - tar xzf apache-maven-3.5.4-bin.tar.gz
      - ln -s apache-maven-3.5.4 maven
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-3.3.0.1492-linux.zip
      - unzip ./sonar-scanner-cli-3.3.0.1492-linux.zip
      - export PATH=$PATH:/sonar-scanner-3.3.0.1492-linux/bin/
  build:
    commands:
      - mvn test     
      - mvn sonar:sonar -Dsonar.login=$LOGIN -Dsonar.host.url=$HOST -Dsonar.projectKey=$Project -Dsonar.organization=$Organization
      - sleep 5
      - curl https://sonarcloud.io/api/qualitygates/project_status?projectKey=$Project >result.json
      - cat result.json
      # Fail the build when the Quality Gate reports ERROR
      - if [ "$(jq -r '.projectStatus.status' result.json)" = "ERROR" ]; then exit 1; fi

 

Note: In the pre_build phase, we download and unzip the SonarQube Scanner CLI package, which is used to interact with the SonarCloud service. You can also look for the latest Sonar Scanner CLI release. In the build phase, we added a command to execute the SonarCloud check and get a response from the project’s quality gate.

The Code Review status of the project can also be verified in the SonarCloud dashboard, as shown in the following screenshot.

SonarCloud Quality gate sample screen

Note: Quality Gate is a feature in SonarCloud that can be configured to ensure coding standards are met and regulated across projects. You can set threshold measures on your projects like code coverage, technical debt measure, number of blocker/critical issues, security rating/unit test pass rate, and more. The last step calls the Quality Gate API to check if the code is satisfying all the conditions set in Quality Gate. Refer to the Quality Gate documentation for more information.

Quality Gate can return four possible responses:

  • ERROR: The project fails the Quality Gate.
  • WARN: The project has some irregularities but is ok to be passed on to production.
  • OK: The project successfully passes the Quality Gate.
  • None: The Quality Gate is not attached to project.

AWS CodeBuild provides several environment variables that you can use in your build commands. CODEBUILD_BUILD_SUCCEEDING indicates whether the current build is succeeding: a value of 1 means the build is succeeding, and 0 means it has failed.

When the Quality Gate returns an ERROR response, the last command in the buildspec exits with a non-zero status, which marks the build as failed. CodePipeline then uses the CodeBuild status to decide whether the pipeline proceeds or stops.

Set up CodePipeline to verify the SonarCloud integration.

Switch to your CodePipeline console to create a pipeline for your repository.

You can integrate SonarCloud in any stage in CodePipeline. In this example, we created a Review stage after the CodePipeline Source stage with CodeBuild used as an action provider, as shown in the following screenshot. Here, we have used a project from our CodeCommit repository to analyze it on SonarCloud. You should be able to link your projects from either GitHub, S3 or CodeCommit as appropriate using CodePipeline.

Sample AWS CodePipeline

Clean Up

  1. Visit the CodePipeline console, select the created pipeline, select Edit, and click Delete.
  2. Visit the CodeBuild console, select the created project, select Actions, and click Delete.
  3. Visit the Secrets Manager console, select the created secret, select Actions, and click Delete.
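A rough CLI equivalent of this cleanup follows; the pipeline name is a placeholder, and the project and secret names are the ones used in this walkthrough.

# Delete the pipeline, the CodeBuild project, and the secret
aws codepipeline delete-pipeline --name <your-pipeline-name>
aws codebuild delete-project --name CodeReview
aws secretsmanager delete-secret --secret-id prod/sonar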

Conclusion

This blog demonstrated how to integrate SonarCloud with CodePipeline using CodeBuild. With this solution, you can automate static code analysis every time you have a check-in in your source code tool. Hopefully this blog post will help you integrate SonarCloud for better code quality before release. Feel free to leave suggestions or approaches on integration in the comments.

About the Authors

 

Raji Krishnamoorthy is an AWS Cloud Architect working for Tata Consultancy Services.
She has close to 16 years of experience in Microsoft .NET, SharePoint, AWS, and other cloud technologies. Currently, she is leading the Public Cloud Industry Transformation Group at Tata Consultancy Services.

 

 

 

Neelam Jain is an AWS Solutions Architect working for Tata Consultancy Services. She has expertise in Java and AWS DevOps technologies. Currently, she is a Senior Developer in the Public Cloud CoE group at Tata Consultancy Services.

DevOps at re:Invent 2019!

Post Syndicated from Matt Dwyer original https://aws.amazon.com/blogs/devops/devops-at-reinvent-2019/

re:Invent 2019 is fast approaching (NEXT WEEK!) and we here at the AWS DevOps blog wanted to take a moment to highlight DevOps-focused presentations, share some tips from experienced re:Invent pros, and highlight a few sessions that still have availability for pre-registration. We’ve broken down the track into one overarching leadership session and four topic areas: (a) architecture, (b) culture, (c) software delivery/operations, and (d) AWS tools, services, and CLI.

In total there will be 145 DevOps track sessions, stretched over 5 days, and divided into four distinct session types:

  • Sessions (34) are one-hour presentations delivered by AWS experts and customer speakers who share their expertise / use cases
  • Workshops (20) are two-hours and fifteen minutes, hands-on sessions where you work in teams to solve problems using AWS services
  • Chalk Talks (41) are interactive white-boarding sessions with a smaller audience. They typically begin with a 10–15-minute presentation delivered by an AWS expert, followed by 45–50-minutes of Q&A
  • Builders Sessions (50) are one-hour, small group sessions with six customers and one AWS expert, who is there to help, answer questions, and provide guidance

Select DevOps-focused sessions have been highlighted below. If you want to view and/or register for any session, including Keynotes, builders’ fairs, and demo theater sessions, you can access the event catalog using your re:Invent registration credentials.

Reserve your seat for AWS re:Invent activities today >>

re:Invent TIP #1: Identify topics you are interested in before attending re:Invent and reserve a seat. We hold space in sessions, workshops, and chalk talks for walk-ups, however, if you want to get into a popular session be prepared to wait in line!

Please see below for select sessions, workshops, and chalk talks that will be conducted during re:Invent.

LEADERSHIP SESSION DELIVERED BY KEN EXNER, DIRECTOR AWS DEVELOPER TOOLS

[Session] Leadership Session: Developer Tools on AWS (DOP210-L) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Ken Exner – Director, AWS Dev Tools, Amazon Web Services
Speaker 2: Kyle Thomson – SDE3, Amazon Web Services

Join Ken Exner, GM of AWS Developer Tools, as he shares the state of developer tooling on AWS, as well as the future of development on AWS. Ken uses insight from his position managing Amazon’s internal tooling to discuss Amazon’s practices and patterns for releasing software to the cloud. Additionally, Ken provides insight and updates across many areas of developer tooling, including infrastructure as code, authoring and debugging, automation and release, and observability. Throughout this session Ken will recap recent launches and show demos for some of the latest features.

re:Invent TIP #2: Leadership Sessions are a topic area’s State of the Union, where AWS leadership will share the vision and direction for a given topic at AWS.re:Invent.

(a) ARCHITECTURE

[Session] Amazon’s approach to failing successfully (DOP208-R; DOP208-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Becky Weiss – Senior Principal Engineer, Amazon Web Services

Welcome to the real world, where things don’t always go your way. Systems can fail despite being designed to be highly available, scalable, and resilient. These failures, if used correctly, can be a powerful lever for gaining a deep understanding of how a system actually works, as well as a tool for learning how to avoid future failures. In this session, we cover Amazon’s favorite techniques for defining and reviewing metrics—watching the systems before they fail—as well as how to do an effective postmortem that drives both learning and meaningful improvement.

[Session] Improving resiliency with chaos engineering (DOP309-R; DOP309-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Olga Hall – Senior Manager, Tech Program Management
Speaker 2: Adrian Hornsby – Principal Evangelist, Amazon Web Services

Failures are inevitable. Regardless of the engineering efforts put into building resilient systems and handling edge cases, sometimes a case beyond our reach turns a benign failure into a catastrophic one. Therefore, we should test and continuously improve our system’s resilience to failures to minimize impact on a user’s experience. Chaos engineering is one of the best ways to achieve that. In this session, you learn how Amazon Prime Video has implemented chaos engineering into its regular testing methods, helping it achieve increased resiliency.

[Session] Amazon’s approach to security during development (DOP310-R; DOP310-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Colm MacCarthaigh – Senior Principal Engineer, Amazon Web Services

At AWS we say that security comes first—and we really mean it. In this session, hear about how AWS teams both minimize security risks in our products and respond to security issues proactively. We talk through how we integrate security reviews, penetration testing, code analysis, and formal verification into the development process. Additionally, we discuss how AWS engineering teams react quickly and decisively to new security risks as they emerge. We also share real-life firefighting examples and the lessons learned in the process.

[Session] Amazon’s approach to building resilient services (DOP342-R; DOP342-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Marc Brooker – Senior Principal Engineer, Amazon Web Services

One of the biggest challenges of building services and systems is predicting the future. Changing load, business requirements, and customer behavior can all change in unexpected ways. In this talk, we look at how AWS builds, monitors, and operates services that handle the unexpected. Learn how to make your own services handle a changing world, from basic design principles to patterns you can apply today.

re:Invent TIP #3: Not sure where to spend your time? Let an AWS Hero give you some pointers. AWS Heroes are prominent AWS advocates who are passionate about sharing AWS knowledge with others. They have written guides to help attendees find relevant activities by providing recommendations based on specific demographics or areas of interest.

(b) CULTURE

[Session] Driving change and building a high-performance DevOps culture (DOP207-R; DOP207-R1)

Speaker: Mark Schwartz – Enterprise Strategist, Amazon Web Services

When it comes to digital transformation, every enterprise is different. There is often a person or group with a vision, knowledge of good practices, a sense of urgency, and the energy to break through impediments. They may be anywhere in the organizational structure: high, low, or—in a typical scenario—somewhere in middle management. Mark Schwartz, an enterprise strategist at AWS and the author of “The Art of Business Value” and “A Seat at the Table: IT Leadership in the Age of Agility,” shares some of his research into building a high-performance culture by driving change from every level of the organization.

[Session] Amazon’s approach to running service-oriented organizations (DOP301-R; DOP301-R1; DOP301-R2)

Speaker: Andy Troutman – Director AWS Developer Tools, Amazon Web Services

Amazon’s “two-pizza teams” are famously small teams that support a single service or feature. Each of these teams has the autonomy to build and operate their service in a way that best supports their customers. But how do you coordinate across tens, hundreds, or even thousands of two-pizza teams? In this session, we explain how Amazon coordinates technology development at scale by focusing on strategies that help teams coordinate while maintaining autonomy to drive innovation.

re:Invent TIP #4: The maximum number of 60-minute sessions you can attend during re:Invent is 24! These offerings (sessions, chalk talks, and builders sessions) will usually make up the bulk of your agenda.

(c) SOFTWARE DELIVERY AND OPERATIONS

[Session] Strategies for securing code in the cloud and on premises (DOP320-R; DOP320-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Craig Smith – Senior Solutions Architect
Speaker 2: Lee Packham – Solutions Architect

Some people prefer to keep their code and tooling on premises, though this can create headaches and slow teams down. Others prefer keeping code off of laptops that can be misplaced. In this session, we walk through the alternatives and recommend best practices for securing your code in cloud and on-premises environments. We demonstrate how to use services such as Amazon WorkSpaces to keep code secure in the cloud. We also show how to connect tools such as Amazon Elastic Container Registry (Amazon ECR) and AWS CodeBuild with your on-premises environments so that your teams can go fast while keeping your data off of the public internet.

[Session] Deploy your code, scale your application, and lower Cloud costs using AWS Elastic Beanstalk (DOP326) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Prashant Prahlad – Sr. Manager

You can effortlessly convert your code into web applications without having to worry about provisioning and managing AWS infrastructure, applying patches and updates to your platform, or using a variety of tools to monitor the health of your application. In this session, we show how anyone, not just professional developers, can use AWS Elastic Beanstalk in various scenarios: from an administrator moving a Windows .NET workload into the cloud, to a developer building a containerized enterprise app as a Docker image, to a data scientist deploying a machine learning model, all without the need to understand or manage the infrastructure details.

[Session] Amazon’s approach to high-availability deployment (DOP404-R; DOP404-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Peter Ramensky – Senior Manager

Continuous-delivery failures can lead to reduced service availability and bad customer experiences. To maximize the rate of successful deployments, Amazon’s development teams implement guardrails in the end-to-end release process to minimize deployment errors, with a goal of achieving zero deployment failures. In this session, learn about the continuous-delivery practices we have developed to raise the bar and prevent costly deployment failures.

[Session] Introduction to DevOps on AWS (DOP209-R; DOP209-R1)

Speaker 1: Jonathan Weiss – Senior Manager
Speaker 2: Sebastien Stormacq – Senior Technical Evangelist

How can you accelerate the delivery of new, high-quality services? Are you able to experiment and get feedback quickly from your customers? How do you scale your development team from 1 to 1,000? To answer these questions, it is essential to leverage some key DevOps principles and use CI/CD pipelines so you can iterate on and quickly release features. In this talk, we walk you through the journey of a single developer building a successful product and scaling their team and processes to hundreds or thousands of deployments per day. We also walk you through best practices and using AWS tools to achieve your DevOps goals.

[Workshop] DevOps essentials: Introductory workshop on CI/CD practices (DOP201-R; DOP201-R1; DOP201-R2; DOP201-R3)

Speaker 1: Leo Zhadanovsky – Principal Solutions Architect
Speaker 2: Karthik Thirugnanasambandam – Partner Solutions Architect

In this session, learn how to effectively leverage various AWS services to improve developer productivity and reduce the overall time to market for new product capabilities. We demonstrate a prescriptive approach to incrementally adopt and embrace some of the best practices around continuous integration and delivery using AWS developer tools and third-party solutions, including AWS CodeCommit, AWS CodeBuild, Jenkins, AWS CodePipeline, AWS CodeDeploy, AWS X-Ray, and AWS Cloud9. We also highlight some best practices and productivity tips that can help make your software release process fast, automated, and reliable.

[Workshop] Implementing GitFlow with AWS tools (DOP202-R; DOP202-R1; DOP202-R2)

Speaker 1: Amit Jha – Sr. Solutions Architect
Speaker 2: Ashish Gore – Sr. Technical Account Manager

Utilizing short-lived feature branches is the development method of choice for many teams. In this workshop, you learn how to use AWS tools to automate merge-and-release tasks. We cover high-level frameworks for how to implement GitFlow using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. You also get an opportunity to walk through a prebuilt example and examine how the framework can be adopted for individual use cases.

[Chalk Talk] Generating dynamic deployment pipelines with AWS CDK (DOP311-R; DOP311-R1; DOP311-R2)

Speaker 1: Flynn Bundy – AppDev Consultant
Speaker 2: Koen van Blijderveen – Senior Security Consultant

In this session we dive deep into dynamically generating deployment pipelines that deploy across multiple AWS accounts and Regions. Using the power of the AWS Cloud Development Kit (AWS CDK), we demonstrate how to simplify and abstract the creation of deployment pipelines to suit a range of scenarios. We highlight how AWS CodePipeline—along with AWS CodeBuild, AWS CodeCommit, and AWS CodeDeploy—can be structured together with the AWS deployment framework to get the most out of your infrastructure and application deployments.

[Chalk Talk] Customize AWS CloudFormation with open-source tools (DOP312-R; DOP312-R1; DOP312-E)

Speaker 1: Luis Colon – Senior Developer Advocate
Speaker 2: Ryan Lohan – Senior Software Engineer

In this session, we showcase some of the best open-source tools available for AWS CloudFormation customers, including conversion and validation utilities. Get a glimpse of the many open-source projects that you can use as you create and maintain your AWS CloudFormation stacks.

[Chalk Talk] Optimizing Java applications for scale on AWS (DOP314-R; DOP314-R1; DOP314-R2)

Speaker 1: Sam Fink – SDE II
Speaker 2: Kyle Thomson – SDE3

Executing at scale in the cloud can require more than the conventional best practices. During this talk, we offer a number of different Java-related tools you can add to your AWS tool belt to help you more efficiently develop Java applications on AWS—as well as strategies for optimizing those applications. We adapt the talk on the fly to cover the topics that interest the group most, including more easily accessing Amazon DynamoDB, handling high-throughput uploads to and downloads from Amazon Simple Storage Service (Amazon S3), troubleshooting Amazon ECS services, working with local AWS Lambda invocations, optimizing the Java SDK, and more.

[Chalk Talk] Securing your CI/CD tools and environments (DOP316-R; DOP316-R1; DOP316-R2)

Speaker: Leo Zhadanovsky – Principal Solutions Architect

In this session, we discuss how to configure security for AWS CodePipeline, deployments in AWS CodeDeploy, builds in AWS CodeBuild, and git access with AWS CodeCommit. We discuss AWS Identity and Access Management (IAM) best practices, to allow you to set up least-privilege access to these services. We also demonstrate how to ensure that your pipelines meet your security and compliance standards with the CodePipeline AWS Config integration, as well as manual approvals. Lastly, we show you best-practice patterns for integrating security testing of your deployment artifacts inside of your CI/CD pipelines.

[Chalk Talk] Amazon’s approach to automated testing (DOP317-R; DOP317-R1; DOP317-R2)

Speaker 1: Carlos Arguelles – Principal Engineer
Speaker 2: Charlie Roberts – Senior SDET

Join us for a session about how Amazon uses testing strategies to build a culture of quality. Learn Amazon’s best practices around load testing, unit testing, integration testing, and UI testing. We also discuss what parts of testing are automated and how we take advantage of tools, and share how we strategize to fail early to ensure minimum impact to end users.

[Chalk Talk] Building and deploying applications on AWS with Python (DOP319-R; DOP319-R1; DOP319-R2)

Speaker 1: James Saryerwinnie – Senior Software Engineer
Speaker 2: Kyle Knapp – Software Development Engineer

In this session, hear from core developers of the AWS SDK for Python (Boto3) as we walk through the design of sample Python applications. We cover best practices in using Boto3 and look at other libraries to help build these applications, including AWS Chalice, a serverless microframework for Python. Additionally, we discuss testing and deployment strategies to manage the lifecycle of your applications.

[Chalk Talk] Deploying AWS CloudFormation StackSets across accounts and Regions (DOP325-R; DOP325-R1)

Speaker 1: Mahesh Gundelly – Software Development Manager
Speaker 2: Prabhu Nakkeeran – Software Development Manager

AWS CloudFormation StackSets can be a critical tool to efficiently manage deployments of resources across multiple accounts and regions. In this session, we cover how AWS CloudFormation StackSets can help you ensure that all of your accounts have the proper resources in place to meet security, governance, and regulation requirements. We also cover how to make the most of the latest functionalities and discuss best practices, including how to plan for safe deployments with minimal blast radius for critical changes.

[Chalk Talk] Monitoring and observability of serverless apps using AWS X-Ray (DOP327-R; DOP327-R1; DOP327-R2)

Speaker 1 (R, R1, R2): Shengxin Li – Software Development Engineer
Speaker 2 (R, R1): Sirirat Kongdee – Solutions Architect
Speaker 3 (R2): Eric Scholz – Solutions Architect, Amazon

Monitoring and observability are essential parts of DevOps best practices. You need monitoring to debug and trace unhandled errors, performance bottlenecks, and customer impact in the distributed nature of a microservices architecture. In this chalk talk, we show you how to integrate the AWS X-Ray SDK to your code to provide observability to your overall application and drill down to each service component. We discuss how X-Ray can be used to analyze, identify, and alert on performance issues and errors and how it can help you troubleshoot application issues faster.

[Chalk Talk] Optimizing deployment strategies for speed & safety (DOP341-R; DOP341-R1; DOP341-R2)

Speaker: Karan Mahant – Software Development Manager, Amazon

Modern application development moves fast and demands continuous delivery. However, the greatest risk to an application’s availability can occur during deployments. Join us in this chalk talk to learn about deployment strategies for web servers and for Amazon EC2, container-based, and serverless architectures. Learn how you can optimize your deployments to increase productivity during development cycles and mitigate common risks when deploying to production by using canary and blue/green deployment strategies. Further, we share our learnings from operating production services at AWS.

[Chalk Talk] Continuous integration using AWS tools (DOP216-R; DOP216-R1; DOP216-R2)

Speaker: Richard Boyd – Sr Developer Advocate, Amazon Web Services

Today, more teams are adopting continuous-integration (CI) techniques to enable collaboration, increase agility, and deliver a high-quality product faster. Cloud-based development tools such as AWS CodeCommit and AWS CodeBuild can enable teams to easily adopt CI practices without the need to manage infrastructure. In this session, we showcase best practices for continuous integration and discuss how to effectively use AWS tools for CI.

re:Invent TIP #5: If you’re traveling to another session across campus, give yourself at least 60 minutes!

(d) AWS TOOLS, SERVICES, AND CLI

[Session] Best practices for authoring AWS CloudFormation (DOP302-R; DOP302-R1)

Speaker 1: Olivier Munn – Sr Product Manager Technical, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

Incorporating infrastructure as code into software development practices can help teams and organizations improve automation and throughput without sacrificing quality and uptime. In this session, we cover multiple best practices for writing, testing, and maintaining AWS CloudFormation template code. You learn about IDE plug-ins, reusability, testing tools, modularizing stacks, and more. During the session, we also review sample code that showcases some of the best practices in a way that lends more context and clarity.

[Chalk Talk] Using AWS tools to author and debug applications (DOP215-R; DOP215-R1; DOP215-R2) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Fabian Jakobs – Principal Engineer, Amazon Web Services

Every organization wants its developers to be faster and more productive. AWS Cloud9 lets you create isolated cloud-based development environments for each project and access them from a powerful web-based IDE anywhere, anytime. In this session, we demonstrate how to use AWS Cloud9 and provide an overview of IDE toolkits that can be used to author application code.

[Session] Migrating .NET frameworks to the cloud (DOP321) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Robert Zhu – Principal Technical Evangelist, Amazon Web Services

Learn how to migrate your .NET application to AWS with minimal steps. In this demo-heavy session, we share best practices for migrating a three-tiered application on ASP.NET and SQL Server to AWS. Throughout the process, you get to see how AWS Toolkit for Visual Studio can enable you to fully leverage AWS services such as AWS Elastic Beanstalk, modernizing your application for more agile and flexible development.

[Session] Deep dive into AWS Cloud Development Kit (DOP402-R; DOP402-R1)

Speaker 1: Elad Ben-Israel – Principal Software Engineer, Amazon Web Services
Speaker 2: Jason Fulghum – Software Development Manager, Amazon Web Services

The AWS Cloud Development Kit (AWS CDK) is a multi-language, open-source framework that enables developers to harness the full power of familiar programming languages to define reusable cloud components and provision applications built from those components using AWS CloudFormation. In this session, you develop an AWS CDK application and learn how to quickly assemble AWS infrastructure. We explore the AWS Construct Library and show you how easy it is to configure your cloud resources, manage permissions, connect event sources, and build and publish your own constructs.

[Session] Introduction to the AWS CLI v2 (DOP406-R; DOP406-R1)

Speaker 1: James Saryerwinnie – Senior Software Engineer, Amazon Web Services
Speaker 2: Kyle Knapp – Software Development Engineer, Amazon Web Services

The AWS Command Line Interface (AWS CLI) is a command-line tool for interacting with AWS services and managing your AWS resources. We’ve taken all of the lessons learned from AWS CLI v1 (launched in 2013), and have been working on AWS CLI v2—the next major version of the AWS CLI—for the past year. AWS CLI v2 includes features such as improved installation mechanisms, a better getting-started experience, interactive workflows for resource management, and new high-level commands. Come hear from the core developers of the AWS CLI about how to upgrade and start using AWS CLI v2 today.

[Session] What’s new in AWS CloudFormation (DOP408-R; DOP408-R1; DOP408-R2)

Speaker 1: Jing Ling – Senior Product Manager, Amazon Web Services
Speaker 2: Luis Colon – Senior Developer Advocate, Amazon Web Services

AWS CloudFormation is one of the most widely used AWS tools, enabling infrastructure as code, deployment automation, repeatability, compliance, and standardization. In this session, we cover the latest improvements and best practices for AWS CloudFormation customers in particular, and for seasoned infrastructure engineers in general. We cover new features and improvements that span many use cases, including programmability options, cross-region and cross-account automation, operational safety, and additional integration with many other AWS services.

[Workshop] Get hands-on with Python/boto3 with no or minimal Python experience (DOP203-R; DOP203-R1; DOP203-R2)

Speaker 1: Herbert-John Kelly – Solutions Architect, Amazon Web Services
Speaker 2: Carl Johnson – Enterprise Solutions Architect, Amazon Web Services

Learning a programming language can seem like a huge investment. However, solving strategic business problems using modern technology approaches, like machine learning and big-data analytics, often requires some programming knowledge. In this workshop, you learn the basics of using Python, one of the most popular programming languages, which can be used for small tasks like simple operations automation or for large tasks like analyzing billions of records and training machine-learning models. You also learn about and use the AWS SDK (software development kit) for Python, called boto3, to write a Python program running on and interacting with resources in AWS.

[Workshop] Building reusable AWS CloudFormation templates (DOP304-R; DOP304-R1; DOP304-R2)

Speaker 1: Chelsey Salberg – Front End Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

AWS CloudFormation gives you an easy way to define your infrastructure as code, but are you using it to its full potential? In this workshop, we take real-world architecture from a sandbox template to production-ready reusable code. We start by reviewing an initial template, which you update throughout the session to incorporate AWS CloudFormation features, like nested stacks and intrinsic functions. By the end of the workshop, expect to have a set of AWS CloudFormation templates that demonstrate the same best practices used in AWS Quick Starts.

[Workshop] Building a scalable serverless application with AWS CDK (DOP306-R; DOP306-R1; DOP306-R2; DOP306-R3)

Speaker 1: David Christiansen – Senior Partner Solutions Architect, Amazon Web Services
Speaker 2: Daniele Stroppa – Solutions Architect, Amazon Web Services

Dive into AWS and build a web application with the AWS Mythical Mysfits tutorial. In this workshop, you build a serverless application using AWS Lambda, Amazon API Gateway, and the AWS Cloud Development Kit (AWS CDK). Through the tutorial, you get hands-on experience using AWS CDK to model and provision a serverless distributed application infrastructure, you connect your application to a backend database, and you capture and analyze data on user behavior. Other AWS services that are utilized include Amazon Kinesis Data Firehose and Amazon DynamoDB.

[Chalk Talk] Assembling an AWS CloudFormation authoring tool chain (DOP313-R; DOP313-R1; DOP313-R2)

Speaker 1: Nathan McCourtney – Sr System Development Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

In this session, we provide a prescriptive tool chain and methodology to improve your coding productivity as you create and maintain AWS CloudFormation stacks. We cover everything from authoring recommendations, such as editors and plugins, to setting up a deployment pipeline for your AWS CloudFormation code.

[Chalk Talk] Build using JavaScript with AWS Amplify, AWS Lambda, and AWS Fargate (DOP315-R; DOP315-R1; DOP315-R2)

Speaker 1: Trivikram Kamat – Software Development Engineer, Amazon Web Services
Speaker 2: Vinod Dinakaran – Software Development Manager, Amazon Web Services

Learn how to build applications with AWS Amplify on the front end and AWS Fargate and AWS Lambda on the back end, using protocols like HTTP/2 and the JavaScript SDKs in the browser and in Node.js. Leverage the AWS SDK for JavaScript’s modular NPM packages in resource-constrained environments, and benefit from its built-in async features to run your Node.js applications, mobile applications, and single-page applications at scale.

[Chalk Talk] Scaling CI/CD adoption using AWS CodePipeline and AWS CloudFormation (DOP318-R; DOP318-R1; DOP318-R2)

Speaker 1: Andrew Baird – Principal Solutions Architect, Amazon Web Services
Speaker 2: Neal Gamradt – Applications Architect, WarnerMedia

Enabling CI/CD across your organization through repeatable patterns and infrastructure-as-code templates can unlock development speed while encouraging best practices. The SEAD Architecture team at WarnerMedia helps encourage CI/CD adoption across their company. They do so by creating and maintaining easily extensible infrastructure-as-code patterns for creating new services and deploying to them automatically using CI/CD. In this session, learn about the patterns they have created and the lessons they have learned.

re:Invent TIP #6: There are lots of extra activities at re:Invent. Expect your evenings to fill up onsite! Check out the peculiar programs, including board games, bingo, arts & crafts, or ’80s sing-alongs…

Notifying 3rd Party Services of CodeBuild State Changes

Post Syndicated from Nick Lee original https://aws.amazon.com/blogs/devops/notifying-3rd-party-services-of-codebuild-state-changes/

It is often useful to notify other systems of the build status of a code change, such as by creating release tickets in your project-tracking software when a build succeeds, or posting a message to your team’s chat solution. A previous blog post showed you how to integrate AWS Lambda and Amazon SNS to extend AWS CodeCommit to send email notifications for file changes. This blog post shows you how to integrate AWS Lambda, AWS Systems Manager Parameter Store, Amazon DynamoDB and Amazon CloudWatch Events to extend AWS CodeBuild by adding webhook functionality, allowing you to make authenticated API calls to 3rd-party services in response to CodeBuild state changes. It also provides an example of how to use this solution to create an issue in JIRA, a popular issue and project-tracking software solution, in response to a CodeBuild build status change.

Some of the services used include:

  • Amazon DynamoDB: a fully-managed key-value and document database that delivers single-digit millisecond performance at any scale. This solution uses it as a registry for webhook receivers, and takes advantage of its on-demand capacity mode so that you only pay for the resources you consume.
  • AWS Lambda: a popular serverless service that lets you run code without provisioning or managing servers. This solution uses a Lambda function to query DynamoDB for a list of webhook receivers and to notify those receivers of CodeBuild build status changes.
  • Amazon CloudWatch Events: Amazon CloudWatch Events delivers a near-real-time stream of system events that allows you to detect changes to your AWS resources and to set up rules that respond to those changes (for example, by invoking a Lambda function in response to build notifications).
  • AWS Systems Manager Parameter Store: a secure, hierarchical storage solution which can be used to store items such as configuration data, passwords, database strings, and license codes. This solution uses SSM Parameter Store to store the HTTP endpoints for 3rd party providers and custom headers, rather than storing them in plaintext in DynamoDB.

To help you quickly deploy the solution, I have made it available as an AWS CloudFormation template. AWS CloudFormation is a management tool that provides a common language to describe and provision all of the infrastructure resources in AWS.

Overview

The following diagram shows how this solution uses AWS services to invoke 3rd-party services in response to CodeBuild state changes.

An overview of the workflow for this solution, showing CodeBuild publishing to CloudWatch Events which invokes the Lambda to notify the 3rd party service.

CodeBuild publishes several useful CloudWatch events, which can notify you of build state changes and build phase transitions. By setting up a CloudWatch event rule, you can detect when a CodeBuild job enters a specific state. In this solution, I create a CloudWatch event rule which captures CodeBuild state changes for all AWS CodeBuild projects in an account, then invokes a Lambda function to handle these change notifications. When this Lambda function is triggered, the following steps are executed:

  1. Query the CodeBuildWebhooks DynamoDB table to find any registered webhook receivers for the CodeBuild project which triggered the event rule.
  2. For each registered receiver:
    1. Obtain the HTTP endpoint and any custom headers from SSM Parameter Store. Some headers/endpoints may be considered sensitive, so the solution stores them in SSM Parameter Store as SecureStrings where needed. The DynamoDB records reference the relevant SSM Parameter by name. The parameter names must all be prefixed with /webhooks/ in order for the webhook Lambda function to access them.
    2. After obtaining the URL for the webhook receiver from SSM Parameter Store, the Lambda function checks if the record from DynamoDB contains a custom HTTP body template. If so, it loads this template, substituting any placeholder values. If no custom template is found, a default template is used.
    3. Finally, the HTTP request is sent with the processed body template.

I use Python and Boto 3 to implement this function. The full source code is published on GitHub. You can find it in the aws-codebuild-webhooks repository.
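The published function is more complete, but the following condensed sketch, written against the steps above, shows roughly how the handler fits together. It assumes the standard CodeBuild Build State Change event shape, that project is the partition key of the CodeBuildWebhooks table, and an illustrative default body template; it is not the repository’s exact code.

import json
from string import Template

import boto3
import urllib3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
ssm = boto3.client("ssm")
http = urllib3.PoolManager()

# Assumed default body, used when a receiver has no custom "template" attribute.
DEFAULT_TEMPLATE = '{"text": "AWS CodeBuild project $PROJECT is now $STATUS"}'

def lambda_handler(event, context):
    project = event["detail"]["project-name"]
    status = event["detail"]["build-status"]

    # 1. Look up registered webhook receivers for this project
    #    (assumes "project" is the table's partition key).
    table = dynamodb.Table("CodeBuildWebhooks")
    receivers = table.query(KeyConditionExpression=Key("project").eq(project))["Items"]

    for receiver in receivers:
        # Only notify receivers registered for this build status.
        if status not in receiver.get("statuses", []):
            continue

        # 2a. Resolve the endpoint URL and optional headers from SSM Parameter Store.
        url = ssm.get_parameter(
            Name=receiver["hook_url_param_name"], WithDecryption=True
        )["Parameter"]["Value"]
        headers = {"Content-Type": "application/json"}
        if "hook_headers_param_name" in receiver:
            headers.update(json.loads(ssm.get_parameter(
                Name=receiver["hook_headers_param_name"], WithDecryption=True
            )["Parameter"]["Value"]))

        # 2b. Render the request body, substituting the $PROJECT and $STATUS placeholders.
        body = Template(receiver.get("template", DEFAULT_TEMPLATE)).safe_substitute(
            PROJECT=project, STATUS=status
        )

        # 2c. Send the webhook request.
        http.request("POST", url, body=body.encode("utf-8"), headers=headers)

The repository version also adds error handling and logging, so treat the GitHub source as the authoritative implementation.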

Getting started

The following sections describe the steps to deploy and use the solution.

Deploying the Solution

There is an AWS CloudFormation template, template.yaml, in the source code, which uses the AWS Serverless Application Model (AWS SAM) to define the required components of this solution. For convenience, I have made it available as a one-click launch template.

When launching the stack, the default behavior is to expect SSM parameters to be encrypted using the AWS managed AWS KMS CMK for SSM. However, you can input a different key ID as the value for the SSMKeyId parameter if required. The launch stack button deploys the solution in us-east-1; links for other Regions are available on the solution’s GitHub page.

The template deploys:

  • A Lambda function and associated IAM role for sending HTTP requests
  • A DynamoDB table for registering webhook receivers
  • A CloudWatch event rule for triggering the Lambda function in response to CodeBuild events
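For reference, a rule equivalent to the one the template provisions can be sketched with Boto 3 as shown below. The rule name, target ID, and function ARN are placeholders, and in a real deployment you would also grant CloudWatch Events permission to invoke the function (for example, with lambda add-permission); the template takes care of all of this for you.

import json

import boto3

events = boto3.client("events")

# Match CodeBuild build state changes for every project in the account.
# Narrow "build-status" (for example, to ["FAILED"]) if you only care about failures.
event_pattern = {
    "source": ["aws.codebuild"],
    "detail-type": ["CodeBuild Build State Change"],
    "detail": {"build-status": ["IN_PROGRESS", "SUCCEEDED", "FAILED", "STOPPED"]},
}

events.put_rule(
    Name="codebuild-state-change-example",  # placeholder rule name
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

events.put_targets(
    Rule="codebuild-state-change-example",
    Targets=[{
        "Id": "webhook-lambda",
        # Placeholder ARN: point this at the webhook Lambda function.
        "Arn": "arn:aws:lambda:us-east-1:111111111111:function:codebuild-webhooks",
    }],
)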

The Lambda function code demonstrates how to make authenticated HTTP requests to 3rd-party services. You can extend the sample code to add additional features, such as deploying the Lambda function in a VPC to access private resources like an on-premises Jira instance.

Example: Creating a Ticket in Jira

To demonstrate the solution, I set up a webhook to create a bug ticket in Jira whenever a build fails. In order to follow this example, you need to install and configure the AWS CLI.

First off, I store the Jira URL securely in the SSM parameter store:

aws ssm put-parameter --cli-input-json '{
  "Name": "/webhooks/jira-issues-webhook-url",
  "Value": "https://<my-jira-server>/rest/api/latest/issue/",
  "Type": "String",
  "Description": "Jira issues Rest API URL"
}'

For this sample, I use basic authentication with the JIRA Rest API. After following Jira’s instructions to generate a BASE64-encoded authorization string, I store the headers as a JSON string in SSM:

aws ssm put-parameter --cli-input-json '{
  "Name": "/webhooks/jira-basic-auth-headers",
  "Value": "{\"Authorization\": \"Basic <base64 encoded useremail:api_token>\"}",
  "Type": "SecureString",
  "Description": "Jira basic auth headers for CodeBuild webhooks"
}'

For more authentication options, consult the Jira docs.

Now I need to register the webhook receiver in my CodeBuildWebhooks DynamoDB table. In order to make requests to the JIRA REST API, my Lambda function must supply a JSON string containing a payload accepted by the Jira API for creating issues. To do this, I save the following JSON as item.json in my current working directory:

{
  "project": {"S": "MyCodeBuildProject"},
  "hook_url_param_name": {"S": "/webhooks/jira-issues-webhook-url"},
  "hook_headers_param_name": {"S": "/webhooks/jira-basic-auth-headers"},
  "statuses": {"L": [{"S": "FAILED"}]},
  "template": {"S": "{\"fields\": {\"project\":{\"id\": \"10000\"},\"summary\": \"$PROJECT build failing\",\"description\": \"AWS CodeBuild project $PROJECT latest build $STATUS\",\"issuetype\":{\"id\": \"10004\"}}}"}
}

In the template, my project ID is 10000 and the bug issue type is 10004. You can obtain this information from your JIRA instance by invoking the “createmeta” API.
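If you are not sure which IDs apply to your instance, you can look them up by calling the createmeta endpoint with the same basic-auth header used above. The sketch below assumes the common Jira REST API response shape (a projects list with nested issuetypes); paths and fields can vary between Jira versions, so adjust as needed.

import json
import urllib.request

# Placeholders: substitute your Jira base URL and the BASE64-encoded credentials from earlier.
JIRA_BASE = "https://<my-jira-server>"
AUTH_HEADER = "Basic <base64 encoded useremail:api_token>"

request = urllib.request.Request(
    f"{JIRA_BASE}/rest/api/latest/issue/createmeta",
    headers={"Authorization": AUTH_HEADER, "Accept": "application/json"},
)

with urllib.request.urlopen(request) as response:
    meta = json.load(response)

# Print each project's ID alongside its issue type names and IDs (for example, Bug).
for proj in meta.get("projects", []):
    issue_types = [(t["name"], t["id"]) for t in proj.get("issuetypes", [])]
    print(proj["key"], proj["id"], issue_types)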

Finally, I register the webhook receiver in my CodeBuildWebhooks DynamoDB table, referencing the JSON file I just created:

aws dynamodb put-item --table-name CodeBuildWebhooks --item file://item.json

That’s it! The next time my CodeBuild project fails, an issue is created in JIRA for someone to action, as shown in the following screenshot:

A Jira Kanban board showing the newly created issue for the failing CodeBuild project

You could extend this example to populate other Jira fields, such as Labels, Components, or Assignee.

Cleanup

To remove the resources created as part of this blog post, first delete the stack:

aws cloudformation delete-stack --stack-name aws-codebuild-webhooks

Then delete the parameters from SSM Parameter Store:

aws ssm delete-parameters --names /webhooks/jira-issues-webhook-url /webhooks/jira-basic-auth-headers

Conclusion

In this blog post, I showed you how to use an AWS CloudFormation template to quickly build a sample solution that can help you integrate AWS CodeBuild with other 3rd party tools via AWS Lambda.

The CloudFormation template used in this post and Lambda function can be found in the aws-codebuild-webhooks GitHub repository, along with other examples.

If you have questions or other feedback about this example, please open an issue or submit a pull request.

About the Author

Nick Lee is part of the AWS Solution Builders team in the UK. Nick works with the AWS Solution Architecture community to create standardized tools, code samples, demonstrations and quick starts.