Tag Archives: devops

AWS Solutions Constructs – A Library of Architecture Patterns for the AWS CDK

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-solutions-constructs-a-library-of-architecture-patterns-for-the-aws-cdk/

Cloud applications are built using multiple components, such as virtual servers, containers, serverless functions, storage buckets, and databases. Being able to provision and configure these resources in a safe, repeatable way is incredibly important to automate your processes and let you focus on the unique parts of your implementation.

With the AWS Cloud Development Kit, you can leverage the expressive power of your favorite programming languages to model your applications. You can use high-level components called constructs, preconfigured with “sensible defaults” that you can customize, to quickly build a new application. The CDK provisions your resources using AWS CloudFormation to get all the benefits of managing your infrastructure as code. One of the reasons I like the CDK, is that you can compose and share your own custom components as higher-level constructs.

As you can imagine, there are recurring patterns that can be useful to more than one customer. For this reason, today we are launching the AWS Solutions Constructs, an open source extension library for the CDK that provides well-architected patterns to help you build your unique solutions. CDK constructs mostly cover single services. AWS Solutions Constructs provide multi-service patterns that combine two or more CDK resources, and implement best practices such as logging and encryption.

Using AWS Solutions Constructs
To see the power of a pattern-based approach, let’s take a look at how that works when building a new application. As an example, I want to build an HTTP API to store data in an Amazon DynamoDB table. To keep the content of the table small, I can use DynamoDB Time to Live (TTL) to expire items after a few days. After the TTL expires, data is deleted from the table and sent, via DynamoDB Streams, to an AWS Lambda function that archives the expired data to Amazon Simple Storage Service (S3).

To build this application, I can use a few components:

  • An Amazon API Gateway endpoint for the API.
  • A DynamoDB table to store data.
  • A Lambda function to process the API requests, and store data in the DynamoDB table.
  • DynamoDB Streams to capture data changes.
  • A Lambda function processing data changes to archive the expired data.

Can I make it simpler? Looking at the available patterns in the AWS Solutions Constructs, I find two that can help me build my app:

  • aws-apigateway-lambda, a Construct that implements an API Gateway REST API connected to a Lambda function. As an example of the “sensible defaults” used by AWS Solutions Constructs, this pattern enables CloudWatch logging for the API Gateway.
  • aws-dynamodb-stream-lambda, a Construct implementing a DynamoDB table streaming data changes to a Lambda function with least-privilege permissions.

To build the final architecture, I simply connect those two Constructs together:

I am using TypeScript to define the CDK stack, and Node.js for the Lambda functions. Let’s start with the CDK stack:

 

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as apigw from '@aws-cdk/aws-apigateway';
import * as dynamodb from '@aws-cdk/aws-dynamodb';
import { ApiGatewayToLambda } from '@aws-solutions-constructs/aws-apigateway-lambda';
import { DynamoDBStreamToLambda } from '@aws-solutions-constructs/aws-dynamodb-stream-lambda';

export class DemoConstructsStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const apiGatewayToLambda = new ApiGatewayToLambda(this, 'ApiGatewayToLambda', {
      deployLambda: true,
      lambdaFunctionProps: {
        code: lambda.Code.fromAsset('lambda'),
        runtime: lambda.Runtime.NODEJS_12_X,
        handler: 'restApi.handler'
      },
      apiGatewayProps: {
        defaultMethodOptions: {
          authorizationType: apigw.AuthorizationType.NONE
        }
      }
    });

    const dynamoDBStreamToLambda = new DynamoDBStreamToLambda(this, 'DynamoDBStreamToLambda', {
      deployLambda: true,
      lambdaFunctionProps: {
        code: lambda.Code.fromAsset('lambda'),
        runtime: lambda.Runtime.NODEJS_12_X,
        handler: 'processStream.handler'
      },
      dynamoTableProps: {
        tableName: 'my-table',
        partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
        timeToLiveAttribute: 'ttl'
      }
    });

    const apiFunction = apiGatewayToLambda.lambdaFunction;
    const dynamoTable = dynamoDBStreamToLambda.dynamoTable;

    dynamoTable.grantReadWriteData(apiFunction);
    apiFunction.addEnvironment('TABLE_NAME', dynamoTable.tableName);
  }
}

At the beginning of the stack, I import the standard CDK constructs for the Lambda function, the API Gateway endpoint, and the DynamoDB table. Then, I add the two patterns from the AWS Solutions Constructs, ApiGatewayToLambda and DynamoDBStreamToLambda.

After declaring the two ApiGatewayToLambda and DynamoDBStreamToLambda constructs, I store the Lambda function created by the ApiGatewayToLambda construct, and the DynamoDB table created by the DynamoDBStreamToLambda construct, in two variables.

At the end of the stack, I “connect” the two patterns together by granting permissions to the Lambda function to read/write in the DynamoDB table, and add the name of the DynamoDB table to the environment of the Lambda function, so that it can be used in the function code to store data in the table.

The code of the two Lambda functions is in the lambda folder of the CDK application. I am using the Node.js 12 runtime.

The restApi.js function implements the API and writes data to the DynamoDB table. The URL path is used as the partition key, and all the query string parameters in the URL are stored as attributes. The TTL for the item is computed by adding a time window of 7 days to the current time.

const { DynamoDB } = require("aws-sdk");

const docClient = new DynamoDB.DocumentClient();

const TABLE_NAME = process.env.TABLE_NAME;
const TTL_WINDOW = 7 * 24 * 60 * 60; // 7 days expressed in seconds

exports.handler = async function (event) {

  const item = event.queryStringParameters;
  item.id = event.pathParameters.proxy;

  const now = new Date(); 
  item.ttl = Math.round(now.getTime() / 1000) + TTL_WINDOW;

  let statusCode = 204;

  try {
    await docClient.put({
      TableName: TABLE_NAME,
      Item: item
    }).promise();
  } catch (err) {
    // DocumentClient.put rejects its promise on failure, so errors are handled here.
    console.error('request: ', JSON.stringify(event, undefined, 2));
    console.error('error: ', err);
    statusCode = 500;
  }

  return {
    statusCode: statusCode
  };
};

The processStream.js function processes change data capture records from the DynamoDB stream, looking for items deleted by TTL. The archive functionality is not implemented in this sample code.

exports.handler = async function (event) {
  event.Records.forEach((record) => {
    console.log('Stream record: ', JSON.stringify(record, null, 2));
    // userIdentity is only present for system deletions, so guard before checking it
    if (record.userIdentity &&
      record.userIdentity.type == "Service" &&
      record.userIdentity.principalId == "dynamodb.amazonaws.com") {

      // Record deleted by DynamoDB Time to Live (TTL)

      // I can archive the record to S3, for example using Kinesis Data Firehose.
    }
  });
};

Let’s see if this works! First, I need to install all dependencies. To simplify dependency management, each release of AWS Solutions Constructs is linked to the corresponding version of the CDK. In this case, I am using version 1.46.0 for both the CDK and the AWS Solutions Constructs patterns. The first three commands install plain CDK constructs. The last two commands install the AWS Solutions Constructs patterns I am using for this application.

npm install @aws-cdk/aws-lambda@1.46.0
npm install @aws-cdk/aws-apigateway@1.46.0
npm install @aws-cdk/aws-dynamodb@1.46.0
npm install @aws-solutions-constructs/aws-apigateway-lambda@1.46.0
npm install @aws-solutions-constructs/aws-dynamodb-stream-lambda@1.46.0

Now, I build the application and use the CDK to deploy the application.

npm run build
cdk deploy

Towards the end of the output of the cdk deploy command, a green check tells me that the deployment of the stack is complete. Just below, in the Outputs, I find the endpoint of the API Gateway.

 ✅  DemoConstructsStack

Outputs:
DemoConstructsStack.ApiGatewayToLambdaLambdaRestApiEndpoint9800D4B5 = https://1a2c3c4d.execute-api.eu-west-1.amazonaws.com/prod/

I can now use curl to test the API:

curl "https://1a2c3c4d.execute-api.eu-west-1.amazonaws.com/prod/danilop?name=Danilo&company=AWS"

Let’s have a look at the DynamoDB table:

The item is stored, and the TTL is set. After a week, the item will be deleted and sent via DynamoDB Streams to the processStream.js function.
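
If I want to double-check from the command line instead of the console, a quick sketch like the following should return the item, assuming the my-table table name from the stack definition and the danilop path used in the curl call above:

aws dynamodb get-item --table-name "my-table" \
    --key '{"id": {"S": "danilop"}}'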

After I complete my testing, I use the CDK again to quickly delete all resources created for this application:

cdk destroy

Available Now
The AWS Solutions Constructs are available now for TypeScript and Python. The AWS Solutions Builders team is working to make these constructs available for Java and C# with the CDK, so stay tuned. There is no cost for using the AWS Solutions Constructs or the CDK; you only pay for the resources created when deploying the stack.

In this first release, 25 patterns are included, covering lots of different use cases. Which new patterns and features should we focus on next? Give us your feedback in the open source project repository!

Danilo

AWS CodeArtifact and your package management flow – Best Practices for Integration

Post Syndicated from John Standish original https://aws.amazon.com/blogs/devops/integrating-aws-codeartifact-package-mgmt-flow/

You often use artifact repositories to store and share software or deployment packages. Centralized artifacts enable teams to operate independently and share versioned software artifacts across your organization. Sharing versioned artifacts across organizations increases code reuse and reduces delivery time. Having a central artifact store enables tighter artifact governance and improves security visibility. This post uses some of these patterns to show you how to integrate AWS CodeArtifact in an effective, cost-controlled, and efficient manner.

AWS CodeArtifact Diagram

AWS CodeArtifact Service Usage

AWS CodeArtifact concepts

AWS CodeArtifact uses the following elements:

  • Asset – An individual file stored in AWS CodeArtifact that is associated with a package version, such as an npm .tgz file or Maven POM and JAR files
  • Package – A package is a bundle of software and the metadata required to resolve dependencies and install the software. AWS CodeArtifact supports npm, PyPI, and Maven package formats.
  • Repository – A CodeArtifact repository contains a set of package versions, each of which maps to a set of assets. Repositories are polyglot: a single repository can contain packages of any supported type. Each repository exposes endpoints for fetching and publishing packages using tools like the npm CLI, the Maven CLI (mvn), and pip.
  • Domain – Repositories are aggregated into a higher-level entity known as a domain. The domain allows organizational policies to be applied across multiple repositories. A domain deduplicates storage of the repositories’ packages.

Creating a domain based on organizational ownership

When you create a domain in CodeArtifact, it’s important to organize the domain by ownership within the organization. An example would be a company being a domain, and the products being repositories. Domains allow you to apply organizational policies across multiple repositories. Generally, we recommend creating one domain per company. In some cases it may also be beneficial to have a sandbox domain where prototype repositories reside. In a sandbox domain, teams are at liberty to create their own repositories and experiment as needed, without affecting product deliverable assets. Note that a sandbox domain duplicates packages and isolates repositories, since you cannot copy packages between domains, and it increases costs, since package deduplication is handled at the domain level. Organizing packages by domain ownership increases the cache hits on a package within the domain and reduces cost for each subsequent package fetch request.

Whenever a package is fetched from a repository, the asset is cached in your CodeArtifact domain to minimize the cost of subsequent downstream requests. A given asset only needs to be stored once in a domain, even if it’s available in two—or two thousand—repositories. That means you only pay for storage once. Copying a package version with the CopyPackageVersions API is only possible between repositories within the same CodeArtifact domain.

You can create a domain for your organization by calling create-domain in the AWS Command Line Interface (AWS CLI), AWS SDK, or on the CodeArtifact console. See the following code:

aws codeartifact create-domain --domain "my-org"

After creating the domain, you will see it listed in the Domains section on the CodeArtifact console.

AWS CodeArtifact domains per governing organization

Organizing packages by domain ownership

Using a shared repository

A shared repository is appropriate when a team feels that a component is useful to the rest of the organization and the component isn’t an experimental or personal project meant only for limited distribution. Examples of shared components are open source public repositories (npm, PyPI, and Maven), authentication, logging, or helper libraries. Shared libraries aren’t related to product libraries; for instance, a service contract library shouldn’t live in a shared repository. The shared repository should be marked read-only to all users except for the publishing IAM role. At Amazon, we have found that many teams want to consume common packages as part of their application build and don’t need to publish any packages themselves. Those teams don’t need their own repository and pull packages from the shared repository. Overall, approximately 80% of packages are downloaded from the shared repository, and 20% from team or project specific repositories.

You can create a shared repository by calling the create-repository command and setting a resource policy that makes the repository read-only.

Here is how you create a repository with the AWS CLI using the create-repository command. See the following code:

aws codeartifact create-repository --domain "my-org" \
--domain-owner "account-id" --repository "my-shared-repo-name" \
--description "My new repository"

Next you make the repository read-only by setting a resource policy. See the following code:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
		"codeartifact:DescribePackageVersion",
                "codeartifact:DescribeRepository",
                "codeartifact:GetPackageVersionReadme",
                "codeartifact:GetRepositoryEndpoint",
                "codeartifact:ListPackages",
                "codeartifact:ListPackageVersions",
                "codeartifact:ListPackageVersionAssets",
                "codeartifact:ListPackageVersionDependencies",
                "codeartifact:ReadFromRepository"
            ],
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:root"
            },
            "Resource": "*"
        }
    ]
}

You attach the resource policy to the repository by calling the put-repository-permissions-policy command. See the following code:

aws codeartifact put-repository-permissions-policy --domain "my-org" \
--domain-owner "account-id" --repository "my-shared-repo-name" \
--policy-document file:///PATH/TO/policy.json

When you have created the repository, you will see it listed in the Repositories section on the CodeArtifact console.

Shared repositories in AWS CodeArtifact

External repository connections

CodeArtifact enables you to set up external repository connections and replicate packages within CodeArtifact. An external connection reduces the downstream dependency on the remote external repository. When you request a package from a CodeArtifact repository that isn’t already present in the repository, the package can be fetched from the external connection. This makes it possible to consume the open-source dependencies used by your application. Using an external connection also reduces interruption in your development process for external package dependencies; for example, if a package is removed from a public repository, you still have a copy of the package stored in CodeArtifact. You should have a one-to-one mapping with external repositories, rather than multiple CodeArtifact repositories pointing to the same public repository. Each asset that CodeArtifact imports into your repository from a public repository is billed as a single request, and each connection must reconcile and fetch the package before the response is returned. A one-to-one mapping increases cache hits, reduces the time to download an application dependency from CodeArtifact, and reduces the number of external package resolution requests. You associate an external repository connection with your repository using the associate-external-connection command. See the following code:

aws codeartifact associate-external-connection \
--domain "my-org" --domain-owner "account-id" \
--repository "my-external-repo" --external-connection public:npmjs

Once you have associated an external connection with your repository, you’ll see the external connection in the repository’s details in the Repositories section. In this example, we’ve connected the repository to the external npmjs repository.

External connection to npmjs for an AWS CodeArtifact repository

Team and product repositories

When working in distributed teams, you often align repositories to product or service ownership. Teams working on their own repository can update it as needed. An example would be creating a private package that only your team uses internally.

See the following code:

aws codeartifact create-repository --domain my-org \
--domain-owner account-id --repository my-team-repo \
--description "My new team repository"

As teams develop the package, they need to publish their changes to the repository. As part of your development pipeline, you would publish the package to the repository. See the following code for an example:

# Log in to CodeArtifact
aws codeartifact login --tool npm \
--domain "my-org" --domain-owner "account-id" \
--repository "my-team-repo"

# Run build commands here
...

# Set $VERSION from your build system
npm version $VERSION

# Publish to CodeArtifact
npm publish

After testing the feature, if you find that it will be usable across your organization, you can copy the package into your shared repository. See the following code:

# Promoting to a shared repo
aws codeartifact copy-package-versions --domain "my-org" \
--domain-owner "account-id" --source-repository "my-team-repo" \
--destination-repository "my-shared-repo" \
--package my-package --format npm \
--versions '["6.0.2"]'

Once you’ve created your shared repository you will see the repositories updated as shown here.

Team and product repositories in AWS CodeArtifact

Sharing repositories across accounts

Often, teams or workloads have separate accounts within an organization. This is a recommended practice because it clearly defines operational boundaries, establishes domains of ownership, and enforces security boundaries. If your organization uses a multi-account strategy, you can share repositories across accounts using a CodeArtifact resource policy. Teams can develop in their own accounts and publish to a CodeArtifact repository controlled in a shared account, as the sketch below shows.
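
As a sketch, sharing the repository with another account reuses the same resource policy mechanism shown earlier; the account ID, policy file name, and repository names below are placeholders to adapt:

# Policy granting read access to a consuming account (placeholder account ID)
cat > cross-account-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "codeartifact:GetRepositoryEndpoint",
                "codeartifact:ReadFromRepository"
            ],
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Resource": "*"
        }
    ]
}
EOF

# Attach the policy to the shared repository in the central account
aws codeartifact put-repository-permissions-policy --domain "my-org" \
--domain-owner "account-id" --repository "my-shared-repo-name" \
--policy-document file://cross-account-policy.json

# Consuming accounts typically also need domain-level permissions such as
# codeartifact:GetAuthorizationToken, which you can grant with put-domain-permissions-policy.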

Here you see a list of repositories, which includes both a shared and team repository.

Cross account sharing of AWS CodeArtifact repositories

Using Amazon CloudWatch Events when a package is pushed

When a package is pushed into a repository, the change can affect software dependencies, teams, or process dependencies. When an artifact is pushed to CodeArtifact, an Amazon CloudWatch Events event is emitted, which you can use to trigger additional functionality. You can react to these events by subscribing to CodeArtifact events in Amazon EventBridge. Some examples of reactions to a change are: checking dependencies, deploying dependent services, notifying teams or services of a change, or rebuilding the dependencies.

You can also use EventBridge to start a pipeline in AWS CodePipeline, notify an Amazon Simple Notification Service (Amazon SNS) topic, and have that call AWS Chatbot. For more information, see CodeArtifact event format and example. If you are looking to integrate AWS Chatbot into your delivery flow, see Receive AWS Developer Tools Notifications over Slack using AWS Chatbot.
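
As a sketch, the following commands create an EventBridge rule that matches CodeArtifact events and forwards them to an existing SNS topic; the rule name and topic ARN are placeholders, and the detail-type should be checked against the CodeArtifact event format documentation referenced above:

# Match CodeArtifact package version events
aws events put-rule --name "codeartifact-package-events" \
--event-pattern '{"source": ["aws.codeartifact"], "detail-type": ["CodeArtifact Package Version State Change"]}'

# Send matched events to an SNS topic (placeholder ARN)
aws events put-targets --rule "codeartifact-package-events" \
--targets "Id"="notify-team","Arn"="arn:aws:sns:us-east-1:111122223333:package-updates"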

Deploying code in a hybrid environment

You can enable seamless software deployment into AWS and on-premises environments by integrating CodeArtifact with software build and deployment services. You can use CodeArtifact with your existing development pipeline tooling such as NPM, Python, and Maven. With native support for these package managers, you can access CodeArtifact wherever you operate today.

First, log in to CodeArtifact, build your code, and finally publish using npm publish with the following code:

# Log in to CodeArtifact 
aws codeartifact login --tool npm \
--domain "my-org" --domain-owner "account-id" \
--repository "my-team-repo"

# Run build commands here 
... 

# Set $VERSION from your build system 
npm version $VERSION 

# Publish to CodeArtifact 
npm publish
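
The same login-based flow applies to Python tooling; for example, a sketch of consuming and publishing packages with pip and twine against the same repository (the package name is a placeholder):

# Point pip at CodeArtifact and install a dependency
aws codeartifact login --tool pip \
--domain "my-org" --domain-owner "account-id" \
--repository "my-team-repo"
pip install my-internal-package

# Point twine at CodeArtifact and publish a package
aws codeartifact login --tool twine \
--domain "my-org" --domain-owner "account-id" \
--repository "my-team-repo"
twine upload --repository codeartifact dist/*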

Cleaning Up

When you’re ready to clean up the repositories and domains you’ve created, you need to remove them in a specific order. Be aware that deleting a repository is a destructive action that removes any stored packages. To delete the domain and repository created in the previous sections of this blog, you use the delete-repository and delete-domain commands.

You will need to remove the domain and repository in the following order:

  1. Remove any repositories in a domain
  2. Remove the domain

To delete the repository and domain, see the following code:

# Delete the repository
aws codeartifact delete-repository --domain "my-org" --domain-owner "account-id" --repository "my-team-repo"

# Delete the domain
aws codeartifact delete-domain --domain "my-org" --domain-owner "account-id"

Conclusion

This post covered how to integrate CodeArtifact into your delivery flow and use it effectively. A shared repository approach aids in creating reusable components across your organization. Using team repositories and promoting packages to a consumable shared repository allows your teams to iterate independently. For more information, see Getting started with CodeArtifact.

About the Author

John Standish

John Standish is a Solutions Architect at AWS and spent over 13 years as a Microsoft .Net developer. Outside of work, he enjoys playing video games, cooking, and watching hockey.

Yogesh Chaturvedi

Yogesh Chaturvedi is a Solutions Architect at AWS and has over 20 years of software development and architecture experience.

 

Introducing the new Serverless LAMP stack

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-new-serverless-lamp-stack/

This is the first in a series of posts for PHP developers. The series will explain how to use serverless technologies with PHP. It covers the available tools, frameworks and strategies to build serverless applications, and why now is the right time to start.

In future posts, I demonstrate how to use AWS Lambda for web applications built with PHP frameworks such as Laravel and Symfony. I show how to move from using Lambda as a replacement for web hosting functionality to a decoupled, event-driven approach. I cover how to combine multiple Lambda functions of minimal scope with other serverless services to create performant, scalable microservices.

In this post, you learn how to use PHP with Lambda via the custom runtime API. Visit this GitHub repository for the sample code.

The Serverless LAMP stack

The challenges with traditional PHP applications

Scalability is an inherent challenge with the traditional LAMP stack. A scalable application is one that can handle highly variable levels of traffic. PHP applications are often scaled horizontally, by adding more web servers as needed. This is managed via a load balancer, which directs requests to various web servers. Each additional server brings additional overhead with networking, administration, storage capacity, backup and restore systems, and an update to asset management inventories. Additionally, each horizontally scaled server runs independently. This can result in configuration synchronization challenges.

Horizontal scaling with traditional LAMP stack applications.

New storage challenges arise as each server has its own disks and filesystem, often requiring developers to add a mechanism to handle user sessions. Using serverless technologies, scalability is managed for the developer.

If traffic surges, the services scale to meet the demand without having to deploy additional servers. This allows applications to quickly transition from prototype to production.

The serverless LAMP architecture

A traditional web application can be split into two components:

  • The static assets (media files, css, js)
  • The dynamic application (PHP, MySQL)

A serverless approach to serving these two components is illustrated below:

The serverless LAMP stack

All requests for dynamic content (anything excluding /assets/*) are forwarded to Amazon API Gateway. This is a fully managed service for creating, publishing, and securing APIs at any scale. It acts as the “front door” to the PHP application, routing requests downstream to Lambda functions. The Lambda functions contain the business logic and interaction with the MySQL database. You can pass the input to the Lambda function as any combination of request headers, path variables, query string parameters, and body.

Notable AWS features for PHP developers

Amazon Aurora Serverless

During re:Invent 2017, AWS announced Aurora Serverless, an on-demand serverless relational database with a pay-per-use cost model. This manages the responsibility of relational database provisioning and scaling for the developer.

Lambda Layers and custom runtime API.

At re:Invent 2018, AWS announced two new Lambda features. These enable developers to build custom runtimes, and share and manage common code between functions.

Improved VPC networking for Lambda functions.

In September 2019, AWS announced significant improvements in cold starts for Lambda functions inside a VPC. This results in faster function startup performance and more efficient usage of elastic network interfaces, reducing VPC cold starts.

Amazon RDS Proxy

At re:Invent 2019, AWS announced the launch of a new service called Amazon RDS Proxy, a fully managed database proxy that sits between your application and your relational database. It efficiently pools and shares database connections to improve the scalability of your application.

 

Significant moments in the serverless LAMP stack timeline

Combining these services, it is now possible to build secure, performant, and scalable serverless applications with PHP and relational databases.

Custom runtime API

The custom runtime API is a simple interface to enable Lambda function execution in any programming language or a specific language version. The custom runtime API requires an executable text file called a bootstrap. The bootstrap file is responsible for the communication between your code and the Lambda environment.

To create a custom runtime, you must first compile the required version of PHP in an Amazon Linux environment compatible with the Lambda execution environment. To do this, follow these step-by-step instructions.

The bootstrap file

The file below is an example of a basic PHP bootstrap file. This example is for explanation purposes as there is no error handling or abstractions taking place. To ensure that you handle exceptions appropriately, consult the runtime API documentation as you build production custom runtimes.

#!/opt/bin/php
<?php

// This invokes Composer's autoloader so that we'll be able to use Guzzle and any other 3rd party libraries we need.
require __DIR__ . '/vendor/autoload.php';

// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
    // Ask the runtime API for a request to handle.
    $request = getNextRequest();

    // Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
    $handlerFunction = array_slice(explode('.', $_ENV['_HANDLER']), -1)[0];
    require_once $_ENV['LAMBDA_TASK_ROOT'] . '/src/' . $handlerFunction . '.php';

    // Execute the desired function and obtain the response.
    $response = $handlerFunction($request['payload']);

    // Submit the response back to the runtime API.
    sendResponse($request['invocationId'], $response);
} while (true);

function getNextRequest()
{
    $client = new \GuzzleHttp\Client();
    $response = $client->get('http://' . $_ENV['AWS_LAMBDA_RUNTIME_API'] . '/2018-06-01/runtime/invocation/next');

    return [
      'invocationId' => $response->getHeader('Lambda-Runtime-Aws-Request-Id')[0],
      'payload' => json_decode((string) $response->getBody(), true)
    ];
}

function sendResponse($invocationId, $response)
{
    $client = new \GuzzleHttp\Client();
    $client->post(
    'http://' . $_ENV['AWS_LAMBDA_RUNTIME_API'] . '/2018-06-01/runtime/invocation/' . $invocationId . '/response',
       ['body' => $response]
    );
}

The #!/opt/bin/php declaration instructs the program loader to use the PHP binary compiled for Amazon Linux.

The bootstrap file performs the following tasks, in an operational loop:

  1. Obtains the next request.
  2. Executes the code to handle the request.
  3. Returns a response.

Follow these steps to package the bootstrap and compiled PHP binary together into a `runtime.zip`.
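
As a minimal sketch of that packaging step, assuming the compiled PHP binary has been copied to bin/php alongside the bootstrap file so that it ends up at /opt/bin/php when the layer is extracted:

# Make the bootstrap executable and package it together with the PHP binary
chmod 755 bootstrap
zip -r runtime.zip bootstrap bin/php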

Libraries and dependencies

The runtime bootstrap uses an HTTP-based local interface to retrieve the event payload for each Lambda function invocation and return the response from the function. This bootstrap file uses Guzzle, a popular PHP HTTP client, to make requests to the custom runtime API. The Guzzle package is installed using the Composer package manager. Installing packages in this way creates a mechanism for incorporating additional libraries and dependencies as the application evolves.

Follow these steps to create and package the runtime dependencies into a `vendors.zip` binary.
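
A sketch of that packaging step, assuming Composer is available and Guzzle is the only third-party dependency:

# Install Guzzle into the local vendor/ directory
composer require guzzlehttp/guzzle

# Package the vendor directory so the layer exposes /opt/vendor/autoload.php
zip -r vendors.zip vendor/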

Lambda Layers provides a mechanism to centrally manage code and data that is shared across multiple functions. When a Lambda function is configured with a layer, the layer’s contents are put into the /opt directory of the execution environment. You can include a custom runtime in your function’s deployment package, or as a layer. Lambda executes the bootstrap file in your deployment package, if available. If not, Lambda looks for a runtime in the function’s layers. There are several open source PHP runtime layers available today.

The following steps show how to publish the `runtime.zip` and `vendors.zip` binaries created earlier into Lambda layers and use them to build a Lambda function with a PHP runtime:

  1.  Use the AWS Command Line Interface (CLI) to publish layers from the binaries created earlier
    aws lambda publish-layer-version \
        --layer-name PHP-example-runtime \
        --zip-file fileb://runtime.zip \
        --region eu-west-1

    aws lambda publish-layer-version \
        --layer-name PHP-example-vendor \
        --zip-file fileb://vendors.zip \
        --region eu-west-1

  2. Make note of each command’s LayerVersionArn output value (for example arn:aws:lambda:eu-west-1:XXXXXXXXXXXX:layer:PHP-example-runtime:1), which you’ll need for the next steps.

Creating a PHP Lambda function

You can create a Lambda function via the AWS CLI, the AWS Serverless Application Model (SAM), or directly in the AWS Management Console. To do this using the console:

  1. Navigate to the Lambda section of the AWS Management Console and choose Create function.
  2. Enter “PHPHello” into the Function name field, and choose Provide your own bootstrap in the Runtime field. Then choose Create function.
  3. Right click on bootstrap.sample and choose Delete.
  4. Choose the layers icon and choose Add a layer.
  5. Choose Provide a layer version ARN, then copy and paste the ARN of the custom runtime layer from in step 1 into the Layer version ARN field.
  6. Repeat steps 4 and 5 for the vendor layer ARN.
  7. In the Function Code section, create a new folder called src and inside it create a new file called index.php.
  8. Paste the following code into index.php:
    //index function
    function index($data)
    {
     return "Hello, ". $data['name'];
    }
    
  9. Insert “index” into the Handler input field. This instructs Lambda to run the index function when invoked.
  10. Choose Save at the top right of the page.
  11. Choose Test at the top right of the page, and enter “PHPTest” into the Event name field. Enter the following into the event payload field and then choose Create: { "name": "world" }
  12. Choose Test and select the dropdown next to the execution result heading.

You can see that the event payload “name” value is used to return “Hello, world”. This is taken from the $data['name'] parameter provided to the Lambda function. The log output provides details about the actual duration, billed duration, and amount of memory used to execute the code.

Conclusion

This post explains how to create a Lambda function with a PHP runtime using Lambda Layers and the custom runtime API. It introduces the architecture for a serverless LAMP stack that scales with application traffic.

Lambda allows functions with mixed runtimes to interact with each other. Now, PHP developers can join other serverless development teams focusing on shipping code. With serverless technologies, you no longer have to think about restarting web hosts, scaling, or hosting.

Start building your own custom runtime for Lambda.

Building a CI/CD pipeline for multi-region deployment with AWS CodePipeline

Post Syndicated from Akash Kumar original https://aws.amazon.com/blogs/devops/building-a-ci-cd-pipeline-for-multi-region-deployment-with-aws-codepipeline/

This post discusses the benefits of and how to build an AWS CI/CD pipeline in AWS CodePipeline for multi-region deployment. The CI/CD pipeline triggers on application code changes pushed to your AWS CodeCommit repository. This automatically feeds into AWS CodeBuild for static and security analysis of the CloudFormation template. Another CodeBuild instance builds the application to generate an AMI image as output. AWS Lambda then copies the AMI image to other Regions. Finally, AWS CloudFormation cross-region actions are triggered and provision the instance into target Regions based on AMI image.

The solution is based on using a single pipeline with cross-region actions, which helps in provisioning resources in the current Region and other Regions. This solution also lets you manage the complete CI/CD pipeline in one place, in one Region, and provides a single point for monitoring and deployment changes. It also incurs less cost, because a single pipeline can deploy the application into multiple Regions.

As a security best practice, the solution also incorporates static and security analysis using cfn-lint and cfn-nag. You use these tools to scan CloudFormation templates for security vulnerabilities.

The following diagram illustrates the solution architecture.

Multi region AWS CodePipeline architecture

Prerequisites

Before getting started, you must complete the following prerequisites:

  • Create a repository in CodeCommit and provide access to your user
  • Copy the sample source code from GitHub under your repository
  • Create an Amazon S3 bucket in the current Region and each target Region for your artifact store

Creating a pipeline with AWS CloudFormation

You use a CloudFormation template for your CI/CD pipeline, which can perform the following actions:

  1. Use CodeCommit repository as source code repository
  2. Static code analysis on the CloudFormation template to check against the resource specification and block provisioning if this check fails
  3. Security code analysis on the CloudFormation template to check against secure infrastructure rules and block provisioning if this check fails
  4. Compilation and unit test of application code to generate an AMI image
  5. Copy the AMI image into target Regions for deployment
  6. Deploy into multiple Regions using the CloudFormation template; for example, us-east-1, us-east-2, and ap-south-1

You use a sample web application to run through your pipeline, which requires Java and Apache Maven for compilation and testing. Additionally, it uses Tomcat 8 for deployment.

The following list summarizes the resources that the CloudFormation template creates (resource name, type, and objective):

  • CloudFormationServiceRole – AWS::IAM::Role – Service role for AWS CloudFormation
  • CodeBuildServiceRole – AWS::IAM::Role – Service role for CodeBuild
  • CodePipelineServiceRole – AWS::IAM::Role – Service role for CodePipeline
  • LambdaServiceRole – AWS::IAM::Role – Service role for the Lambda function
  • SecurityCodeAnalysisServiceRole – AWS::IAM::Role – Service role for security analysis of the provisioning CloudFormation template
  • StaticCodeAnalysisServiceRole – AWS::IAM::Role – Service role for static analysis of the provisioning CloudFormation template
  • StaticCodeAnalysisProject – AWS::CodeBuild::Project – CodeBuild project for static analysis of the provisioning CloudFormation template
  • SecurityCodeAnalysisProject – AWS::CodeBuild::Project – CodeBuild project for security analysis of the provisioning CloudFormation template
  • CodeBuildProject – AWS::CodeBuild::Project – CodeBuild project for compilation, testing, and AMI creation
  • CopyImage – AWS::Lambda::Function – Python Lambda function for copying AMI images into other Regions
  • AppPipeline – AWS::CodePipeline::Pipeline – CodePipeline pipeline for CI/CD

To start creating your pipeline, complete the following steps:

  • Launch the CloudFormation stack with the following link:
Launch button for CloudFormation

  • Choose Next.
  • For Specify details, provide the following values:
    • Stack name – Name of your stack
    • OtherRegion1 – Input the target Region 1 (other than the current Region) for deployment
    • OtherRegion2 – Input the target Region 2 (other than the current Region) for deployment
    • RepositoryBranch – Branch name of the repository
    • RepositoryName – Repository name of the project
    • S3BucketName – Input the S3 bucket name for the artifact store
    • S3BucketNameForOtherRegion1 – Create a bucket in target Region 1 and specify its name for the artifact store
    • S3BucketNameForOtherRegion2 – Create a bucket in target Region 2 and specify its name for the artifact store

Choose Next.

  • On the Review page, select I acknowledge that this template might cause AWS CloudFormation to create IAM resources.
  • Choose Create.
  • Wait for the CloudFormation stack status to change to CREATE_COMPLETE (this takes approximately 5–7 minutes).

When the stack is complete, your pipeline should be ready and running in the current Region.

  • To validate the pipeline, check the AMIs and EC2 instances running in the target Regions, and also refer to the AWS CodePipeline execution summary shown below.
AWS CodePipeline Execution Summary

We will walk you through the following steps for creating a multi-region deployment pipeline:

1. Using CodeCommit as your source code repository

The deployment workflow starts by placing the application code on the CodeCommit repository. When you add or update the source code in CodeCommit, the action generates a CloudWatch event, which triggers the pipeline to run.

2. Static code analysis of CloudFormation template to provision AWS resources

Historically, AWS CloudFormation linting was limited to the ValidateTemplate action in the service API. This action tells you if your template is well-formed JSON or YAML, but doesn’t help validate the actual resources you’ve defined.

You can use a linter such as the cfn-lint tool for static code analysis to improve your AWS CloudFormation development cycle. The tool validates the provisioning CloudFormation template properties and their values (mappings, joins, splits, conditions, and nesting those functions inside each other) against the resource specification. This can cover the most common of the underlying service constraints and help encode some best practices.

The following rules cover underlying service constraints:

  • E2530 – Checks that Lambda functions have correctly configured memory sizes
  • E3025 – Checks that your RDS instances use correct instance types for the database engine
  • W2001 – Checks that each parameter is used at least once

You can also add this step as a pre-commit hook for your Git repository if you are using CodeCommit or GitHub.

You provision a CodeBuild project for static code analysis as the first step in CodePipeline after the source stage. This helps in early detection of any linter issues.
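
As a sketch of what that build step runs (the template path is a placeholder for wherever the provisioning template lives in your repository):

# Install the linter and validate the provisioning template against the resource specification
pip install cfn-lint
cfn-lint cfn-templates/application.yaml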

3. Security code analysis of CloudFormation template to provision AWS resources

You can use Stelligent’s cfn_nag tool to perform additional validation of your template resources for security. The cfn-nag tool looks for patterns in CloudFormation templates that may indicate insecure infrastructure provisioning and validates against AWS best practices. For example:

  • IAM rules that are too permissive (wildcards)
  • Security group rules that are too permissive (wildcards)
  • Access logs that aren’t enabled
  • Encryption that isn’t enabled
  • Password literals

You provision a CodeBuild project for security code analysis as the second step in CodePipeline. This helps detect any insecure infrastructure provisioning issues.
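
A sketch of the equivalent build commands, assuming Ruby is available in the build image and using the same placeholder template path as above:

# Install cfn-nag and scan the template for insecure patterns
gem install cfn-nag
cfn_nag_scan --input-path cfn-templates/application.yaml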

4. Compiling and testing application code and generating an AMI image

Because you use a Java-based application for this walkthrough, you use Amazon Corretto as your JVM. Corretto is a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto comes with long-term support that includes performance enhancements and security fixes.

You also use Apache Maven as a build automation tool to build the sample application, and the HashiCorp Packer tool to generate an AMI image for the application.

You provision a CodeBuild project for compilation, unit testing, AMI generation, and storing the AMI ImageId in the Parameter Store, which the CloudFormation template uses as the next step of the pipeline.
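
For example, a sketch of how the build could record the new AMI ID; the AMI_VERSION parameter name matches the one referenced in the cleanup section, and $AMI_ID is assumed to come from the Packer build output:

# Store the AMI ID produced by Packer so the deploy stage can reference it
aws ssm put-parameter --name "AMI_VERSION" --type "String" \
--value "$AMI_ID" --overwrite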

5. Copying the AMI image into target Regions

You use a Lambda function to copy the AMI image into target Regions so the CloudFormation template can use it to provision instances into that Region as the next step of the pipeline. It also writes the target Region AMI ImageId into the target Region’s Parameter Store.
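
The Lambda function’s work is roughly equivalent to the following CLI calls, shown here as a sketch with placeholder Region and AMI values:

# Copy the AMI from the current Region into a target Region
TARGET_AMI_ID=$(aws ec2 copy-image --name "my-app-ami" \
--source-image-id "$AMI_ID" --source-region "us-east-1" \
--region "us-east-2" --query 'ImageId' --output text)

# Record the copied AMI ID in the target Region's Parameter Store
aws ssm put-parameter --region "us-east-2" --name "AMI_VERSION" \
--type "String" --value "$TARGET_AMI_ID" --overwrite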

6. Deploying into multiple Regions with the CloudFormation template

You use the CloudFormation template as a cross-region action to provision AWS resources into a target Region. CloudFormation uses Parameter Store’s ImageId as reference and provisions the instances into the target Region.

Cleaning up

To avoid additional charges, you should delete the following AWS resources after you validate the pipeline:

  • The cross-region CloudFormation stack in the target and current Regions
  • The main CloudFormation stack in the current Region
  • The AMI you created in the target and current Regions
  • The Parameter Store AMI_VERSION in the target and current Regions

Conclusion

You have now created a multi-region deployment pipeline in CodePipeline without having to worry about the mechanics of creating and copying AMI images across Regions. CodePipeline abstracts the creating and copying of the images in the background in each Region. You can now upload new source code changes to the CodeCommit repository in the primary Region, and changes deploy automatically to other Regions. Cross-region actions are very powerful and are not limited to deploy actions. You can also use them with build and test actions.

Deploying a serverless application using AWS CDK

Post Syndicated from Georges Leschener original https://aws.amazon.com/blogs/devops/deploying-a-serverless-application-using-aws-cdk/

There are multiple ways to deploy API endpoints, such as this example, in which you could use an application running on Amazon EC2 to demonstrate how to integrate Amazon ElastiCache with Amazon DocumentDB (with MongoDB compatibility). While that approach helps achieve good performance and reliability through elasticity and the ability to scale the number of EC2 instances up or down to accommodate the load on the application, there is still operational overhead: you have to manage the EC2 instances yourself. One way of addressing this operational overhead and the related costs is to transform the application into a serverless architecture.

The example in this blog post uses an application that provides a similar use case, leveraging a serverless architecture and showcasing some of the tools being used by customers transitioning from lift-and-shift to building cloud-native applications. It uses Amazon API Gateway to provide the REST API endpoint connected to an AWS Lambda function that provides the business logic to read and write from an Amazon Aurora Serverless database. It also showcases the deployment of most of the infrastructure with the AWS Cloud Development Kit (CDK). By moving your applications to a cloud-native architecture like the example showcased in this blog post, you can realize a number of benefits, including:

  • Fast and clean deployment of your application, thereby achieving faster time to market
  • Reduced operational costs through serverless and managed services

Architecture Diagram

At the end of this blog, you have an AWS Cloud9 instance environment containing a CDK project which deploys an API Gateway and Lambda function. This Lambda function leverages a secret stored in your AWS Secrets Manager to read and write from your Aurora Serverless database through the data API, as shown in the following diagram.

 

Architecture diagram for deploying a serverless application using AWS CDK

The above architecture diagram showcases the resources to be deployed in your AWS account.

Through the blog post you will be creating the following resources:

  1. Deploy an Amazon Aurora Serverless database cluster
  2. Secure the cluster credentials in AWS Secrets Manager
  3. Create and populate your database in the AWS Console
  4. Deploy an AWS Cloud9 instance used as a development environment
  5. Initialize and configure an AWS Cloud Development Kit project including the definition of your Amazon API Gateway endpoint and AWS Lambda function
  6. Deploy an AWS CloudFormation template through the AWS Cloud Development Kit

Prerequisites

In order to deploy the CDK application, there are a few prerequisites that need to be met:

  1. Create an AWS account or use an existing account.
  2. Install Postman for testing purposes

Amazon Aurora serverless cluster creation

To begin, navigate to the AWS console to create a new Amazon RDS database.

  1. Select Create Database from the Amazon RDS service.
  2. Select Standard Create under Choose a database creation method.
  3. Select Serverless under Database features.
  4. Select Amazon Aurora as the engine type under Engine options.
  5. Enter db-blog for your DB Cluster Identifier.
  6. Expand the Additional Connectivity section and select the Data API option. This functionality enables you to access Aurora Serverless with web services-based applications. It also allows you to use the query editor feature for Aurora Serverless in order to run SQL queries against your database instance.
  7. Leave the default selection for everything else and choose Create Database.

Your database instance is created in a single availability zone (AZ), but an Aurora Serverless database cluster has a capability known as automatic multi-AZ failover, which enables Aurora to recreate the database instance in a different AZ should the current database instance or the AZ become unavailable. The storage volume for the cluster is spread across multiple AZs, since Aurora separates computation capacity and storage. This allows for data to remain available even if the database instance or the associated AZ is affected by an outage.

Securing database credentials with AWS Secrets Manager

After creating the database instance, the next step is to store your secrets for your database in AWS Secrets Manager.

  • Navigate to AWS Secrets Manager, and select Store a New Secret.
  • Leave the default selection (Credentials for RDS database) for the secret type. Enter your database username and password and then select the radio button for the database you created in the previous step (in this example, db-blog), as shown in the following screenshot.

database search in aws secrets manager

  •  Choose Next.
  • Enter a name and optionally a description. For the name, make sure to add the prefix rds-db-credentials/ as shown in the following screenshot.

AWS Secrets Manager Store a new secret window

  • Choose Next and leave the default selection.
  • Review your settings on the last page and choose Store to have your secret created and stored in AWS Secrets Manager, which you can now use to connect to your database. A CLI equivalent is sketched below.
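
If you prefer to script this step, a rough CLI equivalent looks like the following; the secret name suffix, username, and password are placeholders, and the console flow adds further connection metadata for you:

# Store the database credentials under the expected rds-db-credentials/ prefix
aws secretsmanager create-secret \
    --name "rds-db-credentials/db-blog-credentials" \
    --description "Credentials for the db-blog Aurora Serverless cluster" \
    --secret-string '{"username": "admin", "password": "REPLACE_ME", "dbClusterIdentifier": "db-blog"}'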

Creating and populating your Amazon Aurora Serverless database

After creating the DB cluster, create the database instance; create your tables and populate them; and finally, test a connection to ensure that you can query your database.

  • Navigate to the Amazon RDS service from the AWS console, and select your db-blog database cluster.
  • Select Query under Actions to open the Connect to database window, as shown in the screenshot below. Enter your database connection details. You can copy your Secrets Manager ARN from the Secrets Manager service and paste it into the corresponding field in the database connection window.

Amazon RDS connect to database window

  • To create the database, run the following SQL query from the Query editor shown in the screenshot below: CREATE DATABASE recordstore;

 

Amazon RDS Query editor

  • Before you can run the following commands, make sure you are using the Recordstore database you just created by running the command:
USE recordstore;
  • Create a records table using the following command:
CREATE TABLE IF NOT EXISTS records (recordid INT PRIMARY KEY, title VARCHAR(255) NOT NULL, release_date DATE);
  • Create a singers table using the following command:
CREATE TABLE IF NOT EXISTS singers (id INT PRIMARY KEY, name VARCHAR(255) NOT NULL, nationality VARCHAR(255) NOT NULL, recordid INT NOT NULL, FOREIGN KEY (recordid) REFERENCES records (recordid) ON UPDATE RESTRICT ON DELETE CASCADE);
  • Add a record to your records table and a singer to your singers table.
INSERT INTO records(recordid,title,release_date) VALUES(001,'Liberian Girl','2012-05-03');
INSERT INTO singers(id,name,nationality,recordid) VALUES(100,'Michael Jackson','American',001);

If you have the AWS CLI set up on your computer, you can connect to your database and retrieve records.

To test it, use the rds-data execute-statement API within the AWS CLI to connect to your database via the data API web service and query the singers table, as shown below:

aws rds-data execute-statement --secret-arn "arn:aws:secretsmanager:REGION:xxxxxxxxxxx:secret:rds-db-credentials/xxxxxxxxxxxxxxx" --resource-arn "arn:aws:rds:us-east-1:xxxxxxxxxx:cluster:db-blog" --database recordstore --sql "select * from singers" --output json

You should see the following result:

    "numberOfRecordsUpdated": 0,
    "records": [
        [
            {
                "longValue": 100
            },
            {
                "stringValue": "Michael Jackson"
            },
            {
                "stringValue": "American"
            },
            {
                "longValue": 1
            }
        ]
    ]
}

Creating a Cloud9 instance

To create a Cloud9 instance:

  1. Navigate to the Cloud9 console and select Create Environment.
  2. Name your environment AuroraServerlessBlog.
  3. Keep the default values under the Environment Settings.

Once your instance is launched, you see the screen shown in the following screenshot:

AWS Cloud9

 

You can now install the CDK in your environment. Run the following command inside your bash terminal on the blue section at the bottom of your screen:

npm install -g aws-cdk

For the next section of this example, you mostly work on the command line of your Cloud9 terminal and on your file explorer.

Creating the CDK deployment

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages. If you would like to familiarize yourself with the CDK, the CDK Workshop is a great place to start.

First, create a working directory called RecordsApp and initialize a CDK project from a template.

Run the following commands:

mkdir RecordsApp
cd RecordsApp
cdk init app --language typescript
mkdir resources
npm install @aws-cdk/aws-apigateway @aws-cdk/aws-lambda @aws-cdk/aws-iam

Now your instance should look like the example shown in the following screenshot:

AWS Cloud9 shell

 

You are mainly working in two directories:

  • Resources
  • Lib

Your initial set up is ready, and you can move into creating specific services and deploying them to your account.

Creating AWS resources using the CDK

Follow these steps to create AWS resources using the CDK:

  1. Under the /lib folder, create a new file called records_service.ts.
  2. Inside your new file, paste the following code, with these changes:
    • Replace the dbARN with the ARN of your Aurora Serverless DB cluster from the previous steps.
    • Replace the dbSecretARN with the ARN of your Secrets Manager secret from the previous steps.

import core = require("@aws-cdk/core");
import apigateway = require("@aws-cdk/aws-apigateway");
import lambda = require("@aws-cdk/aws-lambda");
import iam = require("@aws-cdk/aws-iam");

//REPLACE THIS
const dbARN = "arn:aws:rds:XXXX:XXXX:cluster:aurora-serverless-blog";
//REPLACE THIS
const dbSecretARN = "arn:aws:secretsmanager:XXXXX:XXXXX:secret:rds-db-credentials/XXXXX";

export class RecordsService extends core.Construct {
  constructor(scope: core.Construct, id: string) {
    super(scope, id);

    const lambdaRole = new iam.Role(this, 'AuroraServerlessBlogLambdaRole', {
      assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
      managedPolicies: [
            iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonRDSDataFullAccess'),
            iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSLambdaBasicExecutionRole')
        ]
    });

    const handler = new lambda.Function(this, "RecordsHandler", {
     role: lambdaRole,
     runtime: lambda.Runtime.NODEJS_12_X, // Node.js 12 runtime so records.js can use async/await
     code: lambda.Code.asset("resources"),
     handler: "records.main",
     environment: {
       TABLE: dbARN,
       TABLESECRET: dbSecretARN,
       DATABASE: "recordstore"
     }
   });

    const api = new apigateway.RestApi(this, "records-api", {
      restApiName: "Records Service",
      description: "This service serves records."
   });

    const getRecordsIntegration = new apigateway.LambdaIntegration(handler, {
      requestTemplates: { "application/json": '{ "statusCode": 200 }' }
    });

    api.root.addMethod("GET", getRecordsIntegration); // GET /

    const record = api.root.addResource("{id}");
    const postRecordIntegration = new apigateway.LambdaIntegration(handler);
    const getRecordIntegration = new apigateway.LambdaIntegration(handler);

    record.addMethod("POST", postRecordIntegration); // POST /{id}
    record.addMethod("GET", getRecordIntegration); // GET/{id}
  }
}

This snippet of code will instruct the AWS CDK to create the following resources:

  • IAM role: AuroraServerlessBlogLambdaRole containing the following managed policies:
    • AmazonRDSDataFullAccess
    • service-role/AWSLambdaBasicExecutionRole
  • Lambda function: RecordsHandler, which uses the Node.js 12.x runtime and has three environment variables
  • API Gateway: Records Service, which has the following characteristics:
    • GET Method
      • GET /
    • { id } Resource
      • GET method
        • GET /{id}
      • POST method
        • POST /{id}

Now that you have a service, you need to add it to your stack under the /lib directory.

  1. Open the records_app-stack.ts file.
  2. Replace the contents of this file with the following:
import cdk = require('@aws-cdk/core');
import records_service = require('../lib/records_service');

export class RecordsAppStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    new records_service.RecordsService(this, 'Records');
  }
}
  3. Create the Lambda code that is invoked from the API Gateway endpoint. Under the /resources directory, create a file called records.js and paste the following code into this file:
const AWS = require('aws-sdk');
var rdsdataservice = new AWS.RDSDataService();

exports.main = async function(event, context) {
  try {
    var method = event.httpMethod;
    var recordName = event.path.startsWith('/') ? event.path.substring(1) : event.path;
// Defining parameters for rdsdataservice
    var params = {
      resourceArn: process.env.TABLE,
      secretArn: process.env.TABLESECRET,
      database: process.env.DATABASE,
   }
   if (method === "GET") {
      if (event.path === "/") {
       //Here is where we are defining the SQL query that will be run at the DATA API
       params['sql'] = 'select * from records';
       const data = await rdsdataservice.executeStatement(params).promise();
       var body = {
           records: data
       };
       return {
         statusCode: 200,
         headers: {},
         body: JSON.stringify(body)
       };
     }
     else if (recordName) {
       params['sql'] = `SELECT singers.id, singers.name, singers.nationality, records.title FROM singers INNER JOIN records on records.recordid = singers.recordid WHERE records.title LIKE '${recordName}%';`
       const data = await rdsdataservice.executeStatement(params).promise();
       var body = {
           singer: data
       };
       return {
         statusCode: 200,
         headers: {},
         body: JSON.stringify(body)
       };
     }
   }
   else if (method === "POST") {
     var payload = JSON.parse(event.body);
     if (!payload) {
       return {
         statusCode: 400,
         headers: {},
         body: "The body is missing"
       };
     }

     //Generating random IDs
     var recordId = uuidv4();
     var singerId = uuidv4();

     //Parsing the payload from body
     var recordTitle = `${payload.recordTitle}`;
     var recordReleaseDate = `${payload.recordReleaseDate}`;
     var singerName = `${payload.singerName}`;
     var singerNationality = `${payload.singerNationality}`;

      //Making 2 calls to the data API to insert the new record and singer
      params['sql'] = `INSERT INTO records(recordid,title,release_date) VALUES(${recordId},"${recordTitle}","${recordReleaseDate}");`;
      const recordsWrite = await rdsdataservice.executeStatement(params).promise();
      params['sql'] = `INSERT INTO singers(recordid,id,name,nationality) VALUES(${recordId},${singerId},"${singerName}","${singerNationality}");`;
      const singersWrite = await rdsdataservice.executeStatement(params).promise();

      return {
        statusCode: 200,
        headers: {},
        body: JSON.stringify("Your record has been saved")
      };

    }
    // We got something besides a GET or POST
    return {
      statusCode: 400,
      headers: {},
      body: "We only accept GET and POST, not " + method
    };
  } catch(error) {
    var body = error.stack || JSON.stringify(error, null, 2);
    return {
      statusCode: 400,
      headers: {},
      body: body
    }
  }
}
function uuidv4() {
  return 'xxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
    return v;
  });
}

Take a look at what this Lambda function is doing. You have two functions inside of your Lambda function. The first is the exported handler, which is defined as an asynchronous function. The second is a unique identifier function that generates four-character random identifiers to use as IDs for your database records. In your handler function, you handle the following actions based on the event you get from API Gateway:

  • Method GET with empty path /:
    • This calls the data API executeStatement method with the following SQL query:
SELECT * from records
  • Method GET with a record name in the path /{recordName}:
    • This calls the data API executeStatement method with the following SQL query:
SELECT singers.id, singers.name, singers.nationality, records.title FROM singers INNER JOIN records on records.recordid = singers.recordid WHERE records.title LIKE '${recordName}%';
  • Method POST with a payload in the body:
    • This makes two calls to the data API executeStatement with the following SQL queries:
INSERT INTO records(recordid,title,release_date) VALUES(${recordId},"${recordTitle}","${recordReleaseDate}");
INSERT INTO singers(recordid,id,name,nationality) VALUES(${recordId},${singerId},"${singerName}","${singerNationality}");

Now you have all the pieces you need to deploy your endpoint and Lambda function by running the following commands:

npm run build
cdk synth
cdk bootstrap
cdk deploy

If you change the Lambda code or add additional AWS resources to your CDK deployment, you can redeploy the application by running all four commands in a single line:

npm run build; cdk synth; cdk bootstrap; cdk deploy

Testing with Postman

Once it’s done, you can test it using Postman:

GET = ‘RecordName’ in the path

  • example:
    • ENDPOINT/RecordName

POST = Payload in the body

  • example:
{
   "recordTitle" : "BlogTest",
   "recordReleaseDate" : "2020-01-01",
   "singerName" : "BlogSinger",
   "singerNationality" : "AWS"
}
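
If you prefer the command line, you can exercise the same API with curl. The ENDPOINT value below is a placeholder for the invoke URL that cdk deploy prints for your API; the record name and payload match the Postman examples above.

ENDPOINT="https://abc123.execute-api.us-east-1.amazonaws.com/prod"

# GET /{id} - look up a record by name
curl "$ENDPOINT/BlogTest"

# POST /{id} - create a new record and singer
curl -X POST "$ENDPOINT/BlogTest" \
  -H "Content-Type: application/json" \
  -d '{"recordTitle":"BlogTest","recordReleaseDate":"2020-01-01","singerName":"BlogSinger","singerNationality":"AWS"}'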

Clean up

To clean up the resources created by the CDK, run the following command in your Cloud9 instance:

cdk destroy

To clean up the resources created manually, run the following commands:

aws rds delete-db-cluster --db-cluster-identifier Serverless-blog --skip-final-snapshot
aws secretsmanager delete-secret --secret-id XXXXX --recovery-window-in-days 7

Conclusion

This blog post demonstrated how to transform an application running on Amazon EC2 from a previous blog post into a serverless architecture by leveraging services such as Amazon API Gateway, AWS Lambda, AWS Cloud9, the AWS CDK, and Aurora Serverless. The benefit of a serverless architecture is that it removes the overhead of managing servers and helps reduce costs, as you only pay for the time in which your code executes.

This example used a record-store application written in Node.js that allows users to find their favorite singer’s record titles, as well as the dates when they were released. This example could be expanded, for instance, by adding a payment gateway and a shopping cart to allow users to shop and pay for their favorite records. You could then incorporate some machine learning into the application to predict user choice based on previous visits, purchases, or information provided through registration profiles.

 


 

About the Authors

Luis Lopez Soria is an AI/ML specialist solutions architect working with the AWS machine learning team. He works with AWS customers to help them with the adoption of Machine Learning on a large scale. He enjoys doing sports in addition to traveling around the world, exploring new foods and cultures.

 

 

 

Georges Leschener is a Partner Solutions Architect in the Global System Integrator (GSI) team at Amazon Web Services. He works with our GSI partners to help migrate customers’ workloads to the AWS Cloud, and to design and architect innovative solutions on AWS by applying AWS recommended best practices.

 

Building and testing iOS and iPadOS apps with AWS DevOps and mobile services

Post Syndicated from Abdullahi Olaoye original https://aws.amazon.com/blogs/devops/building-and-testing-ios-and-ipados-apps-with-aws-devops-and-mobile-services/

Continuous integration/continuous deployment (CI/CD) helps automate software delivery processes. With the software delivery process automated, developers can test and deliver features faster. In iOS app development, testing your apps on real devices allows you to understand how users will interact with your app and to detect potential issues in real time.

AWS has a collection of tools designed to help developers build, test, configure, and release cloud-based applications for mobile devices. This blog post shows you how to leverage some of those tools and integrate third-party build tools like Jenkins into a CI/CD Pipeline in AWS for iOS app development and testing.

A new commit to the source repository triggers the pipeline. The build is done on a Jenkins server, and the build artifact from Jenkins is passed to the test phase, which is configured with AWS Device Farm to test the application on real devices. AWS CodePipeline provides the orchestration and helps automate the build and test phases. The CodePipeline continuous delivery process is illustrated in the following screenshot.

CodePipeline architecture with all stages

Figure: CodePipeline Continuous Delivery Architecture

 

Prerequisites

Ensure you have the following prerequisites set up before beginning:

  1. Apple developer account
  2. Build server (macOS)
  3. Xcode Version 11.3 (installed on the build server and setup)
  4. Jenkins (installed on the build server)
  5. AWS CLI installed and configured on workstation
  6. Basic knowledge of Git

Source

This example uses a sample iOS Notes app hosted in an AWS CodeCommit repository, which serves as the source stage of the pipeline.

Jenkins installation

Jenkins can be installed on macOS using the Homebrew package manager with the following command:

$ brew install jenkins

Start Jenkins by typing the following command:

$ jenkins

You can also configure Jenkins to start as a service on startup with the following command:

$ brew services start jenkins

Jenkins configuration

On a browser on your local machine, visit http://localhost:8080. You should see the setup screen shown in the following screenshot:

Screenshot of how to retrieve the Jenkins secret during setup on macOS

1. Grab the initial admin password from the terminal by typing:

$ cat /Users/administrator/.jenkins/secrets/initialAdminPassword

2. Follow the onscreen instructions to complete setup. This includes creating a first admin user, installing initial plugins, etc.

3. Make some changes to the config file to ensure Jenkins is accessible from anywhere, not just the local machine:

    • Open the config file:

$ sudo nano /Users/admin/Library/LaunchAgents/homebrew.mxcl.jenkins.plist

    • Find the following line:

<string>--httpListenAddress=127.0.0.1</string>

    • Change it to the following:

<string>--httpListenAddress=0.0.0.0</string>

    • Save your changes and exit.

To reach Jenkins from the internet, enter the following into a web browser:

<build-server-public-ip>:<Jenkins-port>

The default Jenkins port is 8080. For example, if the public IP address is 1.2.3.4, the path is 1.2.3.4:8080.

4. Install the AWS CodePipeline Jenkins plugin:

      • Sign in to Jenkins using the user name and password you created. Choose Manage Jenkins, then Manage Plugins.
      • Switch to the Available tab and start typing CodePipeline into the filter until AWS CodePipeline Plugin appears. Select the plugin, then select Install without restart.
      • Select Restart Jenkins when installation is complete and no jobs are running.

5. Create a project. Choose New Item, then Freestyle Project. Enter a descriptive name. This example uses iosapp as the item name.

6. In the Source Code Management section, select AWS CodePipeline and configure the plugin as shown in the following screenshot.

Screenshot of Source Code Management configuration in a Jenkins freestyle project

      • AWS region: The region in which you want to create the CI/CD pipeline.
      • AWS access key and AWS secret key: Create a special IAM user and apply the AWSCodePipelineCustomActionAccess managed policy to that user. Use the access credentials for that user to configure this section.
      • Category: Choose Build. This is also used in the pipeline configuration.
      • Provider: This example uses the name Jenkins. It can be renamed, but take note of the name specified here.
      • Version: Enter 1 here. This value is used in the pipeline configuration.

7. Under Build Triggers, select Poll SCM. Enter the schedule * * * * * separated by spaces, as shown in the following screenshot.

Screenshot of Build Triggers configuration in a Jenkins freestyle project

8. Under Build, select Add build step, then Execute shell. Enter the following commands, inserting your development team ID.

/usr/bin/xcodebuild -version
/usr/bin/xcodebuild build-for-testing -scheme MyNotes -destination generic/platform=iOS DEVELOPMENT_TEAM=<your development team ID> -allowProvisioningUpdates -derivedDataPath /Users/admin/.jenkins/workspace/iosapp
mkdir Payload && cp -r /Users/admin/.jenkins/workspace/iosapp/Build/Products/Debug-iphoneos/MyNotes.app Payload/
zip -r Payload.zip Payload && mv Payload.zip MyNotes.ipa

9. Under Post-build Actions, select Add post-build action, then AWS CodePipeline Publisher. Fill in the fields as shown in the following screenshot:

Screenshot of Post Build Action Configuration in a Jenkins freestyle project

10. Save the configuration.

11. Retrieve the public IP address for the macOS build server.

Configure Device Farm

In this section, you configure Device Farm to test the sample iOS app on real-world devices.

  1. Navigate to the AWS Device Farm Console
  2. Choose Create a new project and enter a name for the project. Choose Create project. Note the name of the project.
  3. Choose the newly created project and retrieve the project ID:
    • Copy the URL found in the browser into a text editor.
    • Note the project ID, which can be found in the URL path:

https://us-west-2.console.aws.amazon.com/devicefarm/home?region=us-east-1#/projects/<your project ID is here>/runs

    • Decide on which devices you want to test the sample app. This is known as the device pool in Device Farm. This example doesn’t use a PRIVATE device pool. It uses a CURATED device pool, which is a device pool created and managed by AWS Device Farm.
    • Retrieve the ARN of the CURATED device pool for your project using the AWS CLI:

$ aws devicefarm list-device-pools --arn arn:aws:devicefarm:us-west-2:<account-id>:project:<project id noted above> --region us-west-2 --query 'devicePools[?name==`Top Devices`]'

Note the device pool ARN.

Configure the CodeCommit repository

In this section, the source code repository is created and source code is pushed to the repository.

  1. Create a CodeCommit repository. Take note of the repository name.
  2. Connect to the newly created repository.
  3. Push the iOS app code from the local repository to the remote CodeCommit repository:

$ git push
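
If you haven't connected to CodeCommit before, the following is one way to do steps 1-3 from the command line. The repository name ios-notes-app and the us-east-1 Region are placeholders; use your own name and Region, and set up Git credentials for CodeCommit first.

# Create the repository and note the clone URL it returns
$ aws codecommit create-repository --repository-name ios-notes-app

# Point the local repository at CodeCommit and push the code
$ git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/ios-notes-app
$ git push -u origin master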

Create and configure CodePipeline

CodePipeline orchestrates all phases of the example. Each action is represented as a stage.

Since you have a Jenkins stage, which is considered a custom action and has to be configured via the AWS management console, use the AWS management console to create your pipeline.

  1. Go to the AWS CodePipeline console and choose Create pipeline.
  2. Enter iosapp under Pipeline settings and select New service role.
  3. Leave the default Role name, and select Allow AWS CodePipeline to create a service role so it can be used with this new pipeline.
  4. Choose Next.
  5. Select AWS CodeCommit as the Source provider. Select the repository you created and the branch name, then select Next.
  6. Select Add Jenkins as the build provider and fill in the fields:
    • Provider name: Specify the provider name you configured for this example.
    • Server URL: Specify the public IP address of the Jenkins server and the port on which Jenkins is listening. For example, if 1.2.3.4 is the IP address and 8080 is the port, the server URL is http://1.2.3.4:8080.
    • Project name: Specify the name you gave to the Jenkins Freestyle project you created.
  7. Choose Next.
  8. Choose Skip deploy stage. You are integrating with Device Farm and this is only valid as a test stage, not a deploy stage.
  9. Choose Create pipeline. This creates a two-stage pipeline which starts executing immediately after creation. However, you are not done yet, so stop the current execution.
  10. Now create a test stage with Device Farm. Choose Edit to modify the pipeline. Under the Build stage, select Add Stage and enter a stage name (such as Test). Choose Add stage again.
  11. In the newly added stage, choose Add action group and fill in the fields:
    • Action name: Enter an Action name
    • Action Provider: Select AWS Device Farm
    • Region: Select US West – Oregon.

      Note: AWS Device Farm is only supported in us-west-2 (Oregon), so this is a cross-Region action if the pipeline is in us-east-1.

    • Input artifacts: Select BuildArtifact, which is the output of the Jenkins build stage
    • ProjectId: This is the Device Farm project ID you noted earlier
    • DevicePoolArn: This is the Device Farm ARN you noted earlier
    • AppType: Enter iOS
    • App: This is the file that contains the app to test; the filename of your generated IPA is MyNotes.ipa
    • TestType: This is the type of test to run on the application; enter BUILTIN_FUZZ
  12. Leave the other fields blank and choose Done to save the action configuration, then choose Save to save the pipeline changes.
  13. Optionally, you can enable notifications to notify you of changes in the pipeline, such as when the pipeline completes, when a stage or action completes, or when there is a failure. To enable notifications, create a notification rule.
  14. Choose Release change to execute the pipeline, as shown in the following screenshot.

Completed CodePipeline sample with example test failure

Verify the test on Device Farm

From the pipeline execution, you can see there is a failure in your test. Check the test results:

  1. Navigate to the AWS Device Farm Console.
  2. Select the project you created.
  3. All the tests that have run are listed, as seen in the following screenshot.
    Failure on AWS Device Farm

  4. Choose the test to see more details.

You can see the source of the failure. To investigate why the test failed, choose each device. The devices on which the app was tested are listed, along with the OS version and the total duration of the test for each device. You can see screenshots of the test by switching to the Screenshots tab. More information can also be seen by choosing a device.

Troubleshoot the failure by examining the result in each of the devices on which the test was run to determine what changes are needed in the application. After making the needed changes in the application source code, push the changes to the remote repository (in this case, a CodeCommit repository) to trigger the pipeline again. The following screenshot shows a successful pipeline execution:

Successfully executed CodePipeline

The following screenshot shows a successful test:

Successfully executed tests on Device Farm

Cleanup

Clean up the following AWS resources:

  • The CodeCommit repository
  • The CodePipeline pipeline
  • The Device Farm project
  • The notification rule, if you created one

Conclusion

This post showed you how to integrate CodePipeline with an iOS Jenkins build server and leverage the integration of CodePipeline and Device Farm to automatically build and test iOS apps on real-world devices. By taking this approach to testing iOS apps, you can visualize how an app will behave on actual devices and with the automated CI/CD pipeline, and quickly test apps as they are developed.

Use AWS Firewall Manager and VPC security groups to protect your applications hosted on EC2 instances

Post Syndicated from Kaustubh Phatak original https://aws.amazon.com/blogs/security/use-aws-firewall-manager-vpc-security-groups-to-protect-applications-hosted-on-ec2-instances/

You can use AWS Firewall Manager to centrally configure and manage Amazon Virtual Private Cloud (Amazon VPC) security groups across all your AWS accounts. This post will take you through the step-by-step instructions to apply common security group rules, audit your security groups, and detect unused and redundant rules in your security groups across your AWS environment.

In this post, I’ll show you how to create and enforce a master set of security group rules by using a common security group policy, while still allowing developers to deploy and manage application-specific security group rules. In the example below, you’ll create security group rules that allow SSH access (port 22) only from the public IP address of a bastion host, and set a policy that prohibits any security group rules that allow SSH access from everywhere.

When you use Firewall Manager to centrally apply a common security group, you can do things such as ensure that all Application Load Balancers only talk to Amazon CloudFront, that the Secure Shell (SSH) protocol is only allowed from specific IP ranges, or that system administrators have access to a central database.

In many organizations, developers write their own security group rules for their applications. However, if you’re a security administrator, you want to audit the security group rules so you’ll know when a security group is misconfigured. Using audit security group policy, you can set guardrails on which security group rules can or cannot be created across your organization. For example, you could only allow security group rules on ports 10-1000, or specify that you do not allow security group rules on port 23.

As an administrator, you also want to simplify operations by detecting unused and redundant security groups across your AWS accounts. You can use a managed audit policy to help identify unused and redundant security groups.

If you haven’t used these services before, here’s a quick overview:

  1. AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations, using AWS Config in the background. Using AWS Firewall Manager, you can easily roll out AWS WAF rules, create AWS Shield Advanced protections, and enable security groups for your Amazon Elastic Compute Cloud (Amazon EC2) and elastic network interface resource types in Amazon VPCs.
  2. VPC security groups act as a virtual, stateful firewall for your Amazon Elastic Compute Cloud (Amazon EC2) instance to control inbound and outbound traffic. You can specify separate rules for inbound and outbound traffic, and instances associated with a security group can’t talk to each other unless you add rules allowing it.

After you put the master set of security group rules in place, you’ll get notification of all non-compliant changes made by the developers. You can take remediation action if necessary using an audit security group policy. In this post, you’ll also set up a usage security group policy, so that you can flag unused security groups and merge redundant security groups for simpler administration.

Prerequisites

AWS Firewall Manager has the following prerequisites:

  • AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an Organization and Enabling All Features in Your Organization.
  • An administrator AWS Account: You must designate one of the AWS accounts in your organization as the administrator for Firewall Manager. This gives the account permission to deploy AWS WAF rules across the organization.
  • AWS Config: You must enable AWS Config for all of the accounts in your organization, so that AWS Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, you can use the Enable AWS Config template on the StackSets Sample Templates page. For more information, see Getting Started with AWS Config.

Note: You’ll be charged $100 per policy per month. In the solution in this post, you’ll create three policies. In addition, AWS Config charges also apply. For more information, see AWS Firewall Manager pricing and AWS Config pricing.

Overview

The diagram below illustrates the following steps:

  1. Complete the prerequisites that were outlined in the prerequisites section above.
  2. Create a primary security group under AWS Firewall Manager. This is a VPC security group that gets replicated as a new security group to every resource within the policy scope.
  3. In AWS Firewall Manager, create policies that can be applied to individual application security groups by mapping them to specific application name/value tags. The policies you create will result in the generation of individual new security groups.
  4. Application developers can build additional app-specific security group rules created in the previous step.

 

Figure 1: Overview of solution

Figure 1: Overview of solution

Create a common security group policy

You’ll begin by creating a common security group policy to push primary security group rules across all accounts.

  1. Sign in to the AWS Management Console using the AWS Firewall Manager administrator account that you set up in the prerequisites, and then open the Firewall Manager console.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  3. Using the Filter menu, select the AWS Region where your application is hosted and choose Create policy. In my example, I choose US West (Oregon).
  4. For Policy type, choose Security group.
  5. For Security group policy type, choose Common security groups, then choose Next.
  6. Enter a policy name. In my example, I’ve named my policy Test_Common_Policy.
  7. Policy rules allow you to choose how the security groups in this policy are applied and maintained. For this tutorial, choose Apply the primary security groups to every resource within the policy scope and leave the other options unchecked. You can also choose to apply only one of these policies. Note that if you choose both check boxes, a local user won’t be able to modify the security group or add additional security groups.
  8. Choose Add primary security group to see all security groups in your account in your specified AWS Region. Select any one of your existing security groups, or create a new security group.
  9. (Optional) If you choose to create a new security group, you’ll be taken to the VPC dashboard where you can create your primary security group by following the Creating a Security Group documentation. In the primary security group, add the following:
    1. For Ingress Rules, choose Allow access on Port 22 from 203.0.113.1/32.
    2. For Egress Rules, choose Allow all traffic on all ports.
  10. After you select the primary security group, choose Add security group.
  11. For Policy action, for this example, choose Apply policy rules and identify resources that are non-compliant but do not auto remediate. By selecting this option, Firewall Manager will notify you of any non-compliant security groups, but will not auto-remediate. Choose Next.
  12. For Policy scope, select the following:
    1. For AWS accounts included in this policy, choose All accounts under my organization.
    2. For Resource Type to apply this policy, choose EC2 instances.
    3. For Criteria to select the resources to protect, choose Include only resources that have the specified tags.
    4. For Key, enter Env.
    5. For Value, enter Prod.

    Choose Next.

  13. Review the security policy, then choose Create policy.

 

Figure 2: Summary of Common Security Group policy

Figure 2: Summary of Common Security Group policy

The security policy reviews all the EC2 instances in your child accounts in your specified AWS Region and adds the primary security group to the primary network interface of each Amazon EC2 instance. All primary interfaces of Amazon EC2 instances created in the future will also have this primary security group. If developers remove the rules of the primary security group, you’re notified when Firewall Manager marks the resource as non-compliant. You can then remediate by changing the security policy action to Apply policy rules and auto remediate any non-compliant resources, which removes the non-compliant security group rules. Alternatively, you can check the non-compliant resources, then log in to the AWS account and take remediation action manually.

Create an audit security group policy

Now, you’ll create an audit security group policy to enforce the guardrails. You’ll create a security group rule that allows port 22 access from an allowed IP subnet of 203.0.113.1/32 according to the security team’s recommendations.

  1. In the AWS Management Console, select AWS WAF and AWS Shield.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  3. In the Filter, select the AWS Region where your application is hosted and choose Create policy. In my example, I will choose US West (Oregon).
  4. For Policy type, choose Security group. For Security group policy type, choose Auditing and enforcement guidelines for security group rules, then choose Next.
  5. Enter a policy name. In my example, I’ve named my policy Test_Audit_Policy.
  6. For Policy rules, select Allow any rules defined in audit security group.
  7. Choose Add audit security group to see all security groups in your account in your specified AWS Region. You can select a security group, or create a new security group.
  8. (Optional) If you choose to create a new security group, you’ll be taken to VPC dashboard where you can create your primary security group by following the Creating a Security Group documentation. In the audit security group, add the following:
    1. For Ingress Rules, choose Allow access on Port 22 from 203.0.113.1/32.
    2. For Egress Rules, choose Allow all traffic on all ports.
  9. After you select the audit security group, choose Add security group.
  10. For Policy action, you can only select Apply policy rules and identify resources that are non-compliant but do not auto remediate. By selecting this option, Firewall Manager will notify you of any non-compliant security groups, but will not auto-remediate. Choose Next.
  11. For Policy scope, select the following:
    1. For AWS accounts included in this policy, choose All accounts under my organization.
    2. For Resource type to apply this policy, choose Security groups.
    3. For Criteria to select the resources to protect, choose Include only resources that have the specified tags.
    4. For Key, enter Env.
    5. For Value, enter Prod.

    Choose Next.

  12. Review the security policy and choose Create policy.

 

Figure 3: Summary of Audit Security Group policy

Figure 3: Summary of Audit Security Group policy

The security policy audits all the security groups in your child accounts in your specified AWS Region and only allows security group ingress rules that permit port 22 access from 203.0.113.1/32. All security groups created in the future are also checked against this restriction. If Firewall Manager detects a security group that allows port 22 access from any source other than 203.0.113.1/32, you’re notified when Firewall Manager marks the resource as non-compliant. You can then remediate by changing the security policy action to Apply policy rules and auto remediate any non-compliant resources, which removes the non-compliant security group rules. Alternatively, you can check the non-compliant resources, then log in to the AWS account and take remediation action manually.

Create a usage security group policy

Lastly, you’ll create a usage security group policy to remove unused security groups, and to merge redundant security groups.

  1. In the AWS Management Console, select AWS WAF and Shield.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies. In the Filter, select the AWS Region where your application is hosted and choose Create policy. In my example, I am choosing US West (Oregon).
  3. For Policy type, choose Security group. For Security group policy type, choose Auditing and cleanup of unused and redundant security groups. Choose Next.
  4. Enter a policy name. In my example, I’ve named my policy Test_Usage_Policy.
  5. For Policy rules, select both the options: Security groups within this policy scope should be used by at least one resource and Security groups within this policy scope should not have similar content.
  6. For Policy action, select Apply policy rules and identify resources that are non-compliant but do not auto remediate. Choose Next.
  7. For Policy scope, select the following:
    1. For AWS accounts included in this policy, choose All accounts under my organization.
    2. For Resource type to apply this policy, choose Security groups.
    3. For Criteria to select the resources to protect, choose Include only resources that have the specified tags.
    4. For Key, enter Env.
    5. For Value, enter Prod.

    Choose Next.

  8. A pop-up warning message will appear. Select Exclude Firewall Manager admin account from the policy scope, so that security groups in the administrator account are not affected.
  9. Review the security policy and choose Create policy.

 

Figure 4: Summary of Usage Security Group policy

Figure 4: Summary of Usage Security Group policy

The security policy reviews all the security groups in your child accounts in your specified AWS Region and checks for security groups that are not associated with any resource, as well as for security groups with duplicate rules. All security groups created in the future are also checked. If Firewall Manager detects a security group that is not associated with any resource or that has overlapping rules, you’re notified when Firewall Manager marks the resource as non-compliant. You can then remediate by changing the security policy action to Apply policy rules and auto remediate any non-compliant resources, which removes unused security groups and merges redundant ones into a single security group. Alternatively, you can check the non-compliant resources, then log in to the AWS account and take remediation action manually.
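
Although the policies in this post are created through the console, you can check on them afterwards from the command line. The following is a minimal sketch using the Firewall Manager CLI in the administrator account; the policy ID is a placeholder for a value returned by the first command.

# List the Firewall Manager policies in the Region where they were created
aws fms list-policies --region us-west-2

# Check per-account compliance for one policy
aws fms list-compliance-status --policy-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 --region us-west-2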

Conclusion

In this post, you learned how you can create AWS Firewall Manager rules using the console. Using both VPC security groups and AWS Firewall Manager, you created a deployment strategy that enables the developers in your organization to maintain a security mindset and begin coding security group rules, while at the same time ensuring that all applications are still protected by a set of security group rules defined by your organization’s security team. In addition, you have reduced the likelihood of misconfigured or overly permissive security groups, as well as the operational burden, by simplifying the security groups created in all your member accounts.

For further reading, see AWS Firewall Manager Update – Support for VPC Security Groups.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kaustubh Phatak

Kaustubh is a Cloud Support Engineer II at AWS. On a daily basis, he provides solutions for customers’ cloud architecture questions related to Networking, Security, and DevOps domain. Outside of the office, Kaustubh likes to play cricket, ping-pong, and soccer. He is also an avid console gamer.

Using AWS CodeBuild to execute administrative tasks

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/devops/using-aws-codebuild-to-execute-administrative-tasks/

This article is a guest post from AWS Serverless Hero Gojko Adzic.

At MindMup, we started using AWS CodeBuild to quickly lift and shift support tasks to the cloud. MindMup is a collaborative mind-mapping tool, used by millions of teachers and students to collaborate on assignments, structure ideas, and organize and navigate complex information. Still, the team behind the product consists of just two people, and we’re both responsible for everything from sales and product management to programming and customer support. One of the key reasons why such a tiny team can support a large group of users is that we tend to automate all recurring tasks in order to free up our time for more productive work.

Administrative support tasks often start as ad-hoc command line scripts, with manual intervention to resolve exceptions. As the scripts stabilize, humans can be less involved, so teams look for ways of scheduling and automating job executions. For infrastructure deployed to AWS, this also means moving away from running scripts from on-premises developers or operations computers to running in the cloud. With utilization-based pricing and on-demand capacity, AWS Lambda and AWS Fargate are the two obvious choices for running such tasks in AWS. There is a third option, often overlooked: CodeBuild. Although CodeBuild is designed for a completely different purpose, it offers some compelling features that make it very easy to set up and run periodic support jobs, especially as a first easy step towards a more systematic solution.

Solution overview

CodeBuild is, as the name suggests, a managed service for executing typical software build jobs. In some ways, such as each job having its own associated IAM permissions, CodeBuild is similar to Lambda and Fargate. One of the fundamental differences between CodeBuild jobs and Lambda functions or Fargate tasks is the location of the executable definition of the job. The executable definition of a Lambda function is in a ZIP archive deployed to Lambda. For Fargate, the executable definition is in a Docker container image, deployed in a task with Amazon ECS or in a Kubernetes pod with Amazon EKS. Both services require an explicit deployment to update the executable definition of a task. For CodeBuild jobs, the executable definition is not deployed to an AWS service. Instead, it is in a source code control system that you can manage locally or using a service such as GitHub or AWS CodeCommit.

Sitting alongside the rest of the source code, each CodeBuild task has an entry-point configuration file, by convention called a buildspec.yml. The buildspec.yml file lists the programming language runtimes required by the job, and the steps to execute before, during and after the build job. For example, the following buildspec.yml sets up a build environment for JavaScript with Node.js 12, installs dependencies, runs tests, and then produces a deployment package using webpack.

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install
  build:
    commands:
      - npm test
      - npm run webpack

Usually, the buildspec.yml file involves some variant of installing dependencies, compiling code and running tests, then packaging and versioning artifacts. But the steps of a buildspec.yml file are actually just shell commands, so CodeBuild doesn’t necessarily need to run tasks related to compiling or packaging. It can execute any sequence of Unix commands, scripts, or binaries. This makes CodeBuild a uniquely compelling choice for the transition from running shell scripts on an operations machine to running a shell script in the cloud.
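
For example, a support-task buildspec might look like the following sketch. The export-signups.sh script and the REPORT_BUCKET environment variable are placeholders for your own task, not part of any real project.

version: 0.2

phases:
  build:
    commands:
      # Run an existing operations script unchanged...
      - ./scripts/export-signups.sh > signups.csv
      # ...then copy the result to S3 with the preinstalled AWS CLI
      - aws s3 cp signups.csv "s3://${REPORT_BUCKET}/reports/signups-$(date +%F).csv"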

Comparing CodeBuild and Lambda for administrative tasks

The major advantage of CodeBuild over Lambda functions for support jobs is that the scripts can be significantly more flexible. Moving from shell scripts to Lambda functions usually means rewriting the task in a language such as JavaScript or Python. You can execute a shell script from a Lambda function when using Amazon Linux 1 instances, or even use a Bash custom runtime, but when using CodeBuild, you can execute the same shell script without changes.

Lambda functions usually run only in a single language. Support tasks often perform a chain of actions, and different steps might require utilities written in different languages. Running such varied tasks with a Lambda function would require constructing a custom Lambda runtime, or splitting steps into multiple functions with different runtimes, and then somehow coordinating and passing data between them. AWS Step Functions can be used to coordinate the workflow, but most support tasks are a sequence of steps, to be executed in order if the previous one succeeds. With CodeBuild, you can configure the task to include all required runtimes.

Support tasks often need to transform the outputs of one tool and pass it into a different tool. For example, select rows from a database containing expired accounts, then filter out only the user emails, separate the data with commas, and send to an automated mailer with a template. Tools such as grep, awk, and sed become invaluable for such transformations. However, they aren’t available on new Lambda runtimes.
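
As a rough sketch of that kind of chain, assuming a MySQL-compatible database and a hypothetical send-expiry-mail.sh script that accepts a comma-separated list of addresses:

# Select expired accounts, keep unique emails, join them with commas, hand off to the mailer
mysql --batch --skip-column-names \
  -e "SELECT email FROM accounts WHERE expires_at < NOW()" records_db \
  | sort -u \
  | paste -sd, - \
  | xargs ./send-expiry-mail.sh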

Lambda runtimes based on Amazon Linux 2 bundle only the absolutely minimal operating system packages. Even the basic command line Linux utilities, such as which, are not packaged with the recent Lambda runtimes. On the other hand, CodeBuild runs tasks in a full-blown Linux environment. Executing support tasks through CodeBuild means that you can pipe results into all the standard Unix tools, without having to use half-baked replacements written in a scripting language.

For applications running in the AWS ecosystem, support tasks often need to communicate with AWS services or resources. Standard CodeBuild environments also come with the aws command line tools, so you can use them without any additional setup. This becomes especially important for moving data from and to Amazon S3, where command line tools have operations for batch uploads or downloads or recursive directory synchronization. Those operations are not directly available through the programming language SDK libraries.
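
For example, a single preinstalled CLI command can recursively synchronize a working directory with an S3 prefix; the bucket name here is a placeholder.

# Upload everything under ./exports, deleting remote files that no longer exist locally
aws s3 sync ./exports s3://my-support-bucket/exports/ --delete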

It is, of course, possible to install additional binaries to Lambda functions by building them for the right Linux environment. Because the standard shared system libraries are also not in the recent Lambda runtimes, compiling additional tools is akin to building a Linux distribution from scratch. With CodeBuild, most standard tools are included already, and you can add additional tools to the system by using an operating system package manager (apt-get or yum).
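
As a minimal sketch, installing extra tools in the install phase of the buildspec might look like this on the standard Ubuntu-based image (substitute yum on the Amazon Linux image):

phases:
  install:
    commands:
      - apt-get update -y
      - apt-get install -y jq postgresql-client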

CodeBuild execution environments can also be more flexible in terms of execution time and performance constraints. Lambda tasks are currently limited to fifteen minutes. The only performance setting you can influence is the memory size, which proportionally impacts the CPU power. The highest setting is currently 3GB memory, which assigns two virtual cores. CodeBuild allows you to configure tasks which can run for up to 8 hours. You can also explicitly select a compute type, including using GPU processors and going all the way up to 255 GB memory or 72 virtual CPU cores. This makes CodeBuild an interesting choice for tasks that need to potentially run longer than fifteen minutes, that are very computationally intensive, or that need a lot of working memory.

On the other hand, compared to Lambda functions, CodeBuild jobs start significantly slower and running them in parallel is not as easy or convenient. For example, by default you can only run up to 60 CodeBuild tasks in parallel, but this is a soft limit that you can increase. However, support tasks are mostly batch jobs by nature, so saving a few seconds or being able to execute thousands of such tasks in parallel is not usually important.

Comparing CodeBuild or Amazon EC2/Fargate for administrative tasks

Most of the limitations of Lambda functions for admin jobs could be solved by running a virtual machine through Amazon EC2. In fact, running tasks on Amazon EC2 was the usual way of lifting support tasks from the operations computers and moving them into the cloud until Lambda became available. However, due to how Amazon EC2 instances are billed, teams often bundled all the operations tasks on a single Amazon EC2 instance. That instance needed a superset of all the security privileges required by the various tasks, opening potential security risks. That’s where Fargate can help. Fargate runs container-based tasks on demand, offering utilization-based billing and removing many restrictions of Lambda, such as the 15-minute runtime limit and the reduced operating system environment, while also allowing you to choose execution environments more flexibly.

This means that, compared to Fargate tasks, CodeBuild execution is more or less comparable in terms of what you can run and how much power you can assign to your tasks. Both can use a custom Docker container, and both run a full-blown operating system with all the standard binaries. They are also similar in terms of start-up time and parallelization. However, setting up a CodeBuild job and updating it later is much easier than with Fargate using tasks or pods.

With Fargate, you need to provide a custom Docker container with the right entry point. CodeBuild lets you use custom containers or choose standard images provided by AWS, including Ubuntu or AWS Linux instances. Likewise, configuring a Fargate task involves deploying in an Amazon VPC, and if the task needs to access other AWS services, setting up a NAT gateway. CodeBuild tasks have network access by default, and can be deployed in a VPC if required.

Updating support scripts can also be easier with CodeBuild than with Fargate. Deploying a new version of a task into Fargate involves building a new Docker container and uploading it to a container manager such as Amazon ECS or Amazon EKS. Deploying a new version of a CodeBuild job involves committing to the version control system, without the need to set up a CI/CD pipeline. This makes CodeBuild a compelling way of setting up support tasks, especially for larger organizations with strict access rules. Support people can update tasks definitions by having access to the source code control system, without the need to get access to production resources on AWS.

Fargate environments are transient, similar to Lambda functions. If you want to preserve some files between job runs (for example, compiled task binaries or installed dependencies), you would have to manage that manually with Fargate. CodeBuild supports artifact caching out of the box, so it’s significantly easier to preserve data files or installed dependencies between runs.
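
A minimal sketch of declaring cached paths in the buildspec, assuming the CodeBuild project itself is configured with an S3 or local cache rather than NO_CACHE:

cache:
  paths:
    # Keep installed Node.js dependencies between runs
    - 'node_modules/**/*'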

Potential downsides

Although taking supporting tasks directly from the source code repository is one of the biggest advantages of CodeBuild over Fargate or Lambda, it can also be a major drawback. Ensuring that the scripts are always in a stable condition requires discipline regarding committing to the trunk. Without such discipline, untested or unstable code might be used for admin tasks by mistake. A potential workaround for teams without good trunk commit discipline would be to use a specific branch for CodeBuild tasks, and then merge code into that branch once it is ready to be released.

Using support scripts directly from a source code repository makes it more complicated to synchronize versions with other deployed software. If you need the support scripts to track the exact version of code that was deployed to other services, it’s probably safer and easier to use Lambda functions or Fargate containers with an explicit deployment step.

Executing support tasks through CodeBuild

CodeBuild jobs take a bit more setup than Lambda functions, but significantly less than Fargate tasks. Below is an example of a CodeBuild job set up through AWS CloudFormation.

Architecture diagram for the CodeBuild being used for administrative tasks

Here are a few things to note:

  • You can add the required IAM permissions for the task into the Policies section of the CodeBuildRole resource.
  • The Environment section of the CodeBuildProject resource is where you can define the container image, choose the virtual hardware or set up environment variables to configure the task.
  • Environment variables are directly available for the shell commands listed in the buildspec.yml file, so this trick allows you to easily parameterize jobs to use resources from the same AWS CloudFormation template.
  • The Location and BuildSpec properties in the Source section define the source code repository, and the path of the buildspec.yml file within the repository.
Resources:
  CodeBuildRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "codebuild.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: AllowLogs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:*'
                Resource: '*'

  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref JobName
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: NO_ARTIFACTS
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED
      Cache:
        Type: NO_CACHE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:3.0
        EnvironmentVariables:
          - Name: SYSTEM_BUCKET 
            Value: !Ref SystemBucketName
      Source:
        Type: GITHUB
        Location: !Ref GithubRepository 
        GitCloneDepth: 1
        BuildSpec: !Ref BuildSpecPath 
        ReportBuildStatus: False
        InsecureSsl: False
      TimeoutInMinutes: !Ref TimeoutInMinutes
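
For completeness, here is a sketch of the buildspec.yml that BuildSpecPath might point at. The archive-defunct-accounts.sh script is a placeholder for your own task; the SYSTEM_BUCKET variable comes from the EnvironmentVariables section of the template above.

version: 0.2

phases:
  build:
    commands:
      - ./scripts/archive-defunct-accounts.sh
      # SYSTEM_BUCKET is injected by the CodeBuild project definition
      - aws s3 cp archive-report.txt "s3://${SYSTEM_BUCKET}/reports/$(date +%F).txt"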

CodeBuild jobs usually run after changes to source code files. Support tasks usually need to run on a periodic schedule. The previous snippet did not define the Triggers property for the CodeBuild job, so it will not track source code changes or run automatically. Instead, you can set up an Amazon CloudWatch Event rule (or optionally use Amazon EventBridge, that provides more sophisticated rules) that will periodically trigger the CodeBuild job. Here is how to do that with AWS CloudFormation:

  RunCodeBuildJobRole:
    Condition: ScheduleRuns
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: StartTask 
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'codebuild:StartBuild'
                Resource:
                  - !GetAtt CodeBuildProject.Arn

  RunCodeBuildJobRoleRule:
    Condition: ScheduleRuns
    Type: AWS::Events::Rule
    Properties:
      Name: !Sub '${JobName}-scheduler'
      Description: Periodically runs codebuild job to archive defunct accounts
      ScheduleExpression: !Ref ScheduleRate
      State: ENABLED
      Targets:
        - Arn: !GetAtt CodeBuildProject.Arn
          Id: CodeBuildProject
          RoleArn: !GetAtt RunCodeBuildJobRole.Arn

Note the ScheduleExpression property of the RunCodeBuildJobRoleRule resource. You can use any supported CloudWatch schedule expression there to set up when or how frequently your job runs.
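
For example, the ScheduleRate parameter could be set to any of the following (the first two are rate expressions, the third is a six-field CloudWatch cron expression):

rate(1 hour)         # run every hour
rate(1 day)          # run once a day
cron(0 2 * * ? *)    # run at 02:00 UTC every day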

Observability and audit logs

If a support job fails for any reason, people need to know. Luckily, CodeBuild already integrates nicely with CloudWatch to report job statuses, so you can set up another CloudWatch Event rule that tracks failures and alerts someone about it. To make notifications flexible, you can send them to an Amazon SNS topic. You can then subscribe for email notifications or forward those alerts somewhere else easily. The following wires up notifications with an AWS CloudFormation template.

  SnsPublishRole:
    Condition: CreateSNSNotifications
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: AllowLogs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'SNS:Publish'
                Resource:
                  - !Ref SnsTopicArn

  CodeBuildNotificationRule:
    Condition: CreateSNSNotifications
    Type: AWS::Events::Rule
    Properties:
      Name: !Sub '${JobName}-fail-notification'
      Description: Notify about codebuild project failures
      RoleArn: !GetAtt SnsPublishRole.Arn
      EventPattern:
        source:
          - "aws.codebuild"
        detail-type:
          - "CodeBuild Build State Change"
        detail:
          build-status:
            - "FAILED"
            - "STOPPED"
          project-name:
            - !Ref CodeBuildProject
      State: ENABLED
      Targets:
        - Arn: !Ref SnsTopicArn
          Id: NotificationTopic

Another option for keeping the execution of your tasks under control is to generate a report using the test report functionality introduced a few months ago, and to specify in the buildspec.yml file the location of the files that store the results you want to include in your report.

Testing administrative tasks

Note the build-status list inside the CodeBuildNotificationRule resource. This defines a list of statuses about which you want to publish alerts. In the previous snippet, the list does not include successful runs. That’s because it’s usually not necessary to take any action when a support job runs successfully. However, during initial testing you may want to add IN_PROGRESS (notify when a task starts) and SUCCEEDED (notify when the job ends without an error).

Finally, one of the biggest challenges when moving scripts from an operations machine to running in CodeBuild is to create the right IAM policies. Command-line users on operations machines usually have a wide set of privileges, and identifying the minimum required for a specific job usually involves starting small, then iterating over failed attempts and opening up required operations. Running that process directly through CodeBuild can be quite slow. Instead, I suggest setting up a separate IAM policy for the job, then assigning it both to the role for the CodeBuild task, and to a command-line role or a command-line user. You can then iterate quickly directly on the command line and identify all required IAM operations, then remove the additional command-line user when done.
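
One way to set that up is a single managed policy attached to both the CodeBuild role and a temporary command-line user, as in the sketch below; the user name, actions, and bucket are placeholders you would adjust while iterating.

  SupportTaskPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Roles:
        - !Ref CodeBuildRole
      Users:
        - support-task-dev   # temporary IAM user, removed once permissions are settled
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - 's3:GetObject'
              - 's3:PutObject'
            Resource: 'arn:aws:s3:::my-support-bucket/*'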

Conclusion

The next time you need to move a support task to the cloud, and you need a rich execution environment, consider using CodeBuild, at least as the initial step towards a more systematic solution. It will allow you to quickly get a script up and running with all the benefits of IAM isolation, scheduled execution, and reliable notifications.

Gojko is author of the Running Serverless book and interactive course. He is currently working on Video Puppet, a tool for editing videos as easily as editing text. You can reach out to him on Twitter.

Providing self-service repositories to end users to connect to AWS Lambda backed services

Post Syndicated from Richard Rustean original https://aws.amazon.com/blogs/devops/providing-self-service-repositories-to-end-users-to-connect-to-aws-lambda-backed-services/

Offering products to your consumers in AWS is a great way to accelerate adoption, and offering these products through AWS Service Catalog helps to simplify and streamline the process. This blog post describes how you can offer multiple consumers access to your backend products in AWS by using some simple AWS tools and services.

In this case, the backend product uploads newly created or modified objects from an AWS CodeCommit repository to a repository-specific path in an Amazon S3 bucket via some logic in an AWS Lambda function. This method works equally well with any other backend AWS service and is particularly useful for CI/CD or machine learning pipelines in which some logic is required before the pipeline processes the files. In a recent project, I used this method to push machine learning models to dynamically created Amazon EMR clusters.

Overview

The architecture behind the customer-facing portion of this solution is relatively simple, using only three AWS services. As discussed in the summary, the backend architecture uses a single Lambda function to push objects to Amazon S3. In reality, this could be a much larger and more complex solution.

Architecture diagram showing that we only need three AWS Services for this example

Getting started

This example deploys all components of this infrastructure as code using AWS CloudFormation. The AWS CloudFormation templates are deployed using the AWS CLI. You can deploy them using the AWS Management Console if you prefer, but that is not covered in this blog post.

Prerequisites

This post assumes that you have an AWS account in place with permissions to allow the following:

  • Access to create AWS Lambda functions
  • Access to create AWS CodeCommit repositories and push to them
  • Access to create AWS Service Catalog products
  • Access to create and subscribe to Amazon SNS topics
  • The AWS CLI installed and configured with the above access to your AWS account
  • An Amazon S3 bucket that you have already created

You should download the AWS CloudFormation templates for this project, unzip them, and store them in a local folder.

Deploying the backend service

In the source code for this blog post, find an AWS CloudFormation template called backend-function.yml. This is the backend service with which you interact. When you create your repository through AWS Service Catalog, you specify this backend service as an input, which allows your single AWS Service Catalog product to serve many different backend products.

  1. Download the backend AWS CloudFormation templates as discussed in the Prerequisites section, unzip them, and place them in a folder on your local computer
  2. Navigate to that folder and run the following AWS CLI command. This command assumes that you act on commits to the master branch of your repository. If this is not the case, change the codeCommitBranch value to the branch on which you are acting. You should also replace the value <myS3Bucket> with the correct name for your Amazon S3 bucket.
    aws cloudformation create-stack --stack-name myBackendFunction --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM CAPABILITY_IAM --template-body file://backend-function.yml --parameters ParameterKey=codeCommitBranch,ParameterValue=master ParameterKey=s3BucketName,ParameterValue=<myS3Bucket>
    This returns a stack ID such as the following:
    {
    "StackId": "arn:aws:cloudformation:eu-west-1:737661087350:stack/myBackendFunction/c0d04af0-f98a-11e9-8f65-06c34fd08df4"
    }
  3. You can check on the progress of the AWS CloudFormation stack creation by running the following command and looking at the StackStatus.
    aws cloudformation describe-stacks --stack-name "<StackId from the above command>"
    Once your status is set to CREATE_COMPLETE, you can continue to the next step.
  4.  Looking at the output from the aws cloudformation describe-stacks command, you should also note down the ExportName in the Outputs section. This is the value that you use when provisioning the CodeCommit repositories so that they connect to this specific backend product. In this case, the name is myBackendFunction-BackendLambdaCode.
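
If you want to pull just the Outputs section rather than reading the full stack description, a quick illustrative query looks like this:

aws cloudformation describe-stacks --stack-name myBackendFunction --query "Stacks[0].Outputs" --output table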

Deploying the Service Catalog product

In the folder that you downloaded and unzipped the project files into, find an AWS CloudFormation template called service-catalog-product.yml. This is the code that creates the service catalog product for your consumers and contains the CodeCommit repository that they use. It does this by calling another AWS CloudFormation template that you upload to your Amazon S3 bucket.

  1. In the folder into which you downloaded and unzipped the project files, find an AWS CloudFormation template called create-backend-linked-repository.yml. You need to upload this to the Amazon S3 bucket you created. In practice, this would live in a secured bucket owned by your infrastructure team, but in this example, place it in the same bucket to which your backend function is writing. Upload it using the following AWS CLI command, where <myS3Bucket> is the name of your Amazon S3 bucket.
    aws s3 cp create-backend-linked-repository.yml s3://<myS3Bucket>
  2.  In the folder into which you downloaded and unzipped the project files, find the file named service-catalog-product.yml.
  3. Navigate to the local folder with the files you downloaded and run the following AWS CLI command. You should replace the value <myS3Bucket> with the correct name for your Amazon S3 bucket, and replace the value <permissionsArn> with the full ARN of a user, group, or role that needs to be able to deploy the repositories from the AWS Service Catalog.
    aws cloudformation create-stack --stack-name myServicCatalogProduct --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM CAPABILITY_IAM --template-body file://service-catalog-product.yml --parameters ParameterKey=s3BucketName,ParameterValue=<myS3Bucket> ParameterKey=permissionsArn,ParameterValue=<permissionsArn>
    This returns a stack ID such as the following:
    {
    "StackId": "arn:aws:cloudformation:eu-west-1:737661087350:stack/myServicCatalogProduct/dcb48f80-f988-11e9-8199-0637bdb794d0"
    }
  4.  You can check on the progress of the AWS CloudFormation stack creation by running the following command and looking at the StackStatus:
    aws cloudformation describe-stacks --stack-name "<StackId from the above command>"

Once your status is set to CREATE_COMPLETE, you can continue to the next step.

Deploying the AWS CodeCommit repository as a user

Now that you have deployed the infrastructure around this AWS Service Catalog product, you can deploy the actual repository just as a user would. You do this from the AWS Service Catalog page in the AWS console.

  1. Open the AWS Service Catalog page and navigate to the product lists. You should see the product you just created, called CodeCommit Repository for Demo. Choose the product name, and then choose Launch Product.
  2. Give the product a name and choose Next.
  3. Enter the details into the Parameters page. You can leave the default values in there for this example or change the values to something more meaningful. The parameter for backendFunction should be the name of the backend function. This is the ExportName that you noted down in Step 4 in the Deploying the backend service section of this blog (in this case it is myBackendFunction-BackendLambdaCode).
  4. Enter any tags that you want to use and then choose Next.
  5. Leave the checkbox unselected in the Notifications section and choose Next.
  6.  Choose Launch to create your new repository.

Uploading content to the AWS CodeCommit Repository

In the AWS CodeCommit console, note that a new repository has been created. You can now choose the Clone URL links (either HTTPS or SSH) and connect from your favorite Git client, as shown in the following screenshot.

View of the CodeCommit Repository that was created in the previous step

If you prefer, you can also use the AWS CodeCommit user interface to add and update your files, as shown in the following screenshot.

Adding files directly to the Repository using the AWS CodeCommit UI

Once you commit to the master branch, you can see your files in the Amazon S3 bucket you referenced for this project, which validates that your integration has worked.
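
If you prefer to check from the command line, a quick illustrative listing looks like the following; the path assumes the backend function writes objects under a prefix named after the repository, as described earlier:

aws s3 ls s3://<myS3Bucket>/<repositoryName>/ --recursive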

Cleaning up your environment

There are three steps to cleaning up your environment after deploying this infrastructure. First, remove any AWS CodeCommit repositories that you provisioned using AWS Service Catalog; then remove the infrastructure AWS CloudFormation stacks that you deployed; and finally, remove from Amazon S3 any data that was pushed there from your AWS CodeCommit repository.

Since the end user created the AWS CodeCommit repository via AWS Service Catalog, they should remove these repositories in the same way.

  1. Open the AWS Service Catalog page and navigate to the Provisioned products list. You should see the repository that you created earlier. Choose the three dots to the left of the product and select Terminate provisioned product.
  2. Choose Terminate in the warning window that appears.
  3. After a few minutes, choose the refresh button; the provisioned product disappears.

Screenshot showing how to terminate an AWS Service Catalog provisioned product

Now that we have cleaned up our repositories, we need to remove the AWS CloudFormation stacks that contain all of the logic. Since we deployed these using the AWS CLI, we will remove them in the same way.

  1. You should first remove the Service Catalog stack by running the command:
    aws cloudformation delete-stack --stack-name myServicCatalogProduct
  2. You can check on the progress of the AWS CloudFormation stack deletion by running the following command and looking at the StackStatus:
    aws cloudformation list-stacks
    When this stack shows a StackStatus of DELETE_COMPLETE then it has been successfully removed and you can move onto the next step.
  3. Next you need to remove the backend stack. You can do this by running the following command:
    aws cloudformation delete-stack --stack-name myBackendFunction
  4. You can check on the progress of the AWS CloudFormation stack deletion by running the following command and looking at the StackStatus:
    aws cloudformation list-stacks
    When this stack shows a StackStatus of DELETE_COMPLETE then it has been successfully removed and you can move onto the next step.

Finally, remove any unwanted test data from the Amazon S3 bucket that you chose as a target for your repository. All objects are stored in a folder with the same name as the repository, and this whole folder can now be removed. Ensure that any data being removed is no longer required before deleting it.
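
A minimal sketch of that cleanup from the AWS CLI, assuming the folder is named after your repository as described above:

# Remove the repository's prefix and everything under it (review the contents before deleting)
aws s3 rm s3://<myS3Bucket>/<repositoryName>/ --recursive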

Conclusion

In this blog post, you used AWS CloudFormation and the AWS CLI to deploy an AWS Service Catalog product and an associated backend Lambda function that moves files from a CodeCommit repository to an Amazon S3 bucket. As previously discussed, this is a simple use case for this type of infrastructure. By changing the Lambda function to match your requirements, you can use the same infrastructure for practically anything.

 

Deploying an ASP.NET Core web application to Amazon ECS using an Azure DevOps pipeline

Post Syndicated from John Formento original https://aws.amazon.com/blogs/devops/deploying-a-asp-net-core-web-application-to-amazon-ecs-using-an-azure-devops-pipeline/

For .NET developers, leveraging Team Foundation Server (TFS) has been the cornerstone for CI/CD over the years. As more and more .NET developers start to deploy onto AWS, they have been asking questions about using the same tools to deploy to the AWS cloud. By configuring a pipeline in Azure DevOps to deploy to the AWS cloud, you can easily use familiar Microsoft development tools to build great applications.

Solution overview

This blog post demonstrates how to create a simple Azure DevOps project, repository, and pipeline to deploy an ASP.NET Core web application to Amazon ECS using Azure DevOps. The following screenshot shows a high-level architecture diagram of the pipeline:

 

Solution Architecture Diagram

In this example, you perform the following steps:

  1. Create an Azure DevOps Project, clone project repo, and push ASP.NET Core web application.
  2. Create a pipeline in Azure DevOps
  3. Build an Amazon ECS Cluster, Task and Service.
  4. Kick off deployment of the ASP.NET Core web application using the newly created Azure DevOps pipeline.

 

Prerequisites

Ensure you have the following prerequisites set up:

  • An Amazon ECR repository
  • An IAM user with permissions for Amazon ECR and Amazon ECS (the user will need an access key and secret access key)

 

Create an Azure DevOps Project, clone project repo, and push ASP.NET Core web application

Follow these steps to deploy a .NET Core app onto your Amazon ECS cluster using the Azure DevOps (ADO) repository and pipeline:

 

  1. Log in to dev.azure.com and navigate to the marketplace.
  2. Go to Visual Studio, search for “AWS”, and add the AWS Tools for Microsoft Visual Studio Team Services.
  3. Create a project in ADO: Provide a project name and choose Create.
  4. On the Project Summary page, choose Project Settings.
  5. In the Project Settings pane, navigate to the Service Connections page.
  6. Choose Create service connection, select AWS, and choose Next.
  7. Input an Access Key ID and Secret Access Key. (You’ll need an IAM user with permissions for Amazon ECR and Amazon ECS in order to deploy via the Azure DevOps pipeline.) Choose Save.
  8. Choose Repos in the left pane, then Clone in Visual Studio under Clone to your computer.
  9. Create an ASP.NET Core web application in Visual Studio, set the location to the locally cloned repository, and check Enable Docker support.
  10. Once you’ve created the new project, perform an initial commit and push to the repository in Azure DevOps.

 

Creating a pipeline in Azure DevOps

Now that you have synced the repository, create a pipeline in Azure DevOps.

  1. Go to the pipeline page within Azure DevOps and choose Create Pipeline.
  2. Choose Use the classic editor.
    Pipeline configuration with repository
  3. Select Azure Repos Git for the location of your code and select the repository you created earlier.
  4. On the Choose a Template page, select Docker Container and choose Apply.
  5. Remove the Push an image step.
  6. Add an Amazon ECR Push task by choosing the + symbol next to Agent job 1. You can search for “AWS” in the Add tasks pane to filter for all AWS tasks.

 

Now, configure each task:

  1. Choose the Build an image task and ensure that the action is set to Build an image. Additionally, you can modify the Image Name to your standards.
    Pipeline configuration page Azure DevOps
  2. Choose the Push Image task and provide the following
    • Enter a name under Display Name.
    • Select the AWS Credentials that you created in Service Connections.
    • Select the AWS Region.
    • Provide the source image name, which you can find in the setting for the Build an image task.
    • Enter the name of the repository in Amazon ECR to which the image is pushed.
    Pipeline configuration page Azure DevOps
  3. Choose Save and queue.

Build Amazon ECS Cluster, Task, and Service

The goal here is to test the pipeline up to building the Docker image and to ensure it's pushed to Amazon ECR. Once the Docker image is in Amazon ECR, you can create the Amazon ECS cluster, task definition, and service leveraging the newly created Docker image.

  1. Create an Amazon ECS cluster.
  2. Create an Amazon ECS task definition. When you create the task definition and configure the container, use the Amazon ECR URI for the Docker image that was just pushed to Amazon ECR.
  3. Create an Amazon ECS service.

Go back and edit the pipeline:

  1. Add the last step by choosing the + symbol next to Agent job 1.
  2. Search for “AWS CLI” in the search bar and add the task.
  3. Choose AWS CLI and configure the task.
  4. Enter a name under Display Name, such as Update ECS Service.
  5. Select the AWS Credentials that you created in Service Connections.
  6. Select the AWS Region.
  7. Input the following command, which updates the Amazon ECS service after a new image is pushed to Amazon ECR. Replace <clustername> and <servicename> with your Amazon ECS cluster and service names (the fully assembled command is shown after this list).
    • Command: ecs
    • Subcommand: update-service
    • Options and parameters: --cluster <clustername> --service <servicename> --force-new-deployment
  8. Now choose the Triggers tab and select Enable continuous integration with the repository you created.
  9. Choose Save and queue.
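
For reference, the three fields in step 7 correspond to a single AWS CLI invocation; the assembled equivalent, with placeholder names, is simply:

aws ecs update-service --cluster <clustername> --service <servicename> --force-new-deployment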

 

At this point, your build pipeline kicks off and builds a Docker image from the source code in the repository you created, pushes the image to Amazon ECR, and updates the Amazon ECS service with the new image.

You can verify by viewing the build. Choose Pipelines in Azure DevOps, select the entry for the latest run, and then choose the icon under the status column. Once it successfully completes, you can log in to the AWS console and view the updated image in Amazon ECR and the updated service in Amazon ECS.
Pipeline status page Azure DevOps

Every time you commit and push your code through Visual Studio, this pipeline kicks off and builds and deploys your application to Amazon ECS.

Cleanup

At the end of this example, once you’ve completed all steps and are finished testing, follow these steps to disable or delete resources to avoid incurring costs:

  1. Go to the Amazon ECS console within the AWS Console.
  2. Navigate to the cluster you created, then choose the Tasks tab.
  3. Choose Stop all to turn off the tasks.
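
If you prefer to clean up from the command line, a rough sketch looks like the following; the cluster and service names are placeholders, scaling the service to zero prevents it from relaunching tasks, and the delete commands are optional if you intend to reuse the cluster:

# Scale the service to zero so no new tasks are launched
aws ecs update-service --cluster <clustername> --service <servicename> --desired-count 0

# Optionally remove the service and the cluster entirely
aws ecs delete-service --cluster <clustername> --service <servicename> --force
aws ecs delete-cluster --cluster <clustername>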

Conclusion

This blog post reviewed how to create a CI/CD pipeline in Azure DevOps to deploy a Docker Image to Amazon ECR and container to Amazon ECS. It provided detailed steps on how to set up a basic CI/CD pipeline, leveraging tools with which .NET developers are familiar and the steps needed to integrate with Amazon ECR and Amazon ECS.

I hope this post was informative and has helped you learn the basics of how to integrate Amazon ECR and Amazon ECS with Azure DevOps to create a robust CI/CD pipeline.

About the Authors

John Formento

 

 

John Formento is a Solution Architect at Amazon Web Services. He helps large enterprises achieve their goals by architecting secure and scalable solutions on the AWS Cloud.

Enhancing automated database continuous integration with AWS CodeBuild and Amazon RDS Database Snapshot

Post Syndicated from bobyeh original https://aws.amazon.com/blogs/devops/enhancing-automated-database-continuous-integration-with-aws-codebuild-and-amazon-rds-database-snapshot/

In major integration merges, it's sometimes necessary to verify the changes with existing online data. Inspecting the changes against a cloned database can give us confidence to deploy to the production database. This post demonstrates how to use AWS CodeBuild and Amazon RDS Database Snapshot to verify your code revisions in both the application layer and the underlying data layer, ensuring that your existing data works seamlessly with your revised code.

Making code revisions using continuous integration requires running periodic verification to ensure that your new deliverable works functionally and reliably. It’s easy to focus attention solely on the surface level changes made to the application layer. However, it’s important to remember to inspect the changes made to the underlying data layer too.

From the application layer, users modify the data model for different reasons. Any data model definition change in the application layer maps to a schema change in the database. For those services backed with a relational database (RDBMS), a user might perform data definition language (DDL) operations directly against a database schema or rely on an object-relational mapping (ORM) library to migrate the schema to fit the application revision. These schema changes (CREATE, DROP, ALTER, TRUNCATE, etc.) can be very critical, especially for those services serving real customers.

Performing proper verification and simulation for these changes mitigates the risk of bringing down services. After the changes are applied, fundamental operation testing (CRUD – CREATE, READ, UPDATE, DELETE) against data models is mandatory; this leads to data manipulation language (DML) operations (INSERT, SELECT, UPDATE, DELETE, etc.). After all the necessary steps, a user can move on to the deployment stage.

About this page

  • Time to read: 6 minutes
  • Time to complete: 30 minutes
  • Cost to complete (estimated): Less than $1 for a 1-GB database snapshot and restored instance
  • Learning level: Advanced (300)
  • Services used: AWS CodeBuild, IAM, RDS

Solution overview

This example uses a buildspec file in CodeBuild. Set up a build project that points to a source control repository containing that buildspec file. The CodeBuild runtime environment restores the database server from an RDS snapshot. As an example, we restore the snapshot to an Amazon Aurora cluster through the AWS Command Line Interface (AWS CLI). After the database is restored, the build process runs your integration process, which is mocked in the buildspec definition. After the verification stage, CodeBuild drops the restored database.

 

Architecture diagram showing an overview of how we use CodeBuild to restore a database snapshot to verify and validate the new database schema change.

Prerequisites

The following components are required to implement this example:

Walkthrough

Follow these steps to execute the solution.

Prepare your build specification file

Before you begin, prepare your CodeBuild Build Specification file with following information:

  • db-cluster-identifier-prefix
  • db-snapshot-identifier
  • region-ID
  • account-ID
  • vpc-security-group-id

The db-cluster-identifier-prefix creates a temporary database name followed by a timestamp. Make sure that this value does not overlap with any other databases. The db-snapshot-identifier points to the snapshot you are calling to run with your application. The region-ID and account-ID identify the Region and account in which you are running. The vpc-security-group-id indicates the security group you use in the CodeBuild environment and the temporary database.

YAML
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - pip3 install awscli --upgrade --user
      - export DATE=`date +%Y%m%d%H%M`
      - export DBIDENTIFIER=db-cluster-identifier-prefix-$DATE
      - echo $DBIDENTIFIER
      - aws rds restore-db-cluster-from-snapshot --snapshot-identifier arn:aws:rds:region-ID:account-ID:cluster-snapshot:db-snapshot-identifier --vpc-security-group-ids vpc-security-group-id --db-cluster-identifier $DBIDENTIFIER --engine aurora
      - while [ $(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER | grep -c available) -eq 0 ]; do echo "sleep 60s"; sleep 60; done
      - echo "Temp db ready"
      - export ENDPOINT=$(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER| grep "\"Endpoint\"" | grep -v "\-ro\-" | awk -F '\"' '{print $4}')
      - echo $ENDPOINT
  build:
    commands:
      - echo Build started on `date`
      - echo proceed db connection to $ENDPOINT
      - echo proceed db migrate update, DDL proceed here
      - echo proceed application test, CRUD test run here
  post_build:
    commands:
      - echo Build completed on `date`
      - echo $DBIDENTIFIER
      - aws rds delete-db-cluster --db-cluster-identifier $DBIDENTIFIER --skip-final-snapshot &

 

After you finish editing the file, name it buildspec.yml. Save it in the root directory with which you plan to build, then commit the file into your code repository.

  1. Open the CodeBuild console.
  2. Choose Create build project.
  3. In Project Configuration, enter the name and description for the build project.
  4. In Source, select the source provider for your code repository.
  5. In Environment image, choose Managed image, Ubuntu, and the latest runtime version.
  6. Choose the appropriate service role for your project.
  7. In the Additional configuration menu, select the VPC with your Amazon RDS database snapshots, as shown in the following screenshot, and then select Validate VPC Settings. For more information, see Use CodeBuild with Amazon Virtual Private Cloud.
  8. In Security Groups, select the security group needed for the CodeBuild environment to access your temporary database.
  9. In Build Specifications, select Use a buildspec file.

CodeBuild Project Additional Configuration - VPC

Grant permission for the build project

Follow these steps to grant permission.

  1. Navigate to the AWS Management Console Policies.
  2. Choose Create a policy and select the JSON tab. To give CodeBuild access to the Amazon RDS resource in the pre_build stage, you must grant RestoreDBClusterFromSnapshot and DeleteDBCluster. Follow the least privilege guideline and limit the DeleteDBCluster action to "arn:aws:rds:*:*:cluster:db-cluster-identifier-*".
  3. Copy the following code and paste it into your policy:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "rds:RestoreDBClusterFromSnapshot",
          "Resource": "*"
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": "rds:DeleteDBCluster*",
          "Resource": "arn:aws:rds:*:*:cluster:db-cluster-identifier-*"
        }
      ]
    }
  4. Choose Review Policy.
  5. Enter a Name and Description for this policy, then choose Create Policy.
  6. After the policy is ready, attach it to your CodeBuild service role, as shown in the following screenshot.

Attach created policy to IAM role

Use database snapshot restore to launch the build process

  1. Navigate back to CodeBuild and locate the project you just created.
  2. Give an appropriate timeout setting and make sure that you set it to the correct branch for your repository.
  3. Choose Start Build.
  4. Open the Build Log to view the database cluster from your snapshot in the pre_build stage, as shown in the following screenshot.CodeBuild ProjectBuild Log - pre_build stage
  5. In the build stage, use $ENDPOINT to point your application to this temporary database, as shown in the following screenshot.CodeBuild Project Build Log - build stage
  6. In the post_build, delete the cluster, as shown in the following screenshot.CodeBuild Project Build log - post build stage

Test your database schema change

After you set up this pipeline, you can begin to test your database schema changes within your application code. This example defines several steps in the build specification file to migrate the schema and run with the latest application code, so you can verify that the modifications made in the application work against the database.

YAML
build:
  commands:
    - echo Build started on `date`
    - echo proceed db connection to $ENDPOINT
    # run a script to apply your latest schema change
    - echo proceed db migrate update
    # start the latest code, and run your own testing
    - echo proceed application test

After validation

After validating the database schema change in the preceding steps, choose a deployment strategy for production that aligns with your business goals.

Cleaning up

To avoid incurring future charges, delete the resources with the following steps:

  1. Open the CodeBuild console.
  2. Choose the project you created for this test.
  3. Choose Delete build project and enter delete to confirm deletion.

Conclusion

In this post, you created a mechanism to set up a temporary database and limit access to the build runtime. The temporary database stands alone and is isolated. This mechanism can be applied to secure permission control for the database snapshot and to avoid breaking any existing environment. The approach applies to all available RDS engine options, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. This provides options, without impacting any existing environments, for critical events triggered by major changes in the production database schema, or data format changes required by business decisions.

 

AWS Alarms for Application Errors

Post Syndicated from Bozho original https://techblog.bozho.net/aws-alarms-for-application-errors/

Monitoring is key for any real-world application. You have to know what’s happening and be alerted in real time if something wrong is happening. AWS has CloudWatch for that, and gives you a lot of metrics automatically. But there are some that you have to define yourself. And then you need to define proper alarms.

Here I’ll focus on four:

  • High number of application errors
  • High number of application warnings
  • High number of 5xx errors on the load balancer
  • High number of 4xx errors on the load balancer

First, the prerequisites:

  • You need to be using CloudFormation to automate everything. You can create all of those things manually, but automation is a big plus
  • If using CloudFormation, you’d preferably have a sub-stack for configuring alarms
  • You need to be collecting your logs with CloudWatch logs

If you are not using CloudWatch logs, here’s a simple config file and script to enable them:

{
  "agent": {
    "metrics_collection_interval": 10,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "{{logPath}}",
            "log_group_name": "{{logGroupName}}",
            "log_stream_name": "{instance_id}",
            "timestamp_format": "%Y-%m-%d %H:%M:%S"
          }
        ]
      }
    }
  }
}
# install AWS CloudWatch monitor
mkdir cloud-watch-agent
cd cloud-watch-agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip
unzip AmazonCloudWatchAgent.zip
./install.sh

aws s3 cp s3://$BUCKET_NAME/cloudwatch-agent-config.json /var/config/cloudwatch-agent-config.json
sed -i -- 's|{{logPath}}|/var/log/application.log|g' /var/config/cloudwatch-agent-config.json
sed -i -- 's|{{logGroupName}}|app_node|g' /var/config/cloudwatch-agent-config.json

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/var/config/cloudwatch-agent-config.json -s

Now you have to define two things: log metrics and alarms. The CloudFormation code below creates both:

    "HighAppErrorsNotification": {
      "Type": "AWS::CloudWatch::Alarm",
      "Properties": {
        "AlarmActions": [
          {
            "Ref": "NotificationTopicId"
          }
        ],
        "InsufficientDataActions": [
          {
            "Ref": "NotificationTopicId"
          }
        ],
        "AlarmDescription": "Notify if there are too many application errors",
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "EvaluationPeriods": "1",
        "MetricName": "ApplicationErrors",
        "Namespace": "LogMetrics",
        "Period": "900",
        "Statistic": "Sum",
        "Threshold": "5",
        "TreatMissingData": "ignore"
      }
    },
    "ErrorMetricFilter": {
      "Type": "AWS::Logs::MetricFilter",
      "Properties": {
        "LogGroupName": "app_node",
        "FilterPattern": "ERROR",
        "MetricTransformations": [
          {
            "DefaultValue": 0,
            "MetricValue": "1",
            "MetricNamespace": "LogMetrics",
            "MetricName": "ApplicationErrors"
          }
        ]
      }
    },

If you need to do that manually, go to the CloudWatch logs homepage, select the log group (app_node), and use the "Create metric filter" button on top. It lets you specify the pattern to look for ("ERROR" in this case). When you have that ready, you can create an alarm based on it, through Alarms -> Create alarm. Look up the metric by name and select it to trigger the alarm (in the example above, it gets triggered if there are 5 or more errors within 900 seconds).

You can then create an identical alarm for warnings (pattern to look for: “WARN”). The threshold there might be higher, e.g. 10 or 20. But that depends on your application logging patterns.
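
As a sketch, the warnings variant only changes the filter pattern, the metric name, and the threshold; the alarm is identical to HighAppErrorsNotification above except for MetricName "ApplicationWarnings" and a higher Threshold (the value 10 here is only illustrative):

    "WarningMetricFilter": {
      "Type": "AWS::Logs::MetricFilter",
      "Properties": {
        "LogGroupName": "app_node",
        "FilterPattern": "WARN",
        "MetricTransformations": [
          {
            "DefaultValue": 0,
            "MetricValue": "1",
            "MetricNamespace": "LogMetrics",
            "MetricName": "ApplicationWarnings"
          }
        ]
      }
    },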

Then there’s the error 5xx load balancer alarms. In CloudFormation it would look like this:

   "TooMany5xxErrorsWebAppAlarmNotification": {
      "Type": "AWS::CloudWatch::Alarm",
      "Properties": {
        "AlarmActions": [
          {
            "Ref": "NotificationTopicId"
          }
        ],
        "InsufficientDataActions": [
          {
            "Ref": "NotificationTopicId"
          }
        ],
        "AlarmDescription": "Notify if there are too many 5xx errors",
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "Dimensions": [
          {
            "Name": "LoadBalancer",
            "Value": {
              "Ref": "WebAppALBId"
            }
          }
        ],
        "TreatMissingData": "notBreaching",
        "EvaluationPeriods": "1",
        "MetricName": "HTTPCode_Target_5XX_Count",
        "Namespace": "AWS/ApplicationELB",
        "Period": "60",
        "Statistic": "Sum",
        "Threshold": "2"
      }
    }

You can again create that manually – look for the HTTPCode_Target_5XX_Count metric in the metric selection screen for the alarm. You have several options there; the most straightforward is to select the per-AppELB metric. And again, the same approach can be used for 4xx errors (HTTPCode_Target_4XX_Count).
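
A minimal sketch of the 4xx variant follows; it mirrors the 5xx alarm except for the metric name, and uses a higher threshold because some client errors are expected in normal traffic (the value 20 is only illustrative):

   "TooMany4xxErrorsWebAppAlarmNotification": {
      "Type": "AWS::CloudWatch::Alarm",
      "Properties": {
        "AlarmActions": [
          {
            "Ref": "NotificationTopicId"
          }
        ],
        "AlarmDescription": "Notify if there are too many 4xx errors",
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "Dimensions": [
          {
            "Name": "LoadBalancer",
            "Value": {
              "Ref": "WebAppALBId"
            }
          }
        ],
        "TreatMissingData": "notBreaching",
        "EvaluationPeriods": "1",
        "MetricName": "HTTPCode_Target_4XX_Count",
        "Namespace": "AWS/ApplicationELB",
        "Period": "60",
        "Statistic": "Sum",
        "Threshold": "20"
      }
    }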

Getting this running with CloudFormation (and even manually) is not as straightforward as it seems. The right combination of metric names, namespaces, and values is not obvious, and the relevant documentation is not the first thing that pops up. So I decided to share something that works, as it may take some time experimenting before getting it to that state.

But even outside of CloudFormation or AWS context, monitoring and alerting in case of a high number of application errors, warnings and HTTP errors is a must. And automating the creation of those alarms is the recommended approach.

The post AWS Alarms for Application Errors appeared first on Bozho's tech blog.

Monitoring and management with Amazon QuickSight and Athena in your CI/CD pipeline

Post Syndicated from Umair Nawaz original https://aws.amazon.com/blogs/devops/monitoring-and-management-with-amazon-quicksight-and-athena-in-your-ci-cd-pipeline/

One of the many ways to monitor and manage required CI/CD metrics is to use Amazon QuickSight to build customized visualizations. Additionally, by applying Lean management to software delivery processes, organizations can improve delivery of features faster, pivot when needed, respond to compliance and security changes, and take advantage of instant feedback to improve the customer delivery experience. This blog post demonstrates how AWS resources and tools can provide monitoring and information pertaining to their CI/CD pipelines.

There are three principles in Lean management that this artifact enables and to which it contributes:

  • Limiting work in progress by establishing constraints that drive process improvement and increase throughput.
  • Creating and maintaining dashboards displaying key quality information, productivity metrics, and current status of work (including defects).
  • Using data from development performance and operations monitoring tools to enable business decisions more frequently.

Overview

The following architectural diagram shows how to use AWS services to collect metrics from a CI/CD pipeline and deliver insights through Amazon QuickSight dashboards.

Architecture diagram showing an overview of how CI/CD metrics are extracted and transformed to create a dynamic QuickSight dashboard

In this example, the orchestrator for the CI/CD pipeline is AWS CodePipeline with the entry point as an AWS CodeCommit Git repository for source control. When a developer pushes a code change into the CodeCommit repository, the change goes through a series of phases in CodePipeline. AWS CodeBuild is responsible for performing build actions and, upon successful completion of this phase, AWS CodeDeploy kicks off the actions to execute the deployment.

For each action in CodePipeline, the following series of events occurs:

  • An Amazon CloudWatch rule creates a CloudWatch event containing the action’s metadata.
  • The CloudWatch event triggers an AWS Lambda function.
  • The Lambda function extracts relevant reporting data and writes it to a CSV file in an Amazon S3 bucket.
  • Amazon Athena queries the Amazon S3 bucket and loads the query results into SPICE (an in-memory engine for Amazon QuickSight).
  • Amazon QuickSight obtains data from SPICE to build dashboard displays for the management team.

Note: This solution is for an AWS account with an existing CodePipeline(s). If you do not have a CodePipeline, no metrics will be collected.

Getting started

To get started, follow these steps:

  • Create a Lambda function and copy the following code snippet. Be sure to replace the bucket name with the one used to store your event data. This Lambda function takes the payload from a CloudWatch event, extracts the fields pipeline, time, state, execution, stage, and action, and transforms them into a CSV file.

Note: Athena’s performance can be improved by compressing, partitioning, or converting data into columnar formats such as Apache Parquet. In this use case, the dataset size is negligible; therefore, a transformation from CSV to Parquet is not required.


import boto3
import csv
import datetime
import os

# Analyze payload from CloudWatch Event
def pipeline_execution(data):
    print(data)
    # Specify data fields to deliver to S3 (CSV header row)
    row = ['pipeline,time,state,execution,stage,action']

    if "stage" in data['detail'].keys():
        stage = data['detail']['stage']
    else:
        stage = 'NA'

    if "action" in data['detail'].keys():
        action = data['detail']['action']
    else:
        action = 'NA'
    # CodePipeline events expose the execution ID under the 'execution-id' key
    row.append(data['detail']['pipeline'] + ',' + data['time'] + ',' + data['detail']['state'] + ',' + data['detail']['execution-id'] + ',' + stage + ',' + action)
    values = '\n'.join(str(v) for v in row)
    return values

# Upload CSV file to S3 bucket
def upload_data_to_s3(data):
    s3 = boto3.client('s3')
    runDate = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S:%f")
    csv_key = runDate + '.csv'
    response = s3.put_object(
        Body=data,
        Bucket='<example-bucket>',
        Key=csv_key
    )

def lambda_handler(event, context):
    upload_data_to_s3(pipeline_execution(event))
  • Create an Athena table to query the data stored in the Amazon S3 bucket. Execute the following SQL in the Athena query console and provide the bucket name that will hold the data.
CREATE EXTERNAL TABLE `devops`(
   `pipeline` string, 
   `time` string, 
   `state` string, 
   `execution` string, 
   `stage` string, 
   `action` string)
 ROW FORMAT DELIMITED 
   FIELDS TERMINATED BY ',' 
 STORED AS INPUTFORMAT 
   'org.apache.hadoop.mapred.TextInputFormat' 
 OUTPUTFORMAT 
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
 LOCATION
   's3://<example-bucket>/'
 TBLPROPERTIES (
   'areColumnsQuoted'='false', 
   'classification'='csv', 
   'columnsOrdered'='true', 
   'compressionType'='none', 
   'delimiter'=',', 
   'skip.header.line.count'='1',  
   'typeOfData'='file')  
  • Create a CloudWatch event rule that passes events to the Lambda function created in Step 1. In the event rule configuration, set the Service Name as CodePipeline and, for Event Type, select All Events.
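
If you would rather script that last step, a rough sketch with the AWS CLI follows; the rule name is a placeholder, and you still need to grant CloudWatch Events permission to invoke the function (for example, with aws lambda add-permission):

# Match all CodePipeline events and send them to the Lambda function created in the first step
aws events put-rule --name codepipeline-metrics-rule --event-pattern '{"source":["aws.codepipeline"]}'
aws events put-targets --rule codepipeline-metrics-rule --targets "Id"="1","Arn"="arn:aws:lambda:<region-ID>:<account-ID>:function:<function-name>"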

Sample Dataset view from Athena.

Sample Athena query and the results

Amazon QuickSight visuals

After the initial setup is done, you are ready to create your QuickSight dashboard. Be sure to check that the Athena permissions are properly set before creating an analysis to be published as an Amazon QuickSight dashboard.

Below are diagrams and figures from Amazon QuickSight that can be generated using the event data queried from Athena. In this example, you can see how many executions happened in the account and how many were successful.

The following screenshot shows that most pipeline executions are failing. A manager might be concerned that this points to a significant issue and prompt an investigation in which they can allocate resources to improve delivery and efficiency.

QuickSight Dashboard showing total execution successes and failures

The visual for this solution is dynamic in nature. In case the pipeline has more or fewer actions, the visual will adjust automatically to reflect all actions. After looking at the success and failure rates for each CodePipeline action in Amazon QuickSight, as shown in the following screenshot, users can take targeted actions quickly. For example, if the team sees a lot of failures due to vulnerability scanning, they can work on improving that problem area to drive value for future code releases.

QuickSight Dashboard showing the successes and failures of pipeline actions

Day-over-day visuals reflect date-specific activity and enable teams to see their progress over a period of time.

QuickSight Dashboard showing day over day results of successful CI/CD executions and failures

Amazon QuickSight offers controls that can be configured to apply filters to visuals. For example, the following screenshot demonstrates how users can toggle between visuals for different applications.

QuickSight's control function to switch between different visualization options

Cleanup (optional)

In order to avoid unintended charges, delete the following resources:

  • Amazon CloudWatch event rule
  • Lambda function
  • Amazon S3 Bucket (the location in which CSV files generated by the Lambda function are stored)
  • Athena external table
  • Amazon QuickSight data sets
  • Analysis and dashboard

Conclusion

In this blog, we showed how metrics can be derived from a CI/CD pipeline. Utilizing Amazon QuickSight to create visuals from these metrics allows teams to continuously deliver updates on the deployment process to management. The aggregation of the captured data over time allows individual developers and teams to improve their processes. That is the goal of creating a Lean DevOps process: to oversee the meta-delivery pipeline and optimize all future releases by identifying weak spots and points of risk during the entire release process.

___________________________________________________________

About the Authors

Umair Nawaz is a DevOps Engineer at Amazon Web Services in New York City. He works on building secure architectures and advises enterprises on agile software delivery. He is motivated to solve problems strategically by utilizing modern technologies.
Christopher Flores is an Engagement Manager at Amazon Web Services in New York City. He leads AWS developers, partners, and client teams in using the customer engagement accelerator framework. Christopher expedites stakeholder alignment, enterprise cohesion and risk mitigation while ensuring feedback loops to close the engagement lifecycle.
Carol Liao is a Cloud Infrastructure Architect at Amazon Web Services in New York City. She enjoys designing and developing modern IT solutions in the cloud where there is always more to learn, more problems to solve, and more to build.

 

Identifying and resolving security code vulnerabilities using Snyk in AWS CI/CD Pipeline

Post Syndicated from Jay Yeras original https://aws.amazon.com/blogs/devops/identifying-and-resolving-vulnerabilities-in-your-code/

The majority of companies have embraced open-source software (OSS) at an accelerated rate even when building proprietary applications. Some of the obvious benefits for this shift include transparency, cost, flexibility, and a faster time to market. Snyk’s unique combination of developer-first tooling and best in class security depth enables businesses to easily build security into their continuous development process.

Even for teams building proprietary code, use of open-source packages and libraries is a necessity. In reality, a developer’s own code is often a small core within the app, and the rest is open-source software. While relying on third-party elements has obvious benefits, it also presents numerous complexities. Inadvertently introducing vulnerabilities into your codebase through repositories that are maintained in a distributed fashion and with widely varying levels of security expertise can be common, and opens up applications to effective attacks downstream.

There are three common barriers to truly effective open-source security:

  1. The security task remains in the realm of security and compliance, often perpetuating the siloed structure that DevOps strives to eliminate and slowing down release pace.
  2. Current practice may offer automated scanning of repositories, but the remediation advice it provides is manual and often un-actionable.
  3. The data generated often focuses solely on public sources, without unique and timely insights.

Developer-led application security

This blog post demonstrates techniques to improve your application security posture using Snyk tools to seamlessly integrate within the developer workflow using AWS services such as Amazon ECR, AWS Lambda, AWS CodePipeline, and AWS CodeBuild. Snyk is a SaaS offering that organizations use to find, fix, prevent, and monitor open source dependencies. Snyk is a developer-first platform that can be easily integrated into the Software Development Lifecycle (SDLC). The examples presented in this post enable you to actively scan code checked into source code management, container images, and serverless, creating a highly efficient and effective method of managing the risk inherent to open source dependencies.

Prerequisites

The examples provided in this post assume that you already have an AWS account and that your account has the ability to create new IAM roles and scope other IAM permissions. You can use your integrated development environment (IDE) of choice. The examples reference AWS Cloud9 cloud-based IDE. An AWS Quick Start for Cloud9 is available to quickly deploy to either a new or existing Amazon VPC and offers expandable Amazon EBS volume size.

Sample code and AWS CloudFormation templates are available to simplify provisioning the various services you need to configure this integration. You can fork or clone those resources. You also need a working knowledge of git and how to fork or clone within your source provider to complete these tasks.

cd ~/environment && \ 
git clone https://github.com/aws-samples/aws-modernization-with-snyk.git modernization-workshop 
cd modernization-workshop 
git submodule init 
git submodule update

Configure your CI/CD pipeline

The workflow for this example consists of a continuous integration and continuous delivery pipeline leveraging AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Amazon ECR, and AWS Fargate, as shown in the following screenshot.

CI/CD Pipeline

For simplicity, AWS CloudFormation templates are available in the sample repo for services.yaml, pipeline.yaml, and ecs-fargate.yaml, which deploy all services necessary for this example.

Launch AWS CloudFormation templates

A detailed step-by-step guide can be found in the self-paced workshop, but if you are familiar with AWS CloudFormation, you can launch the templates in three steps. From your Cloud9 IDE terminal, change directory to the location of the sample templates and complete the following three steps.

1) Launch basic services

aws cloudformation create-stack --stack-name WorkshopServices --template-body file://services.yaml \
--capabilities CAPABILITY_NAMED_IAM
until [[ `aws cloudformation describe-stacks --stack-name "WorkshopServices" \
--query "Stacks[0].[StackStatus]" --output text` == "CREATE_COMPLETE" ]]; \
do echo "The stack is NOT in a state of CREATE_COMPLETE at `date`"; sleep 30; done && \
echo "The Stack is built at `date` - Please proceed"

2) Launch Fargate:

aws cloudformation create-stack --stack-name WorkshopECS --template-body file://ecs-fargate.yaml \
--capabilities CAPABILITY_NAMED_IAM
until [[ `aws cloudformation describe-stacks --stack-name "WorkshopECS" \
--query "Stacks[0].[StackStatus]" --output text` == "CREATE_COMPLETE" ]]; \
do echo "The stack is NOT in a state of CREATE_COMPLETE at `date`"; sleep 30; done && \
echo "The Stack is built at `date` - Please proceed"

3) From your Cloud9 IDE terminal, change directory to the location of the sample templates and run the following command:

aws cloudformation create-stack --stack-name WorkshopPipeline --template-body file://pipeline.yaml \
--capabilities CAPABILITY_NAMED_IAM
until [[ `aws cloudformation describe-stacks --stack-name "WorkshopPipeline" \
--query "Stacks[0].[StackStatus]" --output text` == "CREATE_COMPLETE" ]]; \
do echo "The stack is NOT in a state of CREATE_COMPLETE at `date`"; sleep 30; done && \
echo "The Stack is built at `date` - Please proceed"

Improving your security posture

You need to sign up for a free account with Snyk. You may use your Google, Bitbucket, or Github credentials to sign up. Snyk utilizes these services for authentication and does not store your password. Once signed up, navigate to your name and select Account Settings. Under API Token, choose Show, which will reveal the token to copy, and copy this value. It will be unique for each user.

Save your password to the session manager

Run the following command, replacing abc123 with your unique token. This stores the token in AWS Systems Manager Parameter Store.

aws ssm put-parameter --name "snykAuthToken" --value "abc123" --type SecureString

Set up application scanning

Next, you need to insert testing with Snyk after maven builds the application. The simplest method is to insert commands to download, authorize, and run the Snyk commands after maven has built the application/dependency tree.

The sample Dockerfile contains an environment variable from a value passed to the docker build command, which contains the token for Snyk. By using an environment variable, Snyk automatically detects the token when used.

#~~~~~~~SNYK Variable~~~~~~~~~~~~
# Declare Snyktoken as a build-arg
ARG snyk_auth_token
# Set the SNYK_TOKEN environment variable
ENV SNYK_TOKEN=${snyk_auth_token}
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Download Snyk, and run a test, looking for medium to high severity issues. If the build succeeds, post the results to Snyk for monitoring and reporting. If a new vulnerability is found, you are notified.

# package the application
RUN mvn package -Dmaven.test.skip=true

#~~~~~~~SNYK test~~~~~~~~~~~~
# download, configure and run snyk. Break build if vulns present, post results to `https://snyk.io/`
RUN curl -Lo ./snyk "https://github.com/snyk/snyk/releases/download/v1.210.0/snyk-linux"
RUN chmod -R +x ./snyk
#Auth set through environment variable
RUN ./snyk test --severity-threshold=medium
RUN ./snyk monitor

Set up docker scanning

Later in the build process, a docker image is created. Analyze it for vulnerabilities in buildspec.yml. First, pull the Snyk token snykAuthToken from the parameter store.

env:
  parameter-store:
    SNYK_AUTH_TOKEN: "snykAuthToken"

Next, in the pre_build phase, install Snyk.

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=$(aws ecr describe-repositories --repository-name petstore_frontend --query=repositories[0].repositoryUri --output=text)
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      - PWD=$(pwd)
      - PWDUTILS=$(pwd)
      - curl -Lo ./snyk "https://github.com/snyk/snyk/releases/download/v1.210.0/snyk-linux"
      - chmod -R +x ./snyk

Next, in the build phase, pass the token to the docker compose command, where it is retrieved in the Dockerfile code you set up to test the application.

build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - cd modules/containerize-application
      - docker build --build-arg snyk_auth_token=$SNYK_AUTH_TOKEN -t $REPOSITORY_URI:latest .

You can further extend the build phase to authorize the Snyk instance for testing the Docker image that’s produced. If it passes, you can pass the results to Snyk for monitoring and reporting.

build:
    commands:
      - $PWDUTILS/snyk auth $SNYK_AUTH_TOKEN
      - $PWDUTILS/snyk test --docker $REPOSITORY_URI:latest
      - $PWDUTILS/snyk monitor --docker $REPOSITORY_URI:latest
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG

For reference, a sample buildspec.yaml configured with Snyk is available in the sample repo. You can either copy this file and overwrite your existing buildspec.yaml or open an editor and replace the contents.

Testing the application

Now that services have been provisioned and Snyk tools have been integrated into your CI/CD pipeline, any new git commit triggers a fresh build and application scanning with Snyk detects vulnerabilities in your code.

In the CodeBuild console, you can look at your build history to see why your build failed, identify security vulnerabilities, and pinpoint how to fix them.

Testing /usr/src/app...
✗ Medium severity vulnerability found in org.primefaces:primefaces
Description: Cross-site Scripting (XSS)
Info: https://snyk.io/vuln/SNYK-JAVA-ORGPRIMEFACES-31642
Introduced through: org.primefaces:[email protected]
From: org.primefaces:[email protected]
Remediation:
Upgrade direct dependency org.primefaces:[email protected] to org.primefaces:[email protected] (triggers upgrades to org.primefaces:[email protected])
✗ Medium severity vulnerability found in org.primefaces:primefaces
Description: Cross-site Scripting (XSS)
Info: https://snyk.io/vuln/SNYK-JAVA-ORGPRIMEFACES-31643
Introduced through: org.primefaces:[email protected]
From: org.primefaces:[email protected]
Remediation:
Upgrade direct dependency org.primefaces:[email protected] to org.primefaces:[email protected] (triggers upgrades to org.primefaces:[email protected])
Organisation: sample-integrations
Package manager: maven
Target file: pom.xml
Open source: no
Project path: /usr/src/app
Tested 37 dependencies for known vulnerabilities, found 2 vulnerabilities, 2 vulnerable paths.
The command '/bin/sh -c ./snyk test' returned a non-zero code: 1
[Container] 2020/02/14 03:46:22 Command did not exit successfully docker build --build-arg snyk_auth_token=$SNYK_AUTH_TOKEN -t $REPOSITORY_URI:latest . exit status 1
[Container] 2020/02/14 03:46:22 Phase complete: BUILD Success: false
[Container] 2020/02/14 03:46:22 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build --build-arg snyk_auth_token=$SNYK_AUTH_TOKEN -t $REPOSITORY_URI:latest .. Reason: exit status 1

Remediation

Once you remediate your vulnerabilities and check in your code, another build is triggered and an additional scan is performed by Snyk. This time, you should see the build pass with a status of Succeeded.

You can also drill down into the CodeBuild logs and see that Snyk successfully scanned the Docker Image and found no package dependency issues with your Docker container!

[Container] 2020/02/14 03:54:14 Running command $PWDUTILS/snyk test --docker $REPOSITORY_URI:latest
Testing 300326902600.dkr.ecr.us-west-2.amazonaws.com/petstore_frontend:latest...
Organisation: sample-integrations
Package manager: rpm
Docker image: 300326902600.dkr.ecr.us-west-2.amazonaws.com/petstore_frontend:latest
✓ Tested 190 dependencies for known vulnerabilities, no vulnerable paths found.

Reporting

Snyk provides detailed reports for your imported projects. You can navigate to Projects and choose View Report to set the frequency with which the project is checked for vulnerabilities. You can also choose View Report and then the Dependencies tab to see which libraries were used. Snyk offers a comprehensive database and remediation guidance for known vulnerabilities in their Vulnerability DB. Specifics on potential vulnerabilities that may exist in your code would be contingent on the particular open source dependencies used with your application.

Cleaning up

Remember to delete any resources you may have created in order to avoid additional costs. If you used the AWS CloudFormation templates provided here, you can safely remove them by deleting those stacks from the AWS CloudFormation Console.

Conclusion

In this post, you learned how to leverage various AWS services to build a fully automated CI/CD pipeline and cloud IDE development environment. You also learned how to utilize Snyk to seamlessly integrate with AWS and secure your open-source dependencies and container images. If you are interested in learning more about DevSecOps with Snyk and AWS, then I invite you to check out this workshop and watch this video.

 

About the Author


Jay is a Senior Partner Solutions Architect at AWS bringing over 20 years of experience in various technical roles. He holds a Master of Science degree in Computer Information Systems and is a subject matter expert and thought leader for strategic initiatives that help customers embrace a DevOps culture.

 

 

Customizing triggers for AWS CodePipeline with AWS Lambda and Amazon CloudWatch Events

Post Syndicated from Bryant Bost original https://aws.amazon.com/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/

AWS CodePipeline is a fully managed continuous delivery service that helps automate the build, test, and deploy processes of your application. Application owners use CodePipeline to manage releases by configuring “pipeline,” workflow constructs that describe the steps, from source code to deployed application, through which an application progresses as it is released. If you are new to CodePipeline, check out Getting Started with CodePipeline to get familiar with the core concepts and terminology.

Overview

In a default setup, a pipeline is kicked-off whenever a change in the configured pipeline source is detected. CodePipeline currently supports sourcing from AWS CodeCommit, GitHub, Amazon ECR, and Amazon S3. When using CodeCommit, Amazon ECR, or Amazon S3 as the source for a pipeline, CodePipeline uses an Amazon CloudWatch Event to detect changes in the source and immediately kick off a pipeline. When using GitHub as the source for a pipeline, CodePipeline uses a webhook to detect changes in a remote branch and kick off the pipeline. Note that CodePipeline also supports beginning pipeline executions based on periodic checks, although this is not a recommended pattern.

CodePipeline supports adding a number of custom actions and manual approvals to ensure that pipeline functionality is flexible and code releases are deliberate; however, without further customization, pipelines will still be kicked-off for every change in the pipeline source. To customize the logic that controls pipeline executions in the event of a source change, you can introduce a custom CloudWatch Event, which can result in the following benefits:

  • Multiple pipelines with a single source: Trigger select pipelines when multiple pipelines are listening to a single source. This can be useful if your organization is using monorepos, or is using a single repository to host configuration files for multiple instances of identical stacks.
  • Avoid reacting to unimportant files: Avoid triggering a pipeline when changing files that do not affect the application functionality (e.g. documentation files, readme files, and .gitignore files).
  • Conditionally kickoff pipelines based on environmental conditions: Use custom code to evaluate whether a pipeline should be triggered. This allows for further customization beyond polling a source repository or relying on a push event. For example, you could create custom logic to automatically reschedule deployments on holidays to the next available workday.

This post explores and demonstrates how to customize the actions that invoke a pipeline by modifying the default CloudWatch Events configuration that is used for CodeCommit, ECR, or S3 sources. To illustrate this customization, we will walk through two examples: prevent updates to documentation files from triggering a pipeline, and manage execution of multiple pipelines monitoring a single source repository.

The key concepts behind customizing pipeline invocations extend to GitHub sources and webhooks as well; however, creating a custom webhook is outside the scope of this post.

Sample Architecture

This post is only interested in controlling the execution of the pipeline (as opposed to the deploy, test, or approval stages), so it uses simple source and pipeline configurations. The sample architecture considers a simple CodePipeline with only two stages: source and build.


Example CodePipeline Architecture with Custom CloudWatch Event Configuration

The sample CodeCommit repository consists only of buildspec.yml, readme.md, and script.py files.

Normally, after you create a pipeline, it automatically triggers a pipeline execution to release the latest version of your source code. From then on, every time you make a change to your source location, a new pipeline execution is triggered. In addition, you can manually re-run the last revision through a pipeline using the “Release Change” button in the console. This architecture uses a custom CloudWatch Event and AWS Lambda function to avoid commits that change only the readme.md file from initiating an execution of the pipeline.

Creating a custom CloudWatch Event

When we create a CodePipeline that monitors a CodeCommit (or other) source, a default CloudWatch Events rule is created to trigger our pipeline for every change to the CodeCommit repository. This CloudWatch Events rule monitors the CodeCommit repository for changes, and triggers the pipeline for events matching the referenceCreated or referenceUpdated CodeCommit Event (refer to CodeCommit Event Types for more information).

Default CloudWatch Events Rule to Trigger CodePipeline


To introduce custom logic and control the events that kickoff the pipeline, this example configures the default CloudWatch Events rule to detect changes in the source and trigger a Lambda function rather than invoke the pipeline directly. The example uses a CodeCommit source, but the same principle applies to Amazon S3 and Amazon ECR sources as well, as these both use CloudWatch Events rules to notify CodePipeline of changes.


Custom CloudWatch Events Rule to Trigger CodePipeline
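The exact definition depends on your template, but a minimal CloudFormation sketch of such a rule could look like the following. The repository and Lambda function logical IDs (SandboxRepository, PipelineFilterFunction) are assumptions for illustration, not resources from this post.

  PipelineFilterRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Invoke a filtering Lambda function instead of starting CodePipeline directly
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Repository State Change
        resources:
          - !GetAtt SandboxRepository.Arn        # assumed CodeCommit repository resource
        detail:
          event:
            - referenceCreated
            - referenceUpdated
          referenceName:
            - master
      State: ENABLED
      Targets:
        - Arn: !GetAtt PipelineFilterFunction.Arn  # assumed Lambda function resource
          Id: PipelineFilterTarget

An accompanying AWS::Lambda::Permission resource that allows events.amazonaws.com to invoke the function is also required.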

When a change is introduced to the CodeCommit repository, the configured Lambda function receives an event from CloudWatch signaling that there has been a source change.

{
   "version":"0",
   "id":"2f9a75be-88f6-6827-729d-34495072e5a1",
   "detail-type":"CodeCommit Repository State Change",
   "source":"aws.codecommit",
   "account":"accountNumber",
   "time":"2019-11-12T04:56:47Z",
   "region":"us-east-1",
   "resources":[
      "arn:aws:codecommit:us-east-1:accountNumber:codepipeline-customization-sandbox-repo"
   ],
   "detail":{
      "callerUserArn":"arn:aws:sts::accountNumber:assumed-role/admin/roleName ",
      "commitId":"92e953e268345c77dd93cec860f7f91f3fd13b44",
      "event":"referenceUpdated",
      "oldCommitId":"5a058542a8dfa0dacf39f3c1e53b88b0f991695e",
      "referenceFullName":"refs/heads/master",
      "referenceName":"master",
      "referenceType":"branch",
      "repositoryId":"658045f1-c468-40c3-93de-5de2c000d84a",
      "repositoryName":"codepipeline-customization-sandbox-repo"
   }
}

The Lambda function is responsible for determining whether a source change necessitates kicking-off the pipeline, which in the example is necessary if the change contains modifications to files other than readme.md. To implement this, the Lambda function uses the commitId and oldCommitId fields provided in the body of the CloudWatch event message to determine which files have changed. If the function determines that a change has occurred to a “non-ignored” file, then the function programmatically executes the pipeline. Note that for S3 sources, it may be necessary to process an entire file zip archive, or to retrieve past versions of an artifact.

import boto3

files_to_ignore = [ "readme.md" ]

codecommit_client = boto3.client('codecommit')
codepipeline_client = boto3.client('codepipeline')

def lambda_handler(event, context):
    # Extract commits
    old_commit_id = event["detail"]["oldCommitId"]
    new_commit_id = event["detail"]["commitId"]
    # Get commit differences
    codecommit_response = codecommit_client.get_differences(
        repositoryName="codepipeline-customization-sandbox-repo",
        beforeCommitSpecifier=str(old_commit_id),
        afterCommitSpecifier=str(new_commit_id)
    )
    # Search commit differences for files to ignore
    for difference in codecommit_response["differences"]:
        file_name = difference["afterBlob"]["path"].lower()
        # If non-ignored file is present, kickoff pipeline
        if file_name not in files_to_ignore:
            codepipeline_response = codepipeline_client.start_pipeline_execution(
                name="codepipeline-customization-sandbox-pipeline"
                )
            # Break to avoid executing the pipeline twice
            break
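For an S3 source, a hypothetical helper along these lines could fetch the artifact referenced by the event so its contents can be inspected before deciding whether to start the pipeline. The bucket, key, and version would come from the S3 event detail; this is a sketch, not part of the original example.

import io
import zipfile

import boto3

s3_client = boto3.client('s3')

def list_changed_files(bucket, key, version_id=None):
    """Download the artifact zip that triggered the event and return the file names inside it."""
    kwargs = {'Bucket': bucket, 'Key': key}
    if version_id:
        kwargs['VersionId'] = version_id  # requires versioning on the source bucket
    body = s3_client.get_object(**kwargs)['Body'].read()
    with zipfile.ZipFile(io.BytesIO(body)) as archive:
        return archive.namelist()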

Multiple pipelines sourcing from a single repository

Architectures that use a single-source repository monitored by multiple pipelines can add custom logic to control the types of events that trigger a specific pipeline to execute. Without customization, any change to the source repository would trigger all pipelines.

Consider the following example:

  • A CodeCommit repository contains a number of config files (for example, config_1.json and config_2.json).
  • Multiple pipelines (for example, codepipeline-customization-sandbox-pipeline-1 and codepipeline-customization-sandbox-pipeline-2) source from this CodeCommit repository.
  • Whenever a config file is updated, a custom CloudWatch Event triggers a Lambda function that is used to determine which config files changed, and therefore which pipelines should be executed.

Example CodePipeline Architecture for Monorepos with Custom CloudWatch Event Configuration

This example follows the same pattern of creating a custom CloudWatch Event and Lambda function shown in the preceding example. However, in this scenario, the Lambda function is responsible for determining which files changed and which pipelines should be kicked off as a result. To execute this logic, the Lambda function uses the config_file_mapping variable to map files to corresponding pipelines. Pipelines are only executed if their designated config file has changed.

Note that the config_file_mapping can be exported to Amazon S3 or Amazon DynamoDB for more complex use cases.

import boto3

# Map config files to pipelines
config_file_mapping = {
        "config_1.json" : "codepipeline-customization-sandbox-pipeline-1",
        "config_2.json" : "codepipeline-customization-sandbox-pipeline-2"
        }
        
codecommit_client = boto3.client('codecommit')
codepipeline_client = boto3.client('codepipeline')

def lambda_handler(event, context):
    # Extract commits
    old_commit_id = event["detail"]["oldCommitId"]
    new_commit_id = event["detail"]["commitId"]
    # Get commit differences
    codecommit_response = codecommit_client.get_differences(
        repositoryName="codepipeline-customization-sandbox-repo",
        beforeCommitSpecifier=str(old_commit_id),
        afterCommitSpecifier=str(new_commit_id)
    )
    # Search commit differences for files that trigger executions
    for difference in codecommit_response["differences"]:
        file_name = difference["afterBlob"]["path"].lower()
        # If file corresponds to pipeline, execute pipeline
        if file_name in config_file_mapping:
            codepipeline_response = codepipeline_client.start_pipeline_execution(
                name=config_file_mapping[file_name]
                )
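As noted above, the mapping does not have to live in the function code. The following is a hedged sketch of a DynamoDB-backed lookup, assuming a table named config-pipeline-mapping with file_name as its partition key and a pipeline_name attribute; the loop above would then call this helper instead of indexing the in-memory dictionary.

import boto3

dynamodb = boto3.resource('dynamodb')
mapping_table = dynamodb.Table('config-pipeline-mapping')  # assumed table name

def get_pipeline_for_file(file_name):
    """Return the pipeline mapped to a config file, or None if the file is unmapped."""
    item = mapping_table.get_item(Key={'file_name': file_name}).get('Item')
    return item['pipeline_name'] if item else None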

Results

For the first example, updates affecting only the readme.md file are completely ignored by the pipeline, while updates affecting other files begin a normal pipeline execution. For the second example, the two pipelines monitor the same source repository; however, codepipeline-customization-sandbox-pipeline-1 is executed only when config_1.json is updated and codepipeline-customization-sandbox-pipeline-2 is executed only when config_2.json is updated.

These CloudWatch Events rule and Lambda function combinations serve as good general examples of introducing custom logic to pipeline kickoffs, and can be expanded to accommodate more complex processing logic.

Cleanup

To avoid additional infrastructure costs from the examples described in this post, be sure to delete all CodeCommit repositories, CodePipeline pipelines, Lambda functions, and CodeBuild projects. When you delete a CodePipeline, the CloudWatch Events rule that was created automatically is deleted, even if the rule has been customized.

Conclusion

For scenarios which need you to define additional custom logic to control the execution of one or multiple pipelines, configuring a CloudWatch Event to trigger a Lambda function allows you to customize the conditions and types of events that can kick-off your pipeline.

Building Windows containers with AWS CodePipeline and custom actions

Post Syndicated from Dmitry Kolomiets original https://aws.amazon.com/blogs/devops/building-windows-containers-with-aws-codepipeline-and-custom-actions/

Dmitry Kolomiets, DevOps Consultant, Professional Services

AWS CodePipeline and AWS CodeBuild are the primary AWS services for building CI/CD pipelines. AWS CodeBuild supports a wide range of build scenarios thanks to various built-in Docker images. It also allows you to bring in your own custom image in order to use different tools and environment configurations. However, there are some limitations in using custom images.

Considerations for custom Docker images:

  • AWS CodeBuild has to download a new copy of the Docker image for each build job, which may take a long time for large Docker images.
  • AWS CodeBuild provides a limited set of instance types to run the builds. You might have to use a custom image if the build job requires higher memory, CPU, graphical subsystems, or any other functionality that is not part of the out-of-the-box provided Docker image.

Windows-specific limitations

  • AWS CodeBuild supports Windows builds only in a limited number of AWS regions at this time.
  • AWS CodeBuild executes Windows Server containers using Windows Server 2016 hosts, which means that build containers are huge: it is not uncommon to have an image size of 15 GB or more (with the .NET Framework SDK installed). Windows Server 2019 containers, which are roughly half the size, cannot be used because of the host-container version mismatch.
  • AWS CodeBuild runs build jobs inside Docker containers. You should enable privileged mode in order to build and publish Linux Docker images as part of your build job. However, Docker-in-Docker is not supported on Windows and, therefore, AWS CodeBuild cannot be used to build Windows Server container images.

The last point is the critical one for microservice-style applications based on Microsoft stacks (.NET Framework, Web API, IIS). The usual workflow for this kind of application is to build a Docker image, push it to Amazon ECR, and update the Amazon ECS / EKS cluster deployment.

Here is what I cover in this post:

  • How to address the limitations stated above by implementing AWS CodePipeline custom actions (applicable for both Linux and Windows environments).
  • How to use the created custom action to define a CI/CD pipeline for Windows Server containers.

CodePipeline custom actions

By using Amazon EC2 instances, you can address the limitations with Windows Server containers and enable Windows build jobs in the regions where AWS CodeBuild does not provide native Windows build environments. To accommodate the specific needs of a build job, you can pick one of the many Amazon EC2 instance types available.

The downside of this approach is additional management burden—neither AWS CodeBuild nor AWS CodePipeline support Amazon EC2 instances directly. There are ways to set up a Jenkins build cluster on AWS and integrate it with CodeBuild and CodeDeploy, but these options are too “heavy” for the simple task of building a Docker image.

There is a different way to tackle this problem: AWS CodePipeline provides APIs that allow you to extend a build action through custom actions. This example demonstrates how to add a custom action to offload a build job to an Amazon EC2 instance.

Here is the generic sequence of steps that the custom action performs:

  • Acquire EC2 instance (see the Notes on Amazon EC2 build instances section).
  • Download AWS CodePipeline artifacts from Amazon S3.
  • Execute the build command and capture any errors.
  • Upload output artifacts to be consumed by subsequent AWS CodePipeline actions.
  • Update the status of the action in AWS CodePipeline.
  • Release the Amazon EC2 instance.

Notice that most of these steps are the same regardless of the actual build job being executed. However, the following parameters will differ between CI/CD pipelines and, therefore, have to be configurable:

  • Instance type (t2.micro, t3.2xlarge, etc.)
  • AMI (builds could have different prerequisites in terms of OS configuration, software installed, Docker images downloaded, etc.)
  • Build command line(s) to execute (MSBuild script, bash, Docker, etc.)
  • Build job timeout

Serverless custom action architecture

A CodePipeline custom build action can be implemented as an agent component installed on an Amazon EC2 instance. The agent polls CodePipeline for build jobs and executes them on the Amazon EC2 instance. There is an example of such an agent on GitHub, but this approach requires installation and configuration of the agent on all Amazon EC2 instances that carry out the build jobs.

Instead, I want to introduce an architecture that enables any Amazon EC2 instance to be a build agent without additional software and configuration required. The architecture diagram looks as follows:

Serverless custom action architecture

There are multiple components involved:

  1. An Amazon CloudWatch Event triggers an AWS Lambda function when a custom CodePipeline action is to be executed.
  2. The Lambda function retrieves the action’s build properties (AMI, instance type, etc.) from CodePipeline, along with location of the input artifacts in the Amazon S3 bucket.
  3. The Lambda function starts a Step Functions state machine that carries out the build job execution, passing all the gathered information as input payload.
  4. The Step Functions flow acquires an Amazon EC2 instance according to the provided properties, waits until the instance is up and running, and starts an AWS Systems Manager command. The Step Functions flow is also responsible for handling all the errors during build job execution and releasing the Amazon EC2 instance once the Systems Manager command execution is complete.
  5. The Systems Manager command runs on an Amazon EC2 instance, downloads CodePipeline input artifacts from the Amazon S3 bucket, unzips them, executes the build script, and uploads any output artifacts to the CodePipeline-provided Amazon S3 bucket.
  6. The polling Lambda function updates the state of the custom action in CodePipeline once it detects that the Step Functions flow is complete.

The whole architecture is serverless and requires no maintenance in terms of software installed on Amazon EC2 instances thanks to the Systems Manager command, which is essential for this solution. All the code, AWS CloudFormation templates, and installation instructions are available on the GitHub project. The following sections provide further details on the mentioned components.

Custom Build Action

The custom action type is defined as an AWS::CodePipeline::CustomActionType resource as follows:

  Ec2BuildActionType: 
    Type: AWS::CodePipeline::CustomActionType
    Properties: 
      Category: !Ref CustomActionProviderCategory
      Provider: !Ref CustomActionProviderName
      Version: !Ref CustomActionProviderVersion
      ConfigurationProperties: 
        - Name: ImageId 
          Description: AMI to use for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: InstanceType
          Description: Instance type for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: Command
          Description: Command(s) to execute.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String 
        - Name: WorkingDirectory 
          Description: Working directory for the command to execute.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
        - Name: OutputArtifactPath 
          Description: Path of the file(-s) or directory(-es) to use as custom action output artifact.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
      InputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0
      OutputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0 
      Settings: 
        EntityUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/systems-manager/documents/${RunBuildJobOnEc2Instance}"
        ExecutionUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/states/home#/executions/details/{ExternalExecutionId}"

The custom action type is uniquely identified by Category, Provider name, and Version.

Category defines the stage of the pipeline in which the custom action can be used, such as build, test, or deploy. Check the AWS documentation for the full list of allowed values.

Provider name and Version are the values used to identify the custom action type in the CodePipeline console or AWS CloudFormation templates. Once the custom action type is installed, you can add it to the pipeline, as shown in the following screenshot:

Adding custom action to the pipeline

The custom action type also defines a list of user-configurable properties—these are the properties identified above as specific for different CI/CD pipelines:

  • AMI Image ID
  • Instance Type
  • Command
  • Working Directory
  • Output artifacts

The properties are configurable in the CodePipeline console, as shown in the following screenshot:

Custom action properties

Note the last two settings in the Custom Action Type AWS CloudFormation definition: EntityUrlTemplate and ExecutionUrlTemplate.

EntityUrlTemplate defines the link to the AWS Systems Manager document that carries over the build actions. The link is visible in AWS CodePipeline console as shown in the following screenshot:

Custom action's EntityUrlTemplate link

ExecutionUrlTemplate defines the link to additional information related to a specific execution of the custom action. The link is also visible in the CodePipeline console, as shown in the following screenshot:

Custom action's ExecutionUrlTemplate link

This URL is defined as a link to the Step Functions execution details page, which provides high-level information about the custom build step execution, as shown in the following screenshot:

Custom build step execution

This page is a convenient visual representation of the custom action execution flow and may be useful for troubleshooting purposes, as it gives immediate access to error messages and logs.

The polling Lambda function

The Lambda function polls CodePipeline for custom actions when it is triggered by the following CloudWatch event:

  source: 
    - "aws.codepipeline"
  detail-type: 
    - "CodePipeline Action Execution State Change"
  detail: 
    state: 
      - "STARTED"

The event is triggered for every CodePipeline action that starts, so the Lambda function should verify whether there is actually a custom action job to be processed.

The rest of the Lambda function is straightforward and relies on the following APIs to retrieve or update CodePipeline actions and to manage Step Functions state machine executions:

CodePipeline API

AWS Step Functions API

You can find the complete source of the Lambda function on GitHub.
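The GitHub project is the reference implementation; purely for orientation, a stripped-down sketch of that flow might look like the following. The action type values and the state machine ARN are assumptions, and error handling as well as the completion path (put_job_success_result / put_job_failure_result) are omitted.

import json
import os

import boto3

codepipeline = boto3.client('codepipeline')
stepfunctions = boto3.client('stepfunctions')

# Assumed values; the real implementation derives these from its configuration
ACTION_TYPE_ID = {'category': 'Build', 'owner': 'Custom', 'provider': 'EC2', 'version': '1'}
STATE_MACHINE_ARN = os.environ.get('STATE_MACHINE_ARN', '')

def lambda_handler(event, context):
    # The CloudWatch event fires for every action that starts, so check whether
    # a job for our custom action type is actually waiting to be processed.
    jobs = codepipeline.poll_for_jobs(actionTypeId=ACTION_TYPE_ID, maxBatchSize=1).get('jobs', [])
    for job in jobs:
        # Claim the job so that no other worker picks it up
        codepipeline.acknowledge_job(jobId=job['id'], nonce=job['nonce'])
        # Hand the build properties and artifact locations to the Step Functions flow
        stepfunctions.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(job, default=str)
        )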

Step Functions state machine

The following diagram shows the complete Step Functions state machine. There are three main blocks on the diagram:

  • Acquiring an Amazon EC2 instance and waiting while the instance is registered with Systems Manager
  • Running a Systems Manager command on the instance
  • Releasing the Amazon EC2 instance

Note that the Amazon EC2 instance must be released even if an error or exception occurs during Systems Manager command execution; the state machine relies on Fallback States to guarantee this.
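In Amazon States Language this is typically expressed with a Catch clause on the build task. The following is a simplified fragment of the States section, with placeholder state names and function ARNs, not the actual definition from the project:

"RunBuildCommand": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:us-east-1:111111111111:function:StartSsmCommand",
  "Catch": [
    {
      "ErrorEquals": ["States.ALL"],
      "ResultPath": "$.error",
      "Next": "ReleaseInstance"
    }
  ],
  "Next": "ReleaseInstance"
},
"ReleaseInstance": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:us-east-1:111111111111:function:TerminateBuildInstance",
  "End": true
}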

You can find the complete definition of the Step Function state machine on GitHub.

Step Functions state machine

Systems Manager Document

The AWS Systems Manager Run Command does all the magic. The Systems Manager agent is pre-installed on AWS Windows and Linux AMIs, so no additional software is required. The Systems Manager run command executes the following steps to carry out the build job:

  1. Download input artifacts from Amazon S3.
  2. Unzip artifacts in the working folder.
  3. Run the command.
  4. Upload output artifacts to Amazon S3, if any; this makes them available for the following CodePipeline stages.

The preceding steps are operating-system agnostic, and both Linux and Windows instances are supported. The following code snippet shows the Windows-specific steps.

You can find the complete definition of the Systems Manager document on GitHub.

mainSteps:
  - name: win_enable_docker
    action: aws:configureDocker
    inputs:
      action: Install

  # Windows steps
  - name: windows_script
    precondition:
      StringEquals: [platformType, Windows]
    action: aws:runPowerShellScript
    inputs:
      runCommand:
        # Ensure that if a command fails the script does not proceed to the following commands
        - "$ErrorActionPreference = \"Stop\""

        - "$jobDirectory = \"{{ workingDirectory }}\""
        # Create temporary folder for build artifacts, if not provided
        - "if ([string]::IsNullOrEmpty($jobDirectory)) {"
        - "    $parent = [System.IO.Path]::GetTempPath()"
        - "    [string] $name = [System.Guid]::NewGuid()"
        - "    $jobDirectory = (Join-Path $parent $name)"
        - "    New-Item -ItemType Directory -Path $jobDirectory"
                # Set current location to the new folder
        - "    Set-Location -Path $jobDirectory"
        - "}"

        # Download/unzip input artifact
        - "Read-S3Object -BucketName {{ inputBucketName }} -Key {{ inputObjectKey }} -File artifact.zip"
        - "Expand-Archive -Path artifact.zip -DestinationPath ."

        # Run the build commands
        - "$directory = Convert-Path ."
        - "$env:PATH += \";$directory\""
        - "{{ commands }}"
        # We need to check exit code explicitly here
        - "if (-not ($?)) { exit $LASTEXITCODE }"

        # Compress output artifacts, if specified
        - "$outputArtifactPath  = \"{{ outputArtifactPath }}\""
        - "if ($outputArtifactPath) {"
        - "    Compress-Archive -Path $outputArtifactPath -DestinationPath output-artifact.zip"
                # Upload compressed artifact to S3
        - "    $bucketName = \"{{ outputBucketName }}\""
        - "    $objectKey = \"{{ outputObjectKey }}\""
        - "    if ($bucketName -and $objectKey) {"
                    # Don't forget to encrypt the artifact - CodePipeline bucket has a policy to enforce this
        - "        Write-S3Object -BucketName $bucketName -Key $objectKey -File output-artifact.zip -ServerSideEncryption aws:kms"
        - "    }"
        - "}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"

CI/CD pipeline for Windows Server containers

Once you have a custom action that offloads the build job to the Amazon EC2 instance, you may approach the problem stated at the beginning of this blog post: how to build and publish Windows Server containers on AWS.

With the custom action installed, the solution is quite straightforward. To build a Windows Server container image, you need to provide the value for Windows Server with Containers AMI, the instance type to use, and the command line to execute, as shown in the following screenshot:

Windows Server container custom action properties

This example executes the Docker build command on a Windows instance with the specified AMI and instance type, using the provided source artifact. In real life, you may want to keep the build script along with the source code and push the built image to a container registry. The following is a PowerShell script example that not only produces a Docker image but also pushes it to AWS ECR:

# Authenticate with ECR
Invoke-Expression -Command (Get-ECRLoginCommand).Command

# Build and push the image
docker build -t <ecr-repository-url>:latest .
docker push <ecr-repository-url>:latest

return $LASTEXITCODE

You can find a complete example of the pipeline that produces the Windows Server container image and pushes it to Amazon ECR on GitHub.

Notes on Amazon EC2 build instances

There are a few ways to get Amazon EC2 instances for custom build actions. Let’s take a look at a couple of them below.

Start new EC2 instance per job and terminate it at the end

This is a reasonable default strategy that is implemented in this GitHub project. Each time the pipeline needs to process a custom action, you start a new Amazon EC2 instance, carry out the build job, and terminate the instance afterwards.

This approach is easy to implement. It works well for scenarios in which you don’t have many builds and/or builds take some time to complete (tens of minutes). In this case, the time required to provision an instance is amortized. Conversely, if the builds are fast, instance provisioning time could be actually longer than the time required to carry out the build job.

Use a pool of running Amazon EC2 instances

There are cases when it is required to keep builder instances “warm”, either due to complex initialization or merely to reduce the build duration. To support this scenario, you could maintain a pool of always-running instances. The “acquisition” phase takes a warm instance from the pool and the “release” phase returns it back without terminating or stopping the instance. A DynamoDB table can be used as a registry to keep track of “busy” instances and provide waiting or scaling capabilities to handle high demand.
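A hedged sketch of such a registry follows, assuming a table named build-instance-registry keyed on instance_id with a status attribute; this is illustrative, not part of the GitHub project.

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
registry = dynamodb.Table('build-instance-registry')  # assumed table name

def acquire_instance():
    """Claim the first idle instance by flipping its status atomically."""
    idle = registry.scan(FilterExpression=Attr('status').eq('idle')).get('Items', [])
    for item in idle:
        try:
            registry.update_item(
                Key={'instance_id': item['instance_id']},
                UpdateExpression='SET #s = :busy',
                ConditionExpression='#s = :idle',  # fails if another build claimed it first
                ExpressionAttributeNames={'#s': 'status'},
                ExpressionAttributeValues={':busy': 'busy', ':idle': 'idle'}
            )
            return item['instance_id']
        except registry.meta.client.exceptions.ConditionalCheckFailedException:
            continue
    return None  # pool exhausted; the caller can wait or scale the pool out

def release_instance(instance_id):
    """Return an instance to the pool without stopping or terminating it."""
    registry.update_item(
        Key={'instance_id': instance_id},
        UpdateExpression='SET #s = :idle',
        ExpressionAttributeNames={'#s': 'status'},
        ExpressionAttributeValues={':idle': 'idle'}
    )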

This approach works well for scenarios in which there are many builds and demand is predictable (e.g. during work hours).

Use a pool of stopped Amazon EC2 instances

This is an interesting approach, especially for Windows builds. All AWS Windows AMIs are generalized using the Sysprep tool. The important implication of this is that the first boot of a Windows EC2 instance is quite long: it can easily take more than 5 minutes. This is generally unacceptable for short-lived build jobs (if your build takes just a minute, it is annoying to wait 5 minutes for the instance to start).

Interestingly, once the Windows instance is initialized, subsequent starts take less than a minute. To utilize this, you could create a pool of initialized and stopped Amazon EC2 instances. In this case, for the acquisition phase, you start the instance, and when you need to release it, you stop or hibernate it.
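A minimal sketch of the acquire and release calls for a pool of stopped instances is shown below; the instance IDs would come from a registry like the one sketched above, and hibernation only works if the instance was launched with hibernation enabled.

import boto3

ec2 = boto3.client('ec2')

def acquire_stopped_instance(instance_id):
    """Start a pre-initialized instance; subsequent Windows boots are much faster than the first."""
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter('instance_status_ok').wait(InstanceIds=[instance_id])

def release_instance(instance_id, hibernate=False):
    """Stop (or hibernate) the instance so it can be reused by the next build."""
    ec2.stop_instances(InstanceIds=[instance_id], Hibernate=hibernate)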

This approach provides substantial improvements in terms of build start-up time.

The downside is that you reuse the same Amazon EC2 instance between builds, so it is not a completely clean environment. Build jobs have to be designed to expect the presence of artifacts from previous executions on the build instance.

Using an Amazon EC2 fleet with spot instances

Another variation of the previous strategies is to use Amazon EC2 Fleet to make use of cost-efficient spot instances for your build jobs.

Amazon EC2 Fleet makes it possible to combine On-Demand Instances with Spot Instances to deliver a cost-efficient solution for your build jobs. On-Demand Instances provide the minimum required capacity, and Spot Instances provide a cost-efficient way to improve the performance of your build fleet.

Note that since spot instances could be terminated at any time, the Step Functions workflow has to support Amazon EC2 instance termination and restart the build on a different instance transparently for CodePipeline.

Limits and Cost

The following are a few final thoughts.

Custom action timeouts

The default maximum execution time for CodePipeline custom actions is one hour. If your build jobs require more than an hour, you need to request a limit increase for custom actions.

Cost of running EC2 build instances

Custom Amazon EC2 instances could be even more cost-effective than CodeBuild for many scenarios. However, it is difficult to compare the total cost of ownership of a custom-built fleet with CodeBuild. CodeBuild is a fully managed build service and you pay for each minute of using the service. In contrast, with Amazon EC2 instances you pay for the instance either per hour or per second (depending on instance type and operating system), plus the associated EBS volumes, Lambda invocations, and Step Functions state transitions. Please use the AWS Simple Monthly Calculator to get the total cost of your projected build solution.

Cleanup

If you ran the above steps as part of a workshop or test, delete the resources to avoid incurring further charges. All resources are deployed as part of an AWS CloudFormation stack, so open the AWS CloudFormation console, select the stack, and choose Delete to remove it.

Conclusion

The CodePipeline custom action is a simple way to utilize Amazon EC2 instances for your build jobs and address a number of CodePipeline limitations.

The CodePipeline custom action with a simple Start/Terminate instance strategy is available on GitHub as an AWS CloudFormation stack. You can import the stack into your account and start using the custom action in your pipelines right away.

An example of the pipeline that produces Windows Server containers and pushes them to Amazon ECR can also be found on GitHub.

I invite you to clone the repositories to play with the custom action, and to make any changes to the action definition, Lambda functions, or Step Functions flow.

Feel free to ask any questions or comments below, or file issues or PRs on GitHub to continue the discussion.

Receive AWS Developer Tools Notifications over Slack using AWS Chatbot

Post Syndicated from Anushri Anwekar original https://aws.amazon.com/blogs/devops/receive-aws-developer-tools-notifications-over-slack-using-aws-chatbot/

Developers often use Slack to communicate with each other about their code. With AWS Chatbot, you can configure notifications for developer tools resources such as repositories, build projects, deployment applications, and pipelines so that users in Slack channels are automatically notified about important events. When a deployment fails, a build succeeds, or a pull request is created, developers get notifications where they’re most likely to see and react to them.

The AWS services that currently support these notifications are AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline.

In this post, I walk you through the high-level steps for creating a notification that alerts users in a Slack channel every time a pull request is created in a CodeCommit repository.

Solution overview

You can create both the notification rule to listen for required events and the Amazon SNS topic used for notifications on the same web page. You can then configure AWS Chatbot so that notifications sent to that Amazon SNS topic appear in a Slack channel.

To set up notifications, follow this process, as shown in the following diagram:

  1. Create a notification rule for a repository. This includes creating an Amazon SNS topic to use for notifications.
  2. Configure AWS Chatbot to send notifications from that Amazon SNS topic to a Slack channel.
  3. Test it out and enjoy receiving notifications in your team’s Slack channel.

This diagram describes the notification workflow and how impacted services are connected.

Prerequisites

To follow along with this example, you need an AWS account, an IAM user or role with administrative access, a CodeCommit repository, and a Slack channel.

Configuration steps

Step 1: Create notification rule in CodeCommit

Follow these steps to create a notification rule in CodeCommit:

1. Select the repository in CodeCommit about which you want to be notified. In this example, I have selected a repository called Hello-Dublin.

2. With the repository selected, choose Notify, then Create notification rule.

3. Provide a name for your notification rule. I suggest leaving the default Detail Type as Full. By selecting Full, you get extra information beyond what is present in the resource events. Also, you get updated information about your selected event types whenever new information is added about them.

  • For example, if you want to receive notifications whenever a comment is made on a pull request, select Basic, and your notification informs you that a comment has been made.
  • If you select Full, the notification also specifies the exact comment that was made. If the notification feature is enhanced and extra information is added to be a part of the notification, you start receiving the new information without modifying your existing notification rule.

4. In Event types, in Pull request, select Created.

5. In Targets, choose Create SNS topic. This automatically sets up a new Amazon SNS topic to use for notifications, applying a policy that allows notification events to be sent to it.

6. Finish creating the rule. Keep a note of the Amazon SNS ARN, as you need this information to configure Slack integration in the next step.

For complete step-by-step instructions for creating a notification rule, see Create a Notification Rule.
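The console flow above can also be scripted. The following is a hedged AWS CLI sketch with placeholder ARNs and the repository name used in this example; verify the event type ID against the notifications documentation for your use case.

aws codestar-notifications create-notification-rule \
  --name "hello-dublin-pr-created" \
  --resource arn:aws:codecommit:us-east-1:111111111111:Hello-Dublin \
  --event-type-ids codecommit-repository-pull-request-created \
  --targets TargetType=SNS,TargetAddress=arn:aws:sns:us-east-1:111111111111:codecommit-notifications \
  --detail-type FULL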

Step 2: Integrate your Amazon SNS topic with AWS Chatbot

Follow these steps to integrate your Amazon SNS topic with AWS Chatbot.

1. Open up your Slack channel. You need information about it as well as your notification rule to complete integration.

2. Open the AWS Chatbot console and choose Try the AWS Chatbot beta.

3. Choose Configure new client, then Slack, then Configure.

4. AWS Chatbot asks for permission to access your Slack workspace, as seen in the following screenshot. Once you give permission, you are asked to configure your Slack channel.

Screen-shot of a prompt about AWS Chatbot requesting permission to access the notifications Slack workspace

Step 3: Test the notification

In your repository, create a pull request. In this example, I named the pull request This is a new pull request. Watch as a notification about that event appears in your Slack channel, as seen in the following screenshot.

Example of a notification received on a Slack channel when a new pull request is created

Step 4: Clean-up

If you created the notification rule just for testing purposes, you should delete the Amazon SNS topic to avoid any further charges.

Conclusion

And that's it! You can use notifications to help developers stay informed about the key events happening in their software development life cycle. You can set up notification rules for build projects, deployment applications, pipelines, and repositories, and stay informed about key events such as pull request creation, comments made on your code or commits, build state or phase changes, deployment status changes, manual pipeline approvals, or pipeline execution status changes. For more information, see the notifications documentation.

Multi-branch CodePipeline strategy with event-driven architecture

Post Syndicated from Henrique Bueno original https://aws.amazon.com/blogs/devops/multi-branch-codepipeline-strategy-with-event-driven-architecture/

Henrique Bueno, DevOps Consultant, Professional Services

This blog post presents a solution for automated pipelines creation in AWS CodePipeline when a new branch is created in an AWS CodeCommit repository. A use case for this solution is when a GitFlow approach using CodePipeline is required. The strategy presented here is used by AWS customers to enable the use of GitFlow using only AWS tools.

CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.

CodeCommit is a fully managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.

GitFlow is a branching model designed around the project release. This provides a robust framework for managing larger projects. Gitflow is ideally suited for projects that have a scheduled release cycle.

Applicability

When using CodePipeline to orchestrate pipelines and CodeCommit as a code source, in addition to setting a repository, you must also set which branch will trigger the pipeline. This configuration works perfectly for the trunk-based strategy, in which you have only one main branch and all the developers interact with this single branch. However, when you need to work with a multi-branching strategy like GitFlow, the requirement to set a pipeline for each branch brings additional challenges.

It’s important to note that trunk-based is, by far, the best strategy for taking full advantage of a DevOps approach; this is the branching strategy that AWS recommends to its customers. On the other hand, many customers like to work with multiple branches and believe it justifies the effort and complexity in dealing with branching merges. This solution is for these customers.

Solution Overview

One of the great benefits of working with Infrastructure as Code is the ability to create multiple identical environments through a single template. This example uses AWS CloudFormation templates to provision pipelines and other necessary resources, as shown in the following diagram.

Multi-branch CodePipeline strategy with event-driven architecture


The template is hosted in an Amazon S3 bucket. An AWS Lambda function deploys a new AWS CloudFormation stack based on this template. This Lambda function is triggered by an Amazon CloudWatch Events rule that watches for events on the CodeCommit repositories.

The CreatePipeline Events rule

The AWS CloudFormation snippet that creates the Events rule follows. The Events rule monitors create and delete branches events in all repositories, triggering the CreatePipeline Lambda function.

#----------------------------------------------------------------------#
# EventRule to trigger LambdaPipeline lambda
#----------------------------------------------------------------------#
  CreatePipelineRule:
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - 'CodeCommit Repository State Change'
        detail:
          event:
              - referenceDeleted
              - referenceCreated
          referenceType:
            - branch
      State: ENABLED
      Targets: 
      - Arn: !GetAtt CreatePipeline.Arn
        Id: CreatePipeline

 

The CreatePipeline Lambda function

The Lambda function receives the event details, parses the variables, and executes the appropriate actions. If the event is referenceCreated, then the stack is created; otherwise the stack is deleted. The name of the created or deleted stack is the repository name concatenated with the new branch name. This is a very simple function.

#----------------------------------------------------------------------#
# Lambda for Stack Creation
#----------------------------------------------------------------------#
import boto3
def lambda_handler(event, context):
    Region = event['region']
    Account = event['account']
    RepositoryName = event['detail']['repositoryName']
    NewBranch = event['detail']['referenceName']
    Event = event['detail']['event']
    if NewBranch == "master":
        return
    if Event == "referenceCreated":
        cf_client = boto3.client('cloudformation')
        cf_client.create_stack(
            StackName=f'Pipeline-{RepositoryName}-{NewBranch}',
            TemplateURL=f'https://s3.amazonaws.com/{Account}-templates/TemplatePipeline.yaml',
            Parameters=[
                {
                    'ParameterKey': 'RepositoryName',
                    'ParameterValue': RepositoryName,
                    'UsePreviousValue': False
                },
                {
                    'ParameterKey': 'BranchName',
                    'ParameterValue': NewBranch,
                    'UsePreviousValue': False
                }
            ],
            OnFailure='ROLLBACK',
            Capabilities=['CAPABILITY_NAMED_IAM']
        )
    else:
        cf_client = boto3.client('cloudformation')
        cf_client.delete_stack(
            StackName=f'Pipeline-{RepositoryName}-{NewBranch}'
        )

The logic for creating only the CI or the CI+CD is on the AWS CloudFormation template. The Conditions section of AWS CloudFormation analyzes the new branch name.

Conditions: 
    BranchMaster: !Equals [ !Ref BranchName, "master" ]
    BranchDevelop: !Equals [ !Ref BranchName, "develop"]
    Setup: !Equals [ !Ref Setup, true ]

  • If the new branch is named master, then a stack will be created containing CI+CD pipelines, with deploy stages in the homologation and production environments.
  • If the new branch is named develop, then a stack will be created containing CI+CD pipelines, with a deploy stage in the Dev environment.
  • If the new branch has any other name, then the stack will be created with only a CI pipeline.
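For example, a deploy-stage resource in TemplatePipeline.yaml can be gated on one of these conditions. The snippet below is an illustrative sketch; the resource name, role parameter, and project settings are assumptions, not the actual template.

  DeployProdProject:                  # hypothetical resource name
    Type: AWS::CodeBuild::Project
    Condition: BranchMaster           # created only for pipelines on the master branch
    Properties:
      Name: !Sub "${RepositoryName}-${BranchName}-deploy-prod"
      ServiceRole: !Ref CodeBuildRoleArn   # assumed parameter holding the CodeBuild role ARN
      Source:
        Type: CODEPIPELINE
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:3.0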

Since the purpose of this blog post is to present only a sample of automated pipelines creation, the pipelines used here are for examples only: they don’t deploy to any environment.

 

Applicability

This event-driven strategy permits pipelines to be created or deleted along with the branches. Since the entire environment is created using Infrastructure as Code and the template is the same for all pipelines, there is no possibility of different configuration issues between environments and pipeline stages.

A GitFlow simulation could resemble that shown in the following diagram:

GitFlow Diagram


  1. First, a CodeCommit repository is created, along with the master branch and its respective pipeline (CI+CD).
  2. The developer creates a branch called develop based on the master branch. The pipeline (CI+CD at Dev) is automatically created.
  3. The developer creates a feature-branch called feature-a based on the develop branch. The CI pipeline for this branch is automatically created.
  4. The developer creates a Pull Request from the feature-a branch to the develop branch. As soon as the Pull Request is accepted and merged and the feature-a branch is deleted, its pipeline is automatically deleted.

The same process can be followed for the release branch and hotfix branch. Once the branch is created, a new pipeline is created for it which follows its branch lifecycle.

Pipelines Stacks

 

Implementation

Before you start, make sure that the AWS CLI is installed and configured on your machine by following these steps:

  1. Clone the repository.
  2. Create the prerequisites stack.
  3. Copy the AWS CloudFormation template to Amazon S3.
  4. Copy the seed.zip file to the Amazon S3 bucket.
  5. Create the first repository and its pipeline.
  6. Create the develop branch.
  7. Create the first feature branch.
  8. Create the first Pull Request.
  9. Execute the Pull Request approval.
  10. Cleanup.

 

1. Clone the repository

Clone the repository with the sample code.

The main files are:

  • Setup.yaml: an AWS CloudFormation template for creating pipeline prerequisites.
  • TemplatePipeline.yaml: an AWS CloudFormation template for pipeline creation.
  • seed/buildspec/CIAction.yaml: a configuration file for an AWS CodeBuild project at the CI stage.
  • seed/buildspec/CDAction.yaml: a configuration file for a CodeBuild project at the CD stage.
# Command to clone the repository
git clone https://github.com/aws-samples/aws-codepipeline-multi-branch-strategy.git
cd aws-codepipeline-multi-branch-strategy

 

2. Create the prerequisite stack

The Setup stack creates the resources that are prerequisites for pipeline creation, as shown in the following chart.

These resources are created only once and they fit all the pipelines created in this example.

Resources

# Command to create Setup stack
aws cloudformation deploy --stack-name Setup-Pipeline \
--template-file Setup.yaml --region us-east-1 --capabilities CAPABILITY_NAMED_IAM

 

3. Copy the AWS CloudFormation template to Amazon S3

For the Lambda function to deploy a new pipeline stack, it needs to get the AWS CloudFormation template from somewhere. To enable it to do so, you need to save the template inside the Amazon S3 bucket that you just created at the Setup stack.

# Command that copy Template to S3 Bucket
aws s3 cp TemplatePipeline.yaml s3://"$(aws sts get-caller-identity --query Account --output text)"-templates/ --acl private

 

4. Copy the seed.zip file to the Amazon S3 bucket

CodeCommit lets you populate a repository with a first commit at the moment of its creation. The content of this first commit can be stored as a .zip file in an Amazon S3 bucket. Use this CodeCommit option to populate your repository with the BuildSpec files for CodeBuild.

# Command to create zip file with the Buildspec folder content.
zip -r seed.zip buildspec

# Command that copy seed.zip file to S3 Bucket.
aws s3 cp seed.zip s3://"$(aws sts get-caller-identity --query Account --output text)"-templates/ --acl private
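For reference, the repository resource in TemplatePipeline.yaml can point at this seed object through the Code property of AWS::CodeCommit::Repository. The following is a hedged sketch that reuses the bucket-naming convention from the commands above; the resource name is illustrative.

  AppRepository:                       # illustrative resource name
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: !Ref RepositoryName
      Code:
        S3:
          Bucket: !Sub "${AWS::AccountId}-templates"
          Key: seed.zip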

 

5. Create the first repository and its pipeline

Now that the Setup stack is created and the seed file is stored in an Amazon S3 bucket, create the first CodeCommit repository. Every time that you want to create a new repository, execute the command below to create a new stack.

# Command to create the stack with the CodeCommit repository,
# CodeBuild Projects and the Pipeline for the master branch.
# Note: Change "myapp" by the name you want.

RepoName="myapp"
aws cloudformation deploy --stack-name Repo-$RepoName --template-file TemplatePipeline.yaml \
--parameter-overrides RepositoryName=$RepoName Setup=true \
--region us-east-1 --capabilities CAPABILITY_NAMED_IAM

When the stack is created, in addition to the CodeCommit repository, the CodeBuild projects and the master branch pipeline are also created. By default, a CodeCommit repository is created empty, with no branch. When the repository is populated with the seed.zip file, the master branch is created.

Resources

Access the CodeCommit repository to see the seed files on the master branch. Access the CodePipeline console to see that there's a new pipeline with the same name as the repository. This pipeline contains the CI+CD stages (homologation and production).

 

6. Create the develop branch

To simulate a real development scenario, create a new branch called develop based on the master branch. In the GitFlow concept these two (master and develop) branches are fixed and never deleted.

When this new branch is created, the Events rule identifies that there’s a change on this repository and triggers the CreatePipeline Lambda function to create a new pipeline for this branch. Access the CodePipeline console to see that there’s a new pipeline with the name of the repository plus the branch name. This pipeline contains the CI+CD stages (Dev).

# Configure Git Credentials using AWS CLI Credential Helper
mkdir myapp
cd myapp
git config --global credential.helper '!aws codecommit credential-helper [email protected]'
git config --global credential.UseHttpPath true

# Clone the CodeCommit repository
# You can get the URL in the CodeCommit Console
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myapp .

# Create the develop branch
# For more details: https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-branch.html
git checkout -b develop
git push origin develop

 

7. Create the first feature branch

Now that there are two main and fixed branches (master and develop), you can create a feature branch. In the GitFlow concept, feature branches have a short lifetime and are frequently merged to the develop branch. This type of branch only exists during the development period. When the feature development is finished, it is merged to the develop branch and the feature branch is deleted.

# Create the feature-branch branch
# make sure that you are at develop branch
git checkout -b feature-abc
git push origin feature-abc

Just as with the develop branch, when this new branch is created, the Events rule triggers the CreatePipeline Lambda function to create a new pipeline for this branch. Access the CodePipeline console to see that there’s a new pipeline with the name of the repository plus the branch name. This pipeline contains only the CI stage, without a CD stage.

Pipelines-2

 

8. Create the first Pull Request

It’s possible to simulate the end of a feature development, when the branch is ready to be merged with the develop branch. Keep in mind that the feature branch has a short lifecycle:  when the merge is done, the feature branch is deleted along with its pipeline.

# Create the Pull Request
aws codecommit create-pull-request --title "My Pull Request" \
--description "Please review these changes by Tuesday" \
--targets repositoryName=$RepoName,sourceReference=feature-abc,destinationReference=develop \
--region us-east-1

 

9. Execute the Pull Request approval

To merge the feature branch to the develop branch, the Pull Request needs to be approved. In a real scenario, a teammate should do a peer review before approval.

# Accept the Pull Request
# You can get the Pull-Request-ID in the json output of the create-pull-request command
aws codecommit merge-pull-request-by-fast-forward --pull-request-id <PULL_REQUEST_ID_FROM_PREVIOUS_COMMAND> \
--repository-name $RepoName --region us-east-1

# Delete the feature-branch
aws codecommit delete-branch --repository-name $RepoName --branch-name feature-abc --region us-east-1

The new code is integrated with the develop branch. If there's a conflict, it needs to be resolved. After that, the feature branch is deleted together with its pipeline. The Events rule triggers the CreatePipeline Lambda function to delete the pipeline for that branch. Access the CodePipeline console to see that the pipeline for the feature branch is deleted.

Pipelines

 

10. Cleanup

To remove the resources created as part of this blog post, follow these steps:

Delete Pipeline Stacks

# Delete all the pipeline Stacks created by CreatePipeline Lambda
Pipelines=$(aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --region us-east-1 --query 'StackSummaries[? StackStatus==`CREATE_COMPLETE` && starts_with(StackName, `Pipeline`) == `true`].[StackName]' --output text)
while read -r Pipeline rest; do aws cloudformation delete-stack --stack-name $Pipeline --region us-east-1 ; done <<< "$Pipelines"

Delete Repository Stacks

# Delete all the Repository stacks Stacks created by Step 5.
Repos=$(aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE --region us-east-1 --query 'StackSummaries[? StackStatus==`CREATE_COMPLETE` && starts_with(StackName, `Repo-`) == `true`].[StackName]' --output text)
while read -r Repo rest; do aws cloudformation delete-stack --stack-name $Repo --region us-east-1 ; done <<< "$Repos"

Delete Setup-Pipeline Stack

# Cleaning Bucket before Stack deletion 
aws s3 rm s3://"$(aws sts get-caller-identity --query Account --output text)"-templates --recursive

# Delete Setup Stack
aws cloudformation delete-stack --stack-name Setup-Pipeline --region us-east-1 

 

Conclusion

This blog post discussed how you can work with event-driven strategy and Infrastructure as Code to implement a multi-branch pipeline flow using CodePipeline. It demonstrated how an Events rule and Lambda function can be used to fully orchestrate the creation and deletion of pipelines.

 

About the Author

Henrique Bueno


Henrique is a DevOps Consultant in the Brazilian Professional Services Team at Amazon Web Services. He has helped AWS customers design, build and deploy cloud native applications by following the Twelve-Factor App methodology.

Integrating SonarQube as a pull request approver on AWS CodeCommit

Post Syndicated from David Jackson original https://aws.amazon.com/blogs/devops/integrating-sonarqube-as-a-pull-request-approver-on-aws-codecommit/

On Nov 25th, AWS CodeCommit launched a new feature that allows customers to configure approval rules on pull requests. Approval rules act as a gate on your source code changes. Pull requests which fail to satisfy the required approvals cannot be merged into your important branches. Additionally, CodeCommit launched the ability to create approval rule templates, which are rulesets that can automatically be applied to all pull requests created for one or more repositories in your AWS account. With templates, it becomes simple to create rules like “require one approver from my team” for any number of repositories in your AWS account.

A common problem for software developers is accidentally or unintentionally merging code with bugs, defects, or security vulnerabilities into important master branches. Once bad code is merged into a master branch, it can be difficult to remove. It’s also potentially costly if the code is deployed into production environments and causes outages or other serious issues. Using CodeCommit’s new features, adding required approvers to your repository pull requests can help identify and mitigate those issues before they are merged into your master branches.

The most rudimentary use of required approvers is to require at least one team member to approve each pull request. While adding human team members as approvers is an important part of the pull request workflow, this feature can also be used to require ‘robot’ approvers of your pull requests, and you can trigger them automatically on each new or updated pull request. Robotic approvers can help find issues that humans miss and enforce best practices regarding code style, test coverage, and more.

Customers have been asking us how they can integrate code review tools with AWS CodeCommit pull requests. I encourage you to check out Amazon CodeGuru Reviewer, a service launched in preview at AWS re:Invent 2019 that uses program analysis and machine learning to detect potential defects that are difficult for developers to find, and recommends fixes in your Java code. Another popular tool is SonarQube, an open-source platform for performing code quality analysis. It helps detect defects, bugs, and security vulnerabilities in your pull requests. This blog post shows you how to integrate SonarQube into the pull request workflow.

This post shows…

Time to read: 10 minutes
Time to complete: 20 minutes
Cost to complete (estimated): $0.40/month for the secret, ~$0.02 per build on CodeBuild, and $0-1 for the CodeCommit user depending on current Free Tier status (at publication time)
Learning level: Intermediate (200)
Services used: AWS CodeCommit, AWS CodeBuild, AWS CloudFormation, Amazon Elastic Compute Cloud (Amazon EC2), Amazon CloudWatch Events, AWS Identity and Access Management (IAM), AWS Secrets Manager

Solution overview

In this solution, you create a CodeCommit repository that requires a successful SonarQube quality analysis before pull requests can be merged. You can create the required AWS resources in your account by using the provided AWS CloudFormation template. This template creates the following resources:

  • A new CodeCommit repository, containing a starter Java project that uses the Apache Maven build system, as well as a custom buildspec.yml file to facilitate communication with SonarQube and CodeCommit.
  • An AWS CodeBuild project which invokes your SonarQube instance on build, then reports the status of the analysis back to CodeCommit.
  • An Amazon CloudWatch Events Rule, which listens for pullRequestCreated and pullRequestSourceBranchUpdated events from CodeCommit, and invokes your CodeBuild project.
  • An AWS Secrets Manager secret, which securely stores and provides the username and password of your SonarQube user to the CodeBuild project on-demand.
  • IAM roles for CodeBuild and CloudWatch events.

Although this tutorial showcases a Java project with Maven, the design principles should also apply for other languages and build systems with SonarQube integrations.

Design

The following diagram shows the flow of data, starting with a new or updated pull request on CodeCommit. CloudWatch Events listens for these events and invokes your CodeBuild project. The CodeBuild container clones your repository source commit, performs a Maven install, and invokes the quality analysis on SonarQube, using the credentials obtained from AWS Secrets Manager. When finished, CodeBuild leaves a comment on your pull request, and potentially approves your pull request.

 

Diagram showing the flow of data between the AWS service components and the SonarQube server.
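
To make the flow concrete, here is a rough sketch of the calls the CodeBuild job makes. The real logic lives in the buildspec.yml included in the sample repository; the secret name and environment variable names below are assumptions for illustration only:

# Illustrative sketch; the buildspec.yml in the sample repository is the source of truth
# 1. Retrieve the SonarQube credentials (secret name is an assumption)
SONAR_CREDS=$(aws secretsmanager get-secret-value --secret-id SonarQubeUserSecret --query SecretString --output text)

# 2. Run the Maven build and the SonarQube analysis against your instance
mvn install sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_USER -Dsonar.password=$SONAR_PASSWORD

# 3. Report back to CodeCommit: comment on the pull request and, if the quality gate passed, approve it
aws codecommit post-comment-for-pull-request --pull-request-id $PULL_REQUEST_ID \
--repository-name $REPOSITORY_NAME --before-commit-id $DESTINATION_COMMIT_ID \
--after-commit-id $SOURCE_COMMIT_ID --content "SonarQube quality gate: PASSED"
aws codecommit update-pull-request-approval-state --pull-request-id $PULL_REQUEST_ID \
--revision-id $REVISION_ID --approval-state APPROVE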

Prerequisites

For this walkthrough, you require:

  • An AWS account
  • A SonarQube server instance (Optional setup instructions included if you don’t have one already)

SonarQube instance setup (Optional)

This tutorial shows a basic setup of SonarQube on Amazon EC2 for informational purposes only. It does not include details about securing your Amazon EC2 instance or SonarQube installation. Please be sure you have secured your environments before placing sensitive data on them.

  1. To start, get a SonarQube server instance up and running. If you are already using SonarQube, feel free to skip these instructions and just note down your host URL and port number for later. If you don’t have one already, I recommend using a fresh Amazon EC2 instance for the job. You can get up and running quickly in just a few commands. I’ve selected an Amazon Linux 2 AMI for my EC2 instance.
  2. Download and install the latest JDK 11 module. Because I am using an Amazon Linux 2 EC2 instance, I can directly install Amazon Corretto 11 with yum.

$ sudo yum install java-11-amazon-corretto-headless

  3. After it’s installed, verify you’re using this version of Java:

$ sudo alternatives --config java

  4. Choose the Java 11 version you just installed.
  5. Download the latest SonarQube installation.
  6. Copy the zip-file onto your Amazon EC2 instance.
  7. Unzip the file into your home directory:

$ unzip sonarqube-8.0.zip -d ~/

This extracts the files into a directory like /home/ec2-user/sonarqube-8.0.

Now, start the server!

$ ~/sonarqube-8.0/bin/linux-x86-64/sonar.sh start

This should start a SonarQube server running on an address like http://<instance-address>:9000. It may take a few moments for the server to start.
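
Before moving on, you can confirm the server is ready by calling the SonarQube system status API, which should report a status of UP once startup completes:

$ curl http://<instance-address>:9000/api/system/status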

Steps

Follow these steps to create automated pull request approvals.

Create a SonarQube User

Get started by creating a SonarQube user from your SonarQube webpage. This user is the identity used by the robot caller to access your SonarQube instance in this workflow.

  1. Go to the Administration tab on your SonarQube instance.
  2. Choose Security, then Users, as shown in the following screenshot. Screenshot showing where to find the user management options inside SonarQube.
  3. Choose Create User. Fill in the form, and note down the Login and Password. You will need to provide these values when creating the following AWS resources.
  4. Choose Create.

Create AWS resources

For this integration, you need to create some AWS resources:

  • AWS CodeCommit repository
  • AWS CodeBuild project
  • Amazon CloudWatch Events rule (to trigger builds when pull requests are created or updated)
  • IAM role (for CodeBuild to assume)
  • IAM role (for CloudWatch Events to assume and invoke CodeBuild)
  • AWS Secrets Manager secret (to store and manage your SonarQube user credentials)

I have created an AWS CloudFormation template to provision these resources for you. You can download the template from the sample repository on GitHub for this blog demo. This repository also contains the sample code which will be uploaded to your CodeCommit repository. The contents of this GitHub repository will automatically be copied into your new CodeCommit repository for you when you create this CloudFormation stack. This is because I’ve conveniently uploaded a zip-file of the contents into a publicly-readable S3 bucket, and am using it within this CloudFormation template.
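
The console steps below walk through creating the stack. If you prefer the AWS CLI, a minimal sketch looks like the following once you have saved the template locally; the stack name and parameter keys are illustrative, so check the Parameters section of template.yaml for the real names:

# Parameter keys below are hypothetical; confirm them against the Parameters section of template.yaml
aws cloudformation create-stack --stack-name sonarqube-pr-approver \
--template-body file://template.yaml \
--parameters ParameterKey=SonarQubeUserName,ParameterValue=<your-sonarqube-user> \
ParameterKey=SonarQubeUserPassword,ParameterValue=<your-sonarqube-password> \
--capabilities CAPABILITY_IAM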

  1. Download or copy the CloudFormation template from GitHub and save it as template.yaml on your local computer.
  2. At the CloudFormation console, choose Create Stack (with new resources).
  3. Choose Upload a template file.
  4. Choose Choose file and select the template.yaml file you just saved.
  5. Choose Next.
  6. Give your stack a name, optionally update the CodeCommit repository name and description, and paste in the username and password of the SonarQube user you created.
  7. Choose Next.
  8. Review the stack options and choose Next.
  9. On Step 4, review your stack, acknowledge the required capabilities, and choose Create Stack.
  10. Wait for the stack creation to complete before proceeding.
  11. Before leaving the AWS CloudFormation console, choose the Resources tab and note down the newly created CodeBuildRole’s Physical Id, as shown in the following screenshot. You need this in the next step. Screenshot showing the Physical Id of the CodeBuild role created through CloudFormation.

Create an Approval Rule Template

Now that your resources are created, create an Approval Rule Template in the CodeCommit console. This template allows you to define a required approver for new pull requests on specific repositories. (If you prefer to script this, a CLI sketch follows the steps below.)

  1. On the CodeCommit console home page, choose Approval rule templates in the left panel. Choose Create template.
  2. Give the template a name (like Require SonarQube approval) and optionally, a description.
  3. Set the number of approvals needed as 1.
  4. Under Approval pool members, choose Add.
  5. Set the approver type to Fully qualified ARN. Since the approver will be the identity obtained by assuming the CodeBuild execution role, your approval pool ARN should be the following string:
    arn:aws:sts::<Your AccountId>:assumed-role/<Your CodeBuild IAM role name>/*
    The CodeBuild IAM role name is the Physical Id of the role you created and noted down above. You can also find the full name either in the IAM console or the AWS CloudFormation stack details. Adding this role to the approval pool allows any identity assuming your CodeBuild role to satisfy this approval rule.
  6. Under Associated repositories, find and choose your repository (PullRequestApproverBlogDemo). This ensures that any pull requests subsequently created on your repository will have this rule by default.
  7. Choose Create.
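
If you would rather script this step, the CLI equivalent is sketched below. The rule content mirrors the console settings above; replace the account ID and role name, and double-check the content schema against the CodeCommit documentation:

# Create the approval rule template (name and content mirror the console settings above)
aws codecommit create-approval-rule-template \
--approval-rule-template-name "Require-SonarQube-approval" \
--approval-rule-template-content '{"Version": "2018-11-08", "Statements": [{"Type": "Approvers", "NumberOfApprovalsNeeded": 1, "ApprovalPoolMembers": ["arn:aws:sts::<Your AccountId>:assumed-role/<Your CodeBuild IAM role name>/*"]}]}'

# Associate the template with the repository so new pull requests pick it up
aws codecommit associate-approval-rule-template-with-repository \
--approval-rule-template-name "Require-SonarQube-approval" \
--repository-name PullRequestApproverBlogDemo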

Update the repository with a SonarQube endpoint URL

For this step, you update your CodeCommit repository code to include the endpoint URL of your SonarQube instance. This allows CodeBuild to know where to go to invoke your SonarQube.

You can use the AWS Management Console to make this code change.

  1. Head back to the CodeCommit home page and choose your repository name from the Repositories list.
  2. You need a new branch on which to update the code. From the repository page, choose Branches, then Create branch.
  3. Give the new branch a name (such as update-url) and make sure you are branching from master. Choose Create branch.
  4. You should now see two branches in the table. Choose the name of your new branch (update-url) to start browsing the code on this branch. On the update-url branch, open the buildspec.yml file by choosing it.
  5. Choose Edit to make a change.
  6. In the pre_build steps, modify line 17 with your SonarQube instance URL and listen port number, as shown in the following screenshot. Screenshot showing buildspec.yml code.
  7. To save, scroll down and fill out the author, email, and commit message. When you’re happy, commit this by choosing Commit changes.

Create a Pull Request

You are now ready to create a pull request!

  1. From the CodeCommit console main page, choose Repositories and PullRequestApproverBlogDemo.
  2. In the left navigation panel, choose Pull Requests.
  3. Choose Create pull request.
  4. Select master as your destination branch, and your new branch (update-url) as the source branch.
  5. Choose Compare.
  6. Give your pull request a title and description, and choose Create pull request.

It’s time to see the magic in action. Now that you’ve created your pull request, you should already see that your pull request requires one approver but is not yet approved. This rule comes from the template you created and associated earlier.

You’ll see images like the following screenshot if you browse through the tabs on your pull request:

Screenshot showing that your pull request has 0 of 1 rule satisfied, with 0 approvals. Screenshot showing a table of approval rules on this pull request which were applied by a template. Require SonarQube approval is listed but not yet satisfied.

Thanks to the CloudWatch Events Rule, CodeBuild should already be hard at work cloning your repository, performing a build, and invoking your SonarQube instance. It is able to find the SonarQube URL you provided because CodeBuild is cloning the source branch of your pull request. If you choose to peek at your project in the CodeBuild console, you should see an in-progress build.

Once the build has completed, head back over to your CodeCommit pull request page. If all went well, you’ll see that SonarQube approved your pull request and left you a comment. (Or, if the analysis failed, that it left a comment without approving.)

The Activity tab should resemble that in the following screenshot:

Screenshot showing that a comment was made by SonarQube through CodeBuild, and that the quality gate passed. The comment includes a link back to the SonarQube instance.

The Approvals tab should resemble that in the following screenshot:

Screenshot of Approvals tab on the pull request. The approvals table shows an approval by the SonarQube and that the rule to require SonarQube approval is satisfied.

Suppose you need to make a change to your pull request. If you perform updates to your source branch, the approval status will be reset. As your push completes, a new SonarQube analysis will begin just as it did the first time.

Once your SonarQube thresholds are satisfied and your pull request is approved, feel free to merge it!

Cleanup

To avoid incurring additional charges, you may want to delete the AWS resources you created for this project. To do this, simply navigate to the CloudFormation console, select the stack you created above, and choose Delete. If you are sure you want to delete, confirm by choosing Delete stack. CloudFormation will delete all the resources you created with this stack.
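
Alternatively, the same cleanup can be done with a single CLI call; replace <stack-name> with the name you gave the stack:

# Delete the CloudFormation stack and all the resources it created
aws cloudformation delete-stack --stack-name <stack-name>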

Conclusion

In this tutorial, you created a workflow to watch for pull request changes to your repository, triggered a CodeBuild project execution which invoked your SonarQube for code quality analysis, and then reported back to CodeCommit to approve your pull request.

I hope this guide illustrates the potential power of combining pull request approval rules with robotic approvers. While this example is specifically about integrating SonarQube, the same pattern can be used to invoke other robotic approvers using CodeBuild, or by invoking an AWS Lambda function instead.

This tutorial was written and tested using SonarQube Version 8.0 (build 29455).

DevOps at re:Invent 2019!

Post Syndicated from Matt Dwyer original https://aws.amazon.com/blogs/devops/devops-at-reinvent-2019/

re:Invent 2019 is fast approaching (NEXT WEEK!) and we here at the AWS DevOps blog wanted to take a moment to highlight DevOps focused presentations, share some tips from experienced re:Invent pros, and highlight a few sessions that still have availability for pre-registration. We’ve broken down the track into one overarching leadership session and four topic areas: (a) architecture, (b) culture, (c) software delivery/operations, and (d) AWS tools, services, and CLI.

In total there will be 145 DevOps track sessions, stretched over 5 days, and divided into four distinct session types:

  • Sessions (34) are one-hour presentations delivered by AWS experts and customer speakers who share their expertise / use cases
  • Workshops (20) are two-hour-and-fifteen-minute, hands-on sessions where you work in teams to solve problems using AWS services
  • Chalk Talks (41) are interactive white-boarding sessions with a smaller audience. They typically begin with a 10–15-minute presentation delivered by an AWS expert, followed by 45–50 minutes of Q&A
  • Builders Sessions (50) are one-hour, small group sessions with six customers and one AWS expert, who is there to help, answer questions, and provide guidance

Select DevOps focused sessions have been highlighted below. If you want to view and/or register for any session, including Keynotes, builders’ fairs, and demo theater sessions, you can access the event catalog using your re:Invent registration credentials.

Reserve your seat for AWS re:Invent activities today >>

re:Invent TIP #1: Identify topics you are interested in before attending re:Invent and reserve a seat. We hold space in sessions, workshops, and chalk talks for walk-ups; however, if you want to get into a popular session, be prepared to wait in line!

Please see below for select sessions, workshops, and chalk talks that will be conducted during re:Invent.

LEADERSHIP SESSION DELIVERED BY KEN EXNER, DIRECTOR AWS DEVELOPER TOOLS

[Session] Leadership Session: Developer Tools on AWS (DOP210-L) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Ken Exner – Director, AWS Dev Tools, Amazon Web Services
Speaker 2: Kyle Thomson – SDE3, Amazon Web Services

Join Ken Exner, GM of AWS Developer Tools, as he shares the state of developer tooling on AWS, as well as the future of development on AWS. Ken uses insight from his position managing Amazon’s internal tooling to discuss Amazon’s practices and patterns for releasing software to the cloud. Additionally, Ken provides insight and updates across many areas of developer tooling, including infrastructure as code, authoring and debugging, automation and release, and observability. Throughout this session Ken will recap recent launches and show demos for some of the latest features.

re:Invent TIP #2: Leadership Sessions are a topic area’s State of the Union, where AWS leadership will share the vision and direction for a given topic at AWS re:Invent.

(a) ARCHITECTURE

[Session] Amazon’s approach to failing successfully (DOP208-R; DOP208-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Becky Weiss – Senior Principal Engineer, Amazon Web Services

Welcome to the real world, where things don’t always go your way. Systems can fail despite being designed to be highly available, scalable, and resilient. These failures, if used correctly, can be a powerful lever for gaining a deep understanding of how a system actually works, as well as a tool for learning how to avoid future failures. In this session, we cover Amazon’s favorite techniques for defining and reviewing metrics—watching the systems before they fail—as well as how to do an effective postmortem that drives both learning and meaningful improvement.

[Session] Improving resiliency with chaos engineering (DOP309-R; DOP309-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Olga Hall – Senior Manager, Tech Program Management
Speaker 2: Adrian Hornsby – Principal Evangelist, Amazon Web Services

Failures are inevitable. Regardless of the engineering efforts put into building resilient systems and handling edge cases, sometimes a case beyond our reach turns a benign failure into a catastrophic one. Therefore, we should test and continuously improve our system’s resilience to failures to minimize impact on a user’s experience. Chaos engineering is one of the best ways to achieve that. In this session, you learn how Amazon Prime Video has implemented chaos engineering into its regular testing methods, helping it achieve increased resiliency.

[Session] Amazon’s approach to security during development (DOP310-R; DOP310-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Colm MacCarthaigh – Senior Principal Engineer, Amazon Web Services

At AWS we say that security comes first—and we really mean it. In this session, hear about how AWS teams both minimize security risks in our products and respond to security issues proactively. We talk through how we integrate security reviews, penetration testing, code analysis, and formal verification into the development process. Additionally, we discuss how AWS engineering teams react quickly and decisively to new security risks as they emerge. We also share real-life firefighting examples and the lessons learned in the process.

[Session] Amazon’s approach to building resilient services (DOP342-R; DOP342-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Marc Brooker – Senior Principal Engineer, Amazon Web Services

One of the biggest challenges of building services and systems is predicting the future. Changing load, business requirements, and customer behavior can all change in unexpected ways. In this talk, we look at how AWS builds, monitors, and operates services that handle the unexpected. Learn how to make your own services handle a changing world, from basic design principles to patterns you can apply today.

re:Invent TIP #3: Not sure where to spend your time? Let an AWS Hero give you some pointers. AWS Heroes are prominent AWS advocates who are passionate about sharing AWS knowledge with others. They have written guides to help attendees find relevant activities by providing recommendations based on specific demographics or areas of interest.

(b) CULTURE

[Session] Driving change and building a high-performance DevOps culture (DOP207-R; DOP207-R1)

Speaker: Mark Schwartz – Enterprise Strategist, Amazon Web Services

When it comes to digital transformation, every enterprise is different. There is often a person or group with a vision, knowledge of good practices, a sense of urgency, and the energy to break through impediments. They may be anywhere in the organizational structure: high, low, or—in a typical scenario—somewhere in middle management. Mark Schwartz, an enterprise strategist at AWS and the author of “The Art of Business Value” and “A Seat at the Table: IT Leadership in the Age of Agility,” shares some of his research into building a high-performance culture by driving change from every level of the organization.

[Session] Amazon’s approach to running service-oriented organizations (DOP301-R; DOP301-R1; DOP301-R2)

Speaker: Andy Troutman – Director AWS Developer Tools, Amazon Web Services

Amazon’s “two-pizza teams” are famously small teams that support a single service or feature. Each of these teams has the autonomy to build and operate their service in a way that best supports their customers. But how do you coordinate across tens, hundreds, or even thousands of two-pizza teams? In this session, we explain how Amazon coordinates technology development at scale by focusing on strategies that help teams coordinate while maintaining autonomy to drive innovation.

re:Invent TIP #4: The max number of 60-minute sessions you can attend during re:Invent is 24! These sessions (e.g., sessions, chalk talks, builders sessions) will usually make up the bulk of your agenda.

(c) SOFTWARE DELIVERY AND OPERATIONS

[Session] Strategies for securing code in the cloud and on premises (DOP320-R; DOP320-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Craig Smith – Senior Solutions Architect
Speaker 2: Lee Packham – Solutions Architect

Some people prefer to keep their code and tooling on premises, though this can create headaches and slow teams down. Others prefer keeping code off of laptops that can be misplaced. In this session, we walk through the alternatives and recommend best practices for securing your code in cloud and on-premises environments. We demonstrate how to use services such as Amazon WorkSpaces to keep code secure in the cloud. We also show how to connect tools such as Amazon Elastic Container Registry (Amazon ECR) and AWS CodeBuild with your on-premises environments so that your teams can go fast while keeping your data off of the public internet.

[Session] Deploy your code, scale your application, and lower Cloud costs using AWS Elastic Beanstalk (DOP326) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Prashant Prahlad – Sr. Manager

You can effortlessly convert your code into web applications without having to worry about provisioning and managing AWS infrastructure, applying patches and updates to your platform, or using a variety of tools to monitor the health of your application. In this session, we show how anyone, not just professional developers, can use AWS Elastic Beanstalk in various scenarios: from an administrator moving a Windows .NET workload into the cloud, to a developer building a containerized enterprise app as a Docker image, to a data scientist deploying a machine learning model, all without the need to understand or manage the infrastructure details.

[Session] Amazon’s approach to high-availability deployment (DOP404-RDOP404-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Peter Ramensky – Senior Manager

Continuous-delivery failures can lead to reduced service availability and bad customer experiences. To maximize the rate of successful deployments, Amazon’s development teams implement guardrails in the end-to-end release process to minimize deployment errors, with a goal of achieving zero deployment failures. In this session, learn the continuous-delivery practices that we invented that help raise the bar and prevent costly deployment failures.

[Session] Introduction to DevOps on AWS (DOP209-R; DOP209-R1)

Speaker 1: Jonathan Weiss – Senior Manager
Speaker 2: Sebastien Stormacq – Senior Technical Evangelist

How can you accelerate the delivery of new, high-quality services? Are you able to experiment and get feedback quickly from your customers? How do you scale your development team from 1 to 1,000? To answer these questions, it is essential to leverage some key DevOps principles and use CI/CD pipelines so you can iterate on and quickly release features. In this talk, we walk you through the journey of a single developer building a successful product and scaling their team and processes to hundreds or thousands of deployments per day. We also walk you through best practices and using AWS tools to achieve your DevOps goals.

[Workshop] DevOps essentials: Introductory workshop on CI/CD practices (DOP201-R; DOP201-R1; DOP201-R2; DOP201-R3)

Speaker 1: Leo Zhadanovsky – Principal Solutions Architect
Speaker 2: Karthik Thirugnanasambandam – Partner Solutions Architect

In this session, learn how to effectively leverage various AWS services to improve developer productivity and reduce the overall time to market for new product capabilities. We demonstrate a prescriptive approach to incrementally adopt and embrace some of the best practices around continuous integration and delivery using AWS developer tools and third-party solutions, including, AWS CodeCommit, AWS CodeBuild, Jenkins, AWS CodePipeline, AWS CodeDeploy, AWS X-Ray and AWS Cloud9. We also highlight some best practices and productivity tips that can help make your software release process fast, automated, and reliable.

[Workshop] Implementing GitFlow with AWS tools (DOP202-R; DOP202-R1; DOP202-R2)

Speaker 1: Amit Jha – Sr. Solutions Architect
Speaker 2: Ashish Gore – Sr. Technical Account Manager

Utilizing short-lived feature branches is the development method of choice for many teams. In this workshop, you learn how to use AWS tools to automate merge-and-release tasks. We cover high-level frameworks for how to implement GitFlow using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. You also get an opportunity to walk through a prebuilt example and examine how the framework can be adopted for individual use cases.

[Chalk Talk] Generating dynamic deployment pipelines with AWS CDK (DOP311-R; DOP311-R1; DOP311-R2)

Speaker 1: Flynn Bundy – AppDev Consultant
Speaker 2: Koen van Blijderveen – Senior Security Consultant

In this session we dive deep into dynamically generating deployment pipelines that deploy across multiple AWS accounts and Regions. Using the power of the AWS Cloud Development Kit (AWS CDK), we demonstrate how to simplify and abstract the creation of deployment pipelines to suit a range of scenarios. We highlight how AWS CodePipeline—along with AWS CodeBuild, AWS CodeCommit, and AWS CodeDeploy—can be structured together with the AWS deployment framework to get the most out of your infrastructure and application deployments.

[Chalk Talk] Customize AWS CloudFormation with open-source tools (DOP312-R; DOP312-R1; DOP312-E)

Speaker 1: Luis Colon – Senior Developer Advocate
Speaker 2: Ryan Lohan – Senior Software Engineer

In this session, we showcase some of the best open-source tools available for AWS CloudFormation customers, including conversion and validation utilities. Get a glimpse of the many open-source projects that you can use as you create and maintain your AWS CloudFormation stacks.

[Chalk Talk] Optimizing Java applications for scale on AWS (DOP314-R; DOP314-R1; DOP314-R2)

Speaker 1: Sam Fink – SDE II
Speaker 2: Kyle Thomson – SDE3

Executing at scale in the cloud can require more than the conventional best practices. During this talk, we offer a number of different Java-related tools you can add to your AWS tool belt to help you more efficiently develop Java applications on AWS—as well as strategies for optimizing those applications. We adapt the talk on the fly to cover the topics that interest the group most, including more easily accessing Amazon DynamoDB, handling high-throughput uploads to and downloads from Amazon Simple Storage Service (Amazon S3), troubleshooting Amazon ECS services, working with local AWS Lambda invocations, optimizing the Java SDK, and more.

[Chalk Talk] Securing your CI/CD tools and environments (DOP316-R; DOP316-R1; DOP316-R2)

Speaker: Leo Zhadanovsky – Principal Solutions Architect

In this session, we discuss how to configure security for AWS CodePipeline, deployments in AWS CodeDeploy, builds in AWS CodeBuild, and git access with AWS CodeCommit. We discuss AWS Identity and Access Management (IAM) best practices, to allow you to set up least-privilege access to these services. We also demonstrate how to ensure that your pipelines meet your security and compliance standards with the CodePipeline AWS Config integration, as well as manual approvals. Lastly, we show you best-practice patterns for integrating security testing of your deployment artifacts inside of your CI/CD pipelines.

[Chalk Talk] Amazon’s approach to automated testing (DOP317-R; DOP317-R1; DOP317-R2)

Speaker 1: Carlos Arguelles – Principal Engineer
Speaker 2: Charlie Roberts – Senior SDET

Join us for a session about how Amazon uses testing strategies to build a culture of quality. Learn Amazon’s best practices around load testing, unit testing, integration testing, and UI testing. We also discuss what parts of testing are automated and how we take advantage of tools, and share how we strategize to fail early to ensure minimum impact to end users.

[Chalk Talk] Building and deploying applications on AWS with Python (DOP319-R; DOP319-R1; DOP319-R2)

Speaker 1: James Saryerwinnie – Senior Software Engineer
Speaker 2: Kyle Knapp – Software Development Engineer

In this session, hear from core developers of the AWS SDK for Python (Boto3) as we walk through the design of sample Python applications. We cover best practices in using Boto3 and look at other libraries to help build these applications, including AWS Chalice, a serverless microframework for Python. Additionally, we discuss testing and deployment strategies to manage the lifecycle of your applications.

[Chalk Talk] Deploying AWS CloudFormation StackSets across accounts and Regions (DOP325-R; DOP325-R1)

Speaker 1: Mahesh Gundelly – Software Development Manager
Speaker 2: Prabhu Nakkeeran – Software Development Manager

AWS CloudFormation StackSets can be a critical tool to efficiently manage deployments of resources across multiple accounts and regions. In this session, we cover how AWS CloudFormation StackSets can help you ensure that all of your accounts have the proper resources in place to meet security, governance, and regulation requirements. We also cover how to make the most of the latest functionalities and discuss best practices, including how to plan for safe deployments with minimal blast radius for critical changes.

[Chalk Talk] Monitoring and observability of serverless apps using AWS X-Ray (DOP327-R; DOP327-R1; DOP327-R2)

Speaker 1 (R, R1, R2): Shengxin Li – Software Development Engineer
Speaker 2 (R, R1): Sirirat Kongdee – Solutions Architect
Speaker 3 (R2): Eric Scholz – Solutions Architect, Amazon

Monitoring and observability are essential parts of DevOps best practices. You need monitoring to debug and trace unhandled errors, performance bottlenecks, and customer impact in the distributed nature of a microservices architecture. In this chalk talk, we show you how to integrate the AWS X-Ray SDK to your code to provide observability to your overall application and drill down to each service component. We discuss how X-Ray can be used to analyze, identify, and alert on performance issues and errors and how it can help you troubleshoot application issues faster.

[Chalk Talk] Optimizing deployment strategies for speed & safety (DOP341-R; DOP341-R1; DOP341-R2)

Speaker: Karan Mahant – Software Development Manager, Amazon

Modern application development moves fast and demands continuous delivery. However, the greatest risk to an application’s availability can occur during deployments. Join us in this chalk talk to learn about deployment strategies for web servers and for Amazon EC2, container-based, and serverless architectures. Learn how you can optimize your deployments to increase productivity during development cycles and mitigate common risks when deploying to production by using canary and blue/green deployment strategies. Further, we share our learnings from operating production services at AWS.

[Chalk Talk] Continuous integration using AWS tools (DOP216-R; DOP216-R1; DOP216-R2)

Speaker: Richard Boyd – Sr Developer Advocate, Amazon Web Services

Today, more teams are adopting continuous-integration (CI) techniques to enable collaboration, increase agility, and deliver a high-quality product faster. Cloud-based development tools such as AWS CodeCommit and AWS CodeBuild can enable teams to easily adopt CI practices without the need to manage infrastructure. In this session, we showcase best practices for continuous integration and discuss how to effectively use AWS tools for CI.

re:Invent TIP #5: If you’re traveling to another session across campus, give yourself at least 60 minutes!

(d) AWS TOOLS, SERVICES, AND CLI

[Session] Best practices for authoring AWS CloudFormation (DOP302-R; DOP302-R1)

Speaker 1: Olivier Munn – Sr Product Manager Technical, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

Incorporating infrastructure as code into software development practices can help teams and organizations improve automation and throughput without sacrificing quality and uptime. In this session, we cover multiple best practices for writing, testing, and maintaining AWS CloudFormation template code. You learn about IDE plug-ins, reusability, testing tools, modularizing stacks, and more. During the session, we also review sample code that showcases some of the best practices in a way that lends more context and clarity.

[Chalk Talk] Using AWS tools to author and debug applications (DOP215-R; DOP215-R1; DOP215-R2) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Fabian Jakobs – Principal Engineer, Amazon Web Services

Every organization wants its developers to be faster and more productive. AWS Cloud9 lets you create isolated cloud-based development environments for each project and access them from a powerful web-based IDE anywhere, anytime. In this session, we demonstrate how to use AWS Cloud9 and provide an overview of IDE toolkits that can be used to author application code.

[Session] Migrating .Net frameworks to the cloud (DOP321) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Robert Zhu – Principal Technical Evangelist, Amazon Web Services

Learn how to migrate your .NET application to AWS with minimal steps. In this demo-heavy session, we share best practices for migrating a three-tiered application on ASP.NET and SQL Server to AWS. Throughout the process, you get to see how AWS Toolkit for Visual Studio can enable you to fully leverage AWS services such as AWS Elastic Beanstalk, modernizing your application for more agile and flexible development.

[Session] Deep dive into AWS Cloud Development Kit (DOP402-R; DOP402-R1)

Speaker 1: Elad Ben-Israel – Principal Software Engineer, Amazon Web Services
Speaker 2: Jason Fulghum – Software Development Manager, Amazon Web Services

The AWS Cloud Development Kit (AWS CDK) is a multi-language, open-source framework that enables developers to harness the full power of familiar programming languages to define reusable cloud components and provision applications built from those components using AWS CloudFormation. In this session, you develop an AWS CDK application and learn how to quickly assemble AWS infrastructure. We explore the AWS Construct Library and show you how easy it is to configure your cloud resources, manage permissions, connect event sources, and build and publish your own constructs.

[Session] Introduction to the AWS CLI v2 (DOP406-R; DOP406-R1)

Speaker 1: James Saryerwinnie – Senior Software Engineer, Amazon Web Services
Speaker 2: Kyle Knapp – Software Development Engineer, Amazon Web Services

The AWS Command Line Interface (AWS CLI) is a command-line tool for interacting with AWS services and managing your AWS resources. We’ve taken all of the lessons learned from AWS CLI v1 (launched in 2013), and have been working on AWS CLI v2—the next major version of the AWS CLI—for the past year. AWS CLI v2 includes features such as improved installation mechanisms, a better getting-started experience, interactive workflows for resource management, and new high-level commands. Come hear from the core developers of the AWS CLI about how to upgrade and start using AWS CLI v2 today.

[Session] What’s new in AWS CloudFormation (DOP408-R; DOP408-R1; DOP408-R2)

Speaker 1: Jing Ling – Senior Product Manager, Amazon Web Services
Speaker 2: Luis Colon – Senior Developer Advocate, Amazon Web Services

AWS CloudFormation is one of the most widely used AWS tools, enabling infrastructure as code, deployment automation, repeatability, compliance, and standardization. In this session, we cover the latest improvements and best practices for AWS CloudFormation customers in particular, and for seasoned infrastructure engineers in general. We cover new features and improvements that span many use cases, including programmability options, cross-region and cross-account automation, operational safety, and additional integration with many other AWS services.

[Workshop] Get hands-on with Python/boto3 with no or minimal Python experience (DOP203-R; DOP203-R1; DOP203-R2)

Speaker 1: Herbert-John Kelly – Solutions Architect, Amazon Web Services
Speaker 2: Carl Johnson – Enterprise Solutions Architect, Amazon Web Services

Learning a programming language can seem like a huge investment. However, solving strategic business problems using modern technology approaches, like machine learning and big-data analytics, often requires some understanding. In this workshop, you learn the basics of using Python, one of the most popular programming languages that can be used for small tasks like simple operations automation, or large tasks like analyzing billions of records and training machine-learning models. You also learn about and use the AWS SDK (software development kit) for Python, called boto3, to write a Python program running on and interacting with resources in AWS.

[Workshop] Building reusable AWS CloudFormation templates (DOP304-R; DOP304-R1; DOP304-R2)

Speaker 1: Chelsey Salberg – Front End Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

AWS CloudFormation gives you an easy way to define your infrastructure as code, but are you using it to its full potential? In this workshop, we take real-world architecture from a sandbox template to production-ready reusable code. We start by reviewing an initial template, which you update throughout the session to incorporate AWS CloudFormation features, like nested stacks and intrinsic functions. By the end of the workshop, expect to have a set of AWS CloudFormation templates that demonstrate the same best practices used in AWS Quick Starts.

[Workshop] Building a scalable serverless application with AWS CDK (DOP306-R; DOP306-R1; DOP306-R2; DOP306-R3)

Speaker 1: David Christiansen – Senior Partner Solutions Architect, Amazon Web Services
Speaker 2: Daniele Stroppa – Solutions Architect, Amazon Web Services

Dive into AWS and build a web application with the AWS Mythical Mysfits tutorial. In this workshop, you build a serverless application using AWS Lambda, Amazon API Gateway, and the AWS Cloud Development Kit (AWS CDK). Through the tutorial, you get hands-on experience using AWS CDK to model and provision a serverless distributed application infrastructure, you connect your application to a backend database, and you capture and analyze data on user behavior. Other AWS services that are utilized include Amazon Kinesis Data Firehose and Amazon DynamoDB.

[Chalk Talk] Assembling an AWS CloudFormation authoring tool chain (DOP313-R; DOP313-R1; DOP313-R2)

Speaker 1: Nathan McCourtney – Sr System Development Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

In this session, we provide a prescriptive tool chain and methodology to improve your coding productivity as you create and maintain AWS CloudFormation stacks. We cover authoring recommendations from editors and plugins, to setting up a deployment pipeline for your AWS CloudFormation code.

[Chalk Talk] Build using JavaScript with AWS Amplify, AWS Lambda, and AWS Fargate (DOP315-R; DOP315-R1; DOP315-R2)

Speaker 1: Trivikram Kamat – Software Development Engineer, Amazon Web Services
Speaker 2: Vinod Dinakaran – Software Development Manager, Amazon Web Services

Learn how to build applications with AWS Amplify on the front end and AWS Fargate and AWS Lambda on the backend, using protocols like HTTP/2 and the JavaScript SDKs in the browser and Node.js. Leverage the AWS SDK for JavaScript’s modular NPM packages in resource-constrained environments, and benefit from the built-in async features to run your node and mobile applications, and SPAs, at scale.

[Chalk Talk] Scaling CI/CD adoption using AWS CodePipeline and AWS CloudFormation (DOP318-R; DOP318-R1; DOP318-R2)

Speaker 1: Andrew Baird – Principal Solutions Architect, Amazon Web Services
Speaker 2: Neal Gamradt – Applications Architect, WarnerMedia

Enabling CI/CD across your organization through repeatable patterns and infrastructure-as-code templates can unlock development speed while encouraging best practices. The SEAD Architecture team at WarnerMedia helps encourage CI/CD adoption across their company. They do so by creating and maintaining easily extensible infrastructure-as-code patterns for creating new services and deploying to them automatically using CI/CD. In this session, learn about the patterns they have created and the lessons they have learned.

re:Invent TIP #6: There are lots of extra activities at re:Invent. Expect your evenings to fill up onsite! Check out the peculiar programs, including board games, bingo, arts & crafts, or ’80s sing-alongs…