Tag Archives: devops

Cross-account and cross-region deployment using GitHub actions and AWS CDK

Post Syndicated from Damodar Shenvi Wagle original https://aws.amazon.com/blogs/devops/cross-account-and-cross-region-deployment-using-github-actions-and-aws-cdk/

GitHub Actions is a feature on GitHub’s popular development platform that helps you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

A cross-account deployment strategy is a CI/CD pattern or model in AWS. In this pattern, you have a designated AWS account called tools, where all CI/CD pipelines reside. Deployment is carried out by these pipelines across other AWS accounts, which may correspond to dev, staging, or prod. For more information about a cross-account strategy in reference to CI/CD pipelines on AWS, see Building a Secure Cross-Account Continuous Delivery Pipeline.

In this post, we show you how to use GitHub Actions to deploy an AWS Lambda-based API to an AWS account and Region using the cross-account deployment strategy.

Using GitHub Actions may have associated costs in addition to the cost associated with the AWS resources you create. For more information, see About billing for GitHub Actions.

Prerequisites

Before proceeding any further, you need to identify and designate two AWS accounts required for the solution to work:

  • Tools – Where you create an AWS Identity and Access Management (IAM) user for GitHub Actions to use to carry out deployment.
  • Target – Where deployment occurs. You can think of this as your dev, staging, or prod environment.

You also need to create two AWS account profiles in ~/.aws/credentials for the tools and target accounts, if you don’t already have them. These profiles need sufficient permissions to run an AWS Cloud Development Kit (AWS CDK) stack. They should be private profiles used only for this walkthrough, so it’s fine to use admin privileges; just don’t share the profile details, and remove the profiles when you’re finished. For more information about creating an AWS account profile, see Configuring the AWS CLI.
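
The profiles might look something like the following sketch in ~/.aws/credentials (the profile names and key values are placeholders; use whatever names you prefer and pass them to the deployment scripts later):

[tools]
aws_access_key_id = <tools-account-access-key-id>
aws_secret_access_key = <tools-account-secret-access-key>

[target]
aws_access_key_id = <target-account-access-key-id>
aws_secret_access_key = <target-account-secret-access-key>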

Solution overview

You start by building the necessary resources in the tools account (an IAM user with permissions to assume a specific IAM role from the target account to carry out deployment). For simplicity, we refer to this IAM role as the cross-account role, as specified in the architecture diagram.

You also create the cross-account role in the target account. This role trusts the IAM user in the tools account and provides the permissions AWS CDK needs to bootstrap and initiate the creation of an AWS CloudFormation deployment stack in the target account. GitHub Actions uses the tools account IAM user credentials to assume the cross-account role and carry out deployment.

In addition, you create an AWS CloudFormation execution role in the target account, which the AWS CloudFormation service assumes in the target account. This role has permissions to create your API resources, such as a Lambda function and Amazon API Gateway, in the target account. AWS CDK passes this role to the AWS CloudFormation service.

You then configure your tools account IAM user credentials in your Git secrets and define the GitHub Actions workflow, which triggers upon pushing code to a specific branch of the repo. The workflow then assumes the cross-account role and initiates deployment.

The following diagram illustrates the solution architecture and shows AWS resources across the tools and target accounts.

Architecture diagram

Creating an IAM user

You start by creating an IAM user called git-action-deployment-user in the tools account. The user needs to have only programmatic access.

  1. Clone the GitHub repo aws-cross-account-cicd-git-actions-prereq and navigate to the folder tools-account. Here you find the JSON parameter file src/cdk-stack-param.json, which contains the parameter CROSS_ACCOUNT_ROLE_ARN. This represents the ARN for the cross-account role you create in the next step in the target account. In the ARN, replace <target-account-id> with the actual account ID of your designated AWS target account.
  2. Run deploy.sh by passing the name of the tools AWS account profile you created earlier. The script compiles the code, builds a package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd aws-cross-account-cicd-git-actions-prereq/tools-account/
./deploy.sh "<AWS-TOOLS-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in the tools account: CDKToolkit and cf-GitActionDeploymentUserStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an Amazon Simple Storage Service (Amazon S3) bucket needed to hold deployment assets such as a CloudFormation template and Lambda code package. cf-GitActionDeploymentUserStack creates the IAM user with permission to assume git-action-cross-account-role (which you create in the next step). On the Outputs tab of the stack, you can find the user access key and the AWS Secrets Manager ARN that holds the user secret. To retrieve the secret, you need to go to Secrets Manager. Record the secret to use later.

Stack that creates IAM user with its secret stored in secrets manager
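
If you prefer the command line over the Secrets Manager console, you can retrieve the secret with the AWS CLI; the secret ARN below is a placeholder for the value shown on the stack's Outputs tab:

aws secretsmanager get-secret-value \
  --secret-id "<secret-arn-from-stack-output>" \
  --query SecretString \
  --output text \
  --profile "<AWS-TOOLS-ACCOUNT-PROFILE-NAME>"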

Creating a cross-account IAM role

In this step, you create two IAM roles in the target account: git-action-cross-account-role and git-action-cf-execution-role.

git-action-cross-account-role provides required deployment-specific permissions to the IAM user you created in the last step. The IAM user in the tools account can assume this role and perform the following tasks:

  • Upload deployment assets such as the CloudFormation template and Lambda code package to a designated S3 bucket via AWS CDK
  • Create a CloudFormation stack that deploys API Gateway and Lambda using AWS CDK

AWS CDK passes git-action-cf-execution-role to AWS CloudFormation to create, update, and delete the CloudFormation stack. It has permissions to create API Gateway and Lambda resources in the target account.

To deploy these two roles using AWS CDK, complete the following steps:

  1. In the already cloned repo from the previous step, navigate to the folder target-account. This folder contains the JSON parameter file cdk-stack-param.json, which contains the parameter TOOLS_ACCOUNT_USER_ARN. This represents the ARN for the IAM user you previously created in the tools account. In the ARN, replace <tools-account-id> with the actual account ID of your designated AWS tools account.
  2. Run deploy.sh by passing the name of the target AWS account profile you created earlier. The script compiles the code, builds the package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd ../target-account/
./deploy.sh "<AWS-TARGET-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in your target account: CDKToolkit and cf-CrossAccountRolesStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an S3 bucket to hold deployment assets such as the CloudFormation template and Lambda code package. The cf-CrossAccountRolesStack creates the two IAM roles we discussed at the beginning of this step. The IAM role git-action-cross-account-role now has the IAM user added to its trust policy. On the Outputs tab of the stack, you can find these roles’ ARNs. Record these ARNs as you conclude this step.

Stack that creates IAM roles to carry out cross account deployment

Configuring secrets

One of the GitHub actions we use is aws-actions/configure-aws-credentials. This action configures the AWS credentials and Region environment variables for use in the GitHub Actions workflow. The AWS CDK CLI detects these environment variables to determine the credentials and Region to use for deployment.

For our cross-account deployment use case, aws-actions/configure-aws-credentials takes three pieces of sensitive information besides the Region: AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY_SECRET, and CROSS_ACCOUNT_ROLE_TO_ASSUME. Secrets are the recommended way to store sensitive information in the GitHub repo; they keep the information in an encrypted format. For more information about referencing secrets in the workflow, see Creating and storing encrypted secrets.

Before we continue, you need your own empty GitHub repo to complete this step. Use an existing repo if you have one, or create a new one. You configure secrets in this repo. In the next section, you check the code provided with this post, which deploys a Lambda-based API CDK stack, into this repo.

  1. On the GitHub console, navigate to your repo settings and choose the Secrets tab.
  2. Add a new secret with the name TOOLS_ACCOUNT_ACCESS_KEY_ID.
  3. Copy the access key ID from the output OutGitActionDeploymentUserAccessKey of the stack cf-GitActionDeploymentUserStack in the tools account.
  4. Enter the ID in the Value field.
  5. Repeat this step to add two more secrets:
    • TOOLS_ACCOUNT_SECRET_ACCESS_KEY (value retrieved from Secrets Manager in the tools account)
    • CROSS_ACCOUNT_ROLE (value copied from the output OutCrossAccountRoleArn of the stack cf-CrossAccountRolesStack in the target account)

You should now have three secrets as shown below.

All required git secrets
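
As an alternative to the console, the GitHub CLI can set the same secrets; a minimal sketch, run from inside a clone of your repo (the values are placeholders):

gh secret set TOOLS_ACCOUNT_ACCESS_KEY_ID --body "<access-key-id>"
gh secret set TOOLS_ACCOUNT_SECRET_ACCESS_KEY --body "<secret-access-key>"
gh secret set CROSS_ACCOUNT_ROLE --body "<cross-account-role-arn>"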

Deploying with GitHub Actions

As the final step, clone the empty repo where you set up your secrets. Download the code from the GitHub repo and copy it into your empty repo. The folder structure of your repo should mirror the folder structure of the source repo. See the following screenshot.

Folder structure of the Lambda API code

We can now take a detailed look at the code base. First and foremost, we use TypeScript to deploy our Lambda API, so we need an AWS CDK app and an AWS CDK stack. The app is defined in app.ts under the repo root folder. The stack definition is located under the stack-specific folder src/git-action-demo-api-stack. The Lambda code is located under the Lambda-specific folder src/git-action-demo-api-stack/lambda/git-action-demo-lambda.

We also have a deployment script deploy.sh, which compiles the app and Lambda code, packages the Lambda code into a .zip file, bootstraps the app by copying the assets to an S3 bucket, and deploys the stack. To deploy the stack, AWS CDK has to pass CFN_EXECUTION_ROLE to AWS CloudFormation; this role is configured in src/params/cdk-stack-param.json. Replace <target-account-id> with your own designated AWS target account ID.

Update cdk-stack-param.json in git-actions-cross-account-cicd repo with TARGET account id
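
As an illustration only, the parameter file is expected to take a shape similar to the following sketch, with CFN_EXECUTION_ROLE pointing at the git-action-cf-execution-role created earlier; check the file in the sample repo for the exact keys:

{
  "CFN_EXECUTION_ROLE": "arn:aws:iam::<target-account-id>:role/git-action-cf-execution-role"
}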

Finally, we define the GitHub Actions workflow under the .github/workflows/ folder, per the specifications defined by GitHub Actions. GitHub Actions automatically identifies workflows in this location and triggers them when the conditions match. Our workflow .yml files are named in the format cicd-workflow-<region>.yml, where <region> in the file name identifies the deployment Region in the target account. In our use case, we use us-east-1 and us-west-2, and each Region is also defined as an environment variable in its workflow.

The GitHub Actions workflow has a standard hierarchy. The workflow is a collection of jobs, which are collections of one or more steps. Each job runs on a virtual machine called a runner, which can either be GitHub-hosted or self-hosted. We use the GitHub-hosted runner ubuntu-latest because it works well for our use case. For more information about GitHub-hosted runners, see Virtual environments for GitHub-hosted runners. For more information about the software preinstalled on GitHub-hosted runners, see Software installed on GitHub-hosted runners.

The workflow also has a trigger condition specified at the top. You can schedule the trigger based on the cron settings or trigger it upon code pushed to a specific branch in the repo. See the following code:

name: Lambda API CICD Workflow
# This workflow is triggered on pushes to the repository branch master.
on:
  push:
    branches:
      - master

# Initializes environment variables for the workflow
env:
  REGION: us-east-1 # Deployment Region

jobs:
  deploy:
    name: Build And Deploy
    # This job runs on Linux
    runs-on: ubuntu-latest
    steps:
      # Checkout code from git repo branch configured above, under folder $GITHUB_WORKSPACE.
      - name: Checkout
        uses: actions/checkout@v2
      # Sets up AWS profile.
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.TOOLS_ACCOUNT_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.TOOLS_ACCOUNT_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}
          role-to-assume: ${{ secrets.CROSS_ACCOUNT_ROLE }}
          role-duration-seconds: 1200
          role-session-name: GitActionDeploymentSession
      # Installs CDK and other prerequisites
      - name: Prerequisite Installation
        run: |
          sudo npm install -g aws-cdk
          cdk --version
          aws s3 ls
      # Build and Deploy CDK application
      - name: Build & Deploy
        run: |
          cd $GITHUB_WORKSPACE
          ls -a
          chmod 700 deploy.sh
          ./deploy.sh

For more information about triggering workflows, see Triggering a workflow with events.

We have configured a single job workflow for our use case that runs on ubuntu-latest and is triggered upon a code push to the master branch. When you create an empty repo, master branch becomes the default branch. The workflow has four steps:

  1. Check out the code from the repo, for which we use the standard actions/checkout action. The code is checked out into a folder defined by the variable $GITHUB_WORKSPACE, which becomes the root location of our code.
  2. Configure AWS credentials using aws-actions/configure-aws-credentials. This action is configured as explained in the previous section.
  3. Install your prerequisites. In our use case, the only prerequisite we need is AWS CDK. Upon installing AWS CDK, we can do a quick test using the AWS Command Line Interface (AWS CLI) command aws s3 ls. If cross-account access was successfully established in the previous step of the workflow, this command should return a list of buckets in the target account.
  4. Navigate to the root location of the code, $GITHUB_WORKSPACE, and run the deploy.sh script.

You can now check the code into the master branch of your repo. This triggers the workflow, which you can monitor on the Actions tab of your repo. The commit message you provide is displayed for the respective run of the workflow.

Workflow for region us-east-1 Workflow for region us-west-2

You can choose the workflow link and monitor the log for each individual step of the workflow.

Git action workflow steps

In the target account, you should now see the CloudFormation stack cf-GitActionDemoApiStack in us-east-1 and us-west-2.

Lambda API stack in us-east-1 Lambda API stack in us-west-2

The API resource URL DocUploadRestApiResourceUrl is located on the Outputs tab of the stack. You can invoke your API by opening this URL in your browser.

API Invocation Output

Clean up

To remove all the resources from the target and tools accounts, complete the following steps in their given order:

  1. Delete the CloudFormation stack cf-GitActionDemoApiStack from the target account. This step removes the Lambda and API Gateway resources and their associated IAM roles.
  2. Delete the CloudFormation stack cf-CrossAccountRolesStack from the target account. This removes the cross-account role and CloudFormation execution role you created.
  3. Go to the CDKToolkit stack in the target account and note the BucketName on the Outputs tab. Empty that bucket and then delete the stack.
  4. Delete the CloudFormation stack cf-GitActionDeploymentUserStack from the tools account. This removes the git-action-deployment-user IAM user.
  5. Go to the CDKToolkit stack in the tools account and note the BucketName on the Outputs tab. Empty that bucket and then delete the stack.

Security considerations

Cross-account IAM roles are very powerful and need to be handled carefully. For this post, we strictly limited the cross-account IAM role to specific Amazon S3 and CloudFormation permissions, which ensures it can perform only those actions. The actual creation of the Lambda, API Gateway, and Amazon DynamoDB resources happens via the AWS CloudFormation execution role, which AWS CloudFormation assumes in the target AWS account.

Make sure that you use secrets to store your sensitive workflow configurations, as specified in the section Configuring secrets.

Conclusion

In this post, we showed how you can leverage GitHub’s popular software development platform to securely deploy to AWS accounts and Regions using GitHub Actions and AWS CDK.

Build your own GitHub Actions CI/CD workflow as shown in this post.

About the author

 

Damodar Shenvi Wagle is a Cloud Application Architect at AWS Professional Services. His areas of expertise include architecting serverless solutions, CI/CD, and automation.

How Pushly Media used AWS to pivot and quickly spin up a StartUp

Post Syndicated from Eddie Moser original https://aws.amazon.com/blogs/devops/how-pushly-media-used-aws-to-pivot-and-quickly-spin-up-a-startup/

This is a guest post from Pushly. In their own words, “Pushly provides a scalable, easy-to-use platform designed to deliver targeted and timely content via web push notifications across all modern desktop browsers and Android devices.”

Introduction

As a software engineer at Pushly, I’m part of a team of developers responsible for building our SaaS platform.

Our customers are content publishers spanning the news, ecommerce, and food industries, with the primary goal of increasing page views and paid subscriptions, ultimately resulting in increased revenue.

Pushly’s platform is designed to integrate seamlessly into a publisher’s workflow and enables advanced features such as customizable opt-in flow management, behavioral targeting, and real-time reporting and campaign delivery analytics.

As developers, we face various challenges to make all this work seamlessly. That’s why we turned to Amazon Web Services (AWS). In this post, I explain why and how we use AWS to enable the Pushly user experience.

At Pushly, my primary focus areas are developer and platform user experience. On the developer side, I’m responsible for building and maintaining easy-to-use APIs and a web SDK. On the UX side, I’m responsible for building a user-friendly and stable platform interface.

The CI/CD process

We’re a cloud native company and have gone all in with AWS.

AWS CodePipeline lets us automate the software release process and release new features to our users faster. Rapid delivery is key here, and CodePipeline lets us automate our build, test, and release process so we can quickly and easily test each code change and fail fast if needed. CodePipeline is vital to ensuring the quality of our code by running each change through a staging and release process.

One of our use cases is continuous, iterative deployment. We foster an environment where developers can work in the way that suits them best while adhering to our company’s standards and our architecture within AWS.

We deploy code multiple times per day and rely on AWS services to run through all checks and make sure everything is packaged uniformly. We want to fully test in a staging environment before moving to a customer-facing production environment.

The development and staging environments

Our development environment allows developers to securely pull down applications as needed and access the required services in a development AWS account. After an application is tested and is ready for staging, the application is deployed to our staging environment—a smaller reproduction of our production environment—so we can test how the changes work together. This flow allows us to see how the changes run within the entire Pushly ecosystem in a secure environment without pushing to production.

When testing is complete, a pull request is created for stakeholder review and to merge the changes to production branches. We use AWS CodeBuild, CodePipeline, and a suite of in-house tools to ensure that the application has been thoroughly tested to our standards before being deployed to our production AWS account.

Here is a high level diagram of the environment described above:

Diagram showing at a high level the Pushly environment.

Ease of development

Ease of development was—and is—key. AWS provides the tools that allow us to quickly iterate and adapt to ever-changing customer needs. The infrastructure as code (IaC) approach of AWS CloudFormation allows us to quickly and simply define our infrastructure in an easily reproducible manner and rapidly create and modify environments at scale. This has given us the confidence to take on new challenges without concern over infrastructure builds impacting the final product or causing delays in development.

The Pushly team

Although Pushly’s developers all have the skill-set to work on both front-end-facing and back-end-facing projects, primary responsibilities are split between front-end and back-end developers. Developers that primarily focus on front-end projects concentrate on public-facing projects and internal management systems. The back-end team focuses on the underlying architecture, delivery systems, and the ecosystem as a whole. Together, we create and maintain a product that allows you to segment and target your audiences, which ensures relevant delivery of your content via web push notifications.

Early on we ran all services entirely off of AWS Lambda. This allowed us to develop new features quickly in an elastic, cost-efficient way. As our applications have matured, we’ve identified some services that would benefit from an always-on environment and moved them to AWS Elastic Beanstalk. The capability to quickly iterate and move from service to service is a credit to AWS, because it allows us to customize and tailor our services across multiple AWS offerings.

Elastic Beanstalk has been the fastest and simplest way for us to deploy this suite of services on AWS; their blue/green deployments allow us to maintain minimal downtime during deployments. We can easily configure deployment environments with capacity provisioning, load balancing, autoscaling, and application health monitoring.

The business side

We had several business drivers behind choosing AWS: we wanted to make it easier to meet customer demands and continually scale as much as needed without worrying about the impact on development or on our customers.

Using AWS services allowed us to build our platform from inception to our initial beta offering in fewer than 2 months! AWS made it happen with tools for infrastructure deployment on top of the software deployment. Specifically, IaC allowed us to tailor our infrastructure to our specific needs and be confident that it’s always going to work.

On the infrastructure side, we knew that we wanted to have a staging environment that truly mirrored the production environment, rather than managing two entirely disparate systems. We could provide different sets of mappings based on accounts and use the templates across multiple environments. This functionality allows us to use the exact same code we use in our current production environment and easily spin up additional environments in 2 hours.

The need for speed

It took a very short time to get our project up and running, which included rewriting different pieces of the infrastructure in some places and completely starting from scratch in others.

One of the new services that we adopted is AWS CodeArtifact. It lets us have fully customized private artifact stores in the cloud. We can keep our in-house libraries within our current AWS accounts instead of relying on third-party services.

CodeBuild lets us compile source code, run test suites, and produce software packages that are ready to deploy while only having to pay for the runtime we use. With CodeBuild, you don’t need to provision, manage, and scale your own build servers, which saves us time.

The new tools that AWS is releasing are going to even further streamline our processes. We’re interested in the impact that CodeArtifact will have on our ability to share libraries in Pushly and with other business units.

Cost savings is key

What are we saving by choosing AWS? A lot. AWS lets us scale while keeping costs at a minimum. This was, and continues to be, a major determining factor when choosing a cloud provider.

By using Lambda and designing applications with horizontal scale in mind, we have scaled from processing millions of requests per day to hundreds of millions, with very little change to the underlying infrastructure. Due to the nature of our offering, our traffic patterns are unpredictable. Lambda allows us to process these requests elastically and avoid over-provisioning. As a result, we can increase our throughput tenfold at any time, pay for the few minutes of extra compute generated by a sudden burst of traffic, and scale back down in seconds.

In addition to helping us process these requests, AWS has been instrumental in helping us manage an ever-growing data warehouse of clickstream data. With Amazon Kinesis Data Firehose, we automatically convert all incoming events to Parquet and store them in Amazon Simple Storage Service (Amazon S3), which we can query directly using Amazon Athena within minutes of being received. This has once again allowed us to scale our near-real-time data reporting to a degree that would have otherwise required a significant investment of time and resources.

As we look ahead, one thing we’re interested in is Lambda custom stacks, part of AWS’s Lambda-backed custom resources. Amazon supports many languages, so we can run almost every language we need. If we want to switch to a language that AWS doesn’t support by default, they still provide a way for us to customize a solution. All we have to focus on is the code we’re writing!

The importance of speed for us and our customers is one of our highest priorities. Think of a news publisher in the middle of a briefing who wants to get the story out before any of the competition and is relying on Pushly—our confidence in our ability to deliver on this need comes from AWS services enabling our code to perform to its fullest potential.

Another way AWS has met our needs is the ease of using Amazon ElastiCache, a fully managed in-memory data store and cache service. Although we try to think as horizontally as possible, some services just can’t scale with the immediate elasticity we need to handle a sudden burst of requests. We avoid duplicate lookups for the same resources with ElastiCache, which allows us to process requests quicker and protects our infrastructure from being overwhelmed.

In addition to caching, ElastiCache is a great tool for job locking. By locking messages by their ID as soon as they are received, we can use the near-unlimited throughput of Amazon Simple Queue Service (Amazon SQS) in a massively parallel environment without worrying that messages are processed more than once.
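
A common way to implement this kind of lock on Redis (the engine ElastiCache supports) is an atomic SET with the NX and EX options; the key name and TTL below are illustrative and not Pushly's actual implementation:

# Acquire the lock for a message; succeeds only if the key does not exist yet
redis-cli SET "lock:message:<message-id>" "1" NX EX 300
# OK means the lock was acquired; an empty reply means another worker already holds it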

The heart of our offering is in the segmentation of subscribers. We allow building complex queries in our dashboard that calculate reach in real time and are available to use immediately after creation. These queries are often never-before-seen and may contain custom properties provided by our clients, operate on complex data types, and include geospatial conditions. No matter the size of the audience, we see consistent sub-second query times when calculating reach. We can provide this to our clients using Amazon Elasticsearch Service (Amazon ES) as the backbone to our subscriber store.

Summary

AWS has countless positives, but one key theme that we continue to see is overall ease of use, which enables us to rapidly iterate. That’s why we rely on so many different AWS services—Amazon API Gateway with Lambda integration, Elastic Beanstalk, Amazon Relational Database Service (Amazon RDS), ElastiCache, and many more.

We feel very secure about our future working with AWS and our continued ability to improve, integrate, and provide a quality service. The AWS team has been extremely supportive. If we run into something that we need to adjust outside of the standard parameters, or that requires help from the AWS specialists, we can reach out and get feedback from subject matter experts quickly. The all-around capabilities of AWS and its teams have helped Pushly get where we are, and we’ll continue to rely on them for the foreseeable future.

 

Reducing Docker image build time on AWS CodeBuild using an external cache

Post Syndicated from Camillo Anania original https://aws.amazon.com/blogs/devops/reducing-docker-image-build-time-on-aws-codebuild-using-an-external-cache/

With the proliferation of containerized solutions that simplify creating, deploying, and running applications, and with CI/CD pipelines that continuously rebuild, test, and deploy those applications when new changes are committed, it’s important that your CI/CD pipelines run as quickly as possible, so you get early feedback and can release faster.

AWS CodeBuild supports local caching, which makes it possible to persist intermediate build artifacts, like a Docker layer cache, locally on the build host and reuse them in subsequent runs. The CodeBuild local cache is maintained on the host at best effort, so some of your build runs may not hit the cache as often as you would like.

A typical Docker image is built from several intermediate layers that are constructed during the initial image build process on a host. These intermediate layers are reused if found valid in any subsequent image rebuild; doing so speeds up the build process considerably because the Docker engine doesn’t need to rebuild the whole image if the layers in the cache are still valid.

This post shows how to implement a simple, effective, and durable external Docker layer cache for CodeBuild to significantly reduce image build runtime.

Solution overview

The following diagram illustrates the high-level architecture of this solution. We describe implementing each stage in more detail in the following paragraphs.

CodeBuildExternalCacheDiagram

In a modern software engineering approach built around CI/CD practices, whenever specific events happen, such as an application code change is merged, you need to rebuild, test, and eventually deploy the application. Assuming the application is containerized with Docker, the build process entails rebuilding one or multiple Docker images. The environment for this rebuild is on CodeBuild, which is a fully managed build service in the cloud. CodeBuild spins up a new environment to accommodate build requests and runs a sequence of actions defined in its build specification.

Because each CodeBuild instance is an independent environment, build artifacts can’t be persisted in the host indefinitely. The native CodeBuild local caching feature allows you to persist a cache for a limited time so that immediate subsequent builds can benefit from it. Native local caching is performed at best effort and can’t be relied on when multiple builds are triggered at different times. This solution describes using an external persistent cache that you can reuse across builds and is valid at any time.

After the first build of a Docker image is complete, the image is tagged and pushed to Amazon Elastic Container Registry (Amazon ECR). In each subsequent build, the image is pulled from Amazon ECR and the Docker build process is forced to use it as cache for its next build iteration of the image. Finally, the newly produced image is pushed back to Amazon ECR.

In the following paragraphs, we explain the solution and walk you through an example implementation. The solution rebuilds the publicly available Amazon Linux 2 Standard 3.0 image, which is an optimized image that you can use with CodeBuild.

Creating a policy and service role

The first step is to create an AWS Identity and Access Management (IAM) policy and service role for CodeBuild with the minimum set of permissions to perform the job.

  1. On the IAM console, choose Policies.
  2. Choose Create policy.
  3. Provide the following policy in JSON format:
    CodeBuild Docker Cache Policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ecr:GetAuthorizationToken",
                    "ecr:BatchCheckLayerAvailability",
                    "ecr:GetDownloadUrlForLayer",
                    "ecr:GetRepositoryPolicy",
                    "ecr:DescribeRepositories",
                    "ecr:ListImages",
                    "ecr:DescribeImages",
                    "ecr:BatchGetImage",
                    "ecr:ListTagsForResource",
                    "ecr:DescribeImageScanFindings",
                    "ecr:InitiateLayerUpload",
                    "ecr:UploadLayerPart",
                    "ecr:CompleteLayerUpload",
                    "ecr:PutImage"
                ],
                "Resource": "*"
            }
        ]
    }
  4. In the Review policy section, enter a name (for example, CodeBuildDockerCachePolicy).
  5. Choose Create policy.
  6. Choose Roles on the navigation pane.
  7. Choose Create role.
  8. Keep AWS service as the type of role and choose CodeBuild from the list of services.
  9. Choose Next.
  10. Search for and add the policy you created.
  11. Review the role and enter a name (for example, CodeBuildDockerCacheRole).
  12. Choose Create role.

Creating an Amazon ECR repository

In this step, we create an Amazon ECR repository to store the built Docker images.

  1. On the Amazon ECR console, choose Create repository.
  2. Enter a name (for example, amazon_linux_codebuild_image).
  3. Choose Create repository.
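
If you prefer the AWS CLI, an equivalent command would be the following (add --region or --profile flags as needed):

aws ecr create-repository --repository-name amazon_linux_codebuild_image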

Configuring a CodeBuild project

You now configure the CodeBuild project that builds the Docker image and configures its cache to speed up the process.

  1. On the CodeBuild console, choose Create build project.
  2. Enter a name (for example, SampleDockerCacheProject).
  3. For Source provider, choose GitHub.
  4. For Repository, select Public repository.
  5. For Repository URL, enter https://github.com/aws/aws-codebuild-docker-images.
    CodeBuildGitHubSourceConfiguration
  6. In the Environment section, for Environment image, select Managed image.
  7. For Operating system, choose Amazon Linux 2.
  8. For Runtime(s), choose Standard.
  9. For Image, enter aws/codebuild/amazonlinux2-x86_64-standard:3.0.
  10. For Image version, choose Always use the latest image for this runtime version.
  11. For Environment type, choose Linux.
  12. For Privileged, select Enable this flag if you want to build Docker images or want your builds to get elevated privileges.
  13. For Service role, select Existing service role.
  14. For Role ARN, enter the ARN of the service role you created (CodeBuildDockerCacheRole).
  15. Select Allow AWS CodeBuild to modify this service role so it can be used with this build project.
    CodeBuildEnvironmentConfiguration
  16. In the Buildspec section, select Insert build commands.
  17. Choose Switch to editor.
  18. Enter the following build specification (substitute account-ID and region).
    version: 0.2

    env:
      variables:
        CONTAINER_REPOSITORY_URL: account-ID.dkr.ecr.region.amazonaws.com/amazon_linux_codebuild_image
        TAG_NAME: latest

    phases:
      install:
        runtime-versions:
          docker: 19
      pre_build:
        commands:
          - $(aws ecr get-login --no-include-email)
          - docker pull $CONTAINER_REPOSITORY_URL:$TAG_NAME || true
      build:
        commands:
          - cd ./al2/x86_64/standard/1.0
          - docker build --cache-from $CONTAINER_REPOSITORY_URL:$TAG_NAME --tag $CONTAINER_REPOSITORY_URL:$TAG_NAME .
      post_build:
        commands:
          - docker push $CONTAINER_REPOSITORY_URL
  19. Choose Create build project.

The provided build specification instructs CodeBuild to do the following:

  • Use the Docker 19 runtime to run the build. The following process doesn’t work reliably with Docker versions lower than 19.
  • Authenticate with Amazon ECR and pull the image you want to rebuild if it exists (on the first run, this image doesn’t exist); see the note on AWS CLI versions after this list.
  • Run the image rebuild, forcing Docker to use the image pulled in the previous step as cache via the --cache-from parameter.
  • When the image rebuild is complete, push it to Amazon ECR.
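
Note that $(aws ecr get-login --no-include-email) relies on AWS CLI v1. If the build image you use ships AWS CLI v2 instead, the equivalent login step (account-ID and region are placeholders) would be:

aws ecr get-login-password --region region | docker login --username AWS --password-stdin account-ID.dkr.ecr.region.amazonaws.com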

Testing the solution

The solution is fully configured, so we can proceed to evaluate its behavior.

For the first run, we record a runtime of approximately 39 minutes. The build doesn’t use any cache and the docker pull in the pre-build stage fails to find the image we indicate, as expected (the || true statement at the end of the command line guarantees that the CodeBuild instance doesn’t stop because the docker pull failed).

The second run pulls the previously built image before starting the rebuild and completes in approximately 6 minutes, most of which is spent downloading the image from Amazon ECR (which is almost 5 GB).

We trigger another run after simulating a change halfway through the Dockerfile (the addition of an echo command to the statement at line 291 of the Dockerfile). Docker still reuses the layers in the cache up to the point of the changed statement and then rebuilds the remaining layers described in the Dockerfile from scratch. The runtime was approximately 31 minutes; the overhead of downloading the whole image first partially offsets the advantage of using it as cache.

It’s relevant to note that the image in this use case is considerably large; on average, projects deal with smaller images that introduce less overhead. Furthermore, the previous runs had the built-in CodeBuild feature that caches Docker layers at best effort disabled; enabling it provides further efficiency, because the docker pull specified in the pre-build stage doesn’t have to download the image if the one available locally matches the one on Amazon ECR.

Cleaning up

When you’re finished testing, remove the following resources to avoid incurring further charges and to keep the account clean of unused resources:

  • The amazon_linux_codebuild_image Amazon ECR repository and its images
  • The SampleDockerCacheProject CodeBuild project
  • The CodeBuildDockerCachePolicy policy and the CodeBuildDockerCacheRole role

Conclusion

In this post, we reviewed a simple and effective solution to implement a durable external cache for Docker on CodeBuild. The solution provides significant improvements in the execution time of the Docker build process on CodeBuild and is general enough to accommodate the majority of use cases, including multi-stage builds.

The approach works in synergy with the built-in CodeBuild feature of caching Docker layers at best effort, and we recommend using it for further improvements. Shorter build processes translate to lower compute costs, and overall determine a shorter development lifecycle for features released faster and at a lower cost.

About the Authors

Camillo Anania is a Global DevOps Consultant with AWS Professional Services, London, UK.

James Jacob is a Global DevOps Consultant with AWS Professional Services, London, UK.

Migrating Subversion repositories to AWS CodeCommit

Post Syndicated from Iftikhar Khan original https://aws.amazon.com/blogs/devops/migrating-subversion-repositories-aws-codecommit/

In this post, we walk you through migrating Subversion (SVN) repositories to AWS CodeCommit. But before diving into the migration, we do a brief review of SVN and Git based systems such as CodeCommit.

About SVN

SVN is an open-source version control system. Created in 2000 by CollabNet, Inc., it was originally designed to be a better Concurrent Versions System (CVS), and it is developed as a project of the Apache Software Foundation. SVN is the third implementation in a line of revision control systems: Revision Control System (RCS), then CVS, and finally SVN.

SVN is the leader in centralized version control. Systems such as CVS and SVN have a single remote server of versioned data with individual users operating locally against copies of that data’s version history. Developers commit their changes directly to that central server repository.

All the files and commit history are stored on a central server, but working against a single central server means a greater chance of having a single point of failure. SVN also offers few offline capabilities; a developer has to connect to the SVN server to make a commit, which makes commits slower. The single point of failure, security, maintenance, and the effort of scaling the SVN infrastructure are major concerns for any organization.

About DVCS

Distributed Version Control Systems (DVCSs) address the concerns and challenges of SVN. In a DVCS (such as Git or Mercurial), you don’t just check out the latest snapshot of the files; rather, you fully mirror the repository, including its full history. If any server dies, and these systems are collaborating via that server, you can copy any of the client repositories back up to the server to restore it. Every clone is a full backup of all the data.

DVCSs such as Git are built with speed, non-linear development, simplicity, and efficiency in mind. Git works very efficiently with large projects, which is one of the biggest reasons customers find it popular.

A significant reason to migrate to Git is branching and merging. Creating a branch is very lightweight, which allows you to work faster and merge easily.

About CodeCommit

CodeCommit is a version control system that is fully managed by AWS. CodeCommit can host secure and highly scalable private Git repositories, which eliminates the need to operate your source control system and scale its infrastructure. You can use it to securely store anything, from source code to binaries. CodeCommit features like collaboration, encryption, and easy access control make it a great choice. It works seamlessly with most existing Git tools and provides free private repositories.

Understanding the repository structure of SVN and Git

SVNs have a tree model with one branch where the revisions are stored, whereas Git uses a graph structure and each commit is a node that knows its parent. When comparing the two, consider the following features:

  • Trunk – An SVN trunk is like a primary branch in a Git repository, and contains tested and stable code.
  • Branches – In SVN, branches are treated as separate entities with their own history. You can merge revisions between branches, but they’re different entities. Because of SVN’s centralized nature, all branches are remote. In Git, branches are very cheap; a branch is a pointer to a particular commit on the tree. It can be local or be pushed to a remote repository for collaboration.
  • Tags – A tag is just another folder in the main repository in SVN and remains static. In Git, a tag is a static pointer to a specific commit.
  • Commits – To commit in SVN, you need access to the main repository, and the commit creates a new revision in the remote repository. In Git, commits happen locally, so you don’t need access to the remote; you can commit your work locally and then push all the commits at one time.

So far, we have covered how SVN is different from Git-based version control systems and illustrated the layout of SVN repositories. Now it’s time to look at how to migrate SVN repositories to CodeCommit.

Planning for migration

Planning is always a good thing. Before starting your migration, consider the following:

  • Identify SVN branches to migrate.
  • Come up with a branching strategy for CodeCommit and document how you can map SVN branches.
  • Prepare build, test scripts, and test cases for system testing.

If the size of the SVN repository is big enough, consider running all migration commands on the SVN server. This saves time because it eliminates network bottlenecks.

Migrating the SVN repository to CodeCommit

When you’re done with the planning aspects, it’s time to start migrating your code.

Prerequisites

You must have the AWS Command Line Interface (AWS CLI) with an active account and Git installed on the machine that you’re planning to use for migration.

Listing all SVN users for an SVN repository

SVN uses a user name for each commit, whereas Git stores a real name and email address. In this step, we map SVN users to their corresponding Git names and email addresses.

To list all the SVN users, run the following PowerShell command from the root of your local SVN checkout:

svn.exe log --quiet | ? { $_ -notlike '-*' } | % { "{0} = {0} <{0}>" -f ($_ -split ' \| ')[1] } | Select-Object -Unique | Out-File 'authors-transform.txt'

On a Linux based machine, run the following command from the root of your local SVN checkout:

svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors-transform.txt

The authors-transform.txt file content looks like the following code:

ikhan = ikhan <ikhan>
foobar = foobar <foobar>
abob = abob <abob>

After you transform the SVN user to a Git user, it should look like the following code:

ikhan = ifti khan <ifti.khan@example.com>
foobar = foo bar <foo.bar@example.com>
abob = aaron bob <aaron.bob@example.com>

Importing SVN contents to a Git repository

The next step in the migration from SVN to Git is to import the contents of the SVN repository into a new Git repository. We do this with the git svn utility, which is included with most Git distributions. The conversion process can take a significant amount of time for larger repositories.

The git svn clone command transforms the trunk, branches, and tags in your SVN repository into a new Git repository. The command depends on the structure of the SVN.

git svn clone may not be available in all installations; you might consider using an AWS Cloud9 environment or using a temporary Amazon Elastic Compute Cloud (Amazon EC2) instance.

If your SVN layout is standard, use the following command:

git svn clone --stdlayout --authors-file=authors-transform.txt <svn-repo>/<project> <temp-dir/project>

If your SVN layout isn’t standard, you need to map the trunk, branches, and tags folder in the command as parameters:

git svn clone <svn-repo>/<project> --prefix=svn/ --no-metadata --trunk=<trunk-dir> --branches=<branches-dir> --tags=<tags-dir> --authors-file "authors-transform.txt" <temp-dir/project>

Creating a bare Git repository and pushing the local repository

In this step, we create a blank repository and match the default branch with the SVN’s trunk name.

To create the .gitignore file, enter the following code:

cd <temp-dir/project>
git svn show-ignore > .gitignore
git add .gitignore
git commit -m 'Adding .gitignore.'

To create the bare Git repository, enter the following code:

git init --bare <git-project-dir>\local-bare.git
cd <git-project-dir>\local-bare.git
git symbolic-ref HEAD refs/heads/trunk

To update the local bare Git repository, enter the following code:

cd <temp-dir/project>
git remote add bare <git-project-dir\local-bare.git>
git config remote.bare.push 'refs/remotes/*:refs/heads/*'
git push bare

You can also add tags:

cd <git-project-dir\local-bare.git>

For Windows, enter the following code:

git for-each-ref --format='%(refname)' refs/heads/tags | % { $_.Replace('refs/heads/tags/','') } | % { git tag $_ "refs/heads/tags/$_"; git branch -D "tags/$_" }

For Linux, enter the following code:

for t in $(git for-each-ref --format='%(refname:short)' refs/remotes/tags); do git tag ${t/tags\//} $t && git branch -D -r $t; done

You can also add branches:

cd <git-project-dir\local-bare.git>

For Windows, enter the following code:

git for-each-ref --format='%(refname)' refs/remotes | % { $_.Replace('refs/remotes/','') } | % { git branch "$_" "refs/remotes/$_"; git branch -r -d "$_"; }

For Linux, enter the following code:

for b in $(git for-each-ref --format='%(refname:short)' refs/remotes); do git branch $b refs/remotes/$b && git branch -D -r $b; done

As a final touch-up, enter the following code:

cd <git-project-dir\local-bare.git>
git branch -m trunk master

Creating a CodeCommit repository

You can now create a CodeCommit repository with the following code (make sure that the AWS CLI is configured with your preferred Region and credentials):

aws configure
aws codecommit create-repository --repository-name MySVNRepo --repository-description "SVN Migration repository" --tags Team=Migration

You get the following output:

{
    "repositoryMetadata": {
        "repositoryName": "MySVNRepo",
        "cloneUrlSsh": "ssh://ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/MySVNRepo",
        "lastModifiedDate": 1446071622.494,
        "repositoryDescription": "SVN Migration repository",
        "cloneUrlHttp": "https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MySVNRepo",
        "creationDate": 1446071622.494,
        "repositoryId": "f7579e13-b83e-4027-aaef-650c0EXAMPLE",
        "Arn": "arn:aws:codecommit:us-east-2:111111111111:MySVNRepo",
        "accountId": "111111111111"
    }
}

Pushing the code to CodeCommit

To push your code to the new CodeCommit repository, enter the following code:

cd <git-project-dir\local-bare.git>
git remote add origin https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MySVNRepo
git add *

git push origin --all
git push origin --tags # optional, if tags were mapped
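
Pushing over HTTPS assumes your Git client can already authenticate to CodeCommit. If you haven't set that up, one option is the AWS CLI credential helper (Git credentials for an IAM user also work):

git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true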

Troubleshooting

When migrating SVN repositories, you might encounter a few SVN errors, which are displayed as code on the console. For more information, see Subversion client errors caused by inappropriate repository URL.

For more information about the git-svn utility, see the git-svn documentation.

Conclusion

In this post, we described the straightforward process of using the git-svn utility to migrate SVN repositories to Git or Git-based systems like CodeCommit. After you migrate an SVN repository to CodeCommit, you can use any Git-based client and start using CodeCommit as your primary version control system without worrying about securing and scaling its infrastructure.

Scalable agile development practices based on AWS CodeCommit

Post Syndicated from Mengxin Zhu original https://aws.amazon.com/blogs/devops/scalable-agile-development-practices-based-on-aws-codecommit/

Development teams use agile development processes based on Git services extensively. AWS provides AWS CodeCommit, a managed, Git protocol-based, secure, and highly available code service. The capabilities of CodeCommit combined with other developer tools, like AWS CodeBuild and AWS CodePipeline, make it easy to manage a collaborative, scalable development process with fine-grained permissions and on-demand resources.

You can manage user roles with different AWS Identity and Access Management (IAM) policies in the code repository of CodeCommit. You can build your collaborative development process with pull requests and approval rules. The process described in this post only requires you to manage the developers’ role, without forking the source repository for individual developers. CodeCommit pull requests can integrate numerous code analysis services as approvers to improve code quality and mitigate security vulnerabilities, such as SonarQube static scanning and the ML-based code analysis service Amazon CodeGuru Reviewer.

The CodeCommit-based agile development process described in this post has the following characteristics:

  • Control permissions of the CodeCommit repository via IAM.
    • Any code repository has at least two user roles:
      • Development collaborator – Participates in the development of the project.
      • Repository owner – Has code review permission and partial management permissions of the repository. The repository owner is also the collaborator of the repository.
    • Both development collaborator and owner have read permissions of the repository and can pull code to local disk via the Git-supported protocols.
    • The development collaborator can push new code to branches with a specific prefix, for example, features/ or bugs/. Multiple collaborators can work on a particular branch for one pull request. Collaborators can create new pull requests to request merging code into the main branch, such as the mainline branch.
    • The repository owner has permission to review pull requests with approval voting and merge pull requests.
    • Directly pushing code to the main branch of repository is denied.
  • Development workflow. This includes the following:
    • Creating a CodeCommit approval rule template that requires at least two approvals: one from the sanity-check build of the pull request and one from the repository owner. The workflow applies this approval rule template to the repository so that its pull requests require these mandatory approvals (see the CLI sketch after this list).
    • The creation and update of the source branch of a pull request, surfaced as events via Amazon EventBridge, triggers a sanity-check CodeBuild build that compiles, tests, and analyzes the pull request code. If all checks pass, the pull request gets an approval vote from the sanity-check build.
    • Watching the main branch of the repository triggers continuous integration for any commit. You can continuously publish artifacts of your project to the artifact repository or integrate the latest version of the service into your business system.
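
The sample application provisions the approval rule template through AWS CDK, but as a rough sketch, creating and associating a comparable template by hand with the AWS CLI would look like the following (names and the approval count are illustrative):

aws codecommit create-approval-rule-template \
  --approval-rule-template-name "require-two-approvals" \
  --approval-rule-template-description "Sanity-check build plus repository owner" \
  --approval-rule-template-content '{"Version": "2018-11-08", "Statements": [{"Type": "Approvers", "NumberOfApprovalsNeeded": 2}]}'

aws codecommit associate-approval-rule-template-with-repository \
  --approval-rule-template-name "require-two-approvals" \
  --repository-name "<your-repository-name>"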

This agile development process can use AWS CloudFormation and the AWS Cloud Development Kit (AWS CDK) to orchestrate AWS resources following the best practice of infrastructure as code (IaC). You can manage hundreds of repositories in your organization and automatically provision new repositories and related DevOps resources after the pull request containing the IaC for a new application is approved. This makes sure that you’re managing the code repositories and DevOps resources in a secure and compliant way. You can use it as a reference solution for your organization to manage large-scale R&D resources.

Solution overview

In the following use case, you’re working on a Java-based project, AWS Toolkit for JetBrains. This project has developers who submit code via pull requests. Each pull request is automatically checked and validated by CodeBuild builds. The owners of the project can review the pull request and merge it to the main branch. Code submitted to the main branch triggers the continuous integration build of the project artifacts.

The following diagram illustrates the components built in this post and their role in the DevOps process.

architecture diagram

Prerequisites

For this walkthrough, you should meet the following prerequisites:

Preparing the code

Clone the sample code from the GitHub repo with your preferred Git client or IDE and check out the branch aws-toolkit-jetbrains, or download the sample code directly and unzip it into an empty folder.

Initializing the environment

Open the terminal or command prompt of your operating system, go to the directory where the sample code is located, and run the following command to initialize the environment and install the dependency packages:

npm run init

Deploying application

After successfully initializing the AWS CDK environment and installing the dependencies of the sample application, enter the following code to deploy the application:

npm run deploy

Because the application creates the IAM roles and policies, AWS CDK requires you to confirm security-related changes before deploying it. You see the following outputs from the command line.

deploy stack

Enter y to confirm the security changes, and AWS CDK begins to deploy the application. After a few minutes, you see output similar to the following code, indicating that the application stack has been successfully deployed in your AWS account:

✅  CodecommitDevopsModelStack

Outputs:
CodecommitDevopsModelStack.Repo1AdminRoleOutput = arn:aws:iam::012345678912:role/codecommitmodel/CodecommitDevopsModelStack-Repo1AdminRole0648F018-OQGKZPM6T0HP
CodecommitDevopsModelStack.Repo1CollaboratorRoleOutput = arn:aws:iam::012345678912:role/codecommitmodel/CodecommitDevopsModelStac-Repo1CollaboratorRole1EB-15KURO7Z9VNOY

Stack ARN:
arn:aws:cloudformation:ap-southeast-1:012345678912:stack/CodecommitDevopsModelStack/5ecd1c50-b56b-11ea-8061-020de04cec9a

As shown in the preceding code, the output of a successful deployment indicates that the ARNs of two IAM roles were created on behalf of the owner and the development collaborator of the source code repository.

Checking deployment results

After successfully deploying the app, you can sign in to the CodeCommit console and browse repositories. The following screenshot shows three repositories.

created repos

For this post, we use three repositories to demonstrate configuring the different access permissions for different teams in your organization. As shown in the following screenshot, the repository CodeCommitDevopsModelStack-MyApp1 is tagged to grant permissions to the specific team abc.

repository tags

The IAM roles for the owner and development collaborator only have access to the code repository with the following tag combination:

{
  "app": "my-app-1",
  "team": "abc"
}
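Access is scoped with tag-based IAM conditions on these roles. The following CDK (TypeScript) statement is a simplified sketch of that idea, not the sample application's exact policy; the action list is abbreviated and illustrative.

import * as iam from '@aws-cdk/aws-iam';

// Allow CodeCommit actions only on repositories that carry the matching
// app/team tags. The action list is abbreviated for illustration.
const tagScopedStatement = new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: ['codecommit:GitPull', 'codecommit:GetRepository', 'codecommit:ListPullRequests'],
  resources: ['*'],
  conditions: {
    StringEquals: {
      'aws:ResourceTag/app': 'my-app-1',
      'aws:ResourceTag/team': 'abc',
    },
  },
});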

Configuring CodeCommit repository access on behalf of owner and collaborator

Next, you configure the current user to simulate the owner and development collaborator via IAM’s AssumeRole.

Edit the AWS CLI profile file with your preferred text editor and add the following configuration lines:

[profile codecommit-repo1-owner]
role_arn = <the ARN of the owner role after successfully deploying the sample app>
source_profile = default
region = ap-southeast-1
cli_pager=

[profile codecommit-repo1-collaborator]
role_arn = <the ARN of the collaborator role after successfully deploying the sample app>
source_profile = default
region = ap-southeast-1
cli_pager=

Replace the role_arn in the owner and collaborator sections with the corresponding output after successfully deploying the sample app.

If the AWS CLI isn’t using the default profile, replace the value of source_profile with the profile name you’re currently using.

Make the region consistent with the value configured in source_profile. For example, this post uses ap-southeast-1.

After saving the modification of the profile, you can test this configuration from the command line. See the following code:

export AWS_DEFAULT_PROFILE=codecommit-repo1-owner # assume owner role of repository

aws sts get-caller-identity # get current user identity, you should see output like below,
{
    "UserId": "AROAQP3VLCVWYYTPJL2GW:botocore-session-1587717914",
    "Account": "0123456789xx",
    "Arn": "arn:aws:sts::0123456789xx:assumed-role/CodecommitDevopsModelStack-Repo1AdminRole0648F018-1SNXR23P4XVYZ/botocore-session-1587717914"
}

aws codecommit list-repositories # list of all repositories of AWS CodeCommit in configured region
{
    "repositories": [
        {
            "repositoryName": "CodecommitDevopsModelStack-MyApp1",
            "repositoryId": "208dd6d1-ade4-4633-a2a3-fe1a9a8f3d1c "
        },
        {
            "repositoryName": "CodecommitDevopsModelStack-MyApp2",
            "repositoryId": "44421652-d12e-413e-85e3-e0db894ab018"
        },
        {
            "repositoryName": "CodecommitDevopsModelStack-MyApp3",
            "repositoryId": "8d146b34-f659-4b17-98d8-85ebaa07283c"
        }
    ]
}

aws codecommit get-repository --repository-name CodecommitDevopsModelStack-MyApp1 # get detail information of repository name ends with MyApp1
{
    "repositoryMetadata": {
        "accountId": "0123456789xx",
        "repositoryId": "208dd6d1-ade4-4633-a2a3-fe1a9a8f3d1c",
        "repositoryName": "CodecommitDevopsModelStack-MyApp1",
        "repositoryDescription": "Repo for App1.",
        "lastModifiedDate": "2020-06-24T00:06:24.734000+08:00",
        "creationDate": "2020-06-24T00:06:24.734000+08:00",
        "cloneUrlHttp": "https://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/CodecommitDevopsModelStack-MyApp1",
        "cloneUrlSsh": "ssh://git-codecommit.ap-southeast-1.amazonaws.com/v1/repos/CodecommitDevopsModelStack-MyApp1",
        "Arn": "arn:aws:codecommit:ap-southeast-1:0123456789xx:CodecommitDevopsModelStack-MyApp1"
    }
}

aws codecommit get-repository --repository-name CodecommitDevopsModelStack-MyApp2 # try to get details of repository MyApp2, which this role does not have permission to access

An error occurred (AccessDeniedException) when calling the GetRepository operation: User: arn:aws:sts::0123456789xx:assumed-role/CodecommitDevopsModelStack-Repo1AdminRole0648F018-OQGKZPM6T0HP/botocore-session-1593325146 is not authorized to perform: codecommit:GetRepository on resource: arn:aws:codecommit:ap-southeast-1:0123456789xx:CodecommitDevopsModelStack-MyApp2

You can also attach the IAM policies whose names start with CodecommitDevopsmodelStack-CodecommitCollaborationModel to existing IAM users to grant them the corresponding owner or collaborator permissions.

Initializing the repository

The new code repository CodecommitDevopsModelStack-MyApp1 is an empty Git repository without any commits. You can use the AWS Toolkit for JetBrains project as the existing local codebase and push the code to the repository hosted by CodeCommit.

Enter the following code from the command line:

export AWS_DEFAULT_PROFILE=codecommit-repo1-owner # assume owner role of repository

git clone https://github.com/aws/aws-toolkit-jetbrains.git # clone aws-toolkit-jetbrains to local as existing codebase

cd aws-toolkit-jetbrains

git remote add codecommit codecommit::ap-southeast-1://CodecommitDevopsModelStack-MyApp1 # add the CodeCommit hosted repo as a new remote named codecommit. Follow the doc to set up AWS CodeCommit with git-remote-codecommit, or use the repository's HTTPS/SSH remote URL

git push codecommit master:init  # push existing codebase to a temporary branch named 'init'

aws codecommit create-branch --repository-name CodecommitDevopsModelStack-MyApp1 --branch-name master --commit-id `git rev-parse master` # create new branch 'master'

aws codecommit update-default-branch --repository-name CodecommitDevopsModelStack-MyApp1 --default-branch-name master # set branch 'master' as main branch of repository

aws codecommit delete-branch --repository-name CodecommitDevopsModelStack-MyApp1 --branch-name init # clean up 'init' branch

Agile development practices

For this use case, you act as a collaborator on the repository, implement a new feature for aws-toolkit-jetbrains, and then follow the development process to submit your code changes to the main branch.

Enter the following code from the command line:

export AWS_DEFAULT_PROFILE=codecommit-repo1-collaborator # assume collaborator role of repository

# add/modify/delete source files for your new feature

git commit -m 'This is my new feature.' -a

git push codecommit HEAD:refs/heads/features/my-feature # push code to new branch with prefix /features/

aws codecommit create-pull-request --title 'My feature "Short Description".' --description 'Detail description of feature request'  --targets repositoryName=CodecommitDevopsModelStack-MyApp1,sourceReference=features/my-feature,destinationReference=master # create pull request for new feature

The preceding code submits the changes of the new feature to a branch with the prefix features/ and creates a pull request to merge the change into the main branch.

On the CodeCommit console, you can see that the pull request My feature "Short Description". created by the development collaborator has passed the sanity-checking build and received an approval vote (the checking build takes about 15 minutes to complete in this project).

PR build result

 

The owner of the repository also needs to review the pull request and give at least one approval; then they can merge the pull request into the main branch. The pull request on the CodeCommit console supports several code review features, such as change comparison, in-line comments, and code discussions. For more information, see Using AWS CodeCommit Pull Requests to request code reviews and discuss code. The following screenshot shows the review tool on the CodeCommit console, on the Changes tab.

CodeReview Tool

 

The following screenshot shows the approval details of the pull request, on the Approvals tab.

Approvals tab

After the pull request is merged, browsing the continuous integration project shows that a new continuous integration build has been triggered by the merge to the main branch.

Deployment build

Cleaning up

When you're finished exploring this use case and the deployed resources, the last step is to clean up your account. The following code deletes all the resources you created:

npm run cleanup

Summary

This post discussed agile development practices based on CodeCommit, including the implementation mechanisms and the process of working with them, and demonstrated how to collaborate on development under those practices. In the example application, the code repository itself and the DevOps processes built around it are managed as code on AWS. You can use the IaC capabilities of AWS to apply those practices in your organization and build compliant and secure R&D processes.

Automated CI/CD pipeline for .NET Core Lambda functions using AWS extensions for dotnet CLI

Post Syndicated from Sundar Narasiman original https://aws.amazon.com/blogs/devops/automated-ci-cd-pipeline-for-net-core-lambda-functions-using-aws-extensions-for-dotnet-cli/

The trend of building AWS Serverless applications using AWS Lambda is increasing at an ever-rapid pace. Common use cases for AWS Lambda include data processing, real-time file processing, extract, transform, and load (ETL), web backends, internet of things (IoT) backends, and mobile backends. Lambda natively supports languages such as Java, Go, PowerShell, Node.js, C#, Python, and Ruby. It also provides a Runtime API that allows you to use any additional programming languages to author your functions.

The .NET Framework occupies a significant footprint in the technology landscape of enterprises. Nowadays, enterprise customers are modernizing .NET Framework applications to .NET Core using AWS Serverless (Lambda). In this journey, you break down a large monolithic service into multiple smaller, independent, and autonomous microservices using .NET Core Lambda functions.

When you have several microservices running in production, a change management strategy is key for business agility and time to market. The change management of .NET Core Lambda functions translates to how well you implement an automated CI/CD pipeline using AWS CodePipeline. In this post, you see two approaches for implementing CI/CD for .NET Core Lambda functions: creating a pipeline with either two or three stages.

Creating a pipeline with two stages

In this approach, you define the pipeline in CodePipeline with two stages: AWS CodeCommit and AWS CodeBuild. CodeCommit is the fully managed source control repository that stores the source code for .NET Core Lambda functions. It triggers CodeBuild when a new code change is published. CodeBuild defines a compute environment for the build process. It builds the .NET Core Lambda function and creates a deployment package (.zip). Finally, CodeBuild uses the AWS extensions for dotnet CLI to deploy the Lambda package (.zip) to the Lambda environment. The following diagram illustrates this architecture.

 

CodePipeline with CodeBuild and CodeCommit stages.
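This walkthrough builds the pipeline on the CodePipeline console. If you prefer to define the same two-stage shape as code, a rough AWS CDK (TypeScript) sketch could look like the following. It is not part of this post's steps; the repository and project names are placeholders, and the snippet is assumed to live inside a CDK stack.

import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as actions from '@aws-cdk/aws-codepipeline-actions';

// Two-stage pipeline: CodeCommit source, then a CodeBuild project that both
// builds the .NET Core Lambda function and deploys it (per the buildspec).
const repo = new codecommit.Repository(this, 'LambdaRepo', { repositoryName: 'dotnet-lambda-repo' });
const buildAndDeploy = new codebuild.PipelineProject(this, 'BuildAndDeploy', {
  environment: { buildImage: codebuild.LinuxBuildImage.STANDARD_4_0 },
});

const sourceOutput = new codepipeline.Artifact();
new codepipeline.Pipeline(this, 'LambdaPipeline', {
  stages: [
    {
      stageName: 'Source',
      actions: [new actions.CodeCommitSourceAction({ actionName: 'Source', repository: repo, output: sourceOutput })],
    },
    {
      stageName: 'Build',
      actions: [new actions.CodeBuildAction({ actionName: 'BuildAndDeploy', project: buildAndDeploy, input: sourceOutput })],
    },
  ],
});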

Creating a pipeline with three stages

In this approach, you define the pipeline with three stages: CodeCommit, CodeBuild, and AWS CodeDeploy.

CodeCommit stores the source code for .NET Core Lambda functions and triggers CodeBuild when a new code change is published. CodeBuild defines a compute environment for the build process and builds the .NET Core Lambda function. Then CodeBuild invokes the CodeDeploy stage. CodeDeploy uses AWS CloudFormation templates to deploy the Lambda function to the Lambda environment. The following diagram illustrates this architecture.

CodePipeline with CodeCommit, CodeBuild and CodeDeploy stages.

Solution overview

In this post, you learn how to implement an automated CI/CD pipeline using the first approach: CodePipeline with CodeCommit and CodeBuild stages. The CodeBuild stage in this approach implements the build and deploy functionalities. The high-level steps are as follows:

  1. Create the CodeCommit repository.
  2. Create a Lambda execution role.
  3. Create a Lambda project with .NET Core CLI.
  4. Change the Lambda project configuration.
  5. Create a buildspec file.
  6. Commit changes to the CodeCommit repository.
  7. Create your CI/CD pipeline.
  8. Complete and verify pipeline creation.

For the source code and buildspec file, see the GitHub repo.

Prerequisites

Before you get started, you need the following prerequisites:

Creating a CodeCommit repository

You first need a CodeCommit repository to store the Lambda project source code.

1. In the Repository settings section, for Repository name, enter a name for your repository.

2. Choose Create.

Name a repository

3. Initialize this repository with a markdown file (readme.md). You need this markdown file to create documentation about the repository.

4. Set up an AWS Identity and Access Management (IAM) credential to CodeCommit. Alternatively, you can set up SSH-based access. For instructions, see Setup for HTTPS users using Git credentials and Setup steps for SSH connections to AWS CodeCommit repositories on Linux, MacOS, or Unix. You need this to work with the CodeCommit repository from the development environment.

5. Clone the CodeCommit repository to a local folder.

Proceed to the next step to create an IAM role for Lambda execution.

Creating a Lambda execution role

Every Lambda function needs an IAM role for execution. Create an IAM role for Lambda execution with the appropriate IAM policy, if it doesn't exist already.
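If you manage IAM as code, the same role can be sketched with the AWS CDK (TypeScript) as follows. This is optional and not part of the walkthrough steps; the role name matches the testlambdarole placeholder used later in aws-lambda-tools-defaults.json and is illustrative. The snippet is assumed to live inside a CDK stack.

import * as iam from '@aws-cdk/aws-iam';

// A basic Lambda execution role: trusted by the Lambda service and allowed to
// write logs to CloudWatch Logs through the AWS managed policy.
const lambdaExecutionRole = new iam.Role(this, 'LambdaExecutionRole', {
  roleName: 'testlambdarole',
  assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
  managedPolicies: [
    iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSLambdaBasicExecutionRole'),
  ],
});

With the execution role in place, you're ready to create a Lambda function project using the .NET Core Command Line Interface (CLI).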

Creating a Lambda function project

You have multiple options for creating .NET Core Lambda function projects, such as using Visual Studio 2019, Visual Studio Code, and .NET Core CLI. In this post, you use .NET Core CLI.

By default, .NET Core CLI doesn’t support Lambda projects. You need the Amazon.Lambda.Templates nuget package to create your project.

  1. Install the NuGet package Amazon.Lambda.Templates to get all the AWS Lambda project templates in the development environment. See the following CLI command.
    dotnet new -i Amazon.Lambda.Templates::*
  2. Verify the installation with the following CLI command.
    dotnet new

    You should see the following output reflecting the presence of various Lambda templates in the development environment. You also need to install the AWS extensions for the dotnet Lambda CLI to deploy and invoke Lambda functions from the terminal or command prompt.

    dotnet cli command listing lambda project templates

  3. To install the extensions, enter the following CLI Commands.
    dotnet tool install -g Amazon.Lambda.Tools
    dotnet tool update -g Amazon.Lambda.Tools
    

    You’re now ready to create a Lambda function project in the development environment.

  4. Navigate to the root of the cloned CodeCommit repository (which you created in the previous step).
  5. Create the Lambda function by entering the following CLI Command.
    dotnet new lambda.EmptyFunction --name Dotnetlambda4 --profile default --region us-east-1

    After you create your Lambda function project, you need to make some configuration changes.

Changing the Lambda function project configuration

When you create a .NET Core Lambda function project, it adds the configuration file aws-lambda-tools-defaults.json at the root of the project directory. This file holds the various configuration parameters for Lambda execution. You want to make sure that the function role is set to the IAM role you created earlier, and that the profile is set to default.

The updated aws-lambda-tools-defaults.json file should look like the following code:

{
  "Information": [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI.",
    "To learn more about the Lambda commands with the .NET Core CLI execute the following command at the command line in the project root directory.",

    "dotnet lambda help",

    "All the command line options for the Lambda command can be specified in this file."
  ],

  "profile": "default",
  "region": "us-east-1",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "function-runtime": "dotnetcore3.1",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "Dotnetlambda4::Dotnetlambda4.Function::FunctionHandler",
  "function-role": "arn:aws:iam::awsaccountnumber:role/testlambdarole"
}

After you update your project configuration, you’re ready to create the buildspec.yml file.

Creating a buildspec file

As a prerequisite to configuring the CodeCommit stage, you created a Lambda function project. For the CodeBuild stage, you need to create a buildspec file.

 

Create a buildspec.yml file with the following definition and save it at the root of the CodeCommit directory:

version: 0.2
env:
  variables:
    DOTNET_ROOT: /root/.dotnet
  secrets-manager:
    AWS_ACCESS_KEY_ID_PARAM: CodeBuild:AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: CodeBuild:AWS_SECRET_ACCESS_KEY
phases:
  install:
    runtime-versions:
      dotnet: 3.1
  pre_build:
    commands:
      - echo Restore started on `date`
      - export PATH="$PATH:/root/.dotnet/tools"
      - pip install --upgrade awscli
      - aws configure set profile $Profile
      - aws configure set region $Region
      - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID_PARAM
      - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_PARAM
      - cd Dotnetlambda4
      - cd src
      - cd Dotnetlambda4
      - dotnet clean 
      - dotnet restore
  build:
    commands:
      - echo Build started on `date`
      - dotnet new -i Amazon.Lambda.Templates::*
      - dotnet tool install -g Amazon.Lambda.Tools
      - dotnet tool update -g Amazon.Lambda.Tools
      - dotnet lambda deploy-function "Dotnetlambda4" --function-role "arn:aws:iam::yourawsaccount:role/youriamroleforlambda" --region "us-east-1"

You’re now ready to commit your changes to the CodeCommit repository.

Committing changes to the CodeCommit repository

To push changes to your CodeCommit repository, enter the following git commands.

git add --all
git commit -a -m "Initial Comment"
git push

After you commit the changes, you can create your CI/CD pipeline using CodePipeline.

Creating a CI/CD pipeline

To create your pipeline with a CodeCommit and CodeBuild stage, complete the following steps:

  1. In the Pipeline settings section, for Pipeline name, enter a name.
  2. For Service role, select New service role.
  3. For Role name, use the auto-generated name.
  4. Select Allow AWS CodePipeline to create a service role so it can be used with this new pipeline.
  5. Choose Next.
    Choose Pipeline settings
  6. In the Source section, for Source provider, choose AWS CodeCommit.
  7. For Repository name, choose your repository.
  8. For Branch name, choose your branch.
  9. For Change detection options, select Amazon CloudWatch Events.
  10. Choose Next.
    Populating the Source stage
  11. In the Build section, for Build provider, choose AWS CodeBuild.
    Populating the CodeBuild stage
  12. For Environment image, choose Managed image.
  13. For Operating system, choose Ubuntu.
  14. For Image, choose aws/codebuild/standard:4.0.
  15. For Image version, choose Always use the latest image for this runtime version.
    Selecting CodeBuild runtime
  16. CodeBuild needs to assume an IAM service role to get the required privileges for a successful build operation. Create a new service role for the CodeBuild project.
    Selecting the Service role
  17. Attach the following IAM policy to the role:
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SecretManagerRead",
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:GetRandomPassword",
                    "secretsmanager:GetResourcePolicy",
                    "secretsmanager:UntagResource",
                    "secretsmanager:GetSecretValue",
                    "secretsmanager:DescribeSecret",
                    "secretsmanager:ListSecretVersionIds",
                    "secretsmanager:ListSecrets",
                    "secretsmanager:TagResource"
                ],
                "Resource": "*"
            }
        ]
    }
    
  18. You now need to define the compute and environment variables for CodeBuild. For Compute, select your preferred compute.
  19. For Environment variables, enter two variables. For Region, enter your preferred Region. For Profile, enter the value default. This allows the environment to use the default AWS profile in the build process.
    Selecting CodeBuild env options
  20. To set up an AWS profile, the CodeBuild environment needs AccessKeyId and SecretAccessKey. As a best practice, configure AccessKeyId and SecretAccessKey as secrets in AWS Secrets Manager and reference it in buildspec.yml. On the Secrets Manager console, choose Store a new secret.
  21. For Select secret type, select Other type of secrets.
    Selecting secret types
  22. Configure secrets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
    Configuring secrets
  23. For the encryption key, choose DefaultEncryptionKey.
  24. Choose Next.
  25. For Secret name, enter CodeBuild.
    Secret name
  26. Leave the rest of the selections as default and choose Store.
  27. In the Add deploy stage section, choose Skip deploy stage.
    Add Deploy stage

Completing and verifying your pipeline

After you save your pipeline, push the code changes of the Lambda function from the local repository to the remote CodeCommit repository.

After a few seconds, you should see the activation of the CodeCommit stage and transition to CodeBuild stage. Pipeline creation can take up to a few minutes.

CodePipeline

You can verify your pipeline on the CodePipeline console. This should deploy the Lambda function changes to the Lambda environment.

Cleaning up

If you no longer need the following resources, delete them to avoid incurring further charges:

  • CodeCommit repository
  • CodePipeline project
  • CodeBuild project
  • IAM role for Lambda execution
  • Lambda function

Conclusion

In this post, you implemented an automated CI/CD for .NET Core Lambda functions using two stages of CodePipeline: CodeCommit and CodeBuild. You can apply this solution to your own use cases.

About the author

Sundararajan Narasiman works as Senior Partner Solutions Architect with Amazon Web Services.

Automating cross-account actions with an AWS CDK credential plugin

Post Syndicated from Cory Hall original https://aws.amazon.com/blogs/devops/cdk-credential-plugin/

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages. You can automate release pipelines for your infrastructure defined by the AWS CDK by using tools such as AWS CodePipeline. As the architecture for your application becomes more complex, so too can your release pipelines.

When you first create an AWS CDK application, you define a top-level AWS CDK app. Within the app, you typically define one or more stacks, which are the unit of deployment, analogous to AWS CloudFormation stacks. Each stack instance in your AWS CDK app is explicitly or implicitly associated with an environment (env). An environment is the target AWS account and Region into which you intend to deploy the stack. When you attempt to deploy an AWS CDK app that contains multiple environments, managing the credentials for each environment can become difficult and usually involves using custom scripts.

This post shows how to use an AWS CDK credential plugin to simplify and streamline deploying AWS CDK apps that contain multiple stacks to deploy to multiple environments. This post assumes that you are explicitly associating your stacks with an environment and may not work with environment-agnostic stacks.

AWS CDK credential plugin overview

AWS CDK allows the use of plugins during the credential process. By default, it looks for default credentials in a few different places. For more information, see Prerequisites. When you run an AWS CDK command such as synth or deploy, the AWS CDK CLI needs to perform actions against the AWS account that is defined for the stack. It attempts to use your default credentials, but what happens if you need credentials for multiple accounts? This is where credential plugins come into play. The basic flow that the AWS CDK CLI takes when obtaining credentials is as follows:

  1. Determine the environment for the stack.
  2. Look for credentials to use against that environment.
  3. If the default credentials match, the environment uses those.
  4. If the default credentials don't match the environment, the CLI loads any credential plugins and attempts to fetch credentials for the environment using those plugins (the idea is sketched after this list).
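Conceptually, a credential plugin's job in the last step is to exchange your current credentials for temporary credentials in the stack's target account, typically by assuming a pre-provisioned role there. The following TypeScript sketch illustrates that exchange with the AWS SDK for JavaScript; it is an illustration of the idea, not the plugin's actual implementation, and the role name is a placeholder.

import * as AWS from 'aws-sdk';

// Exchange the current (shared services) credentials for temporary
// credentials in the target account by assuming a role there.
async function credentialsForAccount(accountId: string, region: string): Promise<AWS.Credentials> {
  const sts = new AWS.STS({ region });
  const response = await sts.assumeRole({
    RoleArn: `arn:aws:iam::${accountId}:role/target-account-deployment-role`, // placeholder role name
    RoleSessionName: 'cdk-credential-plugin-session',
  }).promise();
  const { AccessKeyId, SecretAccessKey, SessionToken } = response.Credentials!;
  return new AWS.Credentials(AccessKeyId, SecretAccessKey, SessionToken);
}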

Walkthrough overview

In this walkthrough, you use the cdk-assume-role-credential plugin to read information from multiple AWS accounts as part of the synthesis process. This post assumes you have the following three accounts:

  • Shared services – Where you run the AWS CDK commands from. It has access to assume the role in the other two accounts. This is where you can also deploy a pipeline to automate the deployment of your AWS CDK app.
  • Development application – The development environment (dev) for the application.
  • Production application – The production environment (prod) for the application.

However, you can still follow the walkthrough if you only have access to the shared services and either the development or production accounts.

The walkthrough follows this high-level process:

  1. Download and install the plugin
  2. Create the required resources
  3. Use the plugin to synthesize CloudFormation templates for the dev and prod account.

The sample project used for this walkthrough is located on GitHub.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • Access to at least the shared services and either the development or production account.
  • AWS CDK installed with its prerequisites
  • Familiarity with running AWS commands from the AWS CLI

Downloading and installing the plugin

The cdk-assume-role-credential plugin and sample code used in this post are on the GitHub repo. You need to first clone this repo locally and install the plugin as a global package.

  1. Download the GitHub project with the following code:

$ git clone https://github.com/aws-samples/cdk-assume-role-credential-plugin.git

  2. Install the plugin globally with the following code:

$ npm install -g git+https://github.com/aws-samples/cdk-assume-role-credential-plugin.git

Creating the required resources

Because this plugin uses pre-provisioned roles in the target account, you need to first create those roles. For this post, you create two AWS Identity and Access Management (IAM) roles with the default names that the plugin looks for.

Both roles are also configured to trust the shared services account.

Before completing the following steps, make sure you have the account IDs for the three accounts and can obtain AWS CLI credentials for each account.

  1. Move to the sample-app folder:

$ cd cdk-assume-role-credential-plugin/sample-app

  2. Install dependencies:

$ npm install

  3. Edit the bin/required-resources.ts file and fill in the account numbers where indicated:

new RequiredResourcesStack(app, 'dev', {
  env: {
    account: 'REPLACE_WITH_DEV_ACCOUNT_ID',
    region: 'REPLACE_WITH_REGION'
  },
  trustedAccount: 'REPLACE_WITH_SHARED_SERVICES_ACCOUNT_ID'
});

new RequiredResourcesStack(app, 'prod', {
  env: {
    account: 'REPLACE_WITH_PROD_ACCOUNT_ID',
    region: 'REPLACE_WITH_REGION'
  },
  trustedAccount: 'REPLACE_WITH_SHARED_SERVICES_ACCOUNT_ID'
});

  4. Build the AWS CDK app:

$ npm run build

  5. Using the AWS CLI credentials for the dev account, run cdk deploy to create the resources:

$ cdk deploy dev

  6. Using the AWS CLI credentials for the prod account, run cdk deploy to create the resources:

$ cdk deploy prod

Now you should have the required roles created in both the dev and prod accounts.

Synthesizing the AWS CDK app

Take a look at the sample app to see what it’s comprised of. When you open the bin/sample-app.ts file, you can see that the AWS CDK app is comprised of two SampleApp stacks: one deployed to the dev account in the us-east-2 region, and the other deployed to the prod account in the us-east-1 region. To synthesize the application, complete the following steps:

  1. Edit the bin/sample-app.ts file (fill in the account numbers where indicated):
const dev = { account: 'REPLACE_WITH_DEV_ACCOUNT_ID', region: 'us-east-2' }
const prod = { account: 'REPLACE_WITH_PROD_ACCOUNT_ID', region: 'us-east-1' }

new SampleApp(app, 'devSampleApp', { env: dev });
new SampleApp(app, 'prodSampleApp', { env: prod });
  2. Build the AWS CDK app:

$ npm run build

  3. Using the AWS CLI credentials for the shared services account, try to synthesize the app:

$ cdk synth --app "npx ts-node bin/sample-app.ts"

You should receive an error message similar to the following code, which indicates that you don’t have credentials for the accounts specified:

[Error at /devSampleApp] Need to perform AWS calls for account 111111111111, but the current credentials are for 222222222222.
[Error at /prodSampleApp] Need to perform AWS calls for account 333333333333, but the current credentials are for 222222222222.
  4. Enter the code again, but this time tell it to use cdk-assume-role-credential-plugin:

$ cdk synth --app "npx ts-node bin/sample-app.ts" --plugin cdk-assume-role-credential-plugin

You should see the command succeed:

Successfully synthesized to /cdk.out
Supply a stack id (devSampleApp, prodSampleApp) to display its template.

Cleaning up

To avoid incurring future charges, delete the resources. Make sure you're in the cdk-assume-role-credential-plugin/sample-app/ directory.

  1. Using the AWS CLI credentials for the dev account, run cdk destroy to destroy the resources:

$ cdk destroy dev

  2. Using the AWS CLI credentials for the prod account, run cdk destroy to destroy the resources:

$ cdk destroy prod

 

Conclusion

You can simplify deploying stacks to multiple accounts by using the cdk-assume-role-credential-plugin credential process plugin.

This post provided a straightforward example of using the plugin while deploying an AWS CDK app manually.

Securing Amazon EKS workloads with Atlassian Bitbucket and Snyk

Post Syndicated from James Bland original https://aws.amazon.com/blogs/devops/securing-amazon-eks-workloads-with-atlassian-bitbucket-and-snyk/

This post was contributed by James Bland, Sr. Partner Solutions Architect, AWS, Jay Yeras, Head of Cloud and Cloud Native Solution Architecture, Snyk, and Venkat Subramanian, Group Product Manager, Bitbucket

 

One of our goals at Atlassian is to make the software delivery and development process easier. This post explains how you can set up a software delivery pipeline using Bitbucket Pipelines and Snyk, a tool that finds and fixes vulnerabilities in open-source dependencies and container images, to deploy secured applications on Amazon Elastic Kubernetes Service (Amazon EKS). By presenting important development information directly on pull requests inside the product, you can proactively diagnose potential issues, shorten test cycles, and improve code quality.

Atlassian Bitbucket Cloud is a Git-based code hosting and collaboration tool, built for professional teams. Bitbucket Pipelines is an integrated CI/CD service that allows you to automatically build, test, and deploy your code. With its best-in-class integrations with Jira, Bitbucket Pipelines allows different personas in an organization to collaborate and get visibility into the deployments. Bitbucket Pipes are small chunks of code that you can drop into your pipeline to make it easier to build powerful, automated CI/CD workflows.

In this post, we go over the following topics:

  • The importance of security as practices shift-left in DevOps
  • How embedding security into pull requests helps developer workflows
  • Deploying an application on Amazon EKS using Bitbucket Pipelines and Snyk

Shift-left on security

Security is usually an afterthought. Developers tend to focus on delivering software first and addressing security issues later when IT Security, Ops, or InfoSec teams discover them. However, research from the 2016 State of DevOps Report shows that you can achieve better outcomes by testing for security earlier in the process within a developer’s workflow. This concept is referred to as shift-left, where left indicates earlier in the process, as illustrated in the following diagram.

There are two main challenges in shifting security left to developers:

  • Developers aren’t security experts – They develop software in the most efficient way they know how, which can mean importing libraries to take care of lower-level details. And sometimes these libraries import other libraries within them, and so on. This makes it almost impossible for a developer, who is not a security expert, to keep track of security.
  • It’s time-consuming – There is no automation. Developers have to run tests to understand what’s happening and then figure out how to fix it. This slows them down and takes them away from their core job: building software.

Time spent on SDLC testing

Enabling security into a developer’s workflow

Code Insights is a new feature in Bitbucket that provides contextual information as part of the pull request interface. It surfaces information relevant to a pull request so issues related to code quality or security vulnerabilities can be viewed and acted upon during the code review process. The following screenshot shows Code Insights on the pull request sidebar.

 

Code insights

In the security space, we’ve partnered with Snyk, McAfee, Synopsys, and Anchore. When you use any of these integrations in your Bitbucket Pipeline, security vulnerabilities are automatically surfaced within your pull request, prompting developers to address them. By bringing the vulnerability information into the pull request interface before the actual deployment, it’s much easier for code reviewers to assess the impact of the vulnerability and provide actionable feedback.

When security issues are fixed as part of a developer’s workflow instead of post-deployment, it means fewer sev1 incidents, which saves developer time and IT resources down the line, and leads to a better user experience for your customers.

 

Securing your Atlassian Workflow with Snyk

To demonstrate how you can easily introduce a few steps to your workflow that improve your security posture, we take advantage of the new Snyk integration to Atlassian's Code Insights and other Snyk integrations to Bitbucket Cloud, Amazon Elastic Container Registry (Amazon ECR, for more information see Container security with Amazon Elastic Container Registry (ECR): integrate and test), and Amazon EKS (for more information see Kubernetes workload and image scanning). We reference sample code in a publicly available Bitbucket repository. In this repository, you can find resources such as a multi-stage build Dockerfile for a sample Java web application, a sample bitbucket-pipelines.yml configured to perform Snyk scans and push container images to Amazon ECR, and a reference Kubernetes manifest to deploy your application.

Prerequisites

You first need to have a few resources provisioned, such as an Amazon ECR repository and an Amazon EKS cluster. You can quickly create these using the AWS Command Line Interface (AWS CLI) by invoking the create-repository command and following the Getting started with eksctl guide. Next, make sure that you have enabled the new code review experience in your Bitbucket account.

To take a closer look at the bitbucket-pipelines.yml file, see the following code:

script:
 - IMAGE_NAME="petstore"
 - docker build -t $IMAGE_NAME .
 - pipe: snyk/snyk-scan:0.4.3
   variables:
     SNYK_TOKEN: $SNYK_TOKEN
     LANGUAGE: "docker"
     IMAGE_NAME: $IMAGE_NAME
     TARGET_FILE: "Dockerfile"
     CODE_INSIGHTS_RESULTS: "true"
     SEVERITY_THRESHOLD: "high"
     DONT_BREAK_BUILD: "true"
 - pipe: atlassian/aws-ecr-push-image:1.1.2
   variables:
     AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
     AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
     AWS_DEFAULT_REGION: "us-west-2"
     IMAGE_NAME: $IMAGE_NAME

In the preceding code, we invoke two Bitbucket Pipes to easily configure our pipeline and complete two critical tasks in just a few lines: scan our container image and push to our private registry. This saves time and allows for reusability across repositories while discovering innovative ways to automate our pipelines thanks to an extensive catalog of integrations.

Snyk pipe for Bitbucket Pipelines

In the following use case, we build a container image from the Dockerfile included in the Bitbucket repository and scan the image using the Snyk pipe. We also invoke the aws-ecr-push-image pipe to securely store our image in a private registry on Amazon ECR. When the pipeline runs, we see results as shown in the following screenshot.

Bitbucket pipeline

If we choose the available report, we can view the detailed results of our Snyk scan. In the following screenshot, we see detailed insights into the content of that report: three high, one medium, and five low-severity vulnerabilities were found in our container image.

container image report

 

Snyk scans of Bitbucket and Amazon ECR repositories

Because we use Snyk’s integration to Amazon ECR and Snyk’s Bitbucket Cloud integration to scan and monitor repositories, we can dive deeper into these results by linking our Dockerfile stored in our Bitbucket repository to the results of our last container image scan. By doing so, we can view recommendations for upgrading our base image, as in the following screenshot.

 

ECR scan recommendations

As a result, we can move past informational insights and onto actionable recommendations. In the preceding screenshot, our current image of jboss/wildfly:11.0.0.Final contains 76 vulnerabilities. We also see two recommendations: a major upgrade to jboss/wildfly:18.0.1.FINAL, which brings our total vulnerabilities down to 65, and an alternative upgrade, which is less desirable.

 

We can investigate further by drilling down into the report to view additional context on how a potential vulnerability was introduced, and also create a Jira issue to Atlassian Jira Software Cloud. The following screenshot shows a detailed report on the Issues tab.

 

Jira issue

We can also explore the Dependencies tab for a list of all the direct dependencies, transitive dependencies, and the vulnerabilities those may contain. See the following screenshot.

 

dependency vulnerabilities

Snyk scan Amazon EKS configuration

The final step in securing our workflow involves integrating Snyk with Kubernetes and deploying to Amazon EKS using Bitbucket Pipelines. Sample Kubernetes manifest files and a bitbucket-pipeline.yml are available for you to use in the accompanying Bitbucket repository for this post. Our bitbucket-pipeline.yml contains the following step:

script:
 - pipe: atlassian/aws-eks-kubectl-run:1.2.3
   variables:
     AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
     AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
     AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
     CLUSTER_NAME: "my-kube-cluster"
     KUBECTL_COMMAND: "apply"
     RESOURCE_PATH: "java-app.yaml"

In the preceding code, we call the aws-eks-kubectl-run pipe and pass in a few repository variables we previously defined (see the following screenshot).

 

repository variables

For more information about generating the necessary access keys in AWS Identity and Access Management (IAM) to make programmatic requests to the AWS API, see Creating an IAM User in Your AWS Account.

Now that we have provisioned the supporting infrastructure and invoked kubectl apply -f java-app.yaml to deploy our pods using our container images in Amazon ECR, we can monitor our project details and view some initial results. The following screenshot shows that our initial configuration isn’t secure.

secure config scan results

The reason for this is that we didn’t explicitly define a few parameters in our Kubernetes manifest under securityContext. For example, parameters such as readOnlyRootFilesystem, runAsNonRoot, allowPrivilegeEscalation, and capabilities either aren’t defined or are set incorrectly in our template. As a result, we see this in our findings with the FAIL flag. Hovering over these on the Snyk console provides specific insights on how to fix these, for example:

  • Run as non-root – Whether any containers in the workload have securityContext.runAsNonRoot set to false or unset
  • Read-only root file system – Whether any containers in the workload have securityContext.readOnlyRootFilesystem set to false or unset
  • Drop capabilities – Whether all capabilities are dropped and CAP_SYS_ADMIN isn’t added

 

To save you the trouble of researching this, we provide another sample template, java-app-snyk.yaml, which you can apply against your running pods. The difference in this template is that we have included the following lines to the manifest, which address the three failed findings in our report:

securityContext:
 allowPrivilegeEscalation: false
 readOnlyRootFilesystem: true
 runAsNonRoot: true
 capabilities:
   drop:
     - all

After a subsequent scan, we can validate our changes propagated successfully and our Kubernetes configuration is secure (see the following screenshot).

secure config scan passing results

Conclusion

This post demonstrated how to secure your entire flow proactively with Atlassian Bitbucket Cloud and Snyk. Seamless integrations to Bitbucket Cloud provide you with actionable insights at each step of your development process.

Get started for free with Bitbucket and Snyk and learn more about the Bitbucket-Snyk integration.

 

“The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.”

 

AWS Solutions Constructs – A Library of Architecture Patterns for the AWS CDK

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-solutions-constructs-a-library-of-architecture-patterns-for-the-aws-cdk/

Cloud applications are built using multiple components, such as virtual servers, containers, serverless functions, storage buckets, and databases. Being able to provision and configure these resources in a safe, repeatable way is incredibly important to automate your processes and let you focus on the unique parts of your implementation.

With the AWS Cloud Development Kit, you can leverage the expressive power of your favorite programming languages to model your applications. You can use high-level components called constructs, preconfigured with “sensible defaults” that you can customize, to quickly build a new application. The CDK provisions your resources using AWS CloudFormation to get all the benefits of managing your infrastructure as code. One of the reasons I like the CDK is that you can compose and share your own custom components as higher-level constructs.

As you can imagine, there are recurring patterns that can be useful to more than one customer. For this reason, today we are launching the AWS Solutions Constructs, an open source extension library for the CDK that provides well-architected patterns to help you build your unique solutions. CDK constructs mostly cover single services. AWS Solutions Constructs provide multi-service patterns that combine two or more CDK resources, and implement best practices such as logging and encryption.

Using AWS Solutions Constructs
To see the power of a pattern-based approach, let’s take a look at how that works when building a new application. As an example, I want to build an HTTP API to store data in an Amazon DynamoDB table. To keep the content of the table small, I can use DynamoDB Time to Live (TTL) to expire items after a few days. After the TTL expires, data is deleted from the table and sent, via DynamoDB Streams, to an AWS Lambda function to archive the expired data on Amazon Simple Storage Service (Amazon S3).

To build this application, I can use a few components:

  • An Amazon API Gateway endpoint for the API.
  • A DynamoDB table to store data.
  • A Lambda function to process the API requests, and store data in the DynamoDB table.
  • DynamoDB Streams to capture data changes.
  • A Lambda function processing data changes to archive the expired data.

Can I make it simpler? Looking at the available patterns in the AWS Solutions Constructs, I find two that can help me build my app:

  • aws-apigateway-lambda, a Construct that implements an API Gateway REST API connected to a Lambda function. As an example of the “sensible defaults” used by AWS Solutions Constructs, this pattern enables CloudWatch logging for the API Gateway.
  • aws-dynamodb-stream-lambda, a Construct implementing a DynamoDB table streaming data changes to a Lambda function with the least privileged permissions.

To build the final architecture, I simply connect those two Constructs together:

I am using TypeScript to define the CDK stack, and Node.js for the Lambda functions. Let’s start with the CDK stack:

 

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as apigw from '@aws-cdk/aws-apigateway';
import * as dynamodb from '@aws-cdk/aws-dynamodb';
import { ApiGatewayToLambda } from '@aws-solutions-constructs/aws-apigateway-lambda';
import { DynamoDBStreamToLambda } from '@aws-solutions-constructs/aws-dynamodb-stream-lambda';

export class DemoConstructsStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const apiGatewayToLambda = new ApiGatewayToLambda(this, 'ApiGatewayToLambda', {
      deployLambda: true,
      lambdaFunctionProps: {
        code: lambda.Code.fromAsset('lambda'),
        runtime: lambda.Runtime.NODEJS_12_X,
        handler: 'restApi.handler'
      },
      apiGatewayProps: {
        defaultMethodOptions: {
          authorizationType: apigw.AuthorizationType.NONE
        }
      }
    });

    const dynamoDBStreamToLambda = new DynamoDBStreamToLambda(this, 'DynamoDBStreamToLambda', {
      deployLambda: true,
      lambdaFunctionProps: {
        code: lambda.Code.fromAsset('lambda'),
        runtime: lambda.Runtime.NODEJS_12_X,
        handler: 'processStream.handler'
      },
      dynamoTableProps: {
        tableName: 'my-table',
        partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
        timeToLiveAttribute: 'ttl'
      }
    });

    const apiFunction = apiGatewayToLambda.lambdaFunction;
    const dynamoTable = dynamoDBStreamToLambda.dynamoTable;

    dynamoTable.grantReadWriteData(apiFunction);
    apiFunction.addEnvironment('TABLE_NAME', dynamoTable.tableName);
  }
}

At the beginning of the stack, I import the standard CDK constructs for the Lambda function, the API Gateway endpoint, and the DynamoDB table. Then, I add the two patterns from the AWS Solutions Constructs, ApiGatewayToLambda and DynamoDBStreamToLambda.

After declaring the two ApiGatewayToLambda and DynamoDBStreamToLambda constructs, I store the Lambda function, created by the ApiGatewayToLambda construct, and the DynamoDB table, created by the DynamoDBStreamToLambda construct, in two variables.

At the end of the stack, I “connect” the two patterns together by granting permissions to the Lambda function to read/write in the DynamoDB table, and add the name of the DynamoDB table to the environment of the Lambda function, so that it can be used in the function code to store data in the table.

The code of the two Lambda functions is in the lambda folder of the CDK application. I am using the Node.js 12 runtime.

The restApi.js function implements the API and writes data to the DynamoDB table. The URL path is used as the partition key, and all the query string parameters in the URL are stored as attributes. The TTL for the item is computed by adding a time window of 7 days to the current time.

const { DynamoDB } = require("aws-sdk");

const docClient = new DynamoDB.DocumentClient();

const TABLE_NAME = process.env.TABLE_NAME;
const TTL_WINDOW = 7 * 24 * 60 * 60; // 7 days expressed in seconds

exports.handler = async function (event) {

  const item = event.queryStringParameters;
  item.id = event.pathParameters.proxy;

  const now = new Date();
  item.ttl = Math.round(now.getTime() / 1000) + TTL_WINDOW;

  let statusCode = 204;

  try {
    // DocumentClient.put() rejects the promise on failure, so errors are
    // caught here rather than returned on the response object.
    await docClient.put({
      TableName: TABLE_NAME,
      Item: item
    }).promise();
  } catch (err) {
    console.error('request: ', JSON.stringify(event, undefined, 2));
    console.error('error: ', err);
    statusCode = 500;
  }

  return {
    statusCode: statusCode
  };
};

The processStream.js function processes data change capture records from the DynamoDB stream, looking for items deleted by TTL. The archive functionality is not implemented in this sample code.

exports.handler = async function (event) {
  event.Records.forEach((record) => {
    console.log('Stream record: ', JSON.stringify(record, null, 2));
    // userIdentity is only present on service-initiated changes, such as TTL deletes.
    if (record.userIdentity &&
      record.userIdentity.type == "Service" &&
      record.userIdentity.principalId == "dynamodb.amazonaws.com") {

      // Record deleted by DynamoDB Time to Live (TTL)

      // I can archive the record to S3, for example using Kinesis Data Firehose.
    }
  });
};
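One possible way to fill in that archive step is to write the expired item's old image directly to S3 instead of going through Kinesis Data Firehose. The following is only a sketch: it assumes a bucket name is passed in an ARCHIVE_BUCKET environment variable (which the sample stack does not currently set) and that the stream view type includes old images.

const { S3 } = require("aws-sdk");

const s3 = new S3();
const ARCHIVE_BUCKET = process.env.ARCHIVE_BUCKET; // assumed environment variable

async function archiveRecord(record) {
  // OldImage is only present when the stream view type includes old images.
  const item = record.dynamodb.OldImage;
  await s3.putObject({
    Bucket: ARCHIVE_BUCKET,
    Key: `expired/${item.id.S}.json`,
    Body: JSON.stringify(item)
  }).promise();
}

You would then call archiveRecord from the TTL branch above, switching the forEach to something await-friendly such as Promise.all over event.Records.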

Let’s see if this works! First, I need to install all dependencies. To simplify dependencies, each release of AWS Solutions Constructs is linked to the corresponding version of the CDK. In this case, I am using version 1.46.0 for both the CDK and the AWS Solutions Constructs patterns. The first three commands are installing plain CDK constructs. The last two commands are installing the AWS Solutions Constructs patterns I am using for this application.

npm install @aws-cdk/aws-lambda@1.46.0
npm install @aws-cdk/aws-apigateway@1.46.0
npm install @aws-cdk/aws-dynamodb@1.46.0
npm install @aws-solutions-constructs/aws-apigateway-lambda@1.46.0
npm install @aws-solutions-constructs/aws-dynamodb-stream-lambda@1.46.0

Now, I build the application and use the CDK to deploy the application.

npm run build
cdk deploy

Towards the end of the output of the cdk deploy command, a green light is telling me that the deployment of the stack is completed. Just next, in the Outputs, I find the endpoint of the API Gateway.

 ✅  DemoConstructsStack

Outputs:
DemoConstructsStack.ApiGatewayToLambdaLambdaRestApiEndpoint9800D4B5 = https://1a2c3c4d.execute-api.eu-west-1.amazonaws.com/prod/

I can now use curl to test the API:

curl "https://1a2c3c4d.execute-api.eu-west-1.amazonaws.com/prod/danilop?name=Danilo&amp;company=AWS"

Let’s have a look at the DynamoDB table:

The item is stored, and the TTL is set. After a week, the item will be deleted and sent via DynamoDB Streams to the processStream.js function.

After I complete my testing, I use the CDK again to quickly delete all resources created for this application:

cdk destroy

Available Now
The AWS Solutions Constructs are available now for TypeScript and Python. The AWS Solutions Builders team is working to make these constructs also available when using Java and C# with the CDK; stay tuned. There is no cost for using the AWS Solutions Constructs or the CDK; you only pay for the resources created when deploying the stack.

In this first release, 25 patterns are included, covering lots of different use cases. Which new patterns and features should we focus on next? Give us your feedback in the open source project repository!

Danilo

AWS CodeArtifact and your package management flow – Best Practices for Integration

Post Syndicated from John Standish original https://aws.amazon.com/blogs/devops/integrating-aws-codeartifact-package-mgmt-flow/

You often use artifact repositories to store and share software or deployment packages. Centralized artifacts enable teams to operate independently and share versioned software artifacts across your organization. Sharing versioned artifacts across organizations increases code reuse and reduces delivery time. Having a central artifact store enables tighter artifact governance and improves security visibility. This post uses some of these patterns to show you how to integrate AWS CodeArtifact in an effective, cost-controlled, and efficient manner.

AWS CodeArtifact Diagram

AWS CodeArtifact Service Usage

AWS CodeArtifact concepts

AWS CodeArtifact uses the following elements:

  • Asset – An individual file stored in AWS CodeArtifact that is associated with a package version, such as an npm .tgz file or Maven POM and JAR files
  • Package – A package is a bundle of software and the metadata that is required to resolve dependencies and install the software. AWS CodeArtifact supports the npm, PyPI, and Maven package formats.
  • Repository – A CodeArtifact repository contains a set of package versions, each of which maps to a set of assets. Repositories are polyglot—a single repository can contain packages of any supported type. Each repository exposes endpoints for fetching and publishing packages using tools like the npm CLI, the Maven CLI (mvn), and pip.
  • Domain – Repositories are aggregated into a higher-level entity known as a domain. The domain allows organizational policy to be applied across multiple repositories. A domain deduplicates storage of the repositories' packages.

Creating a domain based on organizational ownership

When you create a domain in CodeArtifact, it's important to organize the domain by ownership within the organization. An example would be a company being the domain and its products being repositories. Domains allow you to apply organizational policies across multiple repositories. Generally, we recommend creating one domain per company. In some cases, it may also be beneficial to have a sandbox domain where prototype repositories reside. In a sandbox domain, teams are at liberty to create their own repositories and experiment as needed, without affecting product deliverable assets. Keep in mind that using a separate sandbox domain duplicates packages and isolates repositories, because you can't copy packages between domains, and it increases costs, because package deduplication is handled at the domain level. Organizing packages by domain ownership increases the cache hits on a package within the domain and reduces the cost of each subsequent package fetch request.

Whenever a package is fetched from a repository, the asset is cached in your CodeArtifact domain to minimize the cost of subsequent downstream requests. A given asset only needs to be stored once in a domain, even if it’s available in two—or two thousand—repositories. That means you only pay for storage once. Copying a package version with the CopyPackageVersions API is only possible between repositories within the same CodeArtifact domain.

You can create a domain for your organization by calling create-domain in the AWS Command Line Interface (AWS CLI), AWS SDK, or on the CodeArtifact console. See the following code:

aws codeartifact create-domain --domain "my-org"

After creating the domain you will see the domains listed in the Domains section on the CodeArtifact console.

AWS CodeArtifact domains per governing organization

Organizing packages by domain ownership

Using a shared repository

A shared repository is applicable when a team considers a component useful to the rest of the organization and the component isn't an experimental or personal project meant only for narrow distribution within the organization. Examples of shared components are open-source public repositories (npm, PyPI, and Maven), authentication, logging, or helper libraries. Shared libraries aren't related to product libraries; for instance, a service contract library shouldn't live in a shared repository. The shared repository should be marked read-only to all users except for the publishing IAM role. At Amazon, we have found that many teams want to consume common packages as part of their application build, and don't need to publish any packages themselves. Those teams don't need their own repository and pull packages from the shared repository. Overall, approximately 80% of packages are downloaded from the shared repository, and 20% from team- or project-specific repositories.

You can create a shared repository by calling the create-repository command and setting a resource policy that makes the repository read-only.

Here is how you create a repository with the AWS CLI using the create-repository command. See the following code:

aws codeartifact create-repository --domain "my-org" \
--domain-owner "account-id" --repository "my-shared-repo-name" \
--description "My new repository"

Next you make the repository read-only by setting a resource policy. See the following code:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
		"codeartifact:DescribePackageVersion",
                "codeartifact:DescribeRepository",
                "codeartifact:GetPackageVersionReadme",
                "codeartifact:GetRepositoryEndpoint",
                "codeartifact:ListPackages",
                "codeartifact:ListPackageVersions",
                "codeartifact:ListPackageVersionAssets",
                "codeartifact:ListPackageVersionDependencies",
                "codeartifact:ReadFromRepository"
            ],
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:root"
            },
            "Resource": "*"
        }
    ]
}

Attach the resource policy to the repository by calling the put-repository-permissions-policy command. See the following code:

aws codeartifact put-repository-permissions-policy --domain "my-org" \
--domain-owner "account-id" --repository "my-shared-repo-name" \
--policy-document file:///PATH/TO/policy.json

When you have created the repository, you will see it listed in the Repositories section on the CodeArtifact console.

A list of shared repositories in AWS CodeArtifact

Shared repositories in AWS CodeArtifact

External repository connections

CodeArtifact enables you to set external repository connections and replicate packages within CodeArtifact. An external connection reduces the downstream dependency on the remote external repository. When you request a package from a CodeArtifact repository that isn't already present in the repository, the package can be fetched from the external connection. This makes it possible to consume the open-source dependencies used by your application. Using an external connection reduces interruption in your development process for external package dependencies; for example, if a package is removed from a public repository, you still have a copy of the package stored in CodeArtifact. You should have a one-to-one mapping with external repositories, rather than multiple CodeArtifact repositories pointing to the same public repository. Each asset that CodeArtifact imports into your repository from a public repository is billed as a single request, and each connection must reconcile and fetch the package before the response is returned. By having a one-to-one mapping, you increase cache hits, reduce the time to download an application dependency from CodeArtifact, and reduce the number of external package resolution requests. Associating an external repository connection with your repository is done using the associate-external-connection command. See the following code:

aws codeartifact associate-external-connection \
--domain "my-org" --domain-owner "account-id" \
--repository "my-external-repo" --external-connection public:npmjs

Once you have associated an external connection with your repository, you’ll see the external connection visible in the Repositories section detail. In this example we’ve connected the repository to the external npmjs repository.

External connection to npmjs with AWS CodeArtifact repositories

External connection to npmjs for an AWS CodeArtifact repository

Team and product repositories

When working in distributed teams, you often align repositories to product or service ownership. Teams working on their own repository can update it as needed. An example would be creating a private package that your team only uses internally.

See the following code:

aws codeartifact create-repository --domain my-org \
--domain-owner account-id --repository my-team-repo \
--description "My new team repository"
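
Optionally, if you want the team repository to resolve missing packages through the shared repository, you can configure the shared repository as an upstream. The following is a sketch using the repository names from the earlier examples:

# Add the shared repository as an upstream of the team repository
aws codeartifact update-repository --domain "my-org" \
--domain-owner "account-id" --repository "my-team-repo" \
--upstreams repositoryName=my-shared-repo-name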

As teams develop against the package, they need to publish their changes to the repository. As part of your development pipeline, you publish the package to the repository. See the following code for an example:

# Log in to CodeArtifact
aws codeartifact login --tool npm \
--domain "my-org" --domain-owner "account-id" \
--repository "my-team-repo"

# Run build commands here
...

# Set $VERSION from your build system
npm version $VERSION

# Publish to CodeArtifact
npm publish

After testing the feature, if you find that it's usable across your organization, you can copy the package into your shared repository. See the following code:

# Promoting to a shared repo
aws codeartifact copy-package-versions --domain "my-org" \
--domain-owner "account-id" --source-repository "my-team-repo" \
--destination-repository "my-shared-repo" \
--package my-package --format npm \
--versions '["6.0.2"]'

Once you’ve created your shared repository you will see the repositories updated as shown here.

Team and product repositories in AWS CodeArtifact

Team and product repositories

Sharing repositories across accounts

Often teams or workloads have separate accounts within an organization. This is a recommended practice because it clearly defines operational boundaries and domain of ownership and establishes security boundaries. If your organization uses a multi-account strategy, you can share repositories across accounts using CodeArtifact resource policy. Teams can develop in their own account and publish to a CodeArtifact repository controlled in a shared account.

Here you see a list of repositories, which includes both a shared and team repository.

Cross account sharing of AWS CodeArtifact repositories

Cross account sharing of AWS CodeArtifact repositories

Using Amazon CloudWatch Events when a package is pushed

When a package is pushed into a repository, the change can affect software dependencies, teams, or process dependencies. When an artifact is pushed to CodeArtifact, an Amazon CloudWatch Events event is emitted, which you can use to trigger additional functionality. You can react to these events by subscribing to a CodeArtifact event in Amazon EventBridge. Some examples of reactions to a change are: checking dependencies, deploying dependent services, notifying teams or services of a change, or building the dependencies.

You can also use EventBridge to start a pipeline in AWS CodePipeline, notify an Amazon Simple Notification Service (Amazon SNS) topic, and have that call AWS Chatbot. For more information, see CodeArtifact event format and example. If you are looking to integrate AWS Chatbot into your delivery flow, see Receive AWS Developer Tools Notifications over Slack using AWS Chatbot.
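
If you want to wire this up from the CLI, a minimal sketch looks like the following. The rule name, SNS topic ARN, and the detail filter values are examples; the source and detail-type follow the CodeArtifact event format referenced above.

# Create a rule that matches CodeArtifact package version events for the shared repository
aws events put-rule --name "codeartifact-package-events" \
--event-pattern '{"source":["aws.codeartifact"],"detail-type":["CodeArtifact Package Version State Change"],"detail":{"domainName":["my-org"],"repositoryName":["my-shared-repo-name"]}}'

# Send matching events to an SNS topic (replace the ARN with your own)
aws events put-targets --rule "codeartifact-package-events" \
--targets "Id"="1","Arn"="arn:aws:sns:us-west-2:444455556666:my-package-updates-topic"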

Deploying code in a hybrid environment

You can enable seamless software deployment into AWS and on-premises environments by integrating CodeArtifact with software build and deployment services. You can use CodeArtifact with your existing development pipeline tooling such as npm, pip, and Maven. With native support for these package managers, you can access CodeArtifact wherever you operate today.

First, log in to CodeArtifact, build your code, and finally publish using npm publish with the following code:

# Log in to CodeArtifact 
aws codeartifact login --tool npm \
--domain "my-org" --domain-owner "account-id" \
--repository "my-team-repo"

# Run build commands here 
... 

# Set $VERSION from your build system 
npm version $VERSION 

# Publish to CodeArtifact 
npm publish

Cleaning Up

When you're ready to clean up the repositories and domains you've created, you need to remove them in a specific order. Be aware that deleting a repository is a destructive action that removes any stored packages. To delete the domain and repositories created in the previous sections of this blog, you use the delete-repository and delete-domain commands.

You will need to remove the domain and repository in the following order:

  1. Remove any repositories in a domain
  2. Remove the domain

To delete the repository and domain, see the following code:

# Delete the repository
aws codeartifact delete-repository --domain "my-org" --domain-owner "account-id" --repository "my-team-repo"

# Delete the domain
aws codeartifact delete-domain --domain "my-org" --domain-owner "account-id"

Conclusion

This post covered how to integrate CodeArtifact into your delivery flow and use CodeArtifact effectively. A shared repository approach aids in creating reusable components across your organization. Using team repositories and promoting to a consumable repository allows your teams to iterate independently. For more information, see Getting started with CodeArtifact.

About the Author

John Standish

John Standish is a Solutions Architect at AWS and spent over 13 years as a Microsoft .Net developer. Outside of work, he enjoys playing video games, cooking, and watching hockey.

Yogesh Chaturvedi

Yogesh Chaturvedi is a Solutions Architect at AWS and has over 20 years of software development and architecture experience.

 

Introducing the new Serverless LAMP stack

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-new-serverless-lamp-stack/

This is the first in a series of posts for PHP developers. The series will explain how to use serverless technologies with PHP. It covers the available tools, frameworks and strategies to build serverless applications, and why now is the right time to start.

In future posts, I demonstrate how to use AWS Lambda for web applications built with PHP frameworks such as Laravel and Symfony. I show how to move from using Lambda as a replacement for web hosting functionality to a decoupled, event-driven approach. I cover how to combine multiple Lambda functions of minimal scope with other serverless services to create performant, scalable microservices.

In this post, you learn how to use PHP with Lambda via the custom runtime API. Visit this GitHub repository for the sample code.

The Serverless LAMP stack

The Serverless LAMP stack

The challenges with traditional PHP applications

Scalability is an inherent challenge with the traditional LAMP stack. A scalable application is one that can handle highly variable levels of traffic. PHP applications are often scaled horizontally, by adding more web servers as needed. This is managed via a load balancer, which directs requests to various web servers. Each additional server brings additional overhead with networking, administration, storage capacity, backup and restore systems, and an update to asset management inventories. Additionally, each horizontally scaled server runs independently. This can result in configuration synchronization challenges.

Horizontal scaling with traditional LAMP stack applications.

Horizontal scaling with traditional LAMP stack applications.

New storage challenges arise as each server has its own disks and filesystem, often requiring developers to add a mechanism to handle user sessions. Using serverless technologies, scalability is managed for the developer.

If traffic surges, the services scale to meet the demand without having to deploy additional servers. This allows applications to quickly transition from prototype to production.

The serverless LAMP architecture

A traditional web application can be split into two components:

  • The static assets (media files, css, js)
  • The dynamic application (PHP, MySQL)

A serverless approach to serving these two components is illustrated below:

The serverless LAMP stack

The serverless LAMP stack

All requests for dynamic content (anything excluding /assets/*) are forwarded to Amazon API Gateway. This is a fully managed service for creating, publishing, and securing APIs at any scale. It acts as the “front door” to the PHP application, routing requests downstream to Lambda functions. The Lambda functions contain the business logic and interaction with the MySQL database. You can pass the input to the Lambda function as any combination of request headers, path variables, query string parameters, and body.

Notable AWS features for PHP developers

Amazon Aurora Serverless

During re:Invent 2017, AWS announced Aurora Serverless, an on-demand serverless relational database with a pay-per-use cost model. This manages the responsibility of relational database provisioning and scaling for the developer.

Lambda Layers and custom runtime API.

At re:Invent 2018, AWS announced two new Lambda features. These enable developers to build custom runtimes, and share and manage common code between functions.

Improved VPC networking for Lambda functions.

In September 2019, AWS announced significant improvements in cold starts for Lambda functions inside a VPC. This results in faster function startup performance and more efficient usage of elastic network interfaces, reducing VPC cold starts.

Amazon RDS Proxy

At re:Invent 2019, AWS announced the launch of a new service called Amazon RDS Proxy. A fully managed database proxy that sits between your application and your relational database. It efficiently pools and shares database connections to improve the scalability of your application.

 

Significant moments in the serverless LAMP stack timeline

Significant moments in the serverless LAMP stack timeline

Combining these services, it is now possible to build secure, performant, and scalable serverless applications with PHP and relational databases.

Custom runtime API

The custom runtime API is a simple interface to enable Lambda function execution in any programming language or a specific language version. The custom runtime API requires an executable text file called a bootstrap. The bootstrap file is responsible for the communication between your code and the Lambda environment.

To create a custom runtime, you must first compile the required version of PHP in an Amazon Linux environment compatible with the Lambda execution environment. To do this, follow these step-by-step instructions.

The bootstrap file

The file below is an example of a basic PHP bootstrap file. This example is for explanation purposes as there is no error handling or abstractions taking place. To ensure that you handle exceptions appropriately, consult the runtime API documentation as you build production custom runtimes.

#!/opt/bin/php
<?PHP

// This invokes Composer's autoloader so that we'll be able to use Guzzle and any other 3rd party libraries we need.
require __DIR__ . '/vendor/autoload.php';

// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
    // Ask the runtime API for a request to handle.
    $request = getNextRequest();

    // Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
    $handlerFunction = array_slice(explode('.', $_ENV['_HANDLER']), -1)[0];
    require_once $_ENV['LAMBDA_TASK_ROOT'] . '/src/' . $handlerFunction . '.php';

    // Execute the desired function and obtain the response.
    $response = $handlerFunction($request['payload']);

    // Submit the response back to the runtime API.
    sendResponse($request['invocationId'], $response);
} while (true);

function getNextRequest()
{
    $client = new \GuzzleHttp\Client();
    $response = $client->get('http://' . $_ENV['AWS_LAMBDA_RUNTIME_API'] . '/2018-06-01/runtime/invocation/next');

    return [
      'invocationId' => $response->getHeader('Lambda-Runtime-Aws-Request-Id')[0],
      'payload' => json_decode((string) $response->getBody(), true)
    ];
}

function sendResponse($invocationId, $response)
{
    $client = new \GuzzleHttp\Client();
    $client->post(
    'http://' . $_ENV['AWS_LAMBDA_RUNTIME_API'] . '/2018-06-01/runtime/invocation/' . $invocationId . '/response',
       ['body' => $response]
    );
}

The #!/opt/bin/php declaration instructs the program loader to use the PHP binary compiled for Amazon Linux.

The bootstrap file performs the following tasks, in an operational loop:

  1. Obtains the next request.
  2. Executes the code to handle the request.
  3. Returns a response.

Follow these steps to package the bootstrap and compiled PHP binary together into a `runtime.zip`.
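
The linked steps cover the specifics. As a rough sketch, assuming your compiled PHP binary sits at bin/php next to the bootstrap file, the packaging looks like this (Lambda extracts the layer into /opt, which is why the bootstrap's shebang points at /opt/bin/php):

# Make the bootstrap executable and zip it together with the PHP binary
chmod +x bootstrap
zip -r runtime.zip bootstrap bin/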

Libraries and dependencies

The runtime bootstrap uses an HTTP-based local interface. This retrieves the event payload for each Lambda function invocation and returns the response from the function. This bootstrap file uses Guzzle, a popular PHP HTTP client, to make requests to the custom runtime API. The Guzzle package is installed using the Composer package manager. Installing packages in this way creates a mechanism for incorporating additional libraries and dependencies as the application evolves.

Follow these steps to create and package the runtime dependencies into a `vendors.zip` binary.
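
Again, the linked steps have the details. A minimal sketch, assuming Composer is available and the vendor directory is packaged at the root of the archive so it resolves next to the bootstrap under /opt:

# Install Guzzle with Composer and package the resulting vendor directory
composer require guzzlehttp/guzzle
zip -r vendors.zip vendor/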

Lambda Layers provides a mechanism to centrally manage code and data that is shared across multiple functions. When a Lambda function is configured with a layer, the layer's contents are put into the /opt directory of the execution environment. You can include a custom runtime in your function's deployment package, or as a layer. Lambda executes the bootstrap file in your deployment package, if available. If not, Lambda looks for a runtime in the function's layers. Several open-source PHP runtime layers are available today.

The following steps show how to publish the `runtime.zip` and `vendors.zip` binaries created earlier into Lambda layers and use them to build a Lambda function with a PHP runtime:

  1.  Use the AWS Command Line Interface (CLI) to publish layers from the binaries created earlier
    aws lambda publish-layer-version \
        --layer-name PHP-example-runtime \
        --zip-file fileb://runtime.zip \
        --region eu-west-1

    aws lambda publish-layer-version \
        --layer-name PHP-example-vendor \
        --zip-file fileb://vendors.zip \
        --region eu-west-1

  2. Make note of each command’s LayerVersionArn output value (for example arn:aws:lambda:eu-west-1:XXXXXXXXXXXX:layer:PHP-example-runtime:1), which you’ll need for the next steps.

Creating a PHP Lambda function

You can create a Lambda function via the AWS CLI, the AWS Serverless Application Model (SAM), or directly in the AWS Management Console. To do this using the console:

  1. Navigate to the Lambda section  of the AWS Management Console and choose Create function.
  2. Enter “PHPHello” into the Function name field, and choose Provide your own bootstrap in the Runtime field. Then choose Create function.
  3. Right click on bootstrap.sample and choose Delete.
  4. Choose the layers icon and choose Add a layer.
  5. Choose Provide a layer version ARN, then copy and paste the ARN of the custom runtime layer from in step 1 into the Layer version ARN field.
  6. Repeat steps 4 and 5 for the vendor layer ARN.
  7. In the Function Code section, create a new folder called src and inside it create a new file called index.php.
  8. Paste the following code into index.php:
    <?php
    //index function
    function index($data)
    {
        return "Hello, " . $data['name'];
    }
    
  9. Insert “index” into the Handler input field. This instructs Lambda to run the index function when invoked.
  10. Choose Save at the top right of the page.
  11. Choose Test at the top right of the page, and enter “PHPTest” into the Event name field. Enter the following into the event payload field and then choose Create: { "name": "world" }
  12. Choose Test and Select the dropdown next to the execution result heading.

You can see that the event payload “name” value is used to return “Hello, world”. This is taken from the $data['name'] parameter provided to the Lambda function. The log output provides details about the actual duration, billed duration, and amount of memory used to execute the code.
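
If you prefer to script the function creation instead of using the console, the following AWS CLI sketch is roughly equivalent. It assumes you've zipped src/index.php into a function.zip, already have a basic Lambda execution role, and substitute the two layer ARNs you noted in step 2:

# Package the handler code
zip -r function.zip src/index.php

# Create the function with the custom runtime and the two layers
aws lambda create-function --function-name "PHPHello" \
--runtime provided --handler index \
--role "arn:aws:iam::XXXXXXXXXXXX:role/lambda-execution-role" \
--zip-file fileb://function.zip \
--layers "arn:aws:lambda:eu-west-1:XXXXXXXXXXXX:layer:PHP-example-runtime:1" \
         "arn:aws:lambda:eu-west-1:XXXXXXXXXXXX:layer:PHP-example-vendor:1"

# Invoke it (AWS CLI v2 needs --cli-binary-format raw-in-base64-out for JSON payloads)
aws lambda invoke --function-name "PHPHello" \
--payload '{"name":"world"}' response.json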

Conclusion

This post explains how to create a Lambda function with a PHP runtime using Lambda Layers and the custom runtime API. It introduces the architecture for a serverless LAMP stack that scales with application traffic.

Lambda allows for functions with mixed runtimes to interact with each other. Now, PHP developers can join other serverless development teams focusing on shipping code. With serverless technologies, you no longer have to think about restarting web hosts, scaling, or hosting.

Start building your own custom runtime for Lambda.

Building a CI/CD pipeline for multi-region deployment with AWS CodePipeline

Post Syndicated from Akash Kumar original https://aws.amazon.com/blogs/devops/building-a-ci-cd-pipeline-for-multi-region-deployment-with-aws-codepipeline/

This post discusses the benefits of and how to build an AWS CI/CD pipeline in AWS CodePipeline for multi-region deployment. The CI/CD pipeline triggers on application code changes pushed to your AWS CodeCommit repository. This automatically feeds into AWS CodeBuild for static and security analysis of the CloudFormation template. Another CodeBuild instance builds the application to generate an AMI image as output. AWS Lambda then copies the AMI image to other Regions. Finally, AWS CloudFormation cross-region actions are triggered and provision the instance into target Regions based on AMI image.

The solution is based on using a single pipeline with cross-region actions, which helps in provisioning resources in the current Region and other Regions. This solution also helps manage the complete CI/CD pipeline at one place in one Region and helps as a single point for monitoring and deployment changes. This incurs less cost because a single pipeline can deploy the application into multiple Regions.

As a security best practice, the solution also incorporates static and security analysis using cfn-lint and cfn-nag. You use these tools to scan CloudFormation templates for security vulnerabilities.

The following diagram illustrates the solution architecture.

Multi region AWS CodePipeline architecture

Multi region AWS CodePipeline architecture

Prerequisites

Before getting started, you must complete the following prerequisites:

  • Create a repository in CodeCommit and provide access to your user
  • Copy the sample source code from GitHub under your repository
  • Create an Amazon S3 bucket in the current Region and each target Region for your artifact store

Creating a pipeline with AWS CloudFormation

You use a CloudFormation template for your CI/CD pipeline, which can perform the following actions:

  1. Use CodeCommit repository as source code repository
  2. Static code analysis on the CloudFormation template to check against the resource specification and block provisioning if this check fails
  3. Security code analysis on the CloudFormation template to check against secure infrastructure rules and block provisioning if this check fails
  4. Compilation and unit test of application code to generate an AMI image
  5. Copy the AMI image into target Regions for deployment
  6. Deploy into multiple Regions using the CloudFormation template; for example, us-east-1, us-east-2, and ap-south-1

You use a sample web application to run through your pipeline, which requires Java and Apache Maven for compilation and testing. Additionally, it uses Tomcat 8 for deployment.

The following table summarizes the resources that the CloudFormation template creates.

Resource Name | Type | Objective
CloudFormationServiceRole | AWS::IAM::Role | Service role for AWS CloudFormation
CodeBuildServiceRole | AWS::IAM::Role | Service role for CodeBuild
CodePipelineServiceRole | AWS::IAM::Role | Service role for CodePipeline
LambdaServiceRole | AWS::IAM::Role | Service role for Lambda function
SecurityCodeAnalysisServiceRole | AWS::IAM::Role | Service role for security analysis of provisioning CloudFormation template
StaticCodeAnalysisServiceRole | AWS::IAM::Role | Service role for static analysis of provisioning CloudFormation template
StaticCodeAnalysisProject | AWS::CodeBuild::Project | CodeBuild for static analysis of provisioning CloudFormation template
SecurityCodeAnalysisProject | AWS::CodeBuild::Project | CodeBuild for security analysis of provisioning CloudFormation template
CodeBuildProject | AWS::CodeBuild::Project | CodeBuild for compilation, testing, and AMI creation
CopyImage | AWS::Lambda::Function | Python Lambda function for copying AMI images into other Regions
AppPipeline | AWS::CodePipeline::Pipeline | CodePipeline for CI/CD

To start creating your pipeline, complete the following steps:

  • Launch the CloudFormation stack with the following link:
Launch button for CloudFormation

Launch button for CloudFormation

  • Choose Next.
  • For Specify details, provide the following values:
Parameter | Description
Stack name | Name of your stack
OtherRegion1 | Input the target Region 1 (other than current Region) for deployment
OtherRegion2 | Input the target Region 2 (other than current Region) for deployment
RepositoryBranch | Branch name of repository
RepositoryName | Repository name of the project
S3BucketName | Input the S3 bucket name for artifact store
S3BucketNameForOtherRegion1 | Create a bucket in target Region 1 and specify the name for artifact store
S3BucketNameForOtherRegion2 | Create a bucket in target Region 2 and specify the name for artifact store

Choose Next.

  • On the Review page, select I acknowledge that this template might cause AWS CloudFormation to create IAM resources.
  • Choose Create.
  • Wait for the CloudFormation stack status to change to CREATE_COMPLETE (this takes approximately 5–7 minutes).

When the stack is complete, your pipeline should be ready and running in the current Region.

  • To validate the pipeline, check the images and EC2 instances running in the target Regions, and also refer to the AWS CodePipeline execution summary shown below.
AWS CodePipeline Execution Summary

AWS CodePipeline Execution Summary

We will walk you through the following steps for creating a multi-region deployment pipeline:

1. Using CodeCommit as your source code repository

The deployment workflow starts by placing the application code on the CodeCommit repository. When you add or update the source code in CodeCommit, the action generates a CloudWatch event, which triggers the pipeline to run.

2. Static code analysis of CloudFormation template to provision AWS resources

Historically, AWS CloudFormation linting was limited to the ValidateTemplate action in the service API. This action tells you if your template is well-formed JSON or YAML, but doesn’t help validate the actual resources you’ve defined.

You can use a linter such as the cfn-lint tool for static code analysis to improve your AWS CloudFormation development cycle. The tool validates the provisioning CloudFormation template properties and their values (mappings, joins, splits, conditions, and nesting those functions inside each other) against the resource specification. This can cover the most common of the underlying service constraints and help encode some best practices.

The following rules cover underlying service constraints:

  • E2530 – Checks that Lambda functions have correctly configured memory sizes
  • E3025 – Checks that your RDS instances use correct instance types for the database engine
  • W2001 – Checks that each parameter is used at least once

You can also add this step as a pre-commit hook for your Git repository if you are using CodeCommit or GitHub.

You provision a CodeBuild project for static code analysis as the first step in CodePipeline after source. This helps in early detection of any linter issues.
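
As a sketch of what that CodeBuild step runs (the template file name here is an assumption; the actual buildspec in the sample repository may differ):

# Install cfn-lint and validate the provisioning template against the resource specification
pip install cfn-lint
cfn-lint provisioning-template.yaml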

3. Security code analysis of CloudFormation template to provision AWS resources

You can use Stelligent’s cfn_nag tool to perform additional validation of your template resources for security. The cfn-nag tool looks for patterns in CloudFormation templates that may indicate insecure infrastructure provisioning and validates against AWS best practices. For example:

  • IAM rules that are too permissive (wildcards)
  • Security group rules that are too permissive (wildcards)
  • Access logs that aren’t enabled
  • Encryption that isn’t enabled
  • Password literals

You provision a CodeBuild project for security code analysis as the second step in CodePipeline. This helps detect any insecure infrastructure provisioning issues.
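
Similarly, a sketch of the commands behind that step (cfn_nag is distributed as a Ruby gem; the template file name is an assumption):

# Install cfn_nag and scan the provisioning template for insecure patterns
gem install cfn-nag
cfn_nag_scan --input-path provisioning-template.yaml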

4. Compiling and testing application code and generating an AMI image

Because you use a Java-based application for this walkthrough, you use Amazon Corretto as your JVM. Corretto is a no-cost, multi-platform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto comes with long-term support that includes performance enhancements and security fixes.

You also use Apache Maven as a build automation tool to build the sample application, and the HashiCorp Packer tool to generate an AMI image for the application.

You provision a CodeBuild project for compilation, unit testing, AMI generation, and storing the AMI ImageId in the Parameter Store, which the CloudFormation template uses as the next step of the pipeline.
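
The cleanup section later refers to a Parameter Store entry named AMI_VERSION. As a sketch, the buildspec can store the Packer-built image ID like this (how $AMI_ID is captured from the Packer output is an assumption):

# Store the newly built AMI ID so downstream stages can look it up
aws ssm put-parameter --name "AMI_VERSION" --type "String" \
--value "$AMI_ID" --overwrite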

5. Copying the AMI image into target Regions

You use a Lambda function to copy the AMI image into target Regions so the CloudFormation template can use it to provision instances into that Region as the next step of the pipeline. It also writes the target Region AMI ImageId into the target Region’s Parameter Store.
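
The pipeline performs this step with a Python Lambda function; for reference, the equivalent operations from the AWS CLI look roughly like this (Region names and IDs are placeholders):

# Copy the AMI from the pipeline's Region into a target Region
aws ec2 copy-image --region us-east-2 --source-region us-east-1 \
--source-image-id "ami-0123456789abcdef0" --name "sample-app-ami"

# Record the copied image ID in the target Region's Parameter Store
aws ssm put-parameter --region us-east-2 --name "AMI_VERSION" \
--type "String" --value "<copied image ID>" --overwrite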

6. Deploying into multiple Regions with the CloudFormation template

You use the CloudFormation template as a cross-region action to provision AWS resources into a target Region. CloudFormation uses Parameter Store’s ImageId as reference and provisions the instances into the target Region.

Cleaning up

To avoid additional charges, you should delete the following AWS resources after you validate the pipeline:

  • The cross-region CloudFormation stack in the target and current Regions
  • The main CloudFormation stack in the current Region
  • The AMI you created in the target and current Regions
  • The Parameter Store AMI_VERSION in the target and current Regions

Conclusion

You have now created a multi-region deployment pipeline in CodePipeline without having to worry about the mechanics of creating and copying AMI images across Regions. CodePipeline abstracts the creating and copying of the images in the background in each Region. You can now upload new source code changes to the CodeCommit repository in the primary Region, and changes deploy automatically to other Regions. Cross-region actions are very powerful and are not limited to deploy actions. You can also use them with build and test actions.

Deploying a serverless application using AWS CDK

Post Syndicated from Georges Leschener original https://aws.amazon.com/blogs/devops/deploying-a-serverless-application-using-aws-cdk/

There are multiple ways to deploy API endpoints, such as this example, in which you could use an application running on Amazon EC2 to demonstrate how to integrate Amazon ElastiCache with Amazon DocumentDB (with MongoDB compatibility). While the approach in that example helps achieve great performance and reliability through elasticity and the ability to scale the number of EC2 instances up or down to accommodate the load on the application, there is still some operational overhead: you have to manage the EC2 instances yourself. One way of addressing this operational overhead and its related costs is to transform the application into a serverless architecture.

The example in this blog post uses an application with a similar use case, built on a serverless architecture that showcases some of the tools customers use when transitioning from lift-and-shift to building cloud-native applications. It uses Amazon API Gateway to provide the REST API endpoint connected to an AWS Lambda function that provides the business logic to read and write from an Amazon Aurora Serverless database. It also showcases the deployment of most of the infrastructure with the AWS Cloud Development Kit (AWS CDK). By moving your applications to a cloud-native architecture like the example showcased in this blog post, you can realize a number of benefits, including:

  • Fast and clean deployment of your application, thereby achieving faster time to market
  • Reduced operational costs through serverless and managed services

Architecture Diagram

At the end of this blog, you have an AWS Cloud9 instance environment containing a CDK project which deploys an API Gateway and Lambda function. This Lambda function leverages a secret stored in your AWS Secrets Manager to read and write from your Aurora Serverless database through the data API, as shown in the following diagram.

 

Architecture diagram for deploying a serverless application using AWS CDK

The above architecture diagram showcases the resources to be deployed in your AWS account.

Through the blog post you will be creating the following resources:

  1. Deploy an Amazon Aurora Serverless database cluster
  2. Secure the cluster credentials in AWS Secrets Manager
  3. Create and populate your database in the AWS Console
  4. Deploy an AWS Cloud9 instance used as a development environment
  5. Initialize and configure an AWS Cloud Development Kit project including the definition of your Amazon API Gateway endpoint and AWS Lambda function
  6. Deploy an AWS CloudFormation template through the AWS Cloud Development Kit

Prerequisites

In order to deploy the CDK application, there are a few prerequisites that need to be met:

  1. Create an AWS account or use an existing account.
  2. Install Postman for testing purposes

Amazon Aurora serverless cluster creation

To begin, navigate to the AWS console to create a new Amazon RDS database.

  1. Select Create Database from the Amazon RDS service.
  2. Select Standard Create under Choose a database creation method.
  3. Select Serverless under Database features.
  4. Select Amazon Aurora as the engine type under Engine options.
  5. Enter db-blog for your DB Cluster Identifier.
  6. Expand the Additional Connectivity section and select the Data API option. This functionality enables you to access Aurora Serverless with web services-based applications. It also allows you to use the query editor feature for Aurora Serverless in order to run SQL queries against your database instance.
  7. Leave the default selection for everything else and choose Create Database.

Your database instance is created in a single availability zone (AZ), but an Aurora Serverless database cluster has a capability known as automatic multi-AZ failover, which enables Aurora to recreate the database instance in a different AZ should the current database instance or the AZ become unavailable. The storage volume for the cluster is spread across multiple AZs, since Aurora separates computation capacity and storage. This allows for data to remain available even if the database instance or the associated AZ is affected by an outage.

Securing database credentials with AWS Secrets Manager

After creating the database instance, the next step is to store your secrets for your database in AWS Secrets Manager.

  • Navigate to AWS Secrets Manager, and select Store a New Secret.
  • Leave the default selection (Credentials for RDS database) for the secret type. Enter your database username and password and then select the radio button for the database you created in the previous step (in this example, db-blog), as shown in the following screenshot.

database search in aws secrets manager

  •  Choose Next.
  • Enter a name and optionally a description. For the name, make sure to add the prefix rds-db-credentials/ as shown in the following screenshot.

AWS Secrets Manager Store a new secret window

  • Choose Next and leave the default selection.
  • Review your settings on the last page and choose Store to have your secrets created and stored in AWS Secrets Manager, which you can now use to connect to your database.

Creating and populating your Amazon Aurora Serverless database

After creating the DB cluster, create the database instance; create your tables and populate them; and finally, test a connection to ensure that you can query your database.

  • Navigate to the Amazon RDS service from the AWS console, and select your db-blog database cluster.
  • Select Query under Actions to open the Connect to database window as shown in the screenshot below. Enter your database connection details. You can copy your Secrets Manager ARN from the Secrets Manager service and paste it into the corresponding field in the database connection window.

Amazon RDS connect to database window

  • To create the database, run the following SQL query from the Query editor shown in the screenshot below: CREATE DATABASE recordstore;

 

Amazon RDS Query editor

  • Before you can run the following commands, make sure you are using the recordstore database you just created by running the following command:
USE recordstore;
  • Create a records table using the following command:
CREATE TABLE IF NOT EXISTS records (recordid INT PRIMARY KEY, title VARCHAR(255) NOT NULL, release_date DATE);
  • Create a singers table using the following command:
CREATE TABLE IF NOT EXISTS singers (id INT PRIMARY KEY, name VARCHAR(255) NOT NULL, nationality VARCHAR(255) NOT NULL, recordid INT NOT NULL, FOREIGN KEY (recordid) REFERENCES records (recordid) ON UPDATE RESTRICT ON DELETE CASCADE);
  • Add a record to your records table and a singer to your singers table.
INSERT INTO records(recordid,title,release_date) VALUES(001,'Liberian Girl','2012-05-03');
INSERT INTO singers(id,name,nationality,recordid) VALUES(100,'Michael Jackson','American',001);

If you have the AWS CLI set up on your computer, you can connect to your database and retrieve records.

To test it, use the rds-data execute-statement API within the AWS CLI to connect to your database via the data API web service and query the singers table, as shown below:

aws rds-data execute-statement --secret-arn "arn:aws:secretsmanager:REGION:xxxxxxxxxxx:secret:rds-db-credentials/xxxxxxxxxxxxxxx" --resource-arn "arn:aws:rds:us-east-1:xxxxxxxxxx:cluster:db-blog" --database recordstore --sql "select * from singers" --output json

You should see the following result:

    "numberOfRecordsUpdated": 0,
    "records": [
        [
            {
                "longValue": 100
            },
            {
                "stringValue": "Michael Jackson"
            },
            {
                "stringValue": "American"
            },
            {
                "longValue": 1
            }
        ]
    ]
}

Creating a Cloud9 instance

To create a Cloud9 instance:

  1. Navigate to the Cloud9 console and select Create Environment.
  2. Name your environment AuroraServerlessBlog.
  3. Keep the default values under the Environment Settings.

Once your instance is launched, you see the screen shown in the following screenshot:

AWS Cloud9

 

You can now install the CDK in your environment. Run the following command inside your bash terminal on the blue section at the bottom of your screen:

npm install -g aws-cdk

For the next section of this example, you mostly work on the command line of your Cloud9 terminal and on your file explorer.

Creating the CDK deployment

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages. If you would like to familiarize yourself with the CDK, the CDK Workshop is a great place to start.

First, create a working directory called RecordsApp and initialize a CDK project from a template.

Run the following commands:

mkdir RecordsApp
cd RecordsApp
cdk init app --language typescript
mkdir resources
npm install @aws-cdk/aws-apigateway @aws-cdk/aws-lambda @aws-cdk/aws-iam

Now your instance should look like the example shown in the following screenshot:

AWS Cloud9 shell

 

You are mainly working in two directories:

  • resources
  • lib

Your initial set up is ready, and you can move into creating specific services and deploying them to your account.

Creating AWS resources using the CDK

Follow these steps to create AWS resources using the CDK:

  1. Under the /lib folder, create a new file called records_service.ts.
  2. Inside your new file, paste the following code, with these changes:
    • Replace the dbARN with the ARN of your Aurora Serverless DB cluster from the previous steps.
    • Replace the dbSecretARN with the ARN of your Secrets Manager secret from the previous steps.

import core = require("@aws-cdk/core");
import apigateway = require("@aws-cdk/aws-apigateway");
import lambda = require("@aws-cdk/aws-lambda");
import iam = require("@aws-cdk/aws-iam");

//REPLACE THIS
const dbARN = "arn:aws:rds:XXXX:XXXX:cluster:aurora-serverless-blog";
//REPLACE THIS
const dbSecretARN = "arn:aws:secretsmanager:XXXXX:XXXXX:secret:rds-db-credentials/XXXXX";

export class RecordsService extends core.Construct {
  constructor(scope: core.Construct, id: string) {
    super(scope, id);

    const lambdaRole = new iam.Role(this, 'AuroraServerlessBlogLambdaRole', {
      assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
      managedPolicies: [
            iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonRDSDataFullAccess'),
            iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSLambdaBasicExecutionRole')
        ]
    });

    const handler = new lambda.Function(this, "RecordsHandler", {
     role: lambdaRole,
     runtime: lambda.Runtime.NODEJS_12_X, // So we can use async/await in records.js
     code: lambda.Code.asset("resources"),
     handler: "records.main",
     environment: {
       TABLE: dbARN,
       TABLESECRET: dbSecretARN,
       DATABASE: "recordstore"
     }
   });

    const api = new apigateway.RestApi(this, "records-api", {
      restApiName: "Records Service",
      description: "This service serves records."
   });

    const getRecordsIntegration = new apigateway.LambdaIntegration(handler, {
      requestTemplates: { "application/json": '{ "statusCode": 200 }' }
    });

    api.root.addMethod("GET", getRecordsIntegration); // GET /

    const record = api.root.addResource("{id}");
    const postRecordIntegration = new apigateway.LambdaIntegration(handler);
    const getRecordIntegration = new apigateway.LambdaIntegration(handler);

    record.addMethod("POST", postRecordIntegration); // POST /{id}
    record.addMethod("GET", getRecordIntegration); // GET/{id}
  }
}

This snippet of code will instruct the AWS CDK to create the following resources:

  • IAM role: AuroraServerlessBlogLambdaRole containing the following managed policies:
    • AmazonRDSDataFullAccess
    • service-role/AWSLambdaBasicExecutionRole
  • Lambda function: RecordsHandler, which has a Node.js 12.x runtime and three environment variables
  • API Gateway: Records Service, which has the following characteristics:
    • GET Method
      • GET /
    • { id } Resource
      • GET method
        • GET /{id}
      • POST method
        • POST /{id}

Now that you have a service, you need to add it to your stack under the /lib directory.

  1. Open the records_app-stack.ts file.
  2. Replace the contents of this file with the following:
import cdk = require('@aws-cdk/core');
import records_service = require('../lib/records_service');

export class RecordsAppStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    new records_service.RecordsService(this, 'Records');
  }
}
  3. Create the Lambda code that is invoked from the API Gateway endpoint. Under the /resources directory, create a file called records.js and paste the following code into this file:
const AWS = require('aws-sdk');
var rdsdataservice = new AWS.RDSDataService();

exports.main = async function(event, context) {
  try {
    var method = event.httpMethod;
    var recordName = event.path.startsWith('/') ? event.path.substring(1) : event.path;
// Defining parameters for rdsdataservice
    var params = {
      resourceArn: process.env.TABLE,
      secretArn: process.env.TABLESECRET,
      database: process.env.DATABASE,
   }
   if (method === "GET") {
      if (event.path === "/") {
       //Here is where we are defining the SQL query that will be run at the DATA API
       params['sql'] = 'select * from records';
       const data = await rdsdataservice.executeStatement(params).promise();
       var body = {
           records: data
       };
       return {
         statusCode: 200,
         headers: {},
         body: JSON.stringify(body)
       };
     }
     else if (recordName) {
       params['sql'] = `SELECT singers.id, singers.name, singers.nationality, records.title FROM singers INNER JOIN records on records.recordid = singers.recordid WHERE records.title LIKE '${recordName}%';`
       const data = await rdsdataservice.executeStatement(params).promise();
       var body = {
           singer: data
       };
       return {
         statusCode: 200,
         headers: {},
         body: JSON.stringify(body)
       };
     }
   }
   else if (method === "POST") {
     var payload = JSON.parse(event.body);
     if (!payload) {
       return {
         statusCode: 400,
         headers: {},
         body: "The body is missing"
       };
     }

     //Generating random IDs
     var recordId = uuidv4();
     var singerId = uuidv4();

     //Parsing the payload from body
     var recordTitle = `${payload.recordTitle}`;
     var recordReleaseDate = `${payload.recordReleaseDate}`;
     var singerName = `${payload.singerName}`;
     var singerNationality = `${payload.singerNationality}`;

      //Making 2 calls to the data API to insert the new record and singer
      params['sql'] = `INSERT INTO records(recordid,title,release_date) VALUES(${recordId},"${recordTitle}","${recordReleaseDate}");`;
      const recordsWrite = await rdsdataservice.executeStatement(params).promise();
      params['sql'] = `INSERT INTO singers(recordid,id,name,nationality) VALUES(${recordId},${singerId},"${singerName}","${singerNationality}");`;
      const singersWrite = await rdsdataservice.executeStatement(params).promise();

      return {
        statusCode: 200,
        headers: {},
        body: JSON.stringify("Your record has been saved")
      };

    }
    // We got something besides a GET, POST, or DELETE
    return {
      statusCode: 400,
      headers: {},
      body: "We only accept GET, POST, and DELETE, not " + method
    };
  } catch(error) {
    var body = error.stack || JSON.stringify(error, null, 2);
    return {
      statusCode: 400,
      headers: {},
      body: body
    }
  }
}
function uuidv4() {
  return 'xxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
    return v;
  });
}

Take a look at what this Lambda function is doing. You have two functions inside of your Lambda function. The first is the exported handler, which is defined as an asynchronous function. The second is a unique identifier function to generate four-digit random numbers you use as UIDs for your database records. In your handler function, you handle the following actions based on the event you get from API Gateway:

  • Method GET with empty path /:
    • This calls the data API executeStatement method with the following SQL query:
SELECT * from records
  • Method GET with a record name in the path /{recordName}:
    • This calls the data API executeStatement method with the following SQL query:
SELECT singers.id, singers.name, singers.nationality, records.title FROM singers INNER JOIN records on records.recordid = singers.recordid WHERE records.title LIKE '${recordName}%';
  • Method POST with a payload in the body:
    • This makes two calls to the data API executeStatement with the following SQL queries:
INSERT INTO records(recordid,title,release_date) VALUES(${recordId},"${recordTitle}","${recordReleaseDate}");
INSERT INTO singers(recordid,id,name,nationality) VALUES(${recordId},${singerId},"${singerName}","${singerNationality}");

Now you have all the pieces you need to deploy your endpoint and Lambda function by running the following commands:

npm run build
cdk synth
cdk bootstrap
cdk deploy

If you change the Lambda code or add additional AWS resources to your CDK deployment, you can redeploy the application by running all four commands in a single line:

npm run build; cdk synth; cdk bootstrap; cdk deploy

Testing with Postman

Once it’s done, you can test it using Postman:

GET = ‘RecordName’ in the path

  • example:
    • ENDPOINT/RecordName

POST = Payload in the body

  • example:
{
   "recordTitle" : "BlogTest",
   "recordReleaseDate" : "2020-01-01",
   "singerName" : "BlogSinger",
   "singerNationality" : "AWS"
}
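
If you prefer the command line to Postman, the same requests can be made with curl, where ENDPOINT stands for the full API Gateway invoke URL (including the stage) that cdk deploy prints out:

# GET a record by name
curl "ENDPOINT/BlogTest"

# POST a new record
curl -X POST "ENDPOINT/BlogTest" \
-H "Content-Type: application/json" \
-d '{"recordTitle":"BlogTest","recordReleaseDate":"2020-01-01","singerName":"BlogSinger","singerNationality":"AWS"}'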

Clean up

To clean up the resources created by the CDK, run the following command in your Cloud9 instance:

cdk destroy

To clean up the resources created manually, run the following commands:

aws rds delete-db-cluster --db-cluster-identifier db-blog --skip-final-snapshot
aws secretsmanager delete-secret --secret-id XXXXX --recovery-window-in-days 7

Conclusion

This blog post demonstrated how to transform an application running on Amazon EC2 from a previous blog into a serverless architecture by leveraging services such as Amazon API Gateway, Lambda, Cloud9, AWS CDK, and Aurora Serverless. The benefit of serverless architecture is that it takes away the overhead of having to manage a server and helps reduce costs, as you only pay for the time in which your code executes.

This example used a record-store application written in Node.js that allows users to find their favorite singer’s record titles, as well as the dates when they were released. This example could be expanded, for instance, by adding a payment gateway and a shopping cart to allow users to shop and pay for their favorite records. You could then incorporate some machine learning into the application to predict user choice based on previous visits, purchases, or information provided through registration profiles.

 


 

About the Authors

Luis Lopez Soria is an AI/ML specialist solutions architect working with the AWS machine learning team. He works with AWS customers to help them with the adoption of Machine Learning on a large scale. He enjoys doing sports in addition to traveling around the world, exploring new foods and cultures.

 

 

 

 Georges Leschener is a Partner Solutions Architect in the Global System Integrator (GSI) team at Amazon Web Services. He works with our GSIs partners to help migrate customers’ workloads to AWS cloud, design and architect innovative solutions on AWS by applying AWS recommended best practices.

 

Building and testing iOS and iPadOS apps with AWS DevOps and mobile services

Post Syndicated from Abdullahi Olaoye original https://aws.amazon.com/blogs/devops/building-and-testing-ios-and-ipados-apps-with-aws-devops-and-mobile-services/

Continuous integration/continuous deployment (CI/CD) helps automate software delivery processes. With the software delivery process automated, developers can test and deliver features faster. In iOS app development, testing your apps on real devices allows you to understand how users will interact with your app and to detect potential issues in real time.

AWS has a collection of tools designed to help developers build, test, configure, and release cloud-based applications for mobile devices. This blog post shows you how to leverage some of those tools and integrate third-party build tools like Jenkins into a CI/CD Pipeline in AWS for iOS app development and testing.

A new commit to the source repository triggers the pipeline. The build is done on a Jenkins server, and the build artifact from Jenkins is passed to the test phase, which is configured with AWS Device Farm to test the application on real devices. AWS CodePipeline provides the orchestration and helps automate the build and test phases. The CodePipeline continuous delivery process is illustrated in the following screenshot.

CodePipeline Architecture with all stages

Figure: CodePipeline Continuous Delivery Architecture

 

Prerequisites

Ensure you have the following prerequisites set up before beginning:

  1. Apple developer account
  2. Build server (macOS)
  3. Xcode Version 11.3 (installed on the build server and setup)
  4. Jenkins (installed on the build server)
  5. AWS CLI installed and configured on workstation
  6. Basic knowledge of Git

Source

This example uses a sample iOS Notes app which we have hosted in an AWS CodeCommit repository, which is in the source stage of the pipeline.

Jenkins installation

Jenkins can be installed on macOS using a homebrew package manager for macOS with the following command:

$ brew install jenkins

Start Jenkins by typing the following command:

$ jenkins

You can also configure Jenkins to start as a service on startup with the following command:

$ brew services start jenkins

Jenkins configuration

On a browser on your local machine, visit http://localhost:8080. You should see the setup screen shown in the following screenshot:

Screenshot of how to retrieve the Jenkins secret during setup on macOS

1. Grab the initial admin password from the terminal by typing:

$ cat /Users/administrator/.jenkins/secrets/initialAdminPassword

2. Follow the onscreen instructions to complete setup. This includes creating a first admin user, installing initial plugins, etc.

3. Make some changes to the config file to ensure Jenkins is accessible from anywhere, not just the local machine:

    • Open the config file:

$ sudo nano /Users/admin/Library/LaunchAgents/homebrew.mxcl.jenkins.plist

    • Find the following line:

<string>--httpListenAddress=127.0.0.1</string>

    • Change it to the following:

<string>--httpListenAddress=0.0.0.0</string>

    • Save your changes and exit.

To reach Jenkins from the internet, enter the following into a web browser:

<build-server-public-ip>:<Jenkins-port>

The default Jenkins port is 8080. For example, if the public IP address is 1.2.3.4, the path is 1.2.3.4:8080.

4. Install the AWS CodePipeline Jenkins plugin:

      • Sign in to Jenkins using the user name and password you created. Choose Manage Jenkins, then Manage Plugins.
      • Switch to the Available tab and start typing CodePipeline into the filter until AWS CodePipeline Plugin appears. Select the plugin, then select Install without restart.
      • Select Restart Jenkins when installation is complete and no jobs are running.

5. Create a project. Choose New Item, then Freestyle Project. Enter a descriptive name. This example uses iosapp as the item name.

6. In the Source Code Management section, select AWS CodePipeline and configure the plugin as shown in the following screenshot.

Screenshot of Source Code Management configuration in a Jenkins freestyle project

      • AWS region: The region in which you want to create the CI/CD pipeline.
      • AWS access key and AWS secret key: Create a special IAM user and apply the AWSCodePipelineCustomActionAccess managed policy to that user. Use the access credentials for that user to configure this section.
      • Category: Choose Build. This is also used in the pipeline configuration.
      • Provider: This example uses the name Jenkins. It can be renamed, but take note of the name specified here.
      • Version: Enter 1 here. This value is used in the pipeline configuration.

7. Under Build Triggers, select Poll SCM. Enter the schedule * * * * * separated by spaces, as shown in the following screenshot.

Screenshot of Build Triggers configuration in a Jenkins freestyle project

8. Under Build, select Add build step, then Execute shell. Enter the following commands, inserting your development team ID.

/usr/bin/xcodebuild -version
/usr/bin/xcodebuild build-for-testing -scheme MyNotes -destination generic/platform=iOS DEVELOPMENT_TEAM=<your development team ID> -allowProvisioningUpdates -derivedDataPath /Users/admin/.jenkins/workspace/iosapp
mkdir Payload && cp -r /Users/admin/.jenkins/workspace/iosapp/Build/Products/Debug-iphoneos/MyNotes.app Payload/
zip -r Payload.zip Payload && mv Payload.zip MyNotes.ipa

9. Under Post-build Actions, select Add post-build action, then AWS CodePipeline Publisher. Fill in the fields as shown in the following screenshot:

Screenshot of Post Build Action Configuration in a Jenkins freestyle project

10. Save the configuration.

11. Retrieve the public IP address for the macOS build server.

Configure Device Farm

In this section, you configure Device Farm to test the sample iOS app on real-world devices.

  1. Navigate to the AWS Device Farm Console.
  2. Choose Create a new project and enter a name for the project. Choose Create project. Note the name of the project.
  3. Choose the newly created project and retrieve the project ID:
    • Copy the URL found in the browser into a text editor.
    • Note the project ID, which can be found in the URL path:

https://us-west-2.console.aws.amazon.com/devicefarm/home?region=us-west-2#/projects/<your project ID is here>/runs

    • Decide on which devices you want to test the sample app. This is known as the device pool in Device Farm. This example doesn’t use a PRIVATE device pool. It uses a CURATED device pool, which is a device pool created and managed by AWS Device Farm.
    • Retrieve the ARN of the CURATED device pool for your project using the AWS CLI:

$ aws devicefarm list-device-pools --arn arn:aws:devicefarm:us-west-2:<account-id>:project:<project id noted above> --region us-west-2 --query 'devicePools[?name==`Top Devices`]'

Note the device pool ARN.

Configure the CodeCommit repository

In this section, the source code repository is created and source code is pushed to the repository.

  1. Create a CodeCommit repository. Take note of the repository name.
  2. Connect to the newly created repository.
  3. Push the iOS app code from the local repository to the remote CodeCommit repository:

$ git push
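If the CodeCommit remote isn’t configured in your local repository yet, the full sequence looks roughly like the following; the Region and repository name are placeholders for the values from the previous steps:

$ git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/<repository-name>
$ git push -u origin master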

Create and configure CodePipeline

CodePipeline orchestrates all phases of the example. Each action is represented as a stage.

Since you have a Jenkins stage, which is considered a custom action and has to be configured via the AWS Management Console, use the AWS Management Console to create your pipeline.

  1. Go to the AWS CodePipeline console and choose Create pipeline.
  2. Enter iosapp under Pipeline settings and select New service role.
  3. Leave the default Role name, and select Allow AWS CodePipeline to create a service role so it can be used with this new pipeline.
  4. Choose Next.
  5. Select AWS CodeCommit as the Source provider. Select the repository you created and the branch name, then select Next.
  6. Select Add Jenkins as the build provider and fill in the fields:
    • Provider name: Specify the provider name you configured for this example.
    • Server URL: Specify the public IP address of the Jenkins server and the port on which Jenkins is listening. For example, if 1.2.3.4 is the IP address and 8080 is the port, the server URL is http://1.2.3.4:8080.
    • Project name: Specify the name you gave to the Jenkins Freestyle project you created.
  7. Choose Next.
  8. Choose Skip deploy stage. You are integrating with Device Farm and this is only valid as a test stage, not a deploy stage.
  9. Choose Create pipeline. This creates a two-stage pipeline, which starts executing immediately after creation. However, you are not done yet, so stop the current execution.
  10. Now create a test stage with Device Farm. Choose Edit to modify the pipeline. Under the Build stage, select Add Stage and enter a stage name (such as Test). Choose Add stage again.
  11. In the newly added stage, choose Add action group and fill in the fields:
    • Action name: Enter an Action name
    • Action Provider: Select AWS Device Farm
    • Region: Select US West – Oregon.

      Note: AWS Device Farm is only supported in US West (Oregon) (us-west-2), so this action is a cross-Region action because the pipeline is in us-east-1.

    • Input artifacts: Select BuildArtifact, which is the output of the Jenkins build stage
    • ProjectId: This is the Device Farm project ID you noted earlier
    • DevicePoolArn: This is the Device Farm ARN you noted earlier
    • AppType: Enter iOS
    • App: This is the file that contains the app to test; the filename of your generated IPA is MyNotes.ipa
    • TestType: This is the type of test to run on the application; enter BUILTIN_FUZZ
  12. Leave the other fields blank and choose Done to save the action configuration, then choose Save to save the pipeline changes.
  13. Optionally, you can enable notifications to notify you of changes in the pipeline, such as when the pipeline completes, when a stage or action completes, or when there is a failure. To enable notifications, create a notification rule.
  14. Choose Release change to execute the pipeline, as shown in the following screenshot.

Completed codepipeline sample with example test failure
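You can also start an execution from the AWS CLI instead of choosing Release change; the pipeline name iosapp and Region us-east-1 below assume the values used earlier in this example:

$ aws codepipeline start-pipeline-execution --name iosapp --region us-east-1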

Verify the test on Device Farm

From the pipeline execution, you can see there is a failure in your test. Check the test results:

  1. Navigate to the AWS Device Farm Console.
  2. Select the project you created.
  3. All the tests that have run are listed, as seen in the following screenshot.
Failure on AWS Device Farm

  4. Choose the test to see more details.

You can see the source of the failure. To investigate why the test failed, choose each device. The devices on which the app was tested are listed, along with details such as the OS version and the total duration of the test on each device. You can see screenshots of the test by switching to the Screenshots tab, and choosing a device shows more information.

Troubleshoot the failure by examining the result in each of the devices on which the test was run to determine what changes are needed in the application. After making the needed changes in the application source code, push the changes to the remote repository (in this case, a CodeCommit repository) to trigger the pipeline again. The following screenshot shows a successful pipeline execution:

Successfully executed CodePipeline

The following screenshot shows a successful test:

Successfully executed tests on Device Farm

Cleanup

To avoid ongoing charges, clean up the AWS resources created in this walkthrough: the CodePipeline pipeline, the CodeCommit repository, the Device Farm project, and the macOS build server if you provisioned one specifically for this example.

Conclusion

This post showed you how to integrate CodePipeline with an iOS Jenkins build server and leverage the integration of CodePipeline and Device Farm to automatically build and test iOS apps on real-world devices. By taking this approach to testing iOS apps, you can visualize how an app will behave on actual devices and with the automated CI/CD pipeline, and quickly test apps as they are developed.

Use AWS Firewall Manager and VPC security groups to protect your applications hosted on EC2 instances

Post Syndicated from Kaustubh Phatak original https://aws.amazon.com/blogs/security/use-aws-firewall-manager-vpc-security-groups-to-protect-applications-hosted-on-ec2-instances/

You can use AWS Firewall Manager to centrally configure and manage Amazon Virtual Private Cloud (Amazon VPC) security groups across all your AWS accounts. This post will take you through the step-by-step instructions to apply common security group rules, audit your security groups, and detect unused and redundant rules in your security groups across your AWS environment.

In this post, I’ll show you how to create and enforce a master set of security group rules by using a common security group policy, while still allowing developers to deploy and manage application-specific security group rules. In the example below, you’ll create security group rules that allow SSH access only from the public IP address of the bastion host, and set a policy that prohibits any security group rules that allow SSH access (port 22) from anywhere.

When you use Firewall Manager to centrally apply a common security group, you can do things such as ensure that all Application Load Balancers only talk to Amazon CloudFront, allow the Secure Shell (SSH) protocol only from specific IP ranges, or give system administrators access to a central database.

In many organizations, developers write their own security group rules for their applications. However, if you’re a security administrator, you want to audit the security group rules so you’ll know when a security group is misconfigured. Using audit security group policy, you can set guardrails on which security group rules can or cannot be created across your organization. For example, you could only allow security group rules on ports 10-1000, or specify that you do not allow security group rules on port 23.

As an administrator, you also want to simplify operations by detecting unused and redundant security groups across your AWS accounts. You can use a managed audit policy to help identify unused and redundant security groups.

If you haven’t used these services before, here’s a quick overview:

  1. AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across the accounts and applications in your AWS Organizations organization, using AWS Config in the background. Using AWS Firewall Manager, you can easily roll out AWS WAF rules, create AWS Shield Advanced protections, and enable security groups for your Amazon Elastic Compute Cloud (Amazon EC2) and elastic network interface resource types in Amazon VPCs.
  2. VPC security groups act as a virtual, stateful firewall for your Amazon Elastic Compute Cloud (Amazon EC2) instance to control inbound and outbound traffic. You can specify separate rules for inbound and outbound traffic, and instances associated with a security group can’t talk to each other unless you add rules allowing it.

After you put the master set of security group rules in place, you’ll get notification of all non-compliant changes made by the developers. You can take remediation action if necessary using an audit security group policy. In this post, you’ll also set up a usage security group policy, so that you can flag unused security groups and merge redundant security groups for simpler administration.

Prerequisites

AWS Firewall Manager has the following prerequisites:

  • AWS Organizations: Your organization must be using AWS Organizations to manage your accounts, and All Features must be enabled. For more information, see Creating an Organization and Enabling All Features in Your Organization.
  • An administrator AWS Account: You must designate one of the AWS accounts in your organization as the administrator for Firewall Manager. This gives the account permission to deploy AWS WAF rules across the organization.
  • AWS Config: You must enable AWS Config for all of the accounts in your organization, so that AWS Firewall Manager can detect newly created resources. To enable AWS Config for all of the accounts in your organization, you can use the Enable AWS Config template on the StackSets Sample Templates page. For more information, see Getting Started with AWS Config.

Note: You’ll be charged $100 per policy per month. In the solution in this post, you’ll create three policies. In addition, AWS Config charges also apply. For more information, see AWS Firewall Manager pricing and AWS Config pricing.

Overview

The diagram below illustrates the following steps:

  1. Complete the prerequisites that were outlined in the prerequisites section above.
  2. Create a primary security group under AWS Firewall Manager. This is a VPC security group that gets replicated as a new security group to every resource within the policy scope.
  3. In AWS Firewall Manager, create policies that can be applied to individual application security groups by mapping them to specific application name/value tags. The policies you create will result in the generation of individual new security groups.
  4. Application developers can build additional app-specific security group rules created in the previous step.

 

Figure 1: Overview of solution

Create a common security group policy

You’ll begin by creating a common security group policy to push primary security group rules across all accounts.

  1. Sign in to the AWS Management Console using the AWS Firewall Manager administrator account that you set up in the prerequisites, and then open the Firewall Manager console.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  3. Using the Filter menu, select the AWS Region where your application is hosted and choose Create policy. In my example, I choose US West (Oregon).
  4. For Policy type, choose Security group.
  5. For Security group policy type, choose Common security groups, then choose Next.
  6. Enter a policy name. In my example, I’ve named my policy Test_Common_Policy.
  7. Policy rules allow you to choose how the security groups in this policy are applied and maintained. For this tutorial, choose Apply the primary security groups to every resource within the policy scope and leave the other options unchecked. You can also choose to apply only one of these options. Note that if you select both check boxes, a local user won’t be able to modify the security group or add additional security groups.
  8. Choose Add primary security group to see all security groups in your account in your specified AWS Region. Select any one of your existing security groups, or create a new security group.
  9. (Optional) If you choose to create a new security group, you’ll be taken to the VPC dashboard, where you can create your primary security group by following the Creating a Security Group documentation. In the primary security group, add the following:
    1. For Ingress Rules, choose Allow access on Port 22 from 203.0.113.1/32.
    2. For Egress Rules, choose Allow all traffic on all ports.
  10. After you select the primary security group, choose Add security group.
  11. For Policy action, for this example, choose Apply policy rules and identify resources that are non-compliant but do not auto remediate. By selecting this option, Firewall Manager will notify you of any non-compliant security groups, but will not auto-remediate. Choose Next.
  12. For Policy scope, select the following:
    1. For AWS accounts included in this policy, choose All accounts under my organization.
    2. For Resource Type to apply this policy, choose EC2 instances.
    3. For Criteria to select the resources to protect, choose Include only resources that have the specified tags.
    4. For Key, enter Env.
    5. For Value, enter Prod.

    Choose Next.

  13. Review the security policy, then choose Create policy.

 

Figure 2: Summary of Common Security Group policy

The security policy will review all the EC2 instances in your child accounts in your specified AWS Region and add the primary security group to the primary network interface of the Amazon EC2 instances. All primary interfaces of Amazon EC2 instances created in the future will also receive this primary security group. If the developers remove the security group rules of the primary security group, you’re notified when Firewall Manager marks the resource as non-compliant. You can then remediate by changing the policy action to Apply policy rules and auto remediate any non-compliant resources, which removes the non-compliant security group rules. Alternatively, you can review the non-compliant resources, log in to the affected AWS account, and remediate manually.
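If you prefer to script the policy, a sketch of the equivalent aws fms put-policy call is shown below. The security group ID sg-0123456789abcdef0 is a placeholder, and the ManagedServiceData fields are based on the Firewall Manager API documentation, so verify them against the current reference before relying on them:

$ cat > common-sg-policy.json <<'EOF'
{
  "PolicyName": "Test_Common_Policy",
  "SecurityServicePolicyData": {
    "Type": "SECURITY_GROUPS_COMMON",
    "ManagedServiceData": "{\"type\":\"SECURITY_GROUPS_COMMON\",\"revertManualSecurityGroupChanges\":false,\"exclusiveResourceSecurityGroupManagement\":false,\"securityGroups\":[{\"id\":\"sg-0123456789abcdef0\"}]}"
  },
  "ResourceType": "AWS::EC2::Instance",
  "ResourceTags": [{ "Key": "Env", "Value": "Prod" }],
  "ExcludeResourceTags": false,
  "RemediationEnabled": false
}
EOF
$ aws fms put-policy --policy file://common-sg-policy.json --region us-west-2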

Create an audit security group policy

Now, you’ll create an audit security group policy to enforce the guardrails. You’ll create a security group rule that allows port 22 access from an allowed IP subnet of 203.0.113.1/32 according to the security team’s recommendations.

  1. In the AWS Management Console, select AWS WAF and AWS Shield.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  3. In the Filter, select the AWS Region where your application is hosted and choose Create policy. In my example, I will choose US West (Oregon).
  4. For Policy type, choose Security group. For Security group policy type, choose Auditing and enforcement guidelines for security group rules, then choose Next.
  5. Enter a policy name. In my example, I’ve named my policy Test_Audit_Policy.
  6. For Policy rules, select Allow any rules defined in audit security group.
  7. Choose Add audit security group to see all security groups in your account in your specified AWS Region. You can select a security group, or create a new security group.
  8. (Optional) If you choose to create a new security group, you’ll be taken to the VPC dashboard, where you can create your audit security group by following the Creating a Security Group documentation. In the audit security group, add the following:
    1. For Ingress Rules, choose Allow access on Port 22 from 203.0.113.1/32.
    2. For Egress Rules, choose Allow all traffic on all ports.
  9. After you select the audit security group, choose Add security group.
  10. For Policy action, you can only select Apply policy rules and identify resources that are non-compliant but do not auto remediate. By selecting this option, Firewall Manager will notify you of any non-compliant security groups, but will not auto-remediate. Choose Next.
  11. For Policy scope, select the following:
    1. For AWS accounts included in this policy, choose All accounts under my organization.
    2. For Resource type to apply this policy, choose Security groups.
    3. For Criteria to select the resources to protect, choose Include only resources that have the specified tags.
    4. For Key, enter Env.
    5. For Value, enter Prod.

    Choose Next.

  12. Review the security policy and choose Create policy.

 

Figure 3: Summary of Audit Security Group policy

The security policy will audit all the security groups in your child accounts in your specified AWS Region and will only allow security group ingress rules that allow port 22 access from 203.0.113.1/32. All security groups created in the future will also have this restriction. If Firewall Manager detects a security group that allows port 22 access from a source other than 203.0.113.1/32, you’re notified when Firewall Manager marks the resource as non-compliant. You can then remediate by changing the policy action to Apply policy rules and auto remediate any non-compliant resources, which removes the non-compliant security group rules. Alternatively, you can review the non-compliant resources, log in to the affected AWS account, and remediate manually.
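To review compliance from the command line instead of the console, you can query Firewall Manager directly; the policy ID and member account ID below are placeholders you can copy from the policy’s detail page:

$ aws fms list-compliance-status --policy-id <policy-id> --region us-west-2
$ aws fms get-compliance-detail --policy-id <policy-id> --member-account <account-id> --region us-west-2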

Create a usage security group policy

Lastly, you’ll create a usage security group policy to remove unused security groups, and to merge redundant security groups.

  1. In the AWS Management Console, select AWS WAF and Shield.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies. In the Filter, select the AWS Region where your application is hosted and choose Create policy. In my example, I am choosing US West (Oregon).
  3. For Policy type, choose Security group. For Security group policy type, choose Auditing and cleanup of unused and redundant security groups. Choose Next.
  4. Enter a policy name. In my example, I’ve named my policy Test_Usage_Policy.
  5. For Policy rules, select both the options: Security groups within this policy scope should be used by at least one resource and Security groups within this policy scope should not have similar content.
  6. For Policy action, select Apply policy rules and identify resources that are non-compliant but do not auto remediate. Choose Next.
  7. For Policy scope, select the following:
    1. For AWS accounts included in this policy, choose All accounts under my organization.
    2. For Resource type to apply this policy, choose Security groups.
    3. For Criteria to select the resources to protect, choose Include only resources that have the specified tags.
    4. For Key, enter Env.
    5. For Value, enter Prod.

    Choose Next.

  8. A pop-up warning message will appear. Select Exclude Firewall Manager admin account from the policy scope, so that security groups in the administrator account are not affected.
  9. Review the security policy and choose Create policy.

 

Figure 4: Summary of Usage Security Group policy

The security policy will review all the security groups in your child accounts in your specified AWS Region and check whether any security groups are not associated with a resource. The security policy will also check for redundant security group rules. All security groups created in the future will also be checked. If Firewall Manager detects a security group that is not associated with any resource, or that has rules overlapping with another security group, you’re notified when Firewall Manager marks the resource as non-compliant. You can then remediate by changing the policy action to Apply policy rules and auto remediate any non-compliant resources, and the non-compliant security groups will either be removed (if unused) or merged into one security group (if redundant). Alternatively, you can review the non-compliant resources, log in to the affected AWS account, and remediate manually.

Conclusion

In this post, you learned how you can create AWS Firewall Manager rules using the console. Using both VPC security groups and AWS Firewall Manager, you created a deployment strategy that enables the developers in your organization to maintain a security mindset and begin coding security group rules, while at the same time ensuring that all applications are still protected by a set of security group rules defined by your organization’s security team. In addition, you have reduced the likelihood of misconfigured or overly permissive security groups, as well as the operational burden, by simplifying the security groups created in all your member accounts.

For further reading, see AWS Firewall Manager Update – Support for VPC Security Groups.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kaustubh Phatak

Kaustubh is a Cloud Support Engineer II at AWS. On a daily basis, he provides solutions for customers’ cloud architecture questions related to Networking, Security, and DevOps domain. Outside of the office, Kaustubh likes to play cricket, ping-pong, and soccer. He is also an avid console gamer.

Using AWS CodeBuild to execute administrative tasks

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/devops/using-aws-codebuild-to-execute-administrative-tasks/

This article is a guest post from AWS Serverless Hero Gojko Adzic.

At MindMup, we started using AWS CodeBuild to quickly lift and shift support tasks to the cloud. MindMup is a collaborative mind-mapping tool, used by millions of teachers and students to collaborate on assignments, structure ideas, and organize and navigate complex information. Still, the team behind the product consists of just two people, and we’re both responsible for everything from sales and product management to programming and customer support. One of the key reasons why such a tiny team can support a large group of users is that we tend to automate all recurring tasks in order to free up our time for more productive work.

Administrative support tasks often start as ad-hoc command line scripts, with manual intervention to resolve exceptions. As the scripts stabilize, humans can be less involved, so teams look for ways of scheduling and automating job executions. For infrastructure deployed to AWS, this also means moving away from running scripts from on-premises developers or operations computers to running in the cloud. With utilization-based pricing and on-demand capacity, AWS Lambda and AWS Fargate are the two obvious choices for running such tasks in AWS. There is a third option, often overlooked: CodeBuild. Although CodeBuild is designed for a completely different purpose, it offers some compelling features that make it very easy to set up and run periodic support jobs, especially as a first easy step towards a more systematic solution.

Solution overview

CodeBuild is, as the name suggests, a managed service for executing typical software build jobs. In some ways, such as each job having associated IAM permissions, CodeBuild is similar to Lambda and Fargate. One of the fundamental differences between CodeBuild jobs and Lambda functions or Fargate tasks is the location of the executable definition of the job. The executable definition of a Lambda function is in a ZIP archive deployed to Lambda. For Fargate, the executable definition is in a Docker container image, deployed in a task with Amazon ECS or in a Kubernetes pod with Amazon EKS. Both services require an explicit deployment to update the executable definition of a task. For CodeBuild jobs, the executable definition is not deployed to an AWS service. Instead, it is in a source code control system that you can manage locally or using a service such as GitHub or AWS CodeCommit.

Sitting alongside the rest of the source code, each CodeBuild task has an entry-point configuration file, by convention called a buildspec.yml. The buildspec.yml file lists the programming language runtimes required by the job, and the steps to execute before, during and after the build job. For example, the following buildspec.yml sets up a build environment for JavaScript with Node.js 12, installs dependencies, runs tests, and then produces a deployment package using webpack.

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install
  build:
    commands:
      - npm test
      - npm run webpack

Usually, the buildspec.yml file involves some variant of installing dependencies, compiling code and running tests, then packaging and versioning artifacts. But the steps of a buildspec.yml file are actually just shell commands, so CodeBuild doesn’t necessarily need to run tasks related to compiling or packaging. It can execute any sequence of Unix commands, scripts, or binaries. This makes CodeBuild a uniquely compelling choice for the transition from running shell scripts on an operations machine to running a shell script in the cloud.

Comparing CodeBuild and Lambda for administrative tasks

The major advantage of CodeBuild over Lambda functions for support jobs is that the scripts can be significantly more flexible. Moving from shell scripts to Lambda functions usually means rewriting the task in a language such as JavaScript or Python. You can execute a shell script from a Lambda function when using Amazon Linux 1 runtimes, or even use a Bash custom runtime, but when using CodeBuild, you can execute the same shell script without changes.

Lambda functions usually run only in a single language. Support tasks often perform a chain of actions, and different steps might require utilities written in different languages. Running such varied tasks with a Lambda function would require constructing a custom Lambda runtime, or splitting steps into multiple functions with different runtimes, and then somehow coordinating and passing data between them. AWS Step Functions can be used to coordinate the workflow, but most support tasks are a sequence of steps, to be executed in order if the previous one succeeds. With CodeBuild, you can configure the task to include all required runtimes.

Support tasks often need to transform the outputs of one tool and pass it into a different tool. For example, select rows from a database containing expired accounts, then filter out only the user emails, separate the data with commas, and send to an automated mailer with a template. Tools such as grep, awk, and sed become invaluable for such transformations. However, they aren’t available on new Lambda runtimes.

Lambda runtimes based on Amazon Linux 2 bundle only the absolutely minimal operating system packages. Even the basic command line Linux utilities, such as which, are not packaged with the recent Lambda runtimes. On the other hand, CodeBuild runs tasks in a full-blown Linux environment. Executing support tasks through CodeBuild means that you can pipe results into all the standard Unix tools, without having to use half-baked replacements written in a scripting language.

For applications running in the AWS ecosystem, support tasks often need to communicate with AWS services or resources. Standard CodeBuild environments also come with the aws command line tools, so you can use them without any additional setup. This becomes especially important for moving data from and to Amazon S3, where command line tools have operations for batch uploads or downloads or recursive directory synchronization. Those operations are not directly available through the programming language SDK libraries.
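As an illustration of that kind of piping — the bucket names, file format, and environment variables below are hypothetical — an administrative buildspec.yml might chain the aws CLI with standard Unix tools like this:

version: 0.2

phases:
  build:
    commands:
      # pull the latest exports from S3, then extract unique e-mail addresses of expired accounts
      - aws s3 sync "s3://$SOURCE_BUCKET/exports/" ./exports/
      - grep -h 'status=EXPIRED' ./exports/*.csv | awk -F',' '{print $2}' | sort -u > expired-emails.txt
      - aws s3 cp expired-emails.txt "s3://$REPORTS_BUCKET/reports/expired-emails.txt"

The SOURCE_BUCKET and REPORTS_BUCKET variables would be supplied through the project’s environment variables, as shown in the CloudFormation template later in this post.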

It is, of course, possible to install additional binaries to Lambda functions by building them for the right Linux environment. Because the standard shared system libraries are also not in the recent Lambda runtimes, compiling additional tools is akin to building a Linux distribution from scratch. With CodeBuild, most standard tools are included already, and you can add additional tools to the system by using an operating system package manager (apt-get or yum).

CodeBuild execution environments can also be more flexible in terms of execution time and performance constraints. Lambda tasks are currently limited to fifteen minutes. The only performance setting you can influence is the memory size, which proportionally impacts the CPU power. The highest setting is currently 3GB memory, which assigns two virtual cores. CodeBuild allows you to configure tasks which can run for up to 8 hours. You can also explicitly select a compute type, including using GPU processors and going all the way up to 255 GB memory or 72 virtual CPU cores. This makes CodeBuild an interesting choice for tasks that need to potentially run longer than fifteen minutes, that are very computationally intensive, or that need a lot of working memory.

On the other hand, compared to Lambda functions, CodeBuild jobs start significantly slower and running them in parallel is not as easy or convenient. For example, by default you can only run up to 60 CodeBuild tasks in parallel, but this is a soft limit that you can increase. However, support tasks are mostly batch jobs by nature, so saving a few seconds or being able to execute thousands of such tasks in parallel is not usually important.

Comparing CodeBuild or Amazon EC2/Fargate for administrative tasks

Most of the limitations of Lambda functions for admin jobs could be solved by running a virtual machine through Amazon EC2. In fact, running tasks on Amazon EC2 was the usual way of lifting support tasks from the operations computers and moving them into the cloud until Lambda became available. However, due to how Amazon EC2 instances are billed, teams often bundled all the operations tasks on a single Amazon EC2 instance. That instance needed a superset of all the security privileges required by the various tasks, opening potential security risks. That’s where Fargate can help. Fargate runs container-based tasks on demand, offering utilization-based billing and removing many restrictions of Lambda, such as the 15-minute runtime and reduced operating system environment, also allowing you to choose execution environments more flexibly.

This means that, compared to Fargate tasks, CodeBuild execution is more or less comparable in terms of what you can run and how much power you can assign to your tasks. Both can use a custom Docker container, and both run a full-blown operating system with all the standard binaries. They are also similar in terms of start-up time and parallelization. However, setting up a CodeBuild job and updating it later is much easier than with Fargate using tasks or pods.

With Fargate, you need to provide a custom Docker container with the right entry point. CodeBuild lets you use custom containers or choose standard images provided by AWS, including Ubuntu or AWS Linux instances. Likewise, configuring a Fargate task involves deploying in an Amazon VPC, and if the task needs to access other AWS services, setting up a NAT gateway. CodeBuild tasks have network access by default, and can be deployed in a VPC if required.

Updating support scripts can also be easier with CodeBuild than with Fargate. Deploying a new version of a task into Fargate involves building a new Docker container and uploading it to a container manager such as Amazon ECS or Amazon EKS. Deploying a new version of a CodeBuild job involves committing to the version control system, without the need to set up a CI/CD pipeline. This makes CodeBuild a compelling way of setting up support tasks, especially for larger organizations with strict access rules. Support people can update tasks definitions by having access to the source code control system, without the need to get access to production resources on AWS.

Fargate environments are transient, similar to Lambda functions. If you want to preserve some files between job runs (for example, compiled task binaries or installed dependencies), you would have to manage that manually with Fargate. CodeBuild supports artifact caching out of the box, so it’s significantly easier to preserve data files or installed dependencies between runs.

Potential downsides

Although taking supporting tasks directly from the source code repository is one of the biggest advantages of CodeBuild over Fargate or Lambda, it can also be a major drawback. Ensuring that the scripts are always in a stable condition requires discipline regarding committing to the trunk. Without such discipline, untested or unstable code might be used for admin tasks by mistake. A potential workaround for teams without good trunk commit discipline would be to use a specific branch for CodeBuild tasks, and then merge code into that branch once it is ready to be released.

Using support scripts directly from a source code repository makes it more complicated to synchronize versions with other deployed software. If you need the support scripts to track the exact version of code that was deployed to other services, it’s probably safer and easier to use Lambda functions or Fargate containers with an explicit deployment step.

Executing support tasks through CodeBuild

CodeBuild jobs take a bit more setup than Lambda functions, but significantly less than Fargate tasks. Below is an example of a CodeBuild job set up through AWS CloudFormation.

Architecture diagram for the CodeBuild being used for administrative tasks

Here are a few things to note:

  • You can add the required IAM permissions for the task into the Policies section of the CodeBuildRole resource.
  • The Environment section of the CodeBuildProject resource is where you can define the container image, choose the virtual hardware or set up environment variables to configure the task.
  • Environment variables are directly available for the shell commands listed in the buildspec.yml file, so this trick allows you to easily parameterize jobs to use resources from the same AWS CloudFormation template.
  • The Location and BuildSpec properties in the Source section define the source code repository, and the path of the buildspec.yml file within the repository.
Resources:
  CodeBuildRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "codebuild.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: AllowLogs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:*'
                Resource: '*'

  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref JobName
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: NO_ARTIFACTS
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED
      Cache:
        Type: NO_CACHE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:3.0
        EnvironmentVariables:
          - Name: SYSTEM_BUCKET 
            Value: !Ref SystemBucketName
      Source:
        Type: GITHUB
        Location: !Ref GithubRepository 
        GitCloneDepth: 1
        BuildSpec: !Ref BuildSpecPath 
        ReportBuildStatus: False
        InsecureSsl: False
      TimeoutInMinutes: !Ref TimeoutInMinutes

CodeBuild jobs usually run after changes to source code files. Support tasks usually need to run on a periodic schedule. The previous snippet did not define the Triggers property for the CodeBuild job, so it will not track source code changes or run automatically. Instead, you can set up an Amazon CloudWatch Events rule (or optionally use Amazon EventBridge, which provides more sophisticated rules) that will periodically trigger the CodeBuild job. Here is how to do that with AWS CloudFormation:

  RunCodeBuildJobRole:
    Condition: ScheduleRuns
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: StartTask 
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'codebuild:StartBuild'
                Resource:
                  - !GetAtt CodeBuildProject.Arn

  RunCodeBuildJobRoleRule:
    Condition: ScheduleRuns
    Type: AWS::Events::Rule
    Properties:
      Name: !Sub '${JobName}-scheduler'
      Description: Periodically runs codebuild job to archive defunct accounts
      ScheduleExpression: !Ref ScheduleRate
      State: ENABLED
      Targets:
        - Arn: !GetAtt CodeBuildProject.Arn
          Id: CodeBuildProject
          RoleArn: !GetAtt RunCodeBuildJobRole.Arn

Note the ScheduleExpression property of the RunCodeBuildJobRoleRule resource. You can use any supported CloudWatch schedule expression there to set up when or how frequently your job runs.
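For example, either of the following values could be passed as the ScheduleRate parameter; the first runs the job once a day, and the second runs it at 06:00 UTC on weekdays (rate and cron are the two supported expression formats):

rate(1 day)
cron(0 6 ? * MON-FRI *)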

Observability and audit logs

If a support job fails for any reason, people need to know. Luckily, CodeBuild already integrates nicely with CloudWatch to report job statuses, so you can set up another CloudWatch Event rule that tracks failures and alerts someone about it. To make notifications flexible, you can send them to an Amazon SNS topic. You can then subscribe for email notifications or forward those alerts somewhere else easily. The following wires up notifications with an AWS CloudFormation template.

  SnsPublishRole:
    Condition: CreateSNSNotifications
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: AllowLogs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'SNS:Publish'
                Resource:
                  - !Ref SnsTopicArn

  CodeBuildNotificationRule:
    Condition: CreateSNSNotifications
    Type: AWS::Events::Rule
    Properties:
      Name: !Sub '${JobName}-fail-notification'
      Description: Notify about codebuild project failures
      RoleArn: !GetAtt SnsPublishRole.Arn
      EventPattern:
        source:
          - "aws.codebuild"
        detail-type:
          - "CodeBuild Build State Change"
        detail:
          build-status:
            - "FAILED"
            - "STOPPED"
          project-name:
            - !Ref CodeBuildProject
      State: ENABLED
      Targets:
        - Arn: !Ref SnsTopicArn
          Id: NotificationTopic

Another option for keeping the execution of your tasks under control is to generate a report using the test report functionality introduced a few months ago, by specifying in the buildspec.yml file the location of the files that store the results you want to include in your report.

Testing administrative tasks

Note the build-status list inside the CodeBuildNotificationRule resource. This defines a list of statuses about which you want to publish alerts. In the previous snippet, the list does not include successful runs. That’s because it’s usually not necessary to take any action when a support job runs successfully. However, during initial testing you may want to add IN_PROGRESS (notify when a task starts) and SUCCEEDED (notify when the job ends without an error).

Finally, one of the biggest challenges when moving scripts from an operations machine to running in CodeBuild is to create the right IAM policies. Command-line users on operations machines usually have a wide set of privileges, and identifying the minimum required for a specific job usually involves starting small, then iterating over failed attempts and opening up required operations. Running that process directly through CodeBuild can be quite slow. Instead, I suggest setting up a separate IAM policy for the job, then assigning it both to the role for the CodeBuild task, and to a command-line role or a command-line user. You can then iterate quickly directly on the command line and identify all required IAM operations, then remove the additional command-line user when done.
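A sketch of that workflow with the AWS CLI, where the policy, role, and user names are placeholders:

# attach the same customer-managed policy to the CodeBuild role and to a temporary CLI user
$ aws iam attach-role-policy --role-name <codebuild-job-role> --policy-arn arn:aws:iam::<account-id>:policy/<job-policy>
$ aws iam attach-user-policy --user-name <cli-test-user> --policy-arn arn:aws:iam::<account-id>:policy/<job-policy>

# iterate on the policy from the command line, then detach it from the temporary user when done
$ aws iam detach-user-policy --user-name <cli-test-user> --policy-arn arn:aws:iam::<account-id>:policy/<job-policy>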

Conclusion

The next time you need to move a support task to the cloud, and you need a rich execution environment, consider using CodeBuild, at least as the initial step towards a more systematic solution. It will allow you to quickly get a script up and running with all the benefits of IAM isolation, scheduled execution, and reliable notifications.

Gojko is author of the Running Serverless book and interactive course. He is currently working on Video Puppet, a tool for editing videos as easily as editing text. You can reach out to him on Twitter.

Providing self-service repositories to end users to connect to AWS Lambda backed services

Post Syndicated from Richard Rustean original https://aws.amazon.com/blogs/devops/providing-self-service-repositories-to-end-users-to-connect-to-aws-lambda-backed-services/

Offering products to your consumers in AWS is a great way to accelerate adoption, and offering these products through AWS Service Catalog helps to simplify and streamline the process. This blog post describes how you can offer multiple consumers access to your backend products in AWS by using some simple AWS tools and services.

In this case, the backend product uploads newly created or modified objects from an AWS CodeCommit repository to a repository-specific path in an Amazon S3 bucket via some logic in an AWS Lambda function. This method works equally well with any other backend AWS service and is particularly useful for CI/CD or machine learning pipelines in which some logic is required before the pipeline processes the files. In a recent project, I used this method to push machine learning models to dynamically created Amazon EMR clusters.

Overview

The architecture behind the customer-facing portion of this solution is relatively simple, using only three AWS services. As discussed in the summary, the backend architecture uses a single Lambda function to push objects to Amazon S3. In reality, this could be a much larger and more complex solution.

Architecture diagram showing that we only need three AWS Services for this example

Getting started

This example deploys all components of this infrastructure as code using AWS CloudFormation. The AWS CloudFormation templates are deployed using the AWS CLI. You can deploy them using the AWS Management Console if you prefer, but that is not covered in this blog post.

Prerequisites

This post assumes that you have an AWS account in place with permissions to allow the following:

  • Access to create AWS Lambda functions
  • Access to create AWS CodeCommit repositories and push to them
  • Access to create AWS Service Catalog products
  • Access to create and subscribe to Amazon SNS topics
  • The AWS CLI installed and configured with the above access to your AWS account
  • An Amazon S3 bucket

You should download the AWS CloudFormation templates for this project, unzip them, and store them in a local folder.

Deploying the backend service

In the source code for this blog post, find an AWS CloudFormation template called backend-function.yml. This is the backend service with which you interact. When you create your repository through AWS Service Catalog, you specify this backend service as an input, which allows your single AWS Service Catalog product to serve many different backend products.

  1. Download the backend AWS CloudFormation templates as discussed in the Prerequisites section, unzip them, and place them in a folder on your local computer
  2. Navigate to that folder and run the following AWS CLI command. In this command, you assume that you act on commit to the master branch of your repository. If this is not the case, change the codeCommitBranch key to the branch on which you are acting. You should also replace the value <myS3Bucket> with the correct name for your Amazon S3 bucket.
    aws cloudformation create-stack --stack-name myBackendFunction --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM CAPABILITY_IAM --template-body file://backend-function.yml --parameters ParameterKey=codeCommitBranch,ParameterValue=master ParameterKey=s3BucketName,ParameterValue=<myS3Bucket>
    This returns a stack ID such as the following:
    {
    "StackId": "arn:aws:cloudformation:eu-west-1:737661087350:stack/myBackendFunction/c0d04af0-f98a-11e9-8f65-06c34fd08df4"
    }
  3. You can check on the progress of the AWS CloudFormation stack creation by running the following command and looking at the StackStatus.
    aws cloudformation describe-stacks --stack-name "<StackId from the above command>"
    Once your status is set to CREATE_COMPLETE, you can continue to the next step.
  4.  Looking at the output from the aws cloudformation describe-stacks command, you should also note down the ExportName in the Outputs section. This is the value that you use when provisioning the CodeCommit repositories so that they connect to this specific backend product. In this case, the name is myBackendFunction-BackendLambdaCode.

Deploying the Service Catalog product

In the folder that you downloaded and unzipped the project files into, find an AWS CloudFormation template called service-catalog-product.yml. This is the code that creates the service catalog product for your consumers and contains the CodeCommit repository that they use. It does this by calling another AWS CloudFormation template that you upload to your Amazon S3 bucket.

  1. In the folder into which you downloaded and unzipped the project files, find an AWS CloudFormation template called create-backend-linked-repository.yml. You need to upload this to the Amazon S3 bucket you created. In practice, this is on a secured bucket owned by your infrastructure team, but in this example, place it on the same bucket to which your backend function is writing. Upload it using the following AWS CLI command, where <myS3Bucket> is the name of your Amazon S3 bucket
    aws s3 cp create-backend-linked-repository.yml s3://<myS3Bucket>
  2.  In the folder into which you downloaded and unzipped the project files, find the file named service-catalog-product.yml.
  3. Navigate to the local folder with the files you downloaded and run the following AWS CLI command. You should replace the value <myS3Bucket> with the correct name for your Amazon S3 bucket, and replace the value <permissionsArn> with the full ARN of a user, group, or role that needs to be able to deploy the repositories from the AWS Service Catalog.
    aws cloudformation create-stack --stack-name myServicCatalogProduct --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM CAPABILITY_IAM --template-body file://service-catalog-product.yml --parameters ParameterKey=s3BucketName,ParameterValue=<myS3Bucket> ParameterKey=permissionsArn,ParameterValue=<permissionsArn>
    This returns a stack ID such as the following:
    {
    "StackId": "arn:aws:cloudformation:eu-west-1:737661087350:stack/myServicCatalogProduct/dcb48f80-f988-11e9-8199-0637bdb794d0"
    }
  4.  You can check on the progress of the AWS CloudFormation stack creation by running the following command and looking at the StackStatus:
    aws cloudformation describe-stacks --stack-name "<StackId from the above command>"

Once your status is set to CREATE_COMPLETE, you can continue to the next step.

Deploying the AWS CodeCommit repository as a user

Now that you have deployed the infrastructure around this AWS Service Catalog product, you can deploy the actual repository just as a user would. You do this from the AWS Service Catalog page in the AWS console.

  1. Open the AWS Service Catalog page and navigate to the product lists. You should see the product you just created, called CodeCommit Repository for Demo. Choose the product name, and then choose Launch Product.
  2. Give the product a name and choose Next.
  3. Enter the details into the Parameters page. You can leave the default values in there for this example or change the values to something more meaningful. The parameter for backendFunction should be the name of the backend function. This is the ExportName that you noted down in Step 4 in the Deploying the backend service section of this blog (in this case it is myBackendFunction-BackendLambdaCode).
  4. Enter any tags that you want to use and then choose Next.
  5. Leave the checkbox unselected in the Notifications section and choose Next.
  6.  Choose Launch to create your new repository.

Uploading content to the AWS CodeCommit Repository

Note that, in the AWS CodeCommit console, you have created a new repository. You can now choose the Clone URL links (either HTTPS or SSH) and connect from your favorite Git client, as shown in the following screenshot.

View of the CodeCommit Repository that was created in the previous step

If you prefer, you can also use the AWS CodeCommit user interface to add and update your files, as shown in the following screenshot.

Adding files directly to the Repository using the AWS CodeCommit UI

Once you commit to the master branch, you can see your files in the Amazon S3 bucket you referenced for this project, which validates that your integration has worked.
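One quick way to confirm this, assuming the bucket and repository names you chose earlier, is to list the folder with the AWS CLI:

$ aws s3 ls s3://<myS3Bucket>/<repository-name>/ --recursive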

Cleaning up your environment

There are three steps to cleaning up your environment after deploying this infrastructure. First, remove any AWS CodeCommit repositories that you provisioned using AWS Service Catalog. Then remove the infrastructure AWS CloudFormation templates that you deployed. Finally, remove from Amazon S3 any data that was uploaded there by pushes to your AWS CodeCommit repository.

Since the end user created the AWS CodeCommit repository via AWS Service Catalog, they should remove these repositories in the same way.

  1. Open the AWS Service Catalog page and navigate to the Provisioned product list. You should see the repository that you created earlier. Choose the three dots to the left of the product and select Terminate provisioned product.
  2. Choose Terminate in the warning window that appears.
  3. After a few minutes, choose the refresh button; the provisioned product disappears.

Snip showing how to terminate an AWS Service Catalog provisioned products

Now that we have cleaned up our repositories, we need to remove the AWS CloudFormation stacks that contain all of the logic. Since we deployed these using the AWS CLI, we will remove them in the same way.

  1. You should first remove the Service Catalog stack by running the command:
    aws cloudformation delete-stack --stack-name myServicCatalogProduct
  2. You can check on the progress of the AWS CloudFormation stack deletion by running the following command and looking at the StackStatus:
    aws cloudformation list-stacks
    When this stack shows a StackStatus of DELETE_COMPLETE then it has been successfully removed and you can move onto the next step.
  3. Next you need to remove the backend stack. You can do this by running the following command:
    aws cloudformation delete-stack --stack-name myBackendFunction
  4. You can check on the progress of the AWS CloudFormation stack deletion by running the following command and looking at the StackStatus:
    aws cloudformation list-stacks
    When this stack shows a StackStatus of DELETE_COMPLETE then it has been successfully removed and you can move onto the next step.

Finally, remove any unwanted test data from the Amazon S3 bucket that you chose as a target for the repository. All objects are in a folder with the same name as the repository, and you can now remove this whole folder. Ensure that any data being removed is no longer required before deleting it.
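If you are sure the whole folder can go, a single AWS CLI command removes it; the bucket and repository names are the ones you used earlier:

$ aws s3 rm s3://<myS3Bucket>/<repository-name>/ --recursive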

Conclusion

In this blog post, you used AWS CloudFormation and AWS CLI to deploy an AWS Service Catalog product and associated a backend Lambda function to move files from a CodeCommit repository to an Amazon S3 bucket. As previously discussed, this is a simple use case for what you can do using this type of infrastructure. By changing the Lambda function to match your requirements, you can use the same infrastructure for practically anything.

 

Deploying an ASP.NET Core web application to Amazon ECS using an Azure DevOps pipeline

Post Syndicated from John Formento original https://aws.amazon.com/blogs/devops/deploying-a-asp-net-core-web-application-to-amazon-ecs-using-an-azure-devops-pipeline/

For .NET developers, leveraging Team Foundation Server (TFS) has been the cornerstone for CI/CD over the years. As more and more .NET developers start to deploy onto AWS, they have been asking questions about using the same tools to deploy to the AWS cloud. By configuring a pipeline in Azure DevOps to deploy to the AWS cloud, you can easily use familiar Microsoft development tools to build great applications.

Solution overview

This blog post demonstrates how to create a simple Azure DevOps project, repository, and pipeline to deploy an ASP.NET Core web application to Amazon ECS using Azure DevOps. The following screenshot shows a high-level architecture diagram of the pipeline:

 

Solution Architecture Diagram

In this example, you perform the following steps:

  1. Create an Azure DevOps Project, clone project repo, and push ASP.NET Core web application.
  2. Create a pipeline in Azure DevOps
  3. Build an Amazon ECS Cluster, Task and Service.
  4. Kick off deployment of the ASP.NET Core web application using the newly created Azure DevOps pipeline.

 

Prerequisites

Ensure you have the following prerequisites set up:

  • An Amazon ECR repository
  • An IAM user with permissions for Amazon ECR and Amazon ECS (the user will need an access key and secret access key)

 

Create an Azure DevOps Project, clone project repo, and push ASP.NET Core web application

Follow these steps to deploy a .NET Core app onto your Amazon ECS cluster using the Azure DevOps (ADO) repository and pipeline:

 

  1. Log in to dev.azure.com and navigate to the marketplace.
  2. Go to Visual Studio, search for “AWS”, and add the AWS Tools for Microsoft Visual Studio Team Services.
  3. Create a project in ADO: Provide a project name and choose Create.
  4. On the Project Summary page, choose Project Settings.
  5. In the Project Settings pane, navigate to the Service Connections page.
  6. Choose Create service connection, select AWS, and choose Next.
  7. Input an Access Key ID and Secret Access Key. (You’ll need an IAM user with permissions for Amazon ECR and Amazon ECS in order to deploy via the Azure DevOps pipeline.) Choose Save.
  8. Choose Repos in the left pane, then Clone in Visual Studio under Clone to your computer.
  9. Create a ASP.NET Core web application in Visual Studio, set the location to locally cloned repository, and check Enable Docker support.
  10. Once you’ve created the new project, perform an initial commit and push to the repository in Azure DevOps.

 

Creating a pipeline in Azure DevOps

Now that you have synced the repository, create a pipeline in Azure DevOps.

  1. Go to the pipeline page within Azure DevOps and choose Create Pipeline.
  2. Choose Use the classic editor.

Pipeline configuration with repository

  3. Select Azure Repos Git for the location of your code and select the repository you created earlier.
  4. On the Choose a Template page, select Docker Container and choose Apply.
  5. Remove the Push an image step.
  6. Add an Amazon ECR Push task by choosing the + symbol next to Agent job 1. You can search for “AWS” in the Add tasks pane to filter for all AWS tasks.

 

Now, configure each task:

  1. Choose the Build an image task and ensure that the action is set to Build an image. Optionally, modify the Image Name to match your naming standards.
  2. Choose the Push Image task and provide the following:
    • Enter a name under Display Name.
    • Select the AWS Credentials that you created in Service Connections.
    • Select the AWS Region.
    • Provide the source image name, which you can find in the settings for the Build an image task.
    • Enter the name of the Amazon ECR repository to which the image is pushed.
  3. Choose Save and queue.

Build an Amazon ECS Cluster, Task Definition, and Service

The goal at this point is to verify that the pipeline builds the Docker image and pushes it to Amazon ECR. Once the Docker image is in Amazon ECR, you can create the Amazon ECS cluster, task definition, and service that use the newly pushed image (a CLI sketch of these steps follows the list below).

  1. Create an Amazon ECS cluster.
  2. Create an Amazon ECS task definition. When you create the task definition and configure the container, use the Amazon ECR URI for the Docker image that was just pushed to Amazon ECR.
  3. Create an Amazon ECS service.
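If you prefer to script these resources rather than use the console, the following is a minimal AWS CLI sketch. The cluster, task definition, service, and file names (my-aspnet-cluster, taskdef.json, my-aspnet-task, my-aspnet-service) are placeholders, as are the subnet and security group IDs; adapt them to your environment.

Bash
# Create the cluster
aws ecs create-cluster --cluster-name my-aspnet-cluster

# Register a task definition whose container definition references the image
# you pushed to Amazon ECR (taskdef.json is a hypothetical local file)
aws ecs register-task-definition --cli-input-json file://taskdef.json

# Create a Fargate service that runs the task definition
aws ecs create-service \
  --cluster my-aspnet-cluster \
  --service-name my-aspnet-service \
  --task-definition my-aspnet-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}"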

Go back and edit the pipeline:

  1. Add the last step by choosing the + symbol next to Agent job 1.
  2. Search for “AWS CLI” in the search bar and add the task.
  3. Choose AWS CLI and configure the task.
  4. Enter a name under Display Name, such as Update ECS Service.
  5. Select the AWS Credentials that you created in Service Connections.
  6. Select the AWS Region.
  7. Input the following command, which updates the Amazon ECS service after a new image is pushed to Amazon ECR. Replace <clustername> and <servicename> with your Amazon ECS cluster and service names. (The assembled command is shown after this list.)
    • Command: ecs
    • Subcommand: update-service
    • Options and parameters: --cluster <clustername> --service <servicename> --force-new-deployment
  8. Now choose the Triggers tab and select Enable continuous integration with the repository you created.
  9. Choose Save and queue.
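For reference, the command that the AWS CLI task assembles from these fields looks like the following; <clustername> and <servicename> remain placeholders for your own resources.

Bash
# Force a new deployment so the service picks up the image just pushed to Amazon ECR
aws ecs update-service \
  --cluster <clustername> \
  --service <servicename> \
  --force-new-deployment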

 

At this point, your build pipeline kicks off and builds a Docker image from the source code in the repository you created, pushes the image to Amazon ECR, and updates the Amazon ECS service with the new image.

You can verify this by viewing the build: choose Pipelines in Azure DevOps, select the entry for the latest run, and then choose the icon under the Status column. Once the run completes successfully, you can log in to the AWS Management Console and view the updated image in Amazon ECR and the updated service in Amazon ECS.
Pipeline status page Azure DevOps

Every time you commit and push code through Visual Studio, the pipeline starts, builds the Docker image, and deploys your application to Amazon ECS.

Cleanup

Once you’ve completed all the steps and finished testing, follow these steps to stop or delete resources so you don’t incur ongoing costs (a CLI sketch for removing the service and cluster entirely follows the list):

  1. Go to the Amazon ECS console within the AWS Console.
  2. Navigate to the cluster you created, then choose the Tasks tab.
  3. Choose Stop all to turn off the tasks.
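If you want to remove the service and cluster entirely instead of just stopping the tasks, a CLI sketch is shown below; <clustername> and <servicename> are placeholders.

Bash
# Scale the service to zero tasks, then delete the service and the cluster
aws ecs update-service --cluster <clustername> --service <servicename> --desired-count 0
aws ecs delete-service --cluster <clustername> --service <servicename>
aws ecs delete-cluster --cluster <clustername>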

Conclusion

This blog post reviewed how to create a CI/CD pipeline in Azure DevOps that deploys a Docker image to Amazon ECR and a container to Amazon ECS. It walked through setting up a basic CI/CD pipeline with tools familiar to .NET developers and the steps needed to integrate with Amazon ECR and Amazon ECS.

I hope this post was informative and has helped you learn the basics of how to integrate Amazon ECR and Amazon ECS with Azure DevOps to create a robust CI/CD pipeline.

About the Authors

John Formento

 

 

John Formento is a Solution Architect at Amazon Web Services. He helps large enterprises achieve their goals by architecting secure and scalable solutions on the AWS Cloud.

Enhancing automated database continuous integration with AWS CodeBuild and Amazon RDS Database Snapshot

Post Syndicated from bobyeh original https://aws.amazon.com/blogs/devops/enhancing-automated-database-continuous-integration-with-aws-codebuild-and-amazon-rds-database-snapshot/

For major integration merges, it’s sometimes necessary to verify the changes against existing online data. Inspecting the changes against a cloned database gives you confidence before deploying to the production database. This post demonstrates how to use AWS CodeBuild and Amazon RDS database snapshots to verify your code revisions in both the application layer and the underlying data layer, ensuring that your existing data works seamlessly with your revised code.

Making code revisions with continuous integration requires periodic verification to ensure that your new deliverable works functionally and reliably. It’s easy to focus solely on the surface-level changes made to the application layer, but it’s important to inspect the changes made to the underlying data layer too.

From the application layer, users modify the data model for different reasons, and any data model change in the application layer maps to a schema change in the database. For services backed by a relational database (RDBMS), you might perform data definition language (DDL) operations directly against the database schema or rely on an object-relational mapping (ORM) library to migrate the schema to fit the application revision. These schema changes (CREATE, DROP, ALTER, TRUNCATE, and so on) can be critical, especially for services serving real customers.

Performing proper verification and simulation for these changes mitigates the risk of bringing down services. After the changes are applied, fundamental operation testing (CRUD: CREATE, READ, UPDATE, DELETE) against the data models is mandatory; this translates to data manipulation language (DML) operations (INSERT, SELECT, UPDATE, DELETE, and so on). Only after all of these steps pass should you move on to the deployment stage.

About this page

  • Time to read: 6 minutes
  • Time to complete: 30 minutes
  • Cost to complete (estimated): Less than $1 for a 1-GB database snapshot and restored instance
  • Learning level: Advanced (300)
  • Services used: AWS CodeBuild, IAM, Amazon RDS

Solution overview

This example uses a buildspec file in CodeBuild. You set up a build project that points to a source control repository containing that buildspec file. The CodeBuild runtime environment restores the database server from an RDS snapshot; in this example, we restore the snapshot to an Amazon Aurora cluster through the AWS Command Line Interface (AWS CLI). After the database is restored, the build process runs your integration process, which is represented by mock commands in the buildspec definition. After the verification stage, CodeBuild deletes the restored database.

 

Architecture diagram showing an overview of how we use CodeBuild to restore a database snapshot to verify and validate the new database schema change.

Prerequisites

The following components are required to implement this example:

  • An AWS account with permissions to create CodeBuild projects and IAM policies
  • An existing Amazon RDS or Amazon Aurora database snapshot in the Region where you build
  • A source code repository supported by CodeBuild (for example, CodeCommit or GitHub) to hold the buildspec file

Walkthrough

Follow these steps to execute the solution.

Prepare your build specification file

Before you begin, prepare your CodeBuild build specification file with the following information:

  • db-cluster-identifier-prefix
  • db-snapshot-identifier
  • region-ID
  • account-ID
  • vpc-security-group-id

The db-cluster-identifier-prefix is used to name the temporary database cluster, with a timestamp appended. Make sure that this value does not overlap with any existing databases. The db-snapshot-identifier points to the snapshot you want to run your application against. region-ID and account-ID identify the Region and account in which you are running. The vpc-security-group-id indicates the security group used by both the CodeBuild environment and the temporary database.

YAML
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - pip3 install awscli --upgrade --user
      - export DATE=`date +%Y%m%d%H%M`
      - export DBIDENTIFIER=db-cluster-identifier-prefix-$DATE
      - echo $DBIDENTIFIER
      - aws rds restore-db-cluster-from-snapshot --snapshot-identifier arn:aws:rds:region-ID:account-ID:cluster-snapshot:db-snapshot-identifier --vpc-security-group-ids vpc-security-group-id --db-cluster-identifier $DBIDENTIFIER --engine aurora
      - while [ $(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER | grep -c available) -eq 0 ]; do echo "sleep 60s"; sleep 60; done
      - echo "Temp db ready"
      - export ENDPOINT=$(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER | grep "\"Endpoint\"" | grep -v "\-ro\-" | awk -F '\"' '{print $4}')
      - echo $ENDPOINT
  build:
    commands:
      - echo Build started on `date`
      - echo proceed db connection to $ENDPOINT
      - echo proceed db migrate update, DDL proceed here
      - echo proceed application test, CRUD test run here
  post_build:
    commands:
      - echo Build completed on `date`
      - echo $DBIDENTIFIER
      - aws rds delete-db-cluster --db-cluster-identifier $DBIDENTIFIER --skip-final-snapshot

 

After you finish editing the file, name it buildspec.yml. Save it in the root directory of the repository you plan to build, then commit the file into your code repository.

  1. Open the CodeBuild console.
  2. Choose Create build project.
  3. In Project Configuration, enter the name and description for the build project.
  4. In Source, select the source provider for your code repository.
  5. In Environment image, choose Managed image, Ubuntu, and the latest runtime version.
  6. Choose the appropriate service role for your project.
  7. In the Additional configuration menu, select the VPC with your Amazon RDS database snapshots, as shown in the following screenshot, and then select Validate VPC Settings. For more information, see Use CodeBuild with Amazon Virtual Private Cloud.
  8. In Security Groups, select the security group needed for the CodeBuild environment to access your temporary database.
  9. In Build Specifications, select Use a buildspec file. (If you prefer to script the project creation, an AWS CLI sketch follows the screenshot below.)

CodeBuild Project Additional Configuration - VPC
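If you prefer to script the build project instead of using the console, the following is a rough AWS CLI sketch. The project name, repository URL, build image tag, service role ARN, and VPC, subnet, and security group IDs are all placeholders; match them to your own environment and to the console settings described above.

Bash
# Create the CodeBuild project (all identifiers below are placeholders)
aws codebuild create-project \
  --name db-snapshot-ci \
  --source '{"type": "GITHUB", "location": "https://github.com/<owner>/<repo>.git"}' \
  --artifacts '{"type": "NO_ARTIFACTS"}' \
  --environment '{"type": "LINUX_CONTAINER", "image": "aws/codebuild/standard:4.0", "computeType": "BUILD_GENERAL1_SMALL"}' \
  --service-role <codebuild-service-role-arn> \
  --vpc-config '{"vpcId": "vpc-xxxxxxxx", "subnets": ["subnet-xxxxxxxx"], "securityGroupIds": ["sg-xxxxxxxx"]}'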

Grant permission for the build project

Follow these steps to grant permission.

  1. Navigate to the IAM console and choose Policies.
  2. Choose Create policy and select the JSON tab. To give CodeBuild access to the Amazon RDS resources in the pre_build stage, you must grant the RestoreDBClusterFromSnapshot and DeleteDBCluster actions. Follow the least-privilege guideline and limit the DeleteDBCluster action to “arn:aws:rds:*:*:cluster:db-cluster-identifier-*”.
  3. Copy the following code and paste it into your policy:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "rds:RestoreDBClusterFromSnapshot",
          "Resource": "*"
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": "rds:DeleteDBCluster*",
          "Resource": "arn:aws:rds:*:*:cluster:db-cluster-identifier-*"
        }
      ]
    }
  4. Choose Review Policy.
  5. Enter a Name and Description for this policy, then choose Create Policy.
  6. After the policy is ready, attach it to your CodeBuild service role, as shown in the following screenshot. (You can also create and attach the policy from the AWS CLI, as sketched after the screenshot.)

Attach created policy to IAM role
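As a scripted alternative, a minimal sketch of creating and attaching the policy with the AWS CLI follows; the policy file name, policy name, role name, and account ID are placeholders.

Bash
# Create the policy from the JSON document above, saved locally as rds-restore-policy.json
aws iam create-policy \
  --policy-name codebuild-rds-snapshot-restore \
  --policy-document file://rds-restore-policy.json

# Attach the policy to the CodeBuild service role (role name and account ID are placeholders)
aws iam attach-role-policy \
  --role-name <codebuild-service-role-name> \
  --policy-arn arn:aws:iam::<account-ID>:policy/codebuild-rds-snapshot-restore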

Use database snapshot restore to launch the build process

  1. Navigate back to CodeBuild and locate the project you just created.
  2. Set an appropriate timeout and make sure the build points to the correct branch of your repository.
  3. Choose Start Build.
  4. Open the Build Log to view the database cluster being restored from your snapshot in the pre_build stage, as shown in the following screenshot.
CodeBuild Project Build Log - pre_build stage
  5. In the build stage, use $ENDPOINT to point your application to this temporary database, as shown in the following screenshot.
CodeBuild Project Build Log - build stage
  6. In the post_build stage, the restored cluster is deleted, as shown in the following screenshot.
CodeBuild Project Build Log - post_build stage

Test your database schema change

After you set up this pipeline, you can begin to test database schema changes alongside your application code. This example defines several steps in the build specification file to migrate the schema and then run the latest application code, so you can verify that the application’s modifications work correctly against the existing data (a more concrete sketch of these commands follows the snippet below).

YAML
build:
  commands:
    - echo Build started on `date`
    - echo proceed db connection to $ENDPOINT
    # run a script to apply your latest schema change
    - echo proceed db migrate update
    # start the latest code, and run your own testing
    - echo proceed application test

After validation

After you validate the database schema change using the preceding steps, choose a deployment strategy for production that aligns with your business goals and release criteria.

Cleaning up

To avoid incurring future charges, delete the resources using the following steps (you can also delete the project from the CLI, as sketched after this list):

  1. Open the CodeBuild console.
  2. Choose the project you created for this test.
  3. Choose Delete build project and enter delete to confirm deletion.
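You can also delete the project from the AWS CLI; the project name below is a placeholder.

Bash
# Delete the CodeBuild project (use your own project name)
aws codebuild delete-project --name db-snapshot-ci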

Conclusion

In this post, you created a mechanism that sets up a temporary database and limits access to it from the build runtime. The temporary database is standalone and isolated, so this mechanism also helps you keep tight permission control around the database snapshot and avoid touching any existing environment. The approach works with any database engine available on Amazon RDS, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. This gives you a way to validate critical events, such as major production database schema changes or data format changes required by business decisions, without impacting any existing environments.