Tag Archives: Developer Tools

Standardizing CI/CD pipelines for .NET web applications with AWS Service Catalog

Post Syndicated from Borja Prado Miguelez original https://aws.amazon.com/blogs/devops/standardizing-cicd-pipelines-net-web-applications-aws-service-catalog/

As companies implement DevOps practices, standardizing the deployment of continuous integration and continuous deployment (CI/CD) pipelines is increasingly important. Your development teams may not have the ability or time to create their own CI/CD pipelines and processes from scratch for each new project. Additionally, a standardized DevOps process helps your entire company ensure that all development teams follow security and governance best practices.

Another challenge that IT departments in both large enterprises and small organizations deal with is managing their software portfolio. This becomes even harder in agile scenarios with mobile and web applications, where you need to not only provision the cloud resources for hosting the application, but also have a proper DevOps process in place.

Having a standardized portfolio of products for your development teams enables you to provision the infrastructure resources needed to create development environments, helps reduce operational overhead, and accelerates the overall development process.

This post shows you how to provide your end users with a catalog of resources that offers all the functionality a development team needs to check in code and run it in a highly scalable, load-balanced cloud compute environment.

We use AWS Service Catalog to provision a cloud-based AWS Cloud9 IDE, a CI/CD pipeline using AWS CodePipeline, and the AWS Elastic Beanstalk compute service to run the website. AWS Service Catalog allows organizations to keep control of the services and products that can be provisioned across the organization's AWS accounts, and CodePipeline orchestrates the application deployment so that an effective software delivery process is in place. The following diagram illustrates this architecture.

Architecture Diagram

You can find all the templates we use in this post on the AWS Service Catalog Elastic Beanstalk Reference architecture GitHub repo.

Provisioning the AWS Service Catalog portfolio

To get started, you must provision the AWS Service Catalog portfolio with AWS CloudFormation.

  1. Choose Launch Stack, which creates the AWS Service Catalog portfolio in your AWS account.
    Launch Stack action
  2. If you’re signed into AWS as an AWS Identity and Access Management (IAM) role, add your role name in the LinkedRole1 parameter.
  3. Continue through the stack launch screens using the defaults and choosing Next.
  4. Select the acknowledgements in the Capabilities box on the third screen.

When the launch is complete, you have a top-level CloudFormation stack with the default name SC-RA-Beanstalk-Portfolio, which contains five nested stacks. These stacks create the AWS Service Catalog products with the services the development team needs to implement a CI/CD pipeline and host the web application. This AWS Service Catalog reference architecture provisions the AWS Service Catalog products needed to set up the DevOps pipeline and the application environment.

Cloudformation Portfolio Stack

When the portfolio has been created, you have completed the administrator setup. As an end-user (any roles you added to the LinkedRole1 or LinkedRole2 parameters), you can access the portfolio section on the AWS Service Catalog console and review the product list, which now includes the AWS Cloud9 IDE, Elastic Beanstalk application, and CodePipeline project that we will use for continuous delivery.

Service Catalog Products

In the administrator section of the AWS Service Catalog console, inside the Elastic Beanstalk reference architecture portfolio, we can add and remove groups, roles, and users by choosing Add groups, roles, users on the Groups, roles, and users tab. This lets us enable developers or other users to deploy the products from this portfolio.

Service Catalog Groups, Roles, and Users

Solution overview

The rest of this post walks you through how to provision the resources you need for CI/CD and web application deployment. You complete the following steps:

  1. Deploy the CI/CD pipeline.
  2. Provision the AWS Cloud9 IDE.
  3. Create the Elastic Beanstalk environment.

Deploying the CI/CD pipeline

The first product you need is the CI/CD pipeline, which manages the code and deployment process.

  1. Sign in to the AWS Service Catalog console in the same Region where you launched the CloudFormation stack earlier.
  2. On the Products list page, locate the CodePipeline product you created earlier.
    Service Catalog Products List
  3. Choose Launch product.

You now provision the CI/CD pipeline. For this post, we use example names for the pipeline name, Elastic Beanstalk application name, and code repository, which you can of course modify.

  1. Enter a name for the provisioned CodePipeline product.
  2. Select the Windows version and click Next.
  3. For the application and repository name, enter dotnetapp.
  4. Leave all other settings at their defaults and click Next.
  5. Choose Launch to start the provisioning of the CodePipeline product.

When you’re finished, the provisioned pipeline should appear on the Provisioned products list.

CodePipeline Product Provisioned

  1. Copy the CloneUrlHttp output to use in a later step.

You now have the CI/CD pipeline ready, with the code repository and the continuous integration service that compiles the code, runs tests, and generates the software bundle stored in Amazon Simple Storage Service (Amazon S3) ready to be deployed. The following diagram illustrates this architecture.

CodePipeline Configuration Diagram

When the Elastic Beanstalk environment is provisioned, the deploy stage takes care of deploying the application bundle stored in Amazon S3, so the DevOps pipeline covers the full software delivery, as shown in the earlier architecture diagram.

The Region we use should support the WINDOWS_SERVER_2019_CONTAINER build image that AWS CodeBuild uses. You can modify the environment type or create a custom one by editing the CloudFormation template used for the CodePipeline for Windows.
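
If you prefer to script these launches instead of using the console, AWS Service Catalog products can also be provisioned programmatically. The following is a minimal, hypothetical boto3 sketch; the product name, provisioning artifact lookup, and parameter keys are illustrative and must be replaced with the values from your own portfolio.

import boto3

servicecatalog = boto3.client("servicecatalog")

# Look up the pipeline product by name; the name here is illustrative and should match
# the CodePipeline product that the portfolio stack created in your account.
product = servicecatalog.describe_product(Name="CodePipeline for Windows")
product_id = product["ProductViewSummary"]["ProductId"]
artifact_id = product["ProvisioningArtifacts"][-1]["Id"]  # most recent provisioning artifact

# Equivalent of choosing "Launch product" in the console; parameter keys are hypothetical
# and must match the parameters defined by the product's CloudFormation template.
servicecatalog.provision_product(
    ProductId=product_id,
    ProvisioningArtifactId=artifact_id,
    ProvisionedProductName="dotnetapp-pipeline",
    ProvisioningParameters=[
        {"Key": "ApplicationName", "Value": "dotnetapp"},
        {"Key": "RepositoryName", "Value": "dotnetapp"},
    ],
)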

Provisioning the AWS Cloud9 IDE

To show the full lifecycle of the deployment of a web application with Elastic Beanstalk, we use a .NET web application, but this reference architecture also supports Linux. To provision an AWS Cloud9 environment, complete the following steps:

  1. From the AWS Service Catalog product list, choose the AWS Cloud9 IDE.
  2. Click Launch product.
  3. Enter a name for the provisioned Cloud9 product and click Next.
  4. Enter an EnvironmentName and select the InstanceType.
  5. Set LinkedRepoPath to /dotnetapp.
  6. For LinkedRepoCloneUrl, enter the CloneUrlHttp from the previous step.
  7. Leave the default parameters for tagOptions and Notifications, and click Launch.
    Cloud9 Environment Settings

Now we download a sample ASP.NET MVC application in the AWS Cloud9 IDE, move it under the folder we specified in the previous step, and push the code.

  1. Open the IDE with the Cloud9Url link from AWS Service Catalog output.
  2. Get the sample .NET web application and move it under the dotnetapp folder. See the following code:
    cp -R aws-service-catalog-reference-architectures/labs/SampleDotNetApplication/* dotnetapp/
  3. Check in the sample application to the CodeCommit repo:
    cd dotnetapp
    git add --all
    git commit -m "initial commit"
    git push

Now that we have committed the application to the code repository, it’s time to review the DevOps pipeline.

  1. On the CodePipeline console, choose Pipelines.

You should see the pipeline ElasticBeanstalk-ProductPipeline-dotnetapp running.

CodePipeline Execution

  1. Wait until the three pipeline stages are complete; this may take several minutes.

The code commitment and web application build stages are successful, but the code deployment stage fails because we haven’t provisioned the Elastic Beanstalk environment yet.

If you want to deploy your own sample or custom ASP.NET web application, CodeBuild requires the build specification file buildspec-build-dotnet.yml for the .NET Framework, which is located under the elasticbeanstalk/codepipeline folder in the GitHub repo. See the following example code:

version: 0.2
env:
  variables:
    DOTNET_FRAMEWORK: 4.6.1
phases:
  build:
    commands:
      - nuget restore
      - msbuild /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release /p:DeployIisAppPath="Default Web Site" /t:Package
      - dir obj\Release\Package
artifacts:
  files:
    - 'obj/**/*'
    - 'codepipeline/*'

Creating the Elastic Beanstalk environment

Finally, it’s time to provision the hosting system, an Elastic Beanstalk Windows-based environment, where the .NET sample web application runs. For this, we follow the same approach from the previous steps and provision the Elastic Beanstalk AWS Service Catalog product.

  1. On the AWS Service Catalog console, on the Product list page, choose the Elastic Beanstalk application product.
  2. Choose Launch product.
  3. Enter an environment name and click Next.
  4. Enter the application name.
  5. Enter the S3Bucket and S3SourceBundle that were generated (you can retrieve them from the Amazon S3 console).
  6. Set the SolutionStackName to 64bit Windows Server Core 2019 v2.5.8 running IIS 10.0. Check the Elastic Beanstalk supported platforms documentation for up-to-date platform names.
    Elastic Beanstalk Environment Settings
  7. Launch the product.
  8. To verify that you followed the steps correctly, review that the provisioned products are all available (AWS Cloud9 IDE, Elastic Beanstalk CodePipeline project, and Elastic Beanstalk application) and that the recently created Elastic Beanstalk environment is healthy.

As in the previous step, if you're planning to deploy your own sample or custom ASP.NET web application, CodeBuild requires the deploy specification file buildspec-deploy-dotnet.yml for the .NET Framework, which should be located under the codepipeline folder in the GitHub repo. See the following code:

version: 0.2
phases:
  pre_build:
    commands:          
      - echo application deploy started on `date`      
      - ls -l
      - ls -l obj/Release/Package
      - aws s3 cp ./obj/Release/Package/SampleWebApplication.zip s3://$ARTIFACT_BUCKET/$EB_APPLICATION_NAME-$CODEBUILD_BUILD_NUMBER.zip
  build:
    commands:
      - echo Pushing package to Elastic Beanstalk...      
      - aws elasticbeanstalk create-application-version --application-name $EB_APPLICATION_NAME --version-label v$CODEBUILD_BUILD_NUMBER --description "Auto deployed from CodeCommit build $CODEBUILD_BUILD_NUMBER" --source-bundle S3Bucket="$ARTIFACT_BUCKET",S3Key="$EB_APPLICATION_NAME-$CODEBUILD_BUILD_NUMBER.zip"
      - aws elasticbeanstalk update-environment --environment-name "EB-ENV-$EB_APPLICATION_NAME" --version-label v$CODEBUILD_BUILD_NUMBER

The same codepipeline folder contains build and deploy specification files for frameworks other than .NET, which you can use if you prefer to deploy a web application with Elastic Beanstalk using a different framework, such as Python.

  1. To complete the application deployment, go to the application pipeline and release the change, which triggers the pipeline with the application environment now ready.
    Deployment Succeeded

When you create the environment through the AWS Service Catalog, you can access the provisioned Elastic Beanstalk environment.

  1. In the Events section, locate the LoadBalancerURL, which is the public endpoint that we use to access the website.
    Elastic Beanstalk LoadBalancer URL
  2. In your preferred browser, check that the website has been successfully deployed.
    ASP.NET Sample Web Application

Cleaning up

When you’re finished, you should complete the following steps to delete the resources you provisioned to avoid incurring further charges and keep the account free of unused resources.

  1. The CodePipeline product creates an S3 bucket which you must empty from the S3 console.
  2. On the AWS Service Catalog console, terminate the provisioned products from the Provisioned products list.
  3. As administrator, in the CloudFormation console, delete the stack SC-RA-Beanstalk-Portfolio.

Conclusions

This post has shown you how to deploy a standardized DevOps pipeline, which we then used to manage and deploy a sample .NET application on Elastic Beanstalk using the Service Catalog Elastic Beanstalk reference architecture. AWS Service Catalog is the ideal service for administrators who need to centrally provision and manage the required AWS services with a consistent governance model. Deploying web applications to Elastic Beanstalk is simple for developers and provides built-in scalability, patch maintenance, and self-healing for your applications.

The post includes the information and references on how to extend the solution with other programming languages and operating systems supported by Elastic Beanstalk.

About the Authors

Borja Prado
Borja Prado Miguelez

Borja is a Senior Specialist Solutions Architect for Microsoft workloads at Amazon Web Services. He is passionate about web architectures and application modernization, helping customers to build scalable solutions with .NET and migrate their Windows workloads to AWS.

Chris Chapman

Chris is a Partner Solutions Architect covering AWS Marketplace, Service Catalog, and Control Tower. Chris was a software developer and data engineer for many years and now his core mission is helping customers and partners automate AWS infrastructure deployment and provisioning.

Integrating Jenkins with AWS CodeArtifact to publish and consume Python artifacts

Post Syndicated from Matt Ulinski original https://aws.amazon.com/blogs/devops/using-jenkins-with-codeartifact/

Python packages are used to share and reuse code across projects. Centralized artifact storage allows sharing versioned artifacts across an organization. This post explains how you can set up two Jenkins projects. The first project builds the Python package and publishes it to AWS CodeArtifact using twine (Python utility for publishing packages), and the second project consumes the package using pip and deploys an application to AWS Fargate.

Solution overview

The following diagram illustrates this architecture.

Architecture Diagram

 

The solution consists of two GitHub repositories and two Jenkins projects. The first repository contains the source code of a Python package. Jenkins builds this package and publishes it to a CodeArtifact repository.

The second repository contains the source code of a Python Flask application that has a dependency on the package produced by the first repository. Jenkins builds a Docker image containing the application and its dependencies, pushes the image to an Amazon Elastic Container Registry (Amazon ECR) registry, and deploys it to AWS Fargate using AWS CloudFormation.

Prerequisites

For this walkthrough, you should have the following prerequisites:

To create a new Jenkins server that includes the required dependencies, complete the following steps:

  1. Launch a CloudFormation stack with the following link:
    Launch CloudFormation stack
  2. Choose Next.
  3. Enter the name for your stack.
  4. Select the Amazon Elastic Compute Cloud (Amazon EC2) instance type for your Jenkins server.
  5. Select the subnet and corresponding VPC.
  6. Choose Next.
  7. Scroll down to the bottom of the page and choose Next.
  8. Review the stack configuration and choose Create stack.

AWS CloudFormation creates the following resources:

  • JenkinsInstance – Amazon EC2 instance that Jenkins and its dependencies are installed on
  • JenkinsWaitCondition – CloudFormation wait condition that waits for Jenkins to be fully installed before finishing the deployment
  • JenkinsSecurityGroup – Security group attached to the EC2 instance that allows inbound traffic on port 8080

The stack takes a few minutes to deploy. When it’s fully deployed, you can find the URL and initial password for Jenkins on the Outputs tab of the stack.

CloudFormation outputs tab

Use the initial password to unlock the Jenkins installation, then follow the setup wizard to install the suggested plugins and create a new Jenkins user. After the user is created, the initial password no longer works.

On the Jenkins homepage, complete the following steps:

  1. Choose Manage Jenkins.
  2. Choose Manage Plugins.
  3. On the Available tab, search for “Docker Pipeline” and select it.
    Jenkins plugins available tab
  4. Choose Download now and install after restart.
  5. Select Restart Jenkins when installation is complete and no jobs are running.

Jenkins plugins installation complete

Jenkins is ready to use after it restarts. Log in with the user you created with the setup wizard.

Setting up a CodeArtifact repository

To get started, create a CodeArtifact repository to store the Python packages.

  1. On the CodeArtifact console, choose Create repository.
  2. For Repository name, enter a name (for this post, I use my-repository).
  3. For Public upstream repositories, choose pypi-store.
  4. Choose Next.
    AWS CodeArtifact repository wizard
  5. Choose This AWS account.
  6. If you already have a CodeArtifact domain, choose it from the drop-down menu. If you don’t already have a CodeArtifact domain, choose a name for your domain and the console creates it for you. For this post, I named my domain my-domain.
  7. Choose Next.
  8. Review the repository details and choose Create repository.
    CodeArtifact repository overview

You now have a CodeArtifact repository created, which you use to store and retrieve Python packages used by the application.

Configuring Jenkins: Creating an IAM user

  1. On the IAM console, choose Users.
  2. Choose Add user.
  3. Enter a name for the user (for this post, I used the name Jenkins).
  4. Select Programmatic access as the access type.
  5. Choose Next: Permissions.
  6. Select Attach existing policies directly.
  7. Choose the following policies:
    1. AmazonEC2ContainerRegistryPowerUser – Allows Jenkins to push Docker images to ECR.
    2. AmazonECS_FullAccess – Allows Jenkins to deploy your application to AWS Fargate.
    3. AWSCloudFormationFullAccess – Allows Jenkins to update the CloudFormation stack.
    4. AWSCodeArtifactAdminAccess – Allows Jenkins access to the CodeArtifact repository.
  8. Choose Next: Tags.
  9. Choose Next: Review.
  10. Review the configuration and choose Create user.
  11. Record the Access key ID and Secret access key; you need them to configure Jenkins.

Configuring Jenkins: Adding credentials

After you create your IAM user, you need to set up the credentials in Jenkins.

  1. Open Jenkins.
  2. From the left pane, choose Manage Jenkins.
  3. Choose Manage Credentials.
  4. Hover over the (global) domain and expand the drop-down menu.
  5. Choose Add credentials.
    Jenkins credentials
  6. Enter the following credentials:
    1. Kind – User name with password.
    2. Scope – Global (Jenkins, nodes, items, all child items).
    3. Username – Enter the Access key ID for the Jenkins IAM user.
    4. Password – Enter the Secret access key for the Jenkins IAM user.
    5. ID – Name for the credentials (for this post, I used AWS).
  7. Choose OK.

You use the credentials to make API calls to AWS as part of the builds.

Publishing a Python package

To publish your Python package, complete the following steps:

  1. Create a new GitHub repo to store the source of the sample package.
  2. Clone the sample GitHub repo onto your local machine.
  3. Navigate to the package_src directory.
  4. Place its contents in your GitHub repo.
    Package repository contents

When your GitHub repo is populated with the sample package, you can create the first Jenkins project.

  1. On the Jenkins homepage, choose New Item.
  2. Enter a name for the project; for example, producer.
  3. Choose Freestyle project.
  4. Choose OK.
    Jenkins new project wizard
  5. In the Source Code Management section, choose Git.
  6. Enter the HTTP clone URL of your GitHub repo into the Repository URL field.
  7. To make sure that the workspace is clean before each build, under Additional Behaviors, choose Add and select Clean before checkout.
    Jenkins source code management
  8. To have builds start automatically when a change occurs in the repository, under Build Triggers, select Poll SCM and enter * * * * * in the Schedule field.
    Jenkins build triggers
  9. In the Build Environment section, select Use secret text(s) or file(s).
  10. Choose Add and choose Username and password (separated).
  11. Enter the following information:
    1. Username – AWS_ACCESS_KEY_ID
    2. Password – AWS_SECRET_ACCESS_KEY
    3. Credentials – Select Specific Credentials and, from the drop-down menu, choose the previously created credentials.
      Jenkins credential binding
  12. In the Build section, choose Add build step.
  13. Choose Execute shell.
  14. Enter the following commands, replacing my-domain, my-repository, and my-region with the name of your CodeArtifact domain, repository, and Region:
    python3 setup.py sdist bdist_wheel
    aws codeartifact login --tool twine --domain my-domain --repository my-repository --region my-region
    python3 -m twine upload dist/* --repository codeartifact

    These commands do the following:

    • Build the Python package
    • Run the aws codeartifact login AWS Command Line Interface (AWS CLI) command, which retrieves the access token for CodeArtifact and configures the twine client
    • Use twine to publish the Python package to CodeArtifact
  15. Choose Save.
  16. Start a new build by choosing Build Now in the left pane. After a build starts, it shows in the Build History on the left pane. To view the build's details, choose the build's ID number.
    Jenkins project builds
  17. To view the results of the run commands, from the build details page, choose Console Output.
  18. To see that the package has been successfully published, check the CodeArtifact repository on the console.
    CodeArtifact console showing package

When a change is pushed to the repo, Jenkins will start a new build and attempt to publish the package. CodeArtifact will prevent publishing duplicates of the same package version, failing the Jenkins build.

If you want to publish a new version of the package, you will need to increment the version number.

The sample package uses semantic versioning (major.minor.maintenance). To change the version number, modify the version='1.0.0' value in the setup.py file. You can do this manually before pushing any changes to the repo, or automatically as part of the build process by using the python-semantic-release package or a similar solution.
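
For reference, the version number lives in the package's setup.py. The following minimal sketch is illustrative rather than the exact file from the sample repo; only the name and version fields matter here:

from setuptools import setup, find_packages

# Minimal sketch, not the exact setup.py from the sample repo.
setup(
    name="fantastic-ascii",   # package name referenced by the application's requirements.txt
    version="1.0.0",          # bump this (for example, to 1.0.1) before publishing a new release
    packages=find_packages(),
)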

Consuming a package and deploying an application

After you have a package published, you can use it in an application.

  1. Create a new GitHub repo for this application.
  2. Populate it with the contents of the application_src directory from the sample repo.
    Sample application repository

The version of the sample package used by the application is defined in the requirements.txt file. If you have published a new version of the package and want the application to use it, modify the fantastic-ascii==1.0.0 value in this file.

After the repository is created, you need to deploy the CloudFormation template application.yml. The template creates the following resources:

  • ECRRepository – Amazon ECR repository to store your Docker image.
  • Cluster – Amazon Elastic Container Service (Amazon ECS) cluster that contains the service of your application.
  • TaskDefinition – ECS task definition that defines how your Docker image is deployed.
  • ExecutionRole – IAM role that Amazon ECS uses to pull the Docker image.
  • TaskRole – IAM role provided to the ECS task.
  • ContainerSecurityGroup – Security group that allows outbound traffic to ports 8080 and 80.
  • Service – Amazon ECS service that launches and manages your Docker containers.
  • TargetGroup – Target group used by the Load Balancer to send traffic to Docker containers.
  • Listener – Load Balancer Listener that listens for incoming traffic on port 80.
  • LoadBalancer – Load Balancer that sends traffic to the ECS task.
  1. Choose the following link to create the application’s CloudFormation stack:
    Launch CloudFormation stack
  2. Choose Next.
  3. Enter the following parameters:
    1. Stack name – Name for the CloudFormation stack. For this post, I use the name Consumer.
    2. Container Name – Name for your application (for this post, I use application).
    3. Image Tag – Leave this field blank. Jenkins populates it when you deploy the application.
    4. VPC – Choose a VPC in your account that contains two public subnets.
    5. SubnetA – Choose a public subnet from the previously chosen VPC.
    6. SubnetB – Choose a public subnet from the previously chosen VPC.
  4. Choose Next.
  5. Scroll down to the bottom of the page and choose Next.
  6. Review the configuration of the stack.
  7. Acknowledge the IAM resources warning to allow CloudFormation to create the TaskRole IAM role.
  8. Choose Create Stack.

After the stack is created, the Outputs tab contains information you can use to configure the Jenkins project.

Application stack outputs tab

To access the sample application, choose the ApplicationUrl link. Because the application has not yet been deployed, you receive an error message.

You can now create the second Jenkins project, which is configured through a Jenkinsfile stored in the source repository. The Jenkinsfile defines the steps that the build takes to build and deploy a Docker image containing your application.

The Jenkinsfile included in the sample instructs Jenkins to perform these steps:

  1. Get the authorization token for CodeArtifact:
    withCredentials([usernamePassword(
        credentialsId: CREDENTIALS_ID,
        passwordVariable: 'AWS_SECRET_ACCESS_KEY',
        usernameVariable: 'AWS_ACCESS_KEY_ID'
    )]) {
        authToken = sh(
                returnStdout: true,
                script: 'aws codeartifact get-authorization-token \
                --domain $AWS_CA_DOMAIN \
                --query authorizationToken \
                --output text \
                --duration-seconds 900'
        ).trim()
    }

  2. Start a Docker build and pass the authorization token as an argument to the build:
    sh ("""
        set +x
        docker build -t $CONTAINER_NAME:$BUILD_NUMBER \
        --build-arg CODEARTIFACT_TOKEN='$authToken' \
        --build-arg DOMAIN=$AWS_CA_DOMAIN-$AWS_ACCOUNT_ID \
        --build-arg REGION=$AWS_REGION \
        --build-arg REPO=$AWS_CA_REPO .
    """)

  3. Inside the Docker build, the passed argument is used to configure pip to use CodeArtifact (a sketch of building this index URL outside Docker follows this list):
    RUN pip config set global.index-url "https://aws:$CODEARTIFACT_TOKEN@$DOMAIN.d.codeartifact.$REGION.amazonaws.com/pypi/$REPO/simple/"
    RUN pip install -r requirements.txt

  4. Test the image by starting a container and performing a simple GET request.
  5. Log in to the Amazon ECR repository and push the Docker image.
  6. Update the CloudFormation template and start a deployment of the application.
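
The same index URL can also be built outside the Docker build, which is useful for testing the CodeArtifact connection locally. The following boto3 sketch assumes the same domain, repository, and account values used in the Jenkinsfile environment; it is an illustration, not part of the sample repo:

import boto3

# Assumed values matching the Jenkinsfile environment; replace with your own.
domain = "my-domain"
domain_owner = "111111111111"   # AWS account ID that owns the domain (placeholder)
repository = "my-repository"

codeartifact = boto3.client("codeartifact")

# Short-lived token that becomes the password for the "aws" user in the index URL.
token = codeartifact.get_authorization_token(
    domain=domain, domainOwner=domain_owner, durationSeconds=900
)["authorizationToken"]

# Base endpoint of the repository in pypi format, for example
# https://my-domain-111111111111.d.codeartifact.<region>.amazonaws.com/pypi/my-repository/
endpoint = codeartifact.get_repository_endpoint(
    domain=domain, domainOwner=domain_owner, repository=repository, format="pypi"
)["repositoryEndpoint"]

# pip expects the /simple/ index with the token embedded as credentials.
index_url = endpoint.replace("https://", f"https://aws:{token}@").rstrip("/") + "/simple/"
print(index_url)  # use with: pip install -r requirements.txt --index-url "<index_url>"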

Look at the Jenkinsfile and Dockerfile in your repository to review the exact commands being used, then take the following steps to set up the second Jenkins project:

  1. Change the variables defined in the environment section at the top of the Jenkinsfile:
    environment {
        AWS_ACCOUNT_ID = 'Your AWS Account ID'
        AWS_REGION = 'Region you used for this project'
        AWS_CA_DOMAIN = 'Name of your CodeArtifact domain'
        AWS_CA_REPO = 'Name of your CodeArtifact repository'
        AWS_STACK_NAME = 'Name of the CloudFormation stack'
        CONTAINER_NAME = 'Container name provided to CloudFormation'
        CREDENTIALS_ID = 'Jenkins credentials ID'
    }
  2. Commit the changes to the GitHub repo.
  3. To create a new Jenkins project, on the Jenkins homepage, choose New Item.
  4. Enter a name for the project, for example, Consumer.
  5. Choose Pipeline.
  6. Choose OK.
    Jenkins pipeline wizard
  7. To have a new build start automatically when a change is detected in the repository, under Build Triggers, select Poll SCM and enter * * * * * in the Schedule field.
    Jenkins source polling configuration
  8. In the Pipeline section, choose Pipeline script from SCM from the Definition drop-down menu.
  9. Choose Git for the SCM.
  10. Enter the HTTP clone URL of your GitHub repo into the Repository URL field.
  11. To make sure that your workspace is clean before each build, under Additional Behaviors, choose Add and select Clean before checkout.
    Jenkins source configuration
  12. Choose Save.

The Jenkins project is now ready. To start a new job, choose Build Now from the navigation pane. You see a visualization of the pipeline as it moves through the various stages, gathering the dependencies and deploying your application.

Jenkins application pipeline visualization

When the Deploy to ECS stage of the pipeline is complete, you can choose ApplicationUrl on the Outputs tab of the CloudFormation stack. You see a simple webpage that uses the Python package to display the current time.

Deployed application displaying in browser

Cleaning up

To avoid incurring future charges, delete the resources created in this post.

To empty the Amazon ECR repository:

  1. Open the application’s CloudFormation stack.
  2. On the Resources tab, choose the link next to the ECRRepository resource.
  3. Select the check-box next to each of the images in the repository.
  4. Choose Delete.
  5. Confirm the deletion.

To delete the CloudFormation stacks:

  1. On the AWS CloudFormation console, select the application stack you deployed earlier.
  2. Choose Delete.
  3. Confirm the deletion.

If you created a Jenkins server as part of this post, select the Jenkins stack and delete it.

To delete the CodeArtifact repository:

  1. On the CodeArtifact console, navigate to the repository you created.
  2. Choose Delete.
  3. Confirm the deletion.

If you’re not using the CodeArtifact domain for other repositories, follow the previous steps to delete the pypi-store repository as well, because it contains the public packages that were used by the application. Then delete the CodeArtifact domain:

  1. On the CodeArtifact console, navigate to the domain you created.
  2. Choose Delete.
  3. Confirm the deletion.

Conclusion

In this post, I showed how you can publish and consume a Python package with Jenkins and CodeArtifact. I walked you through creating two Jenkins projects: a freestyle project that builds a package and publishes it to CodeArtifact, and a pipeline project that builds a Docker image that uses the package in an application deployed to AWS Fargate.

About the author

Matt Ulinski is a Cloud Support Engineer with Amazon Web Services.

 

 

Cross-account and cross-region deployment using GitHub actions and AWS CDK

Post Syndicated from DAMODAR SHENVI WAGLE original https://aws.amazon.com/blogs/devops/cross-account-and-cross-region-deployment-using-github-actions-and-aws-cdk/

GitHub Actions is a feature on GitHub’s popular development platform that helps you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

A cross-account deployment strategy is a CI/CD pattern or model in AWS. In this pattern, you have a designated AWS account called tools, where all CI/CD pipelines reside. Deployment is carried out by these pipelines across other AWS accounts, which may correspond to dev, staging, or prod. For more information about a cross-account strategy in reference to CI/CD pipelines on AWS, see Building a Secure Cross-Account Continuous Delivery Pipeline.

In this post, we show you how to use GitHub Actions to deploy an AWS Lambda-based API to an AWS account and Region using the cross-account deployment strategy.

Using GitHub Actions may have associated costs in addition to the cost associated with the AWS resources you create. For more information, see About billing for GitHub Actions.

Prerequisites

Before proceeding any further, you need to identify and designate two AWS accounts required for the solution to work:

  • Tools – Where you create an AWS Identity and Access Management (IAM) user for GitHub Actions to use to carry out deployment.
  • Target – Where deployment occurs. You can think of this as your dev/stage/prod environment.

You also need to create two AWS account profiles in ~/.aws/credentials for the tools and target accounts, if you don’t already have them. These profiles need to have sufficient permissions to run an AWS Cloud Development Kit (AWS CDK) stack. They should be your private profiles and only be used during the course of this use case. So, it should be fine if you want to use admin privileges. Don’t share the profile details, especially if it has admin privileges. I recommend removing the profile when you’re finished with this walkthrough. For more information about creating an AWS account profile, see Configuring the AWS CLI.

Solution overview

You start by building the necessary resources in the tools account (an IAM user with permissions to assume a specific IAM role from the target account to carry out deployment). For simplicity, we refer to this IAM role as the cross-account role, as specified in the architecture diagram.

You also create the cross-account role in the target account that trusts the IAM user in the tools account and provides the required permissions for AWS CDK to bootstrap and initiate creating an AWS CloudFormation deployment stack in the target account. GitHub Actions uses the tools account IAM user credentials to assume the cross-account role to carry out deployment.

In addition, you create an AWS CloudFormation execution role in the target account, which the AWS CloudFormation service assumes in the target account. This role has permissions to create your API resources, such as a Lambda function and Amazon API Gateway, in the target account. This role is passed to the AWS CloudFormation service via AWS CDK.

You then configure your tools account IAM user credentials in your Git secrets and define the GitHub Actions workflow, which triggers upon pushing code to a specific branch of the repo. The workflow then assumes the cross-account role and initiates deployment.

The following diagram illustrates the solution architecture and shows AWS resources across the tools and target accounts.

Architecture diagram

Creating an IAM user

You start by creating an IAM user called git-action-deployment-user in the tools account. The user needs to have only programmatic access.

  1. Clone the GitHub repo aws-cross-account-cicd-git-actions-prereq and navigate to the folder tools-account. Here you find the JSON parameter file src/cdk-stack-param.json, which contains the parameter CROSS_ACCOUNT_ROLE_ARN, which represents the ARN for the cross-account role we create in the next step in the target account. In the ARN, replace <target-account-id> with the actual account ID for your designated AWS target account.
    Replace <target-account-id> with designated AWS account id
  2. Run deploy.sh by passing the name of the tools AWS account profile you created earlier. The script compiles the code, builds a package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd aws-cross-account-cicd-git-actions-prereq/tools-account/
./deploy.sh "<AWS-TOOLS-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in the tools account: CDKToolkit and cf-GitActionDeploymentUserStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an Amazon Simple Storage Service (Amazon S3) bucket needed to hold deployment assets such as a CloudFormation template and Lambda code package. cf-GitActionDeploymentUserStack creates the IAM user with permission to assume git-action-cross-account-role (which you create in the next step). On the Outputs tab of the stack, you can find the user access key and the AWS Secrets Manager ARN that holds the user secret. To retrieve the secret, you need to go to Secrets Manager. Record the secret to use later.

Stack that creates IAM user with its secret stored in secrets manager

Creating a cross-account IAM role

In this step, you create two IAM roles in the target account: git-action-cross-account-role and git-action-cf-execution-role.

git-action-cross-account-role provides required deployment-specific permissions to the IAM user you created in the last step. The IAM user in the tools account can assume this role and perform the following tasks:

  • Upload deployment assets such as the CloudFormation template and Lambda code package to a designated S3 bucket via AWS CDK
  • Create a CloudFormation stack that deploys API Gateway and Lambda using AWS CDK

AWS CDK passes git-action-cf-execution-role to AWS CloudFormation to create, update, and delete the CloudFormation stack. It has permissions to create API Gateway and Lambda resources in the target account.

To deploy these two roles using AWS CDK, complete the following steps:

  1. In the already cloned repo from the previous step, navigate to the folder target-account. This folder contains the JSON parameter file cdk-stack-param.json, which contains the parameter TOOLS_ACCOUNT_USER_ARN, which represents the ARN for the IAM user you previously created in the tools account. In the ARN, replace <tools-account-id> with the actual account ID for your designated AWS tools account.
    Replace <tools-account-id> with designated AWS account id
  2. Run deploy.sh by passing the name of the target AWS account profile you created earlier. The script compiles the code, builds the package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code:
cd ../target-account/
./deploy.sh "<AWS-TARGET-ACCOUNT-PROFILE-NAME>"

You should now see two stacks in your target account: CDKToolkit and cf-CrossAccountRolesStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an S3 bucket to hold deployment assets such as the CloudFormation template and Lambda code package. The cf-CrossAccountRolesStack creates the two IAM roles we discussed at the beginning of this step. The IAM role git-action-cross-account-role now has the IAM user added to its trust policy. On the Outputs tab of the stack, you can find these roles’ ARNs. Record these ARNs as you conclude this step.

Stack that creates IAM roles to carry out cross account deployment

Configuring secrets

One of the GitHub actions we use is aws-actions/configure-aws-credentials@v1. This action configures AWS credentials and Region environment variables for use in the GitHub Actions workflow. The AWS CDK CLI detects the environment variables to determine the credentials and Region to use for deployment.

For our cross-account deployment use case, aws-actions/configure-aws-credentials@v1 takes three pieces of sensitive information besides the Region: AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY_SECRET, and CROSS_ACCOUNT_ROLE_TO_ASSUME. Secrets are recommended for storing sensitive pieces of information in the GitHub repo. It keeps the information in an encrypted format. For more information about referencing secrets in the workflow, see Creating and storing encrypted secrets.
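
Behind the scenes, the action exchanges the tools account user credentials for temporary credentials on the cross-account role by calling AWS STS. The following boto3 sketch only illustrates that exchange (the account ID in the role ARN is a placeholder); in the workflow, the action performs this call for you:

import os
import boto3

# Long-lived credentials of the tools account IAM user (the values stored as GitHub secrets).
sts = boto3.client(
    "sts",
    aws_access_key_id=os.environ["TOOLS_ACCOUNT_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["TOOLS_ACCOUNT_SECRET_ACCESS_KEY"],
)

# Assume the cross-account role in the target account for 20 minutes.
response = sts.assume_role(
    RoleArn="arn:aws:iam::<target-account-id>:role/git-action-cross-account-role",  # placeholder account ID
    RoleSessionName="GitActionDeploymentSession",
    DurationSeconds=1200,
)

creds = response["Credentials"]
# The temporary AccessKeyId, SecretAccessKey, and SessionToken in creds are what the
# action exports as environment variables so the AWS CDK CLI deploys as the cross-account role.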

Before we continue, you need your own empty GitHub repo to complete this step. Use an existing repo if you have one, or create a new repo. You configure secrets in this repo. In the next section, you check in the code provided by the post to deploy a Lambda-based API CDK stack into this repo.

  1. On the GitHub console, navigate to your repo settings and choose the Secrets tab.
  2. Add a new secret with name as TOOLS_ACCOUNT_ACCESS_KEY_ID.
  3. Copy the access key ID from the output OutGitActionDeploymentUserAccessKey of the stack GitActionDeploymentUserStack in tools account.
  4. Enter the ID in the Value field.
    Create secret
  5. Repeat this step to add two more secrets:
    • TOOLS_ACCOUNT_SECRET_ACCESS_KEY (value retrieved from the AWS Secrets Manager in tools account)
    • CROSS_ACCOUNT_ROLE (value copied from the output OutCrossAccountRoleArn of the stack cf-CrossAccountRolesStack in target account)

You should now have three secrets as shown below.

All required git secrets

Deploying with GitHub Actions

As the final step, clone the empty repo where you set up your secrets. Download and copy the code from the GitHub repo into your empty repo. The folder structure of your repo should mimic the folder structure of the source repo. See the following screenshot.

Folder structure of the Lambda API code

We can take a detailed look at the code base. First and foremost, we use TypeScript to deploy our Lambda API, so we need an AWS CDK app and AWS CDK stack. The app is defined in app.ts under the repo root folder location. The stack definition is located under the stack-specific folder src/git-action-demo-api-stack. The Lambda code is located under the Lambda-specific folder src/git-action-demo-api-stack/lambda/git-action-demo-lambda.

We also have a deployment script deploy.sh, which compiles the app and Lambda code, packages the Lambda code into a .zip file, bootstraps the app by copying the assets to an S3 bucket, and deploys the stack. To deploy the stack, AWS CDK has to pass CFN_EXECUTION_ROLE to AWS CloudFormation; this role is configured in src/params/cdk-stack-param.json. Replace <target-account-id> with your own designated AWS target account ID.

Update cdk-stack-param.json in git-actions-cross-account-cicd repo with TARGET account id

Finally, we define the GitHub Actions workflow under the .github/workflows/ folder per the specifications defined by GitHub Actions. GitHub Actions automatically identifies the workflow in this location and triggers it if conditions match. Our workflow .yml files are named in the format cicd-workflow-<region>.yml, where <region> in the file name identifies the deployment Region in the target account. In our use case, we use us-east-1 and us-west-2, and the Region is also defined as an environment variable in each workflow.

The GitHub Actions workflow has a standard hierarchy. The workflow is a collection of jobs, which are collections of one or more steps. Each job runs on a virtual machine called a runner, which can either be GitHub-hosted or self-hosted. We use the GitHub-hosted runner ubuntu-latest because it works well for our use case. For more information about GitHub-hosted runners, see Virtual environments for GitHub-hosted runners. For more information about the software preinstalled on GitHub-hosted runners, see Software installed on GitHub-hosted runners.

The workflow also has a trigger condition specified at the top. You can schedule the trigger based on the cron settings or trigger it upon code pushed to a specific branch in the repo. See the following code:

name: Lambda API CICD Workflow
# This workflow is triggered on pushes to the repository branch master.
on:
  push:
    branches:
      - master

# Initializes environment variables for the workflow
env:
  REGION: us-east-1 # Deployment Region

jobs:
  deploy:
    name: Build And Deploy
    # This job runs on Linux
    runs-on: ubuntu-latest
    steps:
      # Checkout code from git repo branch configured above, under folder $GITHUB_WORKSPACE.
      - name: Checkout
        uses: actions/checkout@v2
      # Sets up AWS profile.
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.TOOLS_ACCOUNT_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.TOOLS_ACCOUNT_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.REGION }}
          role-to-assume: ${{ secrets.CROSS_ACCOUNT_ROLE }}
          role-duration-seconds: 1200
          role-session-name: GitActionDeploymentSession
      # Installs CDK and other prerequisites
      - name: Prerequisite Installation
        run: |
          sudo npm install -g aws-cdk
          cdk --version
          aws s3 ls
      # Build and Deploy CDK application
      - name: Build & Deploy
        run: |
          cd $GITHUB_WORKSPACE
          ls -a
          chmod 700 deploy.sh
          ./deploy.sh

For more information about triggering workflows, see Triggering a workflow with events.

We have configured a single job workflow for our use case that runs on ubuntu-latest and is triggered upon a code push to the master branch. When you create an empty repo, master branch becomes the default branch. The workflow has four steps:

  1. Check out the code from the repo, for which we use a standard Git action actions/checkout@v2. The code is checked out into a folder defined by the variable $GITHUB_WORKSPACE, so it becomes the root location of our code.
  2. Configure AWS credentials using aws-actions/configure-aws-credentials@v1. This action is configured as explained in the previous section.
  3. Install your prerequisites. In our use case, the only prerequisite we need is AWS CDK. Upon installing AWS CDK, we can do a quick test using the AWS Command Line Interface (AWS CLI) command aws s3 ls. If cross-account access was successfully established in the previous step of the workflow, this command should return a list of buckets in the target account.
  4. Navigate to the root location of the code, $GITHUB_WORKSPACE, and run the deploy.sh script.

You can now check the code into the master branch of your repo. This should trigger the workflow, which you can monitor on the Actions tab of your repo. The commit message you provide is displayed for the respective run of the workflow.

Workflow for region us-east-1 Workflow for region us-west-2

You can choose the workflow link and monitor the log for each individual step of the workflow.

Git action workflow steps

In the target account, you should now see the CloudFormation stack cf-GitActionDemoApiStack in us-east-1 and us-west-2.

Lambda API stack in us-east-1 Lambda API stack in us-west-2

The API resource URL DocUploadRestApiResourceUrl is located on the Outputs tab of the stack. You can invoke your API by choosing this URL on the browser.

API Invocation Output

Clean up

To remove all the resources from the target and tools accounts, complete the following steps in their given order:

  1. Delete the CloudFormation stack cf-GitActionDemoApiStack from the target account. This step removes the Lambda and API Gateway resources and their associated IAM roles.
  2. Delete the CloudFormation stack cf-CrossAccountRolesStack from the target account. This removes the cross-account role and CloudFormation execution role you created.
  3. Go to the CDKToolkit stack in the target account and note the BucketName on the Output tab. Empty that bucket and then delete the stack.
  4. Delete the CloudFormation stack cf-GitActionDeploymentUserStack from the tools account. This removes the git-action-deployment-user IAM user.
  5. Go to the CDKToolkit stack in the tools account and note the BucketName on the Output tab. Empty that bucket and then delete the stack.

Security considerations

Cross-account IAM roles are very powerful and need to be handled carefully. For this post, we strictly limited the cross-account IAM role to specific Amazon S3 and CloudFormation permissions. This makes sure that the cross-account role can only do those things. The actual creation of Lambda, API Gateway, and Amazon DynamoDB resources happens via the AWS CloudFormation IAM role, which AWS CloudFormation assumes in the target AWS account.

Make sure that you use secrets to store your sensitive workflow configurations, as specified in the section Configuring secrets.

Conclusion

In this post, we showed how you can leverage GitHub's popular software development platform to securely deploy to AWS accounts and Regions using GitHub Actions and AWS CDK.

Build your own GitHub Actions CI/CD workflow as shown in this post.

About the author

 

Damodar Shenvi Wagle is a Cloud Application Architect at AWS Professional Services. His areas of expertise include architecting serverless solutions, CI/CD, and automation.

How Pushly Media used AWS to pivot and quickly spin up a StartUp

Post Syndicated from Eddie Moser original https://aws.amazon.com/blogs/devops/how-pushly-media-used-aws-to-pivot-and-quickly-spin-up-a-startup/

This is a guest post from Pushly. In their own words, “Pushly provides a scalable, easy-to-use platform designed to deliver targeted and timely content via web push notifications across all modern desktop browsers and Android devices.”

Introduction

As a software engineer at Pushly, I’m part of a team of developers responsible for building our SaaS platform.

Our customers are content publishers spanning the news, ecommerce, and food industries, with the primary goal of increasing page views and paid subscriptions, ultimately resulting in increased revenue.

Pushly’s platform is designed to integrate seamlessly into a publisher’s workflow and enables advanced features such as customizable opt-in flow management, behavioral targeting, and real-time reporting and campaign delivery analytics.

As developers, we face various challenges to make all this work seamlessly. That’s why we turned to Amazon Web Services (AWS). In this post, I explain why and how we use AWS to enable the Pushly user experience.

At Pushly, my primary focus areas are developer and platform user experience. On the developer side, I’m responsible for building and maintaining easy-to-use APIs and a web SDK. On the UX side, I’m responsible for building a user-friendly and stable platform interface.

The CI/CD process

We’re a cloud native company and have gone all in with AWS.

AWS CodePipeline lets us automate the software release process and release new features to our users faster. Rapid delivery is key here, and CodePipeline lets us automate our build, test, and release process so we can quickly and easily test each code change and fail fast if needed. CodePipeline is vital to ensuring the quality of our code by running each change through a staging and release process.

One of our use cases is continuous reiteration deployment. We foster an environment where developers can fully function in their own mindset while adhering to our company’s standards and the architecture within AWS.

We deploy code multiple times per day and rely on AWS services to run through all checks and make sure everything is packaged uniformly. We want to fully test in a staging environment before moving to a customer-facing production environment.

The development and staging environments

Our development environment allows developers to securely pull down applications as needed and access the required services in a development AWS account. After an application is tested and is ready for staging, the application is deployed to our staging environment—a smaller reproduction of our production environment—so we can test how the changes work together. This flow allows us to see how the changes run within the entire Pushly ecosystem in a secure environment without pushing to production.

When testing is complete, a pull request is created for stakeholder review and to merge the changes to production branches. We use AWS CodeBuild, CodePipeline, and a suite of in-house tools to ensure that the application has been thoroughly tested to our standards before being deployed to our production AWS account.

Here is a high level diagram of the environment described above:

Diagram showing at a high level the Pushly environment.

Ease of development

Ease of development was—and is—key. AWS provides the tools that allow us to quickly iterate and adapt to ever-changing customer needs. The infrastructure as code (IaC) approach of AWS CloudFormation allows us to quickly and simply define our infrastructure in an easily reproducible manner and rapidly create and modify environments at scale. This has given us the confidence to take on new challenges without concern over infrastructure builds impacting the final product or causing delays in development.

The Pushly team

Although Pushly’s developers all have the skill-set to work on both front-end-facing and back-end-facing projects, primary responsibilities are split between front-end and back-end developers. Developers that primarily focus on front-end projects concentrate on public-facing projects and internal management systems. The back-end team focuses on the underlying architecture, delivery systems, and the ecosystem as a whole. Together, we create and maintain a product that allows you to segment and target your audiences, which ensures relevant delivery of your content via web push notifications.

Early on we ran all services entirely off of AWS Lambda. This allowed us to develop new features quickly in an elastic, cost-efficient way. As our applications have matured, we’ve identified some services that would benefit from an always-on environment and moved them to AWS Elastic Beanstalk. The capability to quickly iterate and move from service to service is a credit to AWS, because it allows us to customize and tailor our services across multiple AWS offerings.

Elastic Beanstalk has been the fastest and simplest way for us to deploy this suite of services on AWS; their blue/green deployments allow us to maintain minimal downtime during deployments. We can easily configure deployment environments with capacity provisioning, load balancing, autoscaling, and application health monitoring.

The business side

We had several business drivers behind choosing AWS: we wanted to make it easier to meet customer demands and continually scale as much as needed without worrying about the impact on development or on our customers.

Using AWS services allowed us to build our platform from inception to our initial beta offering in fewer than 2 months! AWS made it happen with tools for infrastructure deployment on top of the software deployment. Specifically, IaC allowed us to tailor our infrastructure to our specific needs and be confident that it’s always going to work.

On the infrastructure side, we knew that we wanted to have a staging environment that truly mirrored the production environment, rather than managing two entirely disparate systems. We could provide different sets of mappings based on accounts and use the templates across multiple environments. This functionality allows us to use the exact same code we use in our current production environment and easily spin up additional environments in 2 hours.

The need for speed

It took a very short time to get our project up and running, which included rewriting different pieces of the infrastructure in some places and completely starting from scratch in others.

One of the new services that we adopted is AWS CodeArtifact. It lets us have fully customized private artifact stores in the cloud. We can keep our in-house libraries within our current AWS accounts instead of relying on third-party services.

CodeBuild lets us compile source code, run test suites, and produce software packages that are ready to deploy while only having to pay for the runtime we use. With CodeBuild, you don’t need to provision, manage, and scale your own build servers, which saves us time.

The new tools that AWS is releasing are going to even further streamline our processes. We’re interested in the impact that CodeArtifact will have on our ability to share libraries in Pushly and with other business units.

Cost savings is key

What are we saving by choosing AWS? A lot. AWS lets us scale while keeping costs at a minimum. This was, and continues to be, a major determining factor when choosing a cloud provider.

By using Lambda and designing applications with horizontal scale in mind, we have scaled from processing millions of requests per day to hundreds of millions, with very little change to the underlying infrastructure. Due to the nature of our offering, our traffic patterns are unpredictable. Lambda allows us to process these requests elastically and avoid over-provisioning. As a result, we can increase our throughput tenfold at any time, pay for the few minutes of extra compute generated by a sudden burst of traffic, and scale back down in seconds.

In addition to helping us process these requests, AWS has been instrumental in helping us manage an ever-growing data warehouse of clickstream data. With Amazon Kinesis Data Firehose, we automatically convert all incoming events to Parquet and store them in Amazon Simple Storage Service (Amazon S3), which we can query directly using Amazon Athena within minutes of being received. This has once again allowed us to scale our near-real-time data reporting to a degree that would have otherwise required a significant investment of time and resources.

As we look ahead, one thing we’re interested in is Lambda custom runtimes. Amazon supports many languages, so we can run almost every language we need, and if we want to switch to a language that AWS doesn’t support by default, custom runtimes still give us a way to build a solution. All we have to focus on is the code we’re writing!

Speed, for us and for our customers, is one of our highest priorities. Think of a news publisher in the middle of a briefing who wants to get the story out before the competition and is relying on Pushly: our confidence in our ability to deliver on that need comes from AWS services enabling our code to perform to its fullest potential.

Another way AWS has met our needs is the ease of using Amazon ElastiCache, a fully managed in-memory data store and cache service. Although we design for horizontal scaling wherever possible, some services just can’t scale with the immediate elasticity we need to handle a sudden burst of requests. ElastiCache lets us avoid duplicate lookups for the same resources, which allows us to process requests more quickly and protects our infrastructure from being overwhelmed.

In addition to caching, ElastiCache is a great tool for job locking. By locking messages by their ID as soon as they are received, we can use the near-unlimited throughput of Amazon Simple Queue Service (Amazon SQS) in a massively parallel environment without worrying that messages are processed more than once.
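
A minimal sketch of this locking pattern looks like the following; it is illustrative rather than our production code, and the Redis client library (ioredis), the REDIS_URL variable, and the processMessage helper are assumptions:

// Sketch of SQS message deduplication using a Redis lock in ElastiCache.
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL);

async function processMessage(record) {
  // Hypothetical business logic for a single SQS record.
  console.log('Processing message', record.messageId);
}

exports.handler = async (event) => {
  for (const record of event.Records) {
    // SET ... NX succeeds only for the first consumer to see this message ID;
    // the 15-minute expiry prevents abandoned locks from lingering.
    const acquired = await redis.set(`lock:${record.messageId}`, '1', 'EX', 900, 'NX');
    if (!acquired) continue; // another worker already handled this message
    await processMessage(record);
  }
};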

The heart of our offering is in the segmentation of subscribers. We allow building complex queries in our dashboard that calculate reach in real time and are available to use immediately after creation. These queries are often never-before-seen and may contain custom properties provided by our clients, operate on complex data types, and include geospatial conditions. No matter the size of the audience, we see consistent sub-second query times when calculating reach. We can provide this to our clients using Amazon Elasticsearch Service (Amazon ES) as the backbone of our subscriber store.

Summary

AWS has countless positives, but one key theme that we continue to see is overall ease of use, which enables us to rapidly iterate. That’s why we rely on so many different AWS services—Amazon API Gateway with Lambda integration, Elastic Beanstalk, Amazon Relational Database Service (Amazon RDS), ElastiCache, and many more.

We feel very secure about our future working with AWS and our continued ability to improve, integrate, and provide a quality service. The AWS team has been extremely supportive. If we run into something that we need to adjust outside of the standard parameters, or that requires help from the AWS specialists, we can reach out and get feedback from subject matter experts quickly. The all-around capabilities of AWS and its teams have helped Pushly get where we are, and we’ll continue to rely on them for the foreseeable future.

 

Introducing AWS X-Ray new integration with AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-aws-x-ray-new-integration-with-aws-step-functions/

AWS Step Functions now integrates with AWS X-Ray to provide a comprehensive tracing experience for serverless orchestration workflows.

Step Functions allows you to build resilient serverless orchestration workflows with AWS services such as AWS Lambda, Amazon SNS, Amazon DynamoDB, and more. Step Functions provides a history of executions for a given state machine in the AWS Management Console or with Amazon CloudWatch Logs.

AWS X-Ray is a distributed tracing system that helps developers analyze and debug their applications. It traces requests as they travel through the individual services and resources that make up an application. This provides an end-to-end view of how an application is performing.

What is new?

The new Step Functions integration with X-Ray provides an additional workflow monitoring experience. Developers can now view maps and timelines of the underlying components that make up a Step Functions workflow. This helps to discover performance issues, detect permission problems, and track requests made to and from other AWS services.

The Step Functions integration with X-Ray can be explored through three constructs:

Service map: The service map view shows information about a Step Functions workflow and all of its downstream services. This enables developers to identify services where errors are occurring, connections with high latency, or traces for requests that are unsuccessful among the large set of services within their account. The service map aggregates data from specific time intervals from one minute through six hours and has a 30-day retention.

Trace map view: The trace map view shows in-depth information from a single trace as it moves through each service. Resources are listed in the order in which they are invoked.

Trace timeline: The trace timeline view shows the propagation of a trace through the workflow and is paired with a time scale called a latency distribution histogram, which shows how long it takes for a service to complete its requests. The trace is composed of segments and subsegments: a segment represents the Step Functions execution, and each subsegment represents a state transition.

Getting Started

X-Ray tracing is enabled using the AWS Serverless Application Model (AWS SAM), AWS CloudFormation, or the AWS Management Console. To get started with Step Functions and X-Ray using the AWS Management Console:

  1. Go to the Step Functions page of the AWS Management Console.
  2. Choose Get Started, review the Hello World example, then choose Next.
  3. Check Enable X-Ray tracing from the Tracing section.
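
If you prefer to enable tracing in your template instead of the console, a minimal AWS SAM sketch looks like the following; the state machine name, definition path, and attached policy are placeholders rather than values from this post:

  MyStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionUri: statemachine/definition.asl.json  # placeholder path
      Policies:
        - AWSXRayDaemonWriteAccess                     # lets the workflow publish trace data
      Tracing:
        Enabled: true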

Workflow visibility

The following Step Functions workflow example is invoked via Amazon EventBridge when a new file is uploaded to an Amazon S3 bucket. The workflow uses Amazon Textract to detect text from an image file. It translates the text into multiple languages using Amazon Translate and saves the results into an Amazon DynamoDB table. X-Ray has been enabled for this workflow.

To view the X-Ray service map for this workflow, I choose the X-Ray trace map link at the top of the Step Functions Execution details page:

The service map is generated from trace data sent through the workflow. Toggling the Service Icons displays each individual service in this workload. The size of each node is weighted by traffic or health, depending on the selection.

This shows the error percentage and average response times for each downstream service. T/min is the number of traces sent per minute in the selected time range. The following map shows a 67% error rate for the Step Functions workflow.

Accelerated troubleshooting

By drilling down through the service map to the individual trace map, I quickly pinpoint the error in this workflow. I choose the Step Functions service from the trace map. This opens the service details panel. I then choose View traces. The trace data shows that, from a group of nine responses, three completed successfully and six completed with errors. This correlates with the response times listed for each individual trace: three traces completed in over five seconds, while six took less than three seconds.

Choosing one of the faster traces opens the trace timeline map. This illustrates the aggregate response time for the workflow and each of its states. It shows a state named Read text from image invoked by a Lambda Function. This takes 2.3 seconds of the workflow’s total 2.9 seconds to complete.

A warning icon indicates that an error has occurred in this Lambda function. Hovering the cursor over the icon reveals that the property “Blocks” is undefined. This shows that an error occurred within the Lambda function (no text was found within the image). The Lambda function did not have sufficient error handling to manage this error gracefully, so the workflow exited.

Here’s how that same state execution failure looks in the Step Functions Graph inspector.

Performance profiling

The visualizations provided in the service map are useful for estimating the average latency in a workflow, but issues are often indicated by statistical outliers. To help investigate these, the Response distribution graph shows a distribution of latencies for each state within a workflow, and its downstream services.

Latency is the amount of time between when a request starts and when it completes. It shows duration on the x-axis, and the percentage of requests that match each duration on the y-axis. Additional filters are applied to find traces by duration or status code. This helps to discover patterns and to identify specific cases and clients with issues at a given percentile.

Sampling

X-Ray applies a sampling algorithm to determine which requests to trace. A sampling rate of 100% is used for state machines with an execution rate of less than one per second. State machines running at a rate greater than one execution per second default to a 5% sampling rate. Configure the sampling rate to determine what percentage of traces to sample. Enable trace sampling with the AWS Command Line Interface (AWS CLI) using the CreateStateMachine and UpdateStateMachine APIs with the enable-Trace-Sampling attribute:

--enable-trace-sampling true

It can also be configured in the AWS Management Console.

Trace data retention and limits

X-Ray retains tracing data for up to 30 days, with a single trace holding up to 7 days of execution data. The current minimum guaranteed trace size is 100 KB, which equates to approximately 80 state transitions. The actual number of state transitions supported depends on the upstream and downstream calls and the duration of the workflow. When the trace size limit is reached, the trace cannot be updated with new segments or updates to existing segments. Traces that have reached the limit are indicated with a banner in the X-Ray console.

For a full service comparison of X-Ray trace data and Step Functions execution history, please refer to the documentation.

Conclusion

The Step Functions integration with X-Ray provides a single monitoring dashboard for workflows running at scale. It provides a high-level system overview of all workflow resources and the ability to drill down to view detailed timelines of workflow executions. You can now use the orchestration capabilities of Step Functions with the tracing, visualization, and debug capabilities of AWS X-Ray.

This enables developers to reduce problem resolution times by visually identifying errors in resources and viewing error rates across workflow executions. You can profile and improve application performance by identifying outliers while analyzing and debugging high latency and jitter in workflow executions.

This feature is available in all Regions where both AWS Step Functions and AWS X-Ray are available. View the AWS Regions table to learn more. For pricing information, see AWS X-Ray pricing.

To learn more about Step Functions, read the Developer Guide. For more serverless learning resources, visit https://serverlessland.com.

Using Lambda layers to simplify your development process

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-lambda-layers-to-simplify-your-development-process/

Serverless developers frequently import libraries and dependencies into their AWS Lambda functions. While you can zip these dependencies as part of the build and deployment process, in many cases it’s easier to use layers instead. In this post, I explain how layers work, and how you can build and include layers in your own applications.

This blog post references the Happy Path application, which shows how to build a flexible backend to a photo-processing web application. To learn more, refer to Using serverless backends to iterate quickly on web apps – part 1. The code in this post is available in this GitHub repo.

Overview of Lambda layers

A Lambda layer is an archive containing additional code, such as libraries, dependencies, or even custom runtimes. When you include a layer in a function, the contents are extracted to the /opt directory in the execution environment. You can include up to five layers per function, which count towards the standard Lambda deployment size limits.

Layers are deployed as immutable versions, and the version number increments each time you publish a new layer. When you include a layer in a function, you specify the layer version you want to use. Layers are automatically set as private, but they can be shared with other AWS accounts, or shared publicly. Permissions only apply to a single version of a layer.

Using layers can make it faster to deploy applications with the AWS Serverless Application Model (AWS SAM) or the Serverless Framework. Moving runtime dependencies from your function code to a layer helps reduce the overall size of the archive uploaded during a deployment.

Creating a layer containing the AWS SDK

The AWS SDK allows you to interact programmatically with AWS services using one of the supported runtimes. The Lambda service includes the AWS SDK, so you can use it without explicitly including it in your deployment package.

However, there is no guarantee of the version provided in the execution environment. The SDK is upgraded frequently to support new AWS services and features. As a result, the version may change at any time. You can see the current version used by Lambda by declaring an instance of the SDK and logging out the version method:

Logging out the version method
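
For example, a minimal function that logs the SDK version provided by the Lambda service can look like this; it is a sketch rather than code from the Happy Path application:

const AWS = require('aws-sdk');

exports.handler = async () => {
  // Logs the version of the SDK bundled in the execution environment.
  console.log('AWS SDK version:', AWS.VERSION);
  return AWS.VERSION;
};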

For production workloads, it’s best practice to lock the version of the AWS SDK used in your functions. You can achieve this by including the SDK with your code package. Once you include this library, your code always uses the version in the deployment package and not the version included in the Lambda service.

A serverless application may consist of many functions, which all use a common SDK version. Instead of bundling the SDK with each function deployment, you can create a layer containing the SDK. The effect of this is to reduce the size of the uploaded archive, which makes your deployments faster.

To create an AWS SDK layer:

  1. First, clone this blog post’s GitHub repo. From a terminal window, execute:
    git clone https://github.com/aws-samples/aws-lambda-layers-aws-sam-examples
    cd ./aws-sdk-layer
  2. This directory contains an AWS SAM template and Node.js package.json file. Install the package.json contents:
    npm install
  3. Create the layer directory defined in the AWS SAM template and the nodejs directory required by Lambda. Next, move the node_modules directory:
    mkdir -p ./layer/nodejs
    mv ./node_modules ./layer/nodejs
  4. Next, deploy the AWS SAM template to create the layer:
    sam deploy --guided
  5. For the Stack name, enter “aws-sdk-layer”. Enter your preferred AWS Region and accept the other defaults.
  6. After the deployment completes, the new Lambda layer is available to use. Run this command to see the available layers:
    aws lambda list-layers
    aws lambda list-layers output
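
The layer itself is defined in the repo’s AWS SAM template as a layer version resource. A minimal sketch of such a definition follows; the logical name, paths, and description are illustrative and may differ from the template in the repository:

Resources:
  AwsSdkLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: aws-sdk-layer
      Description: Pinned AWS SDK for Node.js shared across functions
      ContentUri: ./layer
      CompatibleRuntimes:
        - nodejs12.x
      RetentionPolicy: Retain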

After adding a layer to a function, you can use console.log to log out the AWS SDK version. This shows that the function is now using the SDK version in the layer instead of the version provided by the Lambda service:

Use the SDK layer instead of the bundled layer

Creating layers with OS-specific binaries

Many code libraries include binaries that are operating-system specific. When you build packages on your local development machine, by default the binaries for that operating system are used. These may not be the right binaries for Lambda, which runs on Amazon Linux. If you are not using a compatible operating system, you must ensure you include Linux binaries in the layer.

The simplest way to package these libraries correctly is to use AWS Cloud9. This is an IDE in the AWS Cloud, which runs on Amazon EC2. After creating an environment, you can clone a git repository directly to the local storage of the instance, and run the necessary build scripts.

The Happy Path application resizes images using the Sharp npm library. This library uses libvips, which is written in C, so the compilation is operating system-specific. By creating a layer containing this library, it simplifies the packaging and deployment of the consuming Lambda function.
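
A consuming function can then require the library as usual, because Lambda extracts the layer’s nodejs/node_modules directory to /opt, which is on the Node.js module search path. The following is a minimal sketch rather than the Happy Path Resizer code, and the event shape (imageBase64) is a hypothetical example:

const sharp = require('sharp'); // resolved from the layer, not the deployment package

exports.handler = async (event) => {
  const input = Buffer.from(event.imageBase64, 'base64');
  // Resize to a 200px-wide thumbnail and return it as base64.
  const output = await sharp(input).resize(200).toBuffer();
  return { resizedBase64: output.toString('base64') };
};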

To create a Sharp layer using AWS Cloud9:

  1. Navigate to the AWS Cloud9 console.
  2. Choose Create environment.
  3. Enter the name “My IDE” and choose Next step.
  4. Accept all the defaults and choose Next step.
  5. Review the settings and choose Create environment.
  6. In the terminal panel, enter:
    git clone https://github.com/aws-samples/aws-lambda-layers-aws-sam-examples
    cd ./aws-lambda-layers-aws-sam-examples/sharp-layer
    npm install
    Creating a layer in Cloud9
  7. From a terminal window, ensure you are in the directory where you cloned this post’s GitHub repo. Execute the following commands:
    cd ./sharp-layer
    npm install
    mkdir -p ./layer/nodejs
    mv ./node_modules ./layer/nodejs
    Creating the layer in Cloud9
  8. Next, deploy the AWS SAM template to create the layer:
    sam deploy --guided
  9. For the Stack name, enter “sharp-layer”. Enter your preferred AWS Region and accept the other defaults. After the deployment completes, the new Lambda layer is available to use.

In some runtimes, you can specify a local set of packages for development, and another set for production. For example, in Node.js, the package.json file allows you to specify two sections for dependencies. If your development machine uses a different operating system to Lambda, and therefore uses different binaries, you can use package.json to resolve this. In the Happy Path Resizer function, which uses the Sharp layer, the package.json refers to a local binary for development.

Adding development dependencies to package.json
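
A minimal sketch of that approach is shown below; it is not the exact file from the Happy Path repo, and the package name and version are illustrative. Because the production binary ships in the layer, sharp is listed only as a development dependency for local runs:

{
  "name": "resizer-function",
  "dependencies": {},
  "devDependencies": {
    "sharp": "^0.25.0"
  }
}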

AWS SAM defines Lambda functions with the AWS::Serverless::Function resource. Layers are defined as a property of functions, as a list of layer ARNs including the version:

  MyLambdaFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: myFunction/
      Handler: app.handler
      MemorySize: 128
      Layers:
        - !Ref SharpLayerARN

Sharing a layer

Layers are private to your account by default, but you can optionally share them with other AWS accounts or make a layer public. You cannot share layers via the AWS Management Console; instead, use the AWS CLI.

To share a layer, use add-layer-version-permission, specifying the layer name, version, AWS Region, and principal:

aws lambda add-layer-version-permission \
  --layer-name node-sharp \
  --principal '*' \
  --action lambda:GetLayerVersion \
  --version-number 3 \
  --statement-id public \
  --region us-east-1

In the principal parameter, specify an individual account ID or use an asterisk to make the layer public. The CLI responds with a RevisionId containing the current revision of the policy:

add-layer-version output

You can check the permissions associated with a layer version by calling get-layer-version-policy with the layer name and version:

aws lambda get-layer-version-policy \
  --layer-name node-sharp \
  --version-number 3 \
  --region us-east-1

get-layer-version-policy output

Similarly, you can delete permissions associated with a layer version by calling remove-layer-version-permission with the layer name, statement ID, and version:

aws lambda remove-layer-version-permission \
  --layer-name node-sharp \
  --statement-id public \
  --version-number 3

Once the permissions are removed, calling get-layer-version-policy results in an error:

Error invoking after removal

Conclusion

Lambda layers provide a convenient and effective way to package code libraries for sharing with Lambda functions in your account. Using layers can help reduce the size of uploaded archives and make it faster to deploy your code.

Layers can contain packages using OS-specific binaries, providing a convenient way to distribute these to developers. While layers are private by default, you can share with other accounts or make a layer public. Layers are published as immutable versions, and deleting a layer has no effect on deployed Lambda functions already using that layer.

To learn more about using Lambda layers, visit the documentation, or see how layers are used in the Happy Path web application.

Jump-starting your serverless development environment

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/jump-starting-your-serverless-development-environment/

Developers building serverless applications often wonder how they can jump-start their local development environment. This blog post provides a broad guide for those developers wanting to set up a development environment for building serverless applications.

AWS and open source tools for a serverless development environment.

To use AWS Lambda and other AWS services, create and activate an AWS account.

Command line tooling

Command line tools are scripts, programs, and libraries that enable rapid application development and interactions from within a command line shell.

The AWS CLI

The AWS Command Line Interface (AWS CLI) is an open source tool that enables developers to interact with AWS services using a command line shell. In many cases, the AWS CLI increases developer velocity for building cloud resources and enables automating repetitive tasks. It is an important piece of any serverless developer’s toolkit. Follow these instructions to install and configure the AWS CLI on your operating system.
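
Once installed, you can configure credentials and confirm that the CLI can reach your account with a few commands, for example:

$ aws configure                 # set an access key, secret key, default Region, and output format
$ aws --version                 # confirm the installation
$ aws sts get-caller-identity   # verify the configured credentials work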

AWS enables you to build infrastructure with code. This provides a single source of truth for AWS resources. It enables development teams to use version control and create deployment pipelines for their cloud infrastructure. AWS CloudFormation provides a common language to model and provision these application resources in your cloud environment.

AWS Serverless Application Model (AWS SAM CLI)

AWS Serverless Application Model (AWS SAM) is an extension for CloudFormation that further simplifies the process of building serverless application resources.

It provides shorthand syntax to define Lambda functions, APIs, databases, and event source mappings. During deployment, the AWS SAM syntax is transformed into AWS CloudFormation syntax, enabling you to build serverless applications faster.

The AWS SAM CLI is an open source command line tool used to locally build, test, debug, and deploy serverless applications defined with AWS SAM templates.

Install AWS SAM CLI on your operating system.

Test the installation by initializing a new quick start project with the following command:

$ sam init
  1. Choose 1 for the “Quick Start Templates”.
  2. Choose 1 for the “Node.js runtime”.
  3. Use the default name.

The generated /sam-app/template.yaml contains all the resource definitions for your serverless application. This includes a Lambda function with a REST API endpoint, along with the necessary IAM permissions.

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Deploy this application using the AWS SAM CLI guided deploy:

$ sam deploy -g

Local testing with AWS SAM CLI

The AWS SAM CLI requires Docker containers to simulate the AWS Lambda runtime environment on your local development environment. To test locally, install Docker Engine and run the Lambda function with the following command:

$ sam local invoke "HelloWorldFunction" -e events/event.json

The first time this function is invoked, Docker downloads the lambci/lambda:nodejs12.x container image. It then invokes the Lambda function with a pre-defined event JSON file.

Helper tools

There are a number of open source tools and packages available to help you monitor, author, and optimize your Lambda-based applications. Some of the most popular tools are shown in the following list.

Template validation tooling

CloudFormation Linter is a validation tool that helps with your CloudFormation development cycle. It analyzes CloudFormation YAML and JSON templates to resolve and validate intrinsic functions and resource properties. By analyzing your templates before deploying them, you can save valuable development time and build automated validation into your deployment release cycle.

Follow these instructions to install the tool.

Once installed, run the cfn-lint command with the path to your AWS SAM template as the first argument:

cfn-lint template.yaml

AWS SAM template validation with cfn-lint

The following example shows that the template is not valid because the !GettAtt function does not evaluate correctly.

IDE tooling

Use AWS IDE plugins to author and invoke Lambda functions from within your existing integrated development environment (IDE). AWS IDE toolkits are available for PyCharm, IntelliJ, and Visual Studio.

The AWS Toolkit for Visual Studio Code provides an integrated experience for developing serverless applications. It enables you to invoke Lambda functions, specify function configurations, locally debug, and deploy—all conveniently from within the editor. The toolkit supports Node.js, Python, and .NET.

The AWS Toolkit for Visual Studio Code

From Visual Studio Code, choose the Extensions icon on the Activity Bar. In the Search Extensions in Marketplace box, enter AWS Toolkit and then choose AWS Toolkit for Visual Studio Code as shown in the following example. This opens a new tab in the editor showing the toolkit’s installation page. Choose the Install button in the header to add the extension.

AWS Toolkit extension for Visual Studio Code

AWS Cloud9

Another option to build a development environment without having to install anything locally is to use AWS Cloud9. AWS Cloud9 is a cloud-based integrated development environment (IDE) for writing, running, and debugging code from within the browser.

It provides a seamless experience for developing serverless applications. It has a preconfigured development environment that includes the AWS CLI, AWS SAM CLI, SDKs, code libraries, and many useful plugins. AWS Cloud9 also provides an environment for locally testing and debugging AWS Lambda functions. This eliminates the need to upload your code to the Lambda console. It allows developers to iterate on code directly, saving time and improving code quality.

Follow this guide to set up AWS Cloud9 in your AWS environment.

Advanced tooling

Efficient configuration of Lambda functions is critical when expecting optimal cost and performance of your serverless applications. Lambda allows you to control the memory (RAM) allocation for each function.

Lambda charges based on the number of function requests and the duration, the time it takes for your code to run. The price for duration depends on the amount of RAM you allocate to your function. A smaller RAM allocation may reduce the performance of your application if your function is running compute-heavy workloads. If performance needs outweigh cost, you can increase the memory allocation.
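
As a rough worked example (assuming Lambda’s published rate of about $0.0000166667 per GB-second at the time of writing), a function allocated 512 MB that runs for 200 ms consumes 0.5 GB × 0.2 s = 0.1 GB-seconds, or roughly $0.0000017 per invocation before the per-request charge. Doubling the memory doubles that duration cost unless the additional RAM also shortens the run time.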

Cost and performance optimization tooling

AWS Lambda power tuner is an open source tool that uses an AWS Step Functions state machine to suggest cost and performance optimizations for your Lambda functions. It invokes a given function with multiple memory configurations. It analyzes the execution log results to determine and suggest power configurations that minimize cost and maximize performance.

To deploy the tool:

  1. Clone the repository as follows:
    $ git clone https://github.com/alexcasalboni/aws-lambda-power-tuning.git
  2. Create an Amazon S3 bucket and enter the deployment configurations in /scripts/deploy.sh:
    # config
    BUCKET_NAME=your-sam-templates-bucket
    STACK_NAME=lambda-power-tuning
    PowerValues='128,512,1024,1536,3008'
  3. Run the deploy.sh script from your terminal; this uses the AWS SAM CLI to deploy the application:
    $ bash scripts/deploy.sh
  4. Run the power tuning tool from the terminal using the AWS CLI:
    aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:us-east-1:0123456789:stateMachine:powerTuningStateMachine-Vywm3ozPB6Am \
    --input "{\"lambdaARN\": \"arn:aws:lambda:us-east-1:1234567890:function:testytest\", \"powerValues\":[128,256,512,1024,2048],\"num\":50,\"payload\":{},\"parallelInvocation\":true,\"strategy\":\"cost\"}" \
    --output json
  5. The Step Functions execution output produces a link to a visual summary of the suggested results:

    AWS Lambda power tuning results

Monitoring and debugging tooling

sls-dev-tools is an open source serverless tool that delivers serverless metrics directly to the terminal. It provides developers with feedback on their serverless application’s metrics and key bindings that deploy, open, and manipulate stack resources. Bringing this data directly to your terminal or IDE reduces context switching between the development environment and the web interfaces. This can increase application development speed and improve user experience.

Follow these instructions to install the tool onto your development environment.

To open the tool, run the following command:

$ sls-dev-tools

Follow the in-terminal interface to choose which stack to monitor or edit.

The following example shows how the tool can be used to invoke a Lambda function with a custom payload from within the IDE.

Invoke an AWS Lambda function with a custom payload using sls-dev-tools

Serverless database tooling

NoSQL Workbench for Amazon DynamoDB is a GUI application for modern database development and operations. It provides a visual IDE tool for data modeling and visualization with query development features to help build serverless applications with Amazon DynamoDB tables. Define data models using one or more tables and visualize the data model to see how it works in different scenarios. Run or simulate operations and generate the code for Python, JavaScript (Node.js), or Java.

Choose the correct operating system link to download and install NoSQL Workbench on your development machine.

The following example illustrates a connection to a DynamoDB table. A data scan is built using the GUI, with Node.js code generated for inclusion in a Lambda function:

Connecting to an Amazon DynamoDB table with NoSQL Workbench for Amazon DynamoDB

Generating query code with NoSQL Workbench for Amazon DynamoDB
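
The generated code typically uses the AWS SDK for JavaScript. The following is a minimal sketch of that style of code rather than the tool’s exact output; the table name, attribute names, and Region are hypothetical:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = async () => {
  const params = {
    TableName: 'Subscribers',
    FilterExpression: '#st = :active',
    ExpressionAttributeNames: { '#st': 'status' },
    ExpressionAttributeValues: { ':active': 'ACTIVE' },
  };
  // Scan the table and return the matching items.
  const result = await docClient.scan(params).promise();
  return result.Items;
};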

Conclusion

Building serverless applications allows developers to focus on business logic instead of managing and operating infrastructure. This is achieved by using managed services. Developers often struggle with knowing which tools, libraries, and frameworks are available to help with this new approach to building applications. This post shows tools that builders can use to create a serverless developer environment to help accelerate software development.

This list represents AWS and open source tools but does not include our APN Partners. For partner offers, check here.

Read more to start building serverless applications.