Tag Archives: devops

Integrating SonarQube as a pull request approver on AWS CodeCommit

Post Syndicated from David Jackson original https://aws.amazon.com/blogs/devops/integrating-sonarqube-as-a-pull-request-approver-on-aws-codecommit/


On Nov 25th, AWS CodeCommit launched a new feature that allows customers to configure approval rules on pull requests. Approval rules act as a gate on your source code changes. Pull requests which fail to satisfy the required approvals cannot be merged into your important branches. Additionally, CodeCommit launched the ability to create approval rule templates, which are rulesets that can automatically be applied to all pull requests created for one or more repositories in your AWS account. With templates, it becomes simple to create rules like “require one approver from my team” for any number of repositories in your AWS account.

A common problem for software developers is accidentally or unintentionally merging code with bugs, defects, or security vulnerabilities into important master branches. Once bad code is merged into a master branch, it can be difficult to remove. It’s also potentially costly if the code is deployed into production environments and causes outages or other serious issues. Using CodeCommit’s new features, adding required approvers to your repository pull requests can help identify and mitigate those issues before they are merged into your master branches.

The most rudimentary use of required approvers is to require at least one team member to approve each pull request. While adding human team members as approvers is an important part of the pull request workflow, this feature can also be used to require ‘robot’ approvers of your pull requests, and you can trigger them automatically on each new or updated pull request. Robotic approvers can help find issues that humans miss and enforce best practices regarding code style, test coverage, and more.

Customers have been asking us how they can integrate code review tools with AWS CodeCommit pull requests. I encourage you to check out Amazon CodeGuru Reviewer, a service launched in preview at AWS re:Invent 2019 that uses program analysis and machine learning to detect potential defects that are difficult for developers to find, and recommends fixes in your Java code. Another popular tool is SonarQube, an open-source platform for performing code quality analysis. It helps detect defects, bugs, and security vulnerabilities in your pull requests. This blog post shows you how to integrate SonarQube into the pull request workflow.

This post shows…

Time to read: 10 minutes
Time to complete: 20 minutes
Cost to complete (estimated): $0.40/month for the secret, ~$0.02 per build on CodeBuild, and $0–1 for the CodeCommit user depending on current free tier status (at publication time)
Learning level: Intermediate (200)
Services used: AWS CodeCommit, AWS CodeBuild, AWS CloudFormation, Amazon Elastic Compute Cloud (EC2), Amazon CloudWatch Events, AWS Identity and Access Management, AWS Secrets Manager

Solution overview

In this solution, you create a CodeCommit repository that requires a successful SonarQube quality analysis before pull requests can be merged. You can create the required AWS resources in your account by using the provided AWS CloudFormation template. This template creates the following resources:

  • A new CodeCommit repository, containing a starter Java project that uses the Apache Maven build system, as well as a custom buildspec.yml file to facilitate communication with SonarQube and CodeCommit.
  • An AWS CodeBuild project which invokes your SonarQube instance on build, then reports the status of the analysis back to CodeCommit.
  • An Amazon CloudWatch Events Rule, which listens for pullRequestCreated and pullRequestSourceBranchUpdated events from CodeCommit, and invokes your CodeBuild project.
  • An AWS Secrets Manager secret, which securely stores and provides the username and password of your SonarQube user to the CodeBuild project on-demand.
  • IAM roles for CodeBuild and CloudWatch events.

Although this tutorial showcases a Java project with Maven, the design principles should also apply for other languages and build systems with SonarQube integrations.

Design

The following diagram shows the flow of data, starting with a new or updated pull request on CodeCommit. CloudWatch Events listens for these events and invokes your CodeBuild project. The CodeBuild container clones your repository source commit, performs a Maven install, and invokes the quality analysis on SonarQube, using the credentials obtained from AWS Secrets Manager. When finished, CodeBuild leaves a comment on your pull request, and potentially approves your pull request.

 

Diagram showing the flow of data between the AWS service components, as well as the SonarQube instance.
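
The CloudFormation template creates this trigger for you, but if you want to see how the wiring works, the following sketch creates an equivalent CloudWatch Events rule with boto3. The account ID, Region, ARNs, and rule name are placeholders, and the real template also passes details from the event through to the build, so treat this as an illustration rather than the template's exact definition.

import json

import boto3

events = boto3.client("events")

# Match new and updated pull requests on the demo repository.
# The repository and CodeBuild project ARNs below are placeholders.
pull_request_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Pull Request State Change"],
    "resources": ["arn:aws:codecommit:us-east-1:111111111111:PullRequestApproverBlogDemo"],
    "detail": {"event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"]},
}

events.put_rule(
    Name="sonarqube-pull-request-analysis",
    EventPattern=json.dumps(pull_request_pattern),
    State="ENABLED",
)

# Target the CodeBuild project; the role must allow events.amazonaws.com
# to call codebuild:StartBuild on that project.
events.put_targets(
    Rule="sonarqube-pull-request-analysis",
    Targets=[{
        "Id": "sonarqube-codebuild",
        "Arn": "arn:aws:codebuild:us-east-1:111111111111:project/PullRequestApproverBlogDemo",
        "RoleArn": "arn:aws:iam::111111111111:role/CloudWatchEventsInvokeCodeBuildRole",
    }],
)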

Prerequisites

For this walkthrough, you require:

  • An AWS account
  • A SonarQube server instance (Optional setup instructions included if you don’t have one already)

SonarQube instance setup (Optional)

This tutorial shows a basic setup of SonarQube on Amazon EC2 for informational purposes only. It does not include details about securing your Amazon EC2 instance or SonarQube installation. Please be sure you have secured your environments before placing sensitive data on them.

  1. To start, get a SonarQube server instance up and running. If you are already using SonarQube, feel free to skip these instructions and just note down your host URL and port number for later. If you don’t have one already, I recommend using a fresh Amazon EC2 instance for the job. You can get up and running quickly in just a few commands. I’ve selected an Amazon Linux 2 AMI for my EC2 instance.
  2. Download and install the latest JDK 11 module. Because I am using an Amazon Linux 2 EC2 instance, I can directly install Amazon Corretto 11 with yum.

$ sudo yum install java-11-amazon-corretto-headless

  3. After it’s installed, verify you’re using this version of Java:

$ sudo alternatives --config java

  4. Choose the Java 11 version you just installed.
  5. Download the latest SonarQube installation.
  6. Copy the zip-file onto your Amazon EC2 instance.
  7. Unzip the file into your home directory:

$ unzip sonarqube-8.0.zip -d ~/

This will copy the files into a directory like /home/ec2-user/sonarqube-8.0.

Now, start the server!

$ ~/sonarqube-8.0/bin/linux-x86-64/sonar.sh start

This should start a SonarQube server running on an address like http://<instance-address>:9000. It may take a few moments for the server to start.
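
To confirm the server has finished starting before you continue, you can poll SonarQube’s system status Web API endpoint. A minimal check in Python; the address is a placeholder and the port assumes the default of 9000:

import json
import urllib.request

# Replace with your EC2 instance's public DNS name or IP address.
sonarqube_url = "http://<instance-address>:9000"

with urllib.request.urlopen(sonarqube_url + "/api/system/status") as response:
    status = json.load(response)

# Expect a payload like {"id": "...", "version": "8.0.0.29455", "status": "UP"}.
print(status["status"])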

Steps

Follow these steps to create automated pull request approvals.

Create a SonarQube User

Get started by creating a SonarQube user from your SonarQube webpage. This user is the identity that the automated caller (CodeBuild) uses to authenticate to your SonarQube instance in this workflow.

  1. Go to the Administration tab on your SonarQube instance.
  2. Choose Security, then Users, as shown in the following screenshot. Screenshot showing where to find the user management options inside SonarQube.
  3. Choose Create User. Fill in the form, and note down the Login and Password. You will need to provide these values when creating the following AWS resources.
  4. Choose Create.
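
If you would rather script this step, SonarQube also exposes user management through its Web API (POST api/users/create, authenticated as an administrator). The login, name, and password values below are placeholders; this is a quick sketch under those assumptions, not a hardened setup:

import base64
import urllib.parse
import urllib.request

sonarqube_url = "http://<instance-address>:9000"

# Authenticate as a SonarQube administrator (placeholder credentials).
admin_auth = base64.b64encode(b"admin:your-admin-password").decode()

# Details of the new user that CodeBuild will use to call SonarQube.
new_user = urllib.parse.urlencode({
    "login": "codebuild-robot",
    "name": "CodeBuild Robot",
    "password": "choose-a-strong-password",
}).encode()

request = urllib.request.Request(
    sonarqube_url + "/api/users/create",
    data=new_user,
    headers={"Authorization": "Basic " + admin_auth},
)

with urllib.request.urlopen(request) as response:
    print(response.status)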

Create AWS resources

For this integration, you need to create some AWS resources:

  • AWS CodeCommit repository
  • AWS CodeBuild project
  • Amazon CloudWatch Events rule (to trigger builds when pull requests are created or updated)
  • IAM role (for CodeBuild to assume)
  • IAM role (for CloudWatch Events to assume and invoke CodeBuild)
  • AWS Secrets Manager secret (to store and manage your SonarQube user credentials)

I have created an AWS CloudFormation template to provision these resources for you. You can download the template from the sample repository on GitHub for this blog demo, which also contains the sample code that is uploaded to your CodeCommit repository. When you create the CloudFormation stack, the contents of this GitHub repository are automatically copied into your new CodeCommit repository; I’ve uploaded a zip-file of the contents into a publicly-readable S3 bucket that the CloudFormation template references.

  1. Download or copy the CloudFormation template from GitHub and save it as template.yaml on your local computer.
  2. At the CloudFormation console, choose Create Stack (with new resources).
  3. Choose Upload a template file.
  4. Choose Choose file and select the template.yaml file you just saved.
  5. Choose Next.
  6. Give your stack a name, optionally update the CodeCommit repository name and description, and paste in the username and password of the SonarQube user you created.
  7. Choose Next.
  8. Review the stack options and choose Next.
  9. On Step 4, review your stack, acknowledge the required capabilities, and choose Create Stack.
  10. Wait for the stack creation to complete before proceeding.
  11. Before leaving the AWS CloudFormation console, choose the Resources tab and note down the newly created CodeBuildRole’s Physical Id, as shown in the following screenshot. You need this in the next step. Screenshot showing the Physical Id of the CodeBuild role created through CloudFormation.

Create an Approval Rule Template

Now that your resources are created, create an Approval Rule Template in the CodeCommit console. This template allows you to define a required approver for new pull requests on specific repositories.

  1. On the CodeCommit console home page, choose Approval rule templates in the left panel. Choose Create template.
  2. Give the template a name (like Require SonarQube approval) and optionally, a description.
  3. Set the number of approvals needed as 1.
  4. Under Approval pool members, choose Add.
  5. Set the approver type to Fully qualified ARN. Since the approver will be the identity obtained by assuming the CodeBuild execution role, your approval pool ARN should be the following string:
    arn:aws:sts::<Your AccountId>:assumed-role/<Your CodeBuild IAM role name>/*
    The CodeBuild IAM role name is the Physical Id of the role you created and noted down above. You can also find the full name either in the IAM console or the AWS CloudFormation stack details. Adding this role to the approval pool allows any identity assuming your CodeBuild role to satisfy this approval rule.
  6. Under Associated repositories, find and choose your repository (PullRequestApproverBlogDemo). This ensures that any pull requests subsequently created on your repository will have this rule by default.
  7. Choose Create.
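
If you prefer to script these console steps, the same template can also be created and associated using the AWS SDK. The following is a sketch with boto3; the account ID and role name are placeholders you replace with your own values (the role name is the Physical Id you noted from the CloudFormation stack):

import json

import boto3

codecommit = boto3.client("codecommit")

account_id = "111111111111"               # your AWS account ID
codebuild_role = "YourCodeBuildRoleName"  # Physical Id of the CodeBuild role

rule_content = {
    "Version": "2018-11-08",
    "Statements": [{
        "Type": "Approvers",
        "NumberOfApprovalsNeeded": 1,
        "ApprovalPoolMembers": [
            "arn:aws:sts::{}:assumed-role/{}/*".format(account_id, codebuild_role)
        ],
    }],
}

codecommit.create_approval_rule_template(
    approvalRuleTemplateName="Require SonarQube approval",
    approvalRuleTemplateDescription="Require an approval from the SonarQube CodeBuild role",
    approvalRuleTemplateContent=json.dumps(rule_content),
)

# Associate the template so new pull requests on the repository pick it up.
codecommit.associate_approval_rule_template_with_repository(
    approvalRuleTemplateName="Require SonarQube approval",
    repositoryName="PullRequestApproverBlogDemo",
)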

Update the repository with a SonarQube endpoint URL

For this step, you update your CodeCommit repository code to include the endpoint URL of your SonarQube instance. This tells CodeBuild where to reach your SonarQube instance when it runs the analysis.

You can use the AWS Management Console to make this code change.

  1. Head back to the CodeCommit home page and choose your repository name from the Repositories list.
  2. You need a new branch on which to update the code. From the repository page, choose Branches, then Create branch.
  3. Give the new branch a name (such as update-url) and make sure you are branching from master. Choose Create branch.
  4. You should now see two branches in the table. Choose the name of your new branch (update-url) to start browsing the code on this branch. On the update-url branch, open the buildspec.yml file by choosing it.
  5. Choose Edit to make a change.
  6. In the pre_build steps, modify line 17 with your SonarQube instance URL and listen port number, as shown in the following screenshot. Screenshot showing buildspec YAML code.
  7. To save, scroll down and fill out the author, email, and commit message. When you’re happy, commit this by choosing Commit changes.

Create a Pull Request

You are now ready to create a pull request!

  1. From the CodeCommit console main page, choose Repositories and PullRequestApproverBlogDemo.
  2. In the left navigation panel, choose Pull Requests.
  3. Choose Create pull request.
  4. Select master as your destination branch, and your new branch (update-url) as the source branch.
  5. Choose Compare.
  6. Give your pull request a title and description, and choose Create pull request.

It’s time to see the magic in action. Now that you’ve created your pull request, you should already see that your pull request requires one approver but is not yet approved. This rule comes from the template you created and associated earlier.

You’ll see images like the following screenshot if you browse through the tabs on your pull request:

Screenshot showing that your pull request has 0 of 1 rule satisfied, with 0 approvals. Screenshot showing a table of approval rules on this pull request which were applied by a template. Require SonarQube approval is listed but not yet satisfied.

Thanks to the CloudWatch Events Rule, CodeBuild should already be hard at work cloning your repository, performing a build, and invoking your SonarQube instance. It is able to find the SonarQube URL you provided because CodeBuild is cloning the source branch of your pull request. If you choose to peek at your project in the CodeBuild console, you should see an in-progress build.

Once the build has completed, head back over to your CodeCommit pull request page. If all went well, you’ll see that SonarQube approved your pull request and left you a comment (or, if the quality gate failed, left a comment without approving).
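
For reference, this reporting step boils down to two CodeCommit API calls, sketched below with boto3 and placeholder IDs. The real commands live in the sample repository’s buildspec.yml, where the pull request and commit IDs come from the triggering event, so treat this only as an outline of the mechanism:

import boto3

codecommit = boto3.client("codecommit")

# Placeholders: in the real build these values come from the pull request
# event that triggered CodeBuild.
pull_request_id = "1"
repository_name = "PullRequestApproverBlogDemo"
source_commit = "<source-commit-id>"
destination_commit = "<destination-commit-id>"

# Leave a comment on the pull request summarizing the quality gate result.
codecommit.post_comment_for_pull_request(
    pullRequestId=pull_request_id,
    repositoryName=repository_name,
    beforeCommitId=destination_commit,
    afterCommitId=source_commit,
    content="SonarQube quality gate passed. See the full report on your SonarQube instance.",
)

# Approve the current revision of the pull request if the quality gate passed.
revision_id = codecommit.get_pull_request(
    pullRequestId=pull_request_id
)["pullRequest"]["revisionId"]

codecommit.update_pull_request_approval_state(
    pullRequestId=pull_request_id,
    revisionId=revision_id,
    approvalState="APPROVE",
)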

The Activity tab should resemble that in the following screenshot:

Screenshot showing that a comment was made by SonarQube through CodeBuild, and that the quality gate passed. The comment includes a link back to the SonarQube instance.

The Approvals tab should resemble that in the following screenshot:

Screenshot of Approvals tab on the pull request. The approvals table shows an approval by the SonarQube and that the rule to require SonarQube approval is satisfied.

Suppose you need to make a change to your pull request. If you perform updates to your source branch, the approval status will be reset. As your push completes, a new SonarQube analysis will begin just as it did the first time.

Once your SonarQube thresholds are satisfied and your pull request is approved, feel free to merge it!

Cleanup

To avoid incurring additional charges, you may want to delete the AWS resources you created for this project. To do this, simply navigate to the CloudFormation console, select the stack you created above, and choose Delete. If you are sure you want to delete, confirm by choosing Delete stack. CloudFormation will delete all the resources you created with this stack.

Conclusion

In this tutorial, you created a workflow to watch for pull request changes to your repository, triggered a CodeBuild project execution which invoked your SonarQube for code quality analysis, and then reported back to CodeCommit to approve your pull request.

I hope this guide illustrates the potential power of combining pull request approval rules with robotic approvers. While this example is specifically about integrating SonarQube, the same pattern can be used to invoke other robotic approvers using CodeBuild, or by invoking an AWS Lambda function instead.

This tutorial was written and tested using SonarQube Version 8.0 (build 29455).

DevOps at re:Invent 2019!

Post Syndicated from Matt Dwyer original https://aws.amazon.com/blogs/devops/devops-at-reinvent-2019/

re:Invent 2019 is fast approaching (NEXT WEEK!) and we here at the AWS DevOps blog wanted to take a moment to highlight DevOps-focused presentations, share some tips from experienced re:Invent pros, and highlight a few sessions that still have availability for pre-registration. We’ve broken down the track into one overarching leadership session and four topic areas: (a) architecture, (b) culture, (c) software delivery/operations, and (d) AWS tools, services, and CLI.

In total there will be 145 DevOps track sessions, stretched over 5 days, and divided into four distinct session types:

  • Sessions (34) are one-hour presentations delivered by AWS experts and customer speakers who share their expertise / use cases
  • Workshops (20) are two-hour-and-fifteen-minute, hands-on sessions where you work in teams to solve problems using AWS services
  • Chalk Talks (41) are interactive white-boarding sessions with a smaller audience. They typically begin with a 10–15-minute presentation delivered by an AWS expert, followed by 45–50 minutes of Q&A
  • Builders Sessions (50) are one-hour, small group sessions with six customers and one AWS expert, who is there to help, answer questions, and provide guidance

Select DevOps-focused sessions are highlighted below. If you want to view and/or register for any session, including Keynotes, builders’ fairs, and demo theater sessions, you can access the event catalog using your re:Invent registration credentials.

Reserve your seat for AWS re:Invent activities today >>

re:Invent TIP #1: Identify topics you are interested in before attending re:Invent and reserve a seat. We hold space in sessions, workshops, and chalk talks for walk-ups; however, if you want to get into a popular session, be prepared to wait in line!

Please see below for select sessions, workshops, and chalk talks that will be conducted during re:Invent.

LEADERSHIP SESSION DELIVERED BY KEN EXNER, DIRECTOR AWS DEVELOPER TOOLS

[Session] Leadership Session: Developer Tools on AWS (DOP210-L) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Ken Exner – Director, AWS Dev Tools, Amazon Web Services
Speaker 2: Kyle Thomson – SDE3, Amazon Web Services

Join Ken Exner, GM of AWS Developer Tools, as he shares the state of developer tooling on AWS, as well as the future of development on AWS. Ken uses insight from his position managing Amazon’s internal tooling to discuss Amazon’s practices and patterns for releasing software to the cloud. Additionally, Ken provides insight and updates across many areas of developer tooling, including infrastructure as code, authoring and debugging, automation and release, and observability. Throughout this session Ken will recap recent launches and show demos for some of the latest features.

re:Invent TIP #2: Leadership Sessions are a topic area’s State of the Union, where AWS leadership will share the vision and direction for a given topic at AWS re:Invent.

(a) ARCHITECTURE

[Session] Amazon’s approach to failing successfully (DOP208-R; DOP208-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Becky Weiss – Senior Principal Engineer, Amazon Web Services

Welcome to the real world, where things don’t always go your way. Systems can fail despite being designed to be highly available, scalable, and resilient. These failures, if used correctly, can be a powerful lever for gaining a deep understanding of how a system actually works, as well as a tool for learning how to avoid future failures. In this session, we cover Amazon’s favorite techniques for defining and reviewing metrics—watching the systems before they fail—as well as how to do an effective postmortem that drives both learning and meaningful improvement.

[Session] Improving resiliency with chaos engineering (DOP309-R; DOP309-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Olga Hall – Senior Manager, Tech Program Management
Speaker 2: Adrian Hornsby – Principal Evangelist, Amazon Web Services

Failures are inevitable. Regardless of the engineering efforts put into building resilient systems and handling edge cases, sometimes a case beyond our reach turns a benign failure into a catastrophic one. Therefore, we should test and continuously improve our system’s resilience to failures to minimize impact on a user’s experience. Chaos engineering is one of the best ways to achieve that. In this session, you learn how Amazon Prime Video has implemented chaos engineering into its regular testing methods, helping it achieve increased resiliency.

[Session] Amazon’s approach to security during development (DOP310-R; DOP310-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Colm MacCarthaigh – Senior Principal Engineer, Amazon Web Services

At AWS we say that security comes first—and we really mean it. In this session, hear about how AWS teams both minimize security risks in our products and respond to security issues proactively. We talk through how we integrate security reviews, penetration testing, code analysis, and formal verification into the development process. Additionally, we discuss how AWS engineering teams react quickly and decisively to new security risks as they emerge. We also share real-life firefighting examples and the lessons learned in the process.

[Session] Amazon’s approach to building resilient services (DOP342-R; DOP342-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Marc Brooker – Senior Principal Engineer, Amazon Web Services

One of the biggest challenges of building services and systems is predicting the future. Changing load, business requirements, and customer behavior can all change in unexpected ways. In this talk, we look at how AWS builds, monitors, and operates services that handle the unexpected. Learn how to make your own services handle a changing world, from basic design principles to patterns you can apply today.

re:Invent TIP #3: Not sure where to spend your time? Let an AWS Hero give you some pointers. AWS Heroes are prominent AWS advocates who are passionate about sharing AWS knowledge with others. They have written guides to help attendees find relevant activities by providing recommendations based on specific demographics or areas of interest.

(b) CULTURE

[Session] Driving change and building a high-performance DevOps culture (DOP207-R; DOP207-R1)

Speaker: Mark Schwartz – Enterprise Strategist, Amazon Web Services

When it comes to digital transformation, every enterprise is different. There is often a person or group with a vision, knowledge of good practices, a sense of urgency, and the energy to break through impediments. They may be anywhere in the organizational structure: high, low, or—in a typical scenario—somewhere in middle management. Mark Schwartz, an enterprise strategist at AWS and the author of “The Art of Business Value” and “A Seat at the Table: IT Leadership in the Age of Agility,” shares some of his research into building a high-performance culture by driving change from every level of the organization.

[Session] Amazon’s approach to running service-oriented organizations (DOP301-R; DOP301-R1; DOP301-R2)

Speaker: Andy Troutman – Director AWS Developer Tools, Amazon Web Services

Amazon’s “two-pizza teams” are famously small teams that support a single service or feature. Each of these teams has the autonomy to build and operate their service in a way that best supports their customers. But how do you coordinate across tens, hundreds, or even thousands of two-pizza teams? In this session, we explain how Amazon coordinates technology development at scale by focusing on strategies that help teams coordinate while maintaining autonomy to drive innovation.

re:Invent TIP #4: The max number of 60-minute sessions you can attend during re:Invent is 24! These sessions (e.g., sessions, chalk talks, builders sessions) will usually make up the bulk of your agenda.

(c) SOFTWARE DELIVERY AND OPERATIONS

[Session] Strategies for securing code in the cloud and on premises (DOP320-R; DOP320-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Craig Smith – Senior Solutions Architect
Speaker 2: Lee Packham – Solutions Architect

Some people prefer to keep their code and tooling on premises, though this can create headaches and slow teams down. Others prefer keeping code off of laptops that can be misplaced. In this session, we walk through the alternatives and recommend best practices for securing your code in cloud and on-premises environments. We demonstrate how to use services such as Amazon WorkSpaces to keep code secure in the cloud. We also show how to connect tools such as Amazon Elastic Container Registry (Amazon ECR) and AWS CodeBuild with your on-premises environments so that your teams can go fast while keeping your data off of the public internet.

[Session] Deploy your code, scale your application, and lower Cloud costs using AWS Elastic Beanstalk (DOP326) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Prashant Prahlad – Sr. Manager

You can effortlessly convert your code into web applications without having to worry about provisioning and managing AWS infrastructure, applying patches and updates to your platform, or using a variety of tools to monitor the health of your application. In this session, we show how anyone, not just professional developers, can use AWS Elastic Beanstalk in various scenarios: from an administrator moving a Windows .NET workload into the cloud, to a developer building a containerized enterprise app as a Docker image, to a data scientist deploying a machine learning model, all without the need to understand or manage the infrastructure details.

[Session] Amazon’s approach to high-availability deployment (DOP404-R; DOP404-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Peter Ramensky – Senior Manager

Continuous-delivery failures can lead to reduced service availability and bad customer experiences. To maximize the rate of successful deployments, Amazon’s development teams implement guardrails in the end-to-end release process to minimize deployment errors, with a goal of achieving zero deployment failures. In this session, learn the continuous-delivery practices that we invented that help raise the bar and prevent costly deployment failures.

[Session] Introduction to DevOps on AWS (DOP209-R; DOP209-R1)

Speaker 1: Jonathan Weiss – Senior Manager
Speaker 2: Sebastien Stormacq – Senior Technical Evangelist

How can you accelerate the delivery of new, high-quality services? Are you able to experiment and get feedback quickly from your customers? How do you scale your development team from 1 to 1,000? To answer these questions, it is essential to leverage some key DevOps principles and use CI/CD pipelines so you can iterate on and quickly release features. In this talk, we walk you through the journey of a single developer building a successful product and scaling their team and processes to hundreds or thousands of deployments per day. We also walk you through best practices and using AWS tools to achieve your DevOps goals.

[Workshop] DevOps essentials: Introductory workshop on CI/CD practices (DOP201-R; DOP201-R1; DOP201-R2; DOP201-R3)

Speaker 1: Leo Zhadanovsky – Principal Solutions Architect
Speaker 2: Karthik Thirugnanasambandam – Partner Solutions Architect

In this session, learn how to effectively leverage various AWS services to improve developer productivity and reduce the overall time to market for new product capabilities. We demonstrate a prescriptive approach to incrementally adopt and embrace some of the best practices around continuous integration and delivery using AWS developer tools and third-party solutions, including AWS CodeCommit, AWS CodeBuild, Jenkins, AWS CodePipeline, AWS CodeDeploy, AWS X-Ray, and AWS Cloud9. We also highlight some best practices and productivity tips that can help make your software release process fast, automated, and reliable.

[Workshop] Implementing GitFlow with AWS tools (DOP202-R; DOP202-R1; DOP202-R2)

Speaker 1: Amit Jha – Sr. Solutions Architect
Speaker 2: Ashish Gore – Sr. Technical Account Manager

Utilizing short-lived feature branches is the development method of choice for many teams. In this workshop, you learn how to use AWS tools to automate merge-and-release tasks. We cover high-level frameworks for how to implement GitFlow using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. You also get an opportunity to walk through a prebuilt example and examine how the framework can be adopted for individual use cases.

[Chalk Talk] Generating dynamic deployment pipelines with AWS CDK (DOP311-R; DOP311-R1; DOP311-R2)

Speaker 1: Flynn Bundy – AppDev Consultant
Speaker 2: Koen van Blijderveen – Senior Security Consultant

In this session we dive deep into dynamically generating deployment pipelines that deploy across multiple AWS accounts and Regions. Using the power of the AWS Cloud Development Kit (AWS CDK), we demonstrate how to simplify and abstract the creation of deployment pipelines to suit a range of scenarios. We highlight how AWS CodePipeline—along with AWS CodeBuild, AWS CodeCommit, and AWS CodeDeploy—can be structured together with the AWS deployment framework to get the most out of your infrastructure and application deployments.

[Chalk Talk] Customize AWS CloudFormation with open-source tools (DOP312-R; DOP312-R1; DOP312-E)

Speaker 1: Luis Colon – Senior Developer Advocate
Speaker 2: Ryan Lohan – Senior Software Engineer

In this session, we showcase some of the best open-source tools available for AWS CloudFormation customers, including conversion and validation utilities. Get a glimpse of the many open-source projects that you can use as you create and maintain your AWS CloudFormation stacks.

[Chalk Talk] Optimizing Java applications for scale on AWS (DOP314-R; DOP314-R1; DOP314-R2)

Speaker 1: Sam Fink – SDE II
Speaker 2: Kyle Thomson – SDE3

Executing at scale in the cloud can require more than the conventional best practices. During this talk, we offer a number of different Java-related tools you can add to your AWS tool belt to help you more efficiently develop Java applications on AWS—as well as strategies for optimizing those applications. We adapt the talk on the fly to cover the topics that interest the group most, including more easily accessing Amazon DynamoDB, handling high-throughput uploads to and downloads from Amazon Simple Storage Service (Amazon S3), troubleshooting Amazon ECS services, working with local AWS Lambda invocations, optimizing the Java SDK, and more.

[Chalk Talk] Securing your CI/CD tools and environments (DOP316-R; DOP316-R1; DOP316-R2)

Speaker: Leo Zhadanovsky – Principal Solutions Architect

In this session, we discuss how to configure security for AWS CodePipeline, deployments in AWS CodeDeploy, builds in AWS CodeBuild, and git access with AWS CodeCommit. We discuss AWS Identity and Access Management (IAM) best practices, to allow you to set up least-privilege access to these services. We also demonstrate how to ensure that your pipelines meet your security and compliance standards with the CodePipeline AWS Config integration, as well as manual approvals. Lastly, we show you best-practice patterns for integrating security testing of your deployment artifacts inside of your CI/CD pipelines.

[Chalk Talk] Amazon’s approach to automated testing (DOP317-R; DOP317-R1; DOP317-R2)

Speaker 1: Carlos Arguelles – Principal Engineer
Speaker 2: Charlie Roberts – Senior SDET

Join us for a session about how Amazon uses testing strategies to build a culture of quality. Learn Amazon’s best practices around load testing, unit testing, integration testing, and UI testing. We also discuss what parts of testing are automated and how we take advantage of tools, and share how we strategize to fail early to ensure minimum impact to end users.

[Chalk Talk] Building and deploying applications on AWS with Python (DOP319-R; DOP319-R1; DOP319-R2)

Speaker 1: James Saryerwinnie – Senior Software Engineer
Speaker 2: Kyle Knapp – Software Development Engineer

In this session, hear from core developers of the AWS SDK for Python (Boto3) as we walk through the design of sample Python applications. We cover best practices in using Boto3 and look at other libraries to help build these applications, including AWS Chalice, a serverless microframework for Python. Additionally, we discuss testing and deployment strategies to manage the lifecycle of your applications.

[Chalk Talk] Deploying AWS CloudFormation StackSets across accounts and Regions (DOP325-R; DOP325-R1)

Speaker 1: Mahesh Gundelly – Software Development Manager
Speaker 2: Prabhu Nakkeeran – Software Development Manager

AWS CloudFormation StackSets can be a critical tool to efficiently manage deployments of resources across multiple accounts and regions. In this session, we cover how AWS CloudFormation StackSets can help you ensure that all of your accounts have the proper resources in place to meet security, governance, and regulation requirements. We also cover how to make the most of the latest functionalities and discuss best practices, including how to plan for safe deployments with minimal blast radius for critical changes.

[Chalk Talk] Monitoring and observability of serverless apps using AWS X-Ray (DOP327-R; DOP327-R1; DOP327-R2)

Speaker 1 (R, R1, R2): Shengxin Li – Software Development Engineer
Speaker 2 (R, R1): Sirirat Kongdee – Solutions Architect
Speaker 3 (R2): Eric Scholz – Solutions Architect, Amazon

Monitoring and observability are essential parts of DevOps best practices. You need monitoring to debug and trace unhandled errors, performance bottlenecks, and customer impact in the distributed nature of a microservices architecture. In this chalk talk, we show you how to integrate the AWS X-Ray SDK to your code to provide observability to your overall application and drill down to each service component. We discuss how X-Ray can be used to analyze, identify, and alert on performance issues and errors and how it can help you troubleshoot application issues faster.

[Chalk Talk] Optimizing deployment strategies for speed & safety (DOP341-R; DOP341-R1; DOP341-R2)

Speaker: Karan Mahant – Software Development Manager, Amazon

Modern application development moves fast and demands continuous delivery. However, the greatest risk to an application’s availability can occur during deployments. Join us in this chalk talk to learn about deployment strategies for web servers and for Amazon EC2, container-based, and serverless architectures. Learn how you can optimize your deployments to increase productivity during development cycles and mitigate common risks when deploying to production by using canary and blue/green deployment strategies. Further, we share our learnings from operating production services at AWS.

[Chalk Talk] Continuous integration using AWS tools (DOP216-R; DOP216-R1; DOP216-R2)

Speaker: Richard Boyd – Sr Developer Advocate, Amazon Web Services

Today, more teams are adopting continuous-integration (CI) techniques to enable collaboration, increase agility, and deliver a high-quality product faster. Cloud-based development tools such as AWS CodeCommit and AWS CodeBuild can enable teams to easily adopt CI practices without the need to manage infrastructure. In this session, we showcase best practices for continuous integration and discuss how to effectively use AWS tools for CI.

re:Invent TIP #5: If you’re traveling to another session across campus, give yourself at least 60 minutes!

(d) AWS TOOLS, SERVICES, AND CLI

[Session] Best practices for authoring AWS CloudFormation (DOP302-R; DOP302-R1)

Speaker 1: Olivier Munn – Sr Product Manager Technical, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

Incorporating infrastructure as code into software development practices can help teams and organizations improve automation and throughput without sacrificing quality and uptime. In this session, we cover multiple best practices for writing, testing, and maintaining AWS CloudFormation template code. You learn about IDE plug-ins, reusability, testing tools, modularizing stacks, and more. During the session, we also review sample code that showcases some of the best practices in a way that lends more context and clarity.

[Chalk Talk] Using AWS tools to author and debug applications (DOP215-R; DOP215-R1; DOP215-R2) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Fabian Jakobs – Principal Engineer, Amazon Web Services

Every organization wants its developers to be faster and more productive. AWS Cloud9 lets you create isolated cloud-based development environments for each project and access them from a powerful web-based IDE anywhere, anytime. In this session, we demonstrate how to use AWS Cloud9 and provide an overview of IDE toolkits that can be used to author application code.

[Session] Migrating .Net frameworks to the cloud (DOP321) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Robert Zhu – Principal Technical Evangelist, Amazon Web Services

Learn how to migrate your .NET application to AWS with minimal steps. In this demo-heavy session, we share best practices for migrating a three-tiered application on ASP.NET and SQL Server to AWS. Throughout the process, you get to see how AWS Toolkit for Visual Studio can enable you to fully leverage AWS services such as AWS Elastic Beanstalk, modernizing your application for more agile and flexible development.

[Session] Deep dive into AWS Cloud Development Kit (DOP402-R; DOP402-R1)

Speaker 1: Elad Ben-Israel – Principal Software Engineer, Amazon Web Services
Speaker 2: Jason Fulghum – Software Development Manager, Amazon Web Services

The AWS Cloud Development Kit (AWS CDK) is a multi-language, open-source framework that enables developers to harness the full power of familiar programming languages to define reusable cloud components and provision applications built from those components using AWS CloudFormation. In this session, you develop an AWS CDK application and learn how to quickly assemble AWS infrastructure. We explore the AWS Construct Library and show you how easy it is to configure your cloud resources, manage permissions, connect event sources, and build and publish your own constructs.

[Session] Introduction to the AWS CLI v2 (DOP406-R; DOP406-R1)

Speaker 1: James Saryerwinnie – Senior Software Engineer, Amazon Web Services
Speaker 2: Kyle Knapp – Software Development Engineer, Amazon Web Services

The AWS Command Line Interface (AWS CLI) is a command-line tool for interacting with AWS services and managing your AWS resources. We’ve taken all of the lessons learned from AWS CLI v1 (launched in 2013), and have been working on AWS CLI v2—the next major version of the AWS CLI—for the past year. AWS CLI v2 includes features such as improved installation mechanisms, a better getting-started experience, interactive workflows for resource management, and new high-level commands. Come hear from the core developers of the AWS CLI about how to upgrade and start using AWS CLI v2 today.

[Session] What’s new in AWS CloudFormation (DOP408-R; DOP408-R1; DOP408-R2)

Speaker 1: Jing Ling – Senior Product Manager, Amazon Web Services
Speaker 2: Luis Colon – Senior Developer Advocate, Amazon Web Services

AWS CloudFormation is one of the most widely used AWS tools, enabling infrastructure as code, deployment automation, repeatability, compliance, and standardization. In this session, we cover the latest improvements and best practices for AWS CloudFormation customers in particular, and for seasoned infrastructure engineers in general. We cover new features and improvements that span many use cases, including programmability options, cross-region and cross-account automation, operational safety, and additional integration with many other AWS services.

[Workshop] Get hands-on with Python/boto3 with no or minimal Python experience (DOP203-R; DOP203-R1; DOP203-R2)

Speaker 1: Herbert-John Kelly – Solutions Architect, Amazon Web Services
Speaker 2: Carl Johnson – Enterprise Solutions Architect, Amazon Web Services

Learning a programming language can seem like a huge investment. However, solving strategic business problems using modern technology approaches, like machine learning and big-data analytics, often requires some understanding. In this workshop, you learn the basics of using Python, one of the most popular programming languages that can be used for small tasks like simple operations automation, or large tasks like analyzing billions of records and training machine-learning models. You also learn about and use the AWS SDK (software development kit) for Python, called boto3, to write a Python program running on and interacting with resources in AWS.

[Workshop] Building reusable AWS CloudFormation templates (DOP304-R; DOP304-R1; DOP304-R2)

Speaker 1: Chelsey Salberg – Front End Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

AWS CloudFormation gives you an easy way to define your infrastructure as code, but are you using it to its full potential? In this workshop, we take real-world architecture from a sandbox template to production-ready reusable code. We start by reviewing an initial template, which you update throughout the session to incorporate AWS CloudFormation features, like nested stacks and intrinsic functions. By the end of the workshop, expect to have a set of AWS CloudFormation templates that demonstrate the same best practices used in AWS Quick Starts.

[Workshop] Building a scalable serverless application with AWS CDK (DOP306-R; DOP306-R1; DOP306-R2; DOP306-R3)

Speaker 1: David Christiansen – Senior Partner Solutions Architect, Amazon Web Services
Speaker 2: Daniele Stroppa – Solutions Architect, Amazon Web Services

Dive into AWS and build a web application with the AWS Mythical Mysfits tutorial. In this workshop, you build a serverless application using AWS Lambda, Amazon API Gateway, and the AWS Cloud Development Kit (AWS CDK). Through the tutorial, you get hands-on experience using AWS CDK to model and provision a serverless distributed application infrastructure, you connect your application to a backend database, and you capture and analyze data on user behavior. Other AWS services that are utilized include Amazon Kinesis Data Firehose and Amazon DynamoDB.

[Chalk Talk] Assembling an AWS CloudFormation authoring tool chain (DOP313-R; DOP313-R1; DOP313-R2)

Speaker 1: Nathan McCourtney – Sr System Development Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

In this session, we provide a prescriptive tool chain and methodology to improve your coding productivity as you create and maintain AWS CloudFormation stacks. We cover authoring recommendations from editors and plugins, to setting up a deployment pipeline for your AWS CloudFormation code.

[Chalk Talk] Build using JavaScript with AWS Amplify, AWS Lambda, and AWS Fargate (DOP315-R; DOP315-R1; DOP315-R2)

Speaker 1: Trivikram Kamat – Software Development Engineer, Amazon Web Services
Speaker 2: Vinod Dinakaran – Software Development Manager, Amazon Web Services

Learn how to build applications with AWS Amplify on the front end and AWS Fargate and AWS Lambda on the back end, using protocols like HTTP/2 and the JavaScript SDKs in the browser and in Node.js. Leverage the AWS SDK for JavaScript’s modular NPM packages in resource-constrained environments, and benefit from the built-in async features to run your Node.js and mobile applications, and SPAs, at scale.

[Chalk Talk] Scaling CI/CD adoption using AWS CodePipeline and AWS CloudFormation (DOP318-R; DOP318-R1; DOP318-R2)

Speaker 1: Andrew Baird – Principal Solutions Architect, Amazon Web Services
Speaker 2: Neal Gamradt – Applications Architect, WarnerMedia

Enabling CI/CD across your organization through repeatable patterns and infrastructure-as-code templates can unlock development speed while encouraging best practices. The SEAD Architecture team at WarnerMedia helps encourage CI/CD adoption across their company. They do so by creating and maintaining easily extensible infrastructure-as-code patterns for creating new services and deploying to them automatically using CI/CD. In this session, learn about the patterns they have created and the lessons they have learned.

re:Invent TIP #6: There are lots of extra activities at re:Invent. Expect your evenings to fill up onsite! Check out the peculiar programs, including board games, bingo, arts & crafts, or ’80s sing-alongs.

Notifying 3rd Party Services of CodeBuild State Changes

Post Syndicated from Nick Lee original https://aws.amazon.com/blogs/devops/notifying-3rd-party-services-of-codebuild-state-changes/

It is often useful to notify other systems of the build status of a code change, such as by creating release tickets in your project-tracking software when a build succeeds, or posting a message to your team’s chat solution. A previous blog post showed you how to integrate AWS Lambda and Amazon SNS to extend AWS CodeCommit to send email notifications for file changes. This blog post shows you how to integrate AWS Lambda, AWS Systems Manager Parameter Store, Amazon DynamoDB and Amazon CloudWatch Events to extend AWS CodeBuild by adding webhook functionality, allowing you to make authenticated API calls to 3rd-party services in response to CodeBuild state changes. It also provides an example of how to use this solution to create an issue in JIRA, a popular issue and project-tracking software solution, in response to a CodeBuild build status change.

Some of the services used include:

  • Amazon DynamoDB: a fully-managed key-value and document database that delivers single-digit millisecond performance at any scale. This solution uses it as a registry for webhook receivers, and takes advantage of its on-demand capacity mode so that you only pay for the resources you consume.
  • AWS Lambda: a popular serverless service that lets you run code without provisioning or managing servers. This solution uses a Lambda function to query DynamoDB for a list of webhook receivers and to notify those receivers of CodeBuild build status changes.
  • Amazon CloudWatch Events: a near real-time stream of system events that lets you detect changes to your AWS resources and set up rules to respond to those changes (for example, by invoking a Lambda function in response to build notifications).
  • AWS Systems Manager Parameter Store: a secure, hierarchical storage solution which can be used to store items such as configuration data, passwords, database strings, and license codes. This solution uses SSM Parameter Store to store the HTTP endpoints for 3rd party providers and custom headers, rather than storing them in plaintext in DynamoDB.

To help you quickly deploy the solution, I have made it available as an AWS CloudFormation template. AWS CloudFormation is a management tool that provides a common language to describe and provision all of the infrastructure resources in AWS.

Overview

The following diagram shows how this solution uses AWS services to invoke 3rd-party services in response to CodeBuild state changes.

An overview of the workflow for this solution, showing CodeBuild publishing to CloudWatch Events which invokes the Lambda to notify the 3rd party service.

CodeBuild publishes several useful CloudWatch events, which can notify you of build state changes and build phase transitions. By setting up a CloudWatch event rule, you can detect when a CodeBuild job enters a specific state. In this solution, I create a CloudWatch event rule which captures CodeBuild state changes for all AWS CodeBuild projects in an account, then invokes a Lambda function to handle these change notifications. When this Lambda function is triggered, the following steps are executed:

  1. Query the CodeBuildWebhooks DynamoDB table to find any registered webhook receivers for the CodeBuild project which triggered the event rule.
  2. For each registered receiver:
    1. Obtain the HTTP endpoint and any custom headers from SSM Parameter Store. Some headers/endpoints may be considered sensitive, so the solution stores them in SSM Parameter Store as SecureStrings where needed. The DynamoDB records reference the relevant SSM Parameter by name. The parameter names must all be prefixed with /webhooks/ in order for the webhook Lambda function to access them.
    2. After obtaining the URL for the webhook receiver from SSM Parameter Store, the Lambda function checks if the record from DynamoDB contains a custom HTTP body template. If so, it loads this template, substituting any placeholder values. If no custom template is found, a default template is used.
    3. Finally, the HTTP request is sent with the processed body template.
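
The published function in the aws-codebuild-webhooks repository is the source of truth; the condensed sketch below only illustrates the steps above. It assumes the CodeBuildWebhooks table is keyed on the project name and uses the attribute names from the registration example later in this post, so details may differ from the real implementation.

import json
import os
import urllib.request
from string import Template

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
ssm = boto3.client("ssm")

DEFAULT_TEMPLATE = '{"text": "AWS CodeBuild project $PROJECT is now $STATUS"}'

def handler(event, context):
    """Invoked by the CloudWatch Events rule for CodeBuild build state changes."""
    project = event["detail"]["project-name"]
    status = event["detail"]["build-status"]

    # 1. Look up registered webhook receivers for this CodeBuild project
    #    (table name from an env var, falling back to the name used in this post).
    table = dynamodb.Table(os.environ.get("WEBHOOKS_TABLE", "CodeBuildWebhooks"))
    receivers = table.query(KeyConditionExpression=Key("project").eq(project))["Items"]

    for receiver in receivers:
        # Skip receivers not registered for this build status.
        if status not in receiver.get("statuses", []):
            continue

        # 2a. Resolve the endpoint and optional headers from SSM Parameter Store
        #     (parameter names are prefixed with /webhooks/).
        url = ssm.get_parameter(
            Name=receiver["hook_url_param_name"], WithDecryption=True
        )["Parameter"]["Value"]
        headers = {"Content-Type": "application/json"}
        if "hook_headers_param_name" in receiver:
            headers.update(json.loads(ssm.get_parameter(
                Name=receiver["hook_headers_param_name"], WithDecryption=True
            )["Parameter"]["Value"]))

        # 2b. Use the receiver's custom body template if present, else the default,
        #     substituting the $PROJECT and $STATUS placeholders.
        body = Template(receiver.get("template", DEFAULT_TEMPLATE)).safe_substitute(
            PROJECT=project, STATUS=status
        )

        # 2c. Send the HTTP request to the webhook receiver.
        urllib.request.urlopen(
            urllib.request.Request(url, data=body.encode("utf-8"), headers=headers)
        )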

I use Python and Boto 3 to implement this function. The full source code is published on GitHub. You can find it in the aws-codebuild-webhooks repository.

Getting started

The following sections describe the steps to deploy and use the solution.

Deploying the Solution

There is an AWS CloudFormation template, template.yaml, in the source code which uses the AWS Serverless Application Model to define the required components of this solution. For convenience, I have made it available as a one-click launch template.

When launching the stack, the default behavior is to expect SSM parameters to be encrypted using the AWS managed KMS CMK for SSM. However, you can input a different Key ID as the value for the SSMKeyId parameter if required. The one-click launch template deploys the solution in us-east-1; launch links for other regions are available on the solution GitHub page.

The template deploys:

  • A Lambda function and associated IAM role for sending HTTP requests
  • A DynamoDB table for registering webhook receivers
  • A CloudWatch event rule for triggering the Lambda function in response to CodeBuild events

The Lambda function code demonstrates how to make authenticated HTTP requests to 3rd party services. You can extend the sample code to add in additional features such as deployment of the Lambda function in a VPC to access private resources like an on-premises Jira.

Example: Creating a Ticket in Jira

To demonstrate the solution, I set up a webhook to create a bug ticket in Jira whenever a build fails. In order to follow this example, you need to install and configure the AWS CLI.

First off, I store the Jira URL securely in the SSM parameter store:

aws ssm put-parameter --cli-input-json '{
  "Name": "/webhooks/jira-issues-webhook-url",
  "Value": "https://<my-jira-server>/rest/api/latest/issue/",
  "Type": "String",
  "Description": "Jira issues Rest API URL"
}'

For this sample, I use basic authentication with the JIRA Rest API. After following Jira’s instructions to generate a BASE64-encoded authorization string, I store the headers as a JSON string in SSM:

aws ssm put-parameter --cli-input-json '{
  "Name": "/webhooks/jira-basic-auth-headers",
  "Value": "{\"Authorization\": \"Basic <base64 encoded useremail:api_token>\"}",
  "Type": "SecureString",
  "Description": "Jira basic auth headers for CodeBuild webhooks"
}'

For more authentication options, consult the Jira docs.

Now I need to register the webhook receiver in my CodeBuildWebhooks DynamoDB table. In order to make requests to the JIRA REST API, my Lambda function must supply a JSON string containing a payload accepted by the Jira API for creating issues. To do this, I save the following JSON as item.json in my current working directory:

{
  "project": {"S": "MyCodeBuildProject"},
  "hook_url_param_name": {"S": "/webhooks/jira-issues-webhook-url"},
  "hook_headers_param_name": {"S": "/webhooks/jira-basic-auth-headers"},
  "statuses": {"L": [{"S": "FAILED"}]},
  "template": {"S": "{\"fields\": {\"project\":{\"id\": \"10000\"},\"summary\": \"$PROJECT build failing\",\"description\": \"AWS CodeBuild project $PROJECT latest build $STATUS\",\"issuetype\":{\"id\": \"10004\"}}}"}
}

In the template, my project ID is 10000 and the bug issue type is 10004. You can obtain this information from your JIRA instance by invoking the “createmeta” API.
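
If you’d like to look those IDs up programmatically, the createmeta endpoint can be called with the same basic auth header stored above. A sketch follows; the URL and auth value are placeholders, and the exact response fields may vary between Jira versions:

import json
import urllib.request

jira_url = "https://<my-jira-server>"
auth_header = "Basic <base64 encoded useremail:api_token>"

request = urllib.request.Request(
    jira_url + "/rest/api/latest/issue/createmeta",
    headers={"Authorization": auth_header},
)

with urllib.request.urlopen(request) as response:
    meta = json.load(response)

# Print project and issue type IDs to plug into the webhook body template.
for project in meta["projects"]:
    print(project["key"], project["id"])
    for issue_type in project["issuetypes"]:
        print("  ", issue_type["name"], issue_type["id"])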

Finally, I register the webhook receiver in my CodeBuildWebhooks DynamoDB table, referencing the JSON file I just created:

aws dynamodb put-item --table-name CodeBuildWebhooks --item file://item.json

That’s it! The next time my CodeBuild project fails, an issue is created in JIRA for someone to action, as shown in the following screenshot:

A Jira Kanban board showing the newly created issue for the failing CodeBuild project

You could extend this example to populate other Jira fields such as Labels, Components, or Assignee.

Cleanup

To remove the resources created as part of this blog post, first delete the stack:

aws cloudformation delete-stack --stack-name aws-codebuild-webhooks

Then delete the parameters from SSM Parameter Store:

aws ssm delete-parameters --names /webhooks/jira-issues-webhook-url /webhooks/jira-basic-auth-headers

Conclusion

In this blog post, I showed you how to use an AWS CloudFormation template to quickly build a sample solution that can help you integrate AWS CodeBuild with other 3rd party tools via AWS Lambda.

The CloudFormation template used in this post and Lambda function can be found in the aws-codebuild-webhooks GitHub repository, along with other examples.

If you have questions or other feedback about this example, please open an issue or submit a pull request.

About the Author

Nick Lee is part of the AWS Solution Builders team in the UK. Nick works with the AWS Solution Architecture community to create standardized tools, code samples, demonstrations and quick starts.

 

 

Transforming DevOps at Broadridge on AWS

Post Syndicated from Som Chatterjee original https://aws.amazon.com/blogs/devops/transforming-devops-for-a-fintech-on-aws/

by Tom Koukourdelis (Broadridge – Vice President, Head of Global Cloud Platform Development and Engineering), Sreedhar Reddy (Broadridge – Vice President, Enterprise Cloud Architecture)

We have seen large enterprises in all industry segments meaningfully utilizing AWS to build new capabilities and deliver business value. While doing so, enterprises have to balance existing systems, processes, tools, and culture while innovating at pace with industry disruptors. Broadridge Financial Solutions, Inc. (NYSE: BR) is no exception. Broadridge is a $4 billion global FinTech leader and a leading provider of investor communications and technology-driven solutions to banks, broker-dealers, asset and wealth managers, and corporate issuers.

This blog post explores how we adopted AWS at scale while being secured and compliant, as well as delivering a high degree of productivity for our builders on AWS. It also describes the steps we took to create technical (a cloud solution as a foundation based on AWS) and procedural (organizational) capabilities by leveraging AWS cloud adoption constructs. The improvement in our builder productivity and agility directly contributes to rolling out differentiated business capabilities addressing our customer needs in a timely manner. In this post, we share real-life learnings and takeaways to adopt AWS at scale, transform business and application team experiences, and deliver customer delight.

Background

At Broadridge we have a number of distributed and mainframe systems supporting multiple financial services domains and sub-domains such as post trade, proxy communications, financial and regulatory reporting, portfolio management, and financial operations. The majority of these systems were built and deployed years ago at on-premises data centers all over the US and abroad.

Builder personas at Broadridge are diverse in terms of location, culture, and the technology stack they use to build and support applications (we use a number of front-end JS frameworks; .NET; Java; ColdFusion for web development; ORMs for data entity relational mapping; IBM MQ and Apache Camel for messaging; databases like SQL, Oracle, and Sybase and other open-source stacks for transaction management; and batch processing on virtualized and bare-metal instances). With more than 200 on-premises distributed applications and mainframe systems across front-, mid-, and back-office ecosystems, we wanted to leverage AWS to improve efficiency, build agility, and reduce costs. The ability to reach customers in new geographies, reduced time to market, and opportunities to build new business competencies were key parameters as well.

Broadridge’s core tenets for cloud adoption

When AWS adoption within Broadridge attained a critical mass (known as the Foundation stage of adoption), the business and technology leadership teams defined our posture of cloud adoption and shared it with teams across the organization using the following tenets. Enterprises looking to adopt AWS at scale should define similar tenets fit for their organizations, in plain language understandable by everyone across the board.

  • Iterate: Understanding that we cannot disrupt ongoing initiatives, a small and iterative approach of moving workloads to the cloud in waves (rinse and repeat) was to be adopted. Long-drawn, capital-intensive big bangs were to be avoided.
  • Fully automate: Starting from infrastructure deployment to application build, test, and release, we decided early on that automation and no-touch deployment are the right approach both to leverage cloud capabilities and to fuel a shift toward a matured DevOps culture.
  • Trust but verify only exceptions: Security and regulatory compliance are paramount for an organization like Broadridge. Guardrails (such as service control policies, managed AWS Config rules, multi-account strategy) and controls (such as PCI, NIST control frameworks) are iteratively developed to baseline every AWS account and AWS resource deployed. Manual security verification of workloads isn’t needed unless an exception is raised. Defense in depth (distancing attack surface from sensitive data and resources using multi-layered security) strategies were to be applied.
  • Go fast; re-hosting is acceptable: Not every workload needs to go through years of rewriting and refactoring before it is deemed suitable for the cloud. Minor tweaking (light touch re-platforming) to go fast (such as on-premises Oracle to RDS for Oracle) is acceptable.
  • Timeliness and small wins are key: Organizations spend large sums of capital to completely rewrite applications, and by the time they are done, the business goals and customer expectations have changed. That leads to material customer dissatisfaction. We wanted to avoid that by setting small, measurable targets.
  • Cloud fluency: Investments in training and upskilling builders and leaders across the organization (developers, infra-ops, sec-ops, managers, sales force, HR, and executive leadership) were to be made to build fluency on the cloud.

The first milestone

The first milestone in our adoption journey was synonymous with the Project stage of adoption and had the following characteristics.

A controlled sprawl of shadow IT

We first gave small teams with little to no exposure to critical business functions (such as customer data and SLA-oriented workloads) sandboxes to test out proofs of concept (PoCs) on AWS. We created the cloud sandboxes with least privilege and added additional privileges upon request, after verification. During this time, our key AWS usage characteristics were:

  • Manual AWS account setup with least privilege
  • Manual IAM role creation with role boundaries and authentication and authorization from the existing enterprise Active Directory
  • Integration with existing Security Information and Event Management (SIEM) tools to audit role sprawl and config changes
  • Proofs of concept only
  • Account tagging for chargeback and tracking purposes
  • No automated build, test, deploy, or integration with existing delivery pipeline
  • Small and definitive timeframes for PoCs with defined goals

A typical AWS environment at this stage resembled the one shown in the following diagram:

Representative AWS usage during first milestone

As shown above, at this time the corporate assets were connected to a highly restrictive AWS environment through VPN. Access to the AWS environment was set up based on AWS identity primitives (IAM roles) mapped to and federated with the on-premises Active Directory. There was a single VPC set up for a sandbox account with no egress to the internet. No customer data was hosted in this AWS environment, and the AWS environment was connected to our SIEM of choice.

Early adopters became first educators and mentors

Members of the first teams to carry out proofs of concept on AWS shared learnings with each other and with the leadership team within Broadridge. This helped build communities of practice (CoPs) over time. The initial CoPs established were for networking and security, and were later extended to practices like Terraform, Chef, and Jenkins.

Tech PMO team within Broadridge as the quasi-central cloud team

Ownership is vital, no matter how small the effort or how insignificant the impact of risky experimentation may seem. The ownership of account setup, role creation, integration with on-premises AD and SIEM, and oversight to ensure that the experimentation did not pose any risk to the brand led us to build a central cloud team with experienced AWS and infrastructure practitioners. This team created a process for cloud migration with the first manual guardrails of allowed and disallowed actions, manual interventions, and checkpoints built into every step.

At this stage, a representative pattern of work products across teams resembles what is shown below.

Work products across teams during initial stages of AWS usage

As the diagram suggests, individual application teams built overlapping—and, in many cases, identical—technical building blocks across the teams. This was acceptable as the teams were experimenting and running PoCs on AWS. In an actual production application delivery, the blocks marked with a * would be considered technical and functional waste—that is, undifferentiated lift which increases the cost of doing business.

The second milestone

In hindsight, this is perhaps the most important milestone in our cloud adoption journey. This step was marked by the following key characteristics:

  • Every new team doing PoCs rebuilds the same building blocks: This includes networking (VPCs and security groups), identity primitives (accounts, roles, and policies), monitoring (Amazon CloudWatch setup and custom metrics), and compute (images with org-mandated security patches).
  • Teams usually ask the same fundamental questions first: These include questions such as: What is an ideal CIDR block range? How do we integrate with SIEM? How do we spin up web servers on Amazon EC2? How do we secure access to data? How do we set up workload monitoring?
  • Security reviews rarely find new security gaps but add time to the process: A central security group within the central cloud team reviewed every new account request and every new service usage request without finding new security gaps when the application team used the baseline guardrails.
  • Manual effort is spent on tagging, chargeback, and other approvals: A portion of the application PoC/minimum viable product (MVP) lifecycle was spent on housekeeping. While housekeeping was necessary, the effort spent was undifferentiated.

The following diagram represents the effort expended by every team during the first phase.

Team wise efforts showing duplicative work

As shown above, every application team spent effort building nearly the same capabilities before they could begin developing their team-specific application functionalities and assets. The common blocks of work are undifferentiated and lead to duplicated effort, which also varies depending on the efficiency of the team.

During this step, learnings from the PoCs led us to establish the tenets shared earlier in this post. To address the learnings, Broadridge established a cloud platform team. The cloud platform team, also referred to as the cloud enablement engine (CEE), is a team of builders who create the foundational building blocks on AWS that address common infrastructure, security, monitoring, auditing, and break-glass controls. At the same time, we established a cloud business office (CBO) as a liaison between the application and business teams and the CEE. The CBO exists to manage and prioritize foundational requirements from multiple application teams as they onboard to AWS, and it helps create the product backlog for the CEE.

Cloud Enablement Engine Responsibilities:

  • Build out foundational building blocks utilizing the AWS multi-account strategy
  • Build security guardrails, compliance controls, infrastructure-as-code automation, and auditing and monitoring controls
  • Implement the cloud platform backlog that funnels from the CBO as common asks from application teams
  • Work with our AWS team to understand the service roadmap and future releases, and provide feedback

Cloud Business Office Responsibilities:

  • Identify and prioritize repeating technical building blocks that cut across multiple teams
  • Establish acceptable architecture patterns based on application use cases
  • Manage cloud programs to ensure CEE deliverables and business expectations align
  • Identify skilling needs, budget, and track spend
  • Contribute to the cloud platform backlog
  • Work with the AWS team to understand the service roadmap and future releases, and provide feedback

These teams were set up to scale AWS adoption, put building blocks into the hands of the application teams, and ultimately deliver differentiated capabilities to Broadridge’s business teams and end customers. The following diagram illustrates the relationship and modus operandi among the teams:

CEE and CBO working model

Upon establishing the conceptual working model, the CBO and CEE teams looked at solutions from AWS to enable them to achieve the working model quickly. The starting point was AWS Landing Zone (ALZ). ALZ is an AWS solution based on the AWS multi-account strategy. It is a set of vetted constructs and best practices that we use as mechanisms to accelerate AWS adoption.

AWS multi-account strategy

The multi-account strategy employs best practices around separation of concerns, reduction of blast radius, account setup based on Software Development Life Cycle (SDLC) phases, and base operational roles for auditing, monitoring, security, and compliance, as shown in the above diagram. This strategy defines the need for centralized shared or core accounts, which work as the master accounts for monitoring, governance, security, and auditing. A number of AWS services, such as Amazon GuardDuty, AWS Security Hub, and AWS Config, are configured in these centralized accounts. Spoke or child accounts are vended per a team’s requirements and are spun up with these governance, monitoring, and security defaults connected to the centralized accounts for log capture, threat detection, configuration management, and security management.

The third milestone

The third milestone is synonymous with the Foundation stage of adoption.

Using the ALZ construct, our CEE team developed a core set of principles to be used by every application team. Based on our core tenets, the CEE team built out an entry point (a web-based UI workflow application). This web UI was the entry point for any application team requesting an environment within AWS for experimentation or to begin the application development life cycle. Simplistically, the web UI sat on top of an automation engine built using AWS APIs, ALZ components (Account Vending Machine, Shared Services account, Logging account, Security account, default security groups, default IAM roles, and AD groups), and Terraform-based code. The CBO team helped establish the common architecture patterns that were codified into this engine.

Team on-boarding workflow using foundational building blocks on AWS

An Angular-based web UI is the starting point for application teams to request AWS accounts. The web UI entry point asks a number of questions validating the type of account requested along with its intended purpose, ingress/egress requirements, high availability and disaster recovery requirements, and business unit for chargeback and ownership purposes. Once all information is entered, it sends out a notification based on a preset organizational dispatch matrix rule. Upon receiving the request, the approver has the option to approve it or ask further clarifying questions. Once satisfactorily answered, the approver approves the account vending request and Terraform code runs to create the default account.

When an account is created through this process, the following defaults are set up for a secure environment for development, testing, and staging; a few of these steps are sketched as AWS CLI calls after the list. Similar guardrails are deployed in the production accounts as well.

  • Creates a new account under an existing AWS Organizational Unit (OU) based on the input parameters. Tags the chargeback codes, custom tags, and also integrates the resources with existing CMDB
  • Connects the new account to the master shared services and logging account as per the AWS Landing Zone constructs
  • Integrates with the CloudWatch event bus as a sender account
  • Runs stsAssumeRole commands on the new account to create infosec cross-account roles
  • Defines actions, conditions, role limits, and account policies
  • Creates environment variables related to the account in the parameter store within AWS Systems Manager
  • Connects the new account to TrendMicro for AV purposes
  • Attaches the default VPC of the new account to an existing AWS Transit Gateway
  • Generates a Splunk key for the account to store in the Splunk KV store
  • Uses AWS APIs to attach Enterprise support to the new account
  • Creates or amends a new AD group based on the IAM role
  • Integrates as an Amazon Macie member account
  • Enables AWS Security Hub for the account by running an enable-security-hub call
  • Sets up Chef runner for the new account
  • Runs account setting lock procedures to set the Amazon S3 public access settings and the EBS default encryption setting
  • Enables the firewall by setting AWS WAF rules for the account
  • Integrates the newly created account with CloudHealth and Dome9
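
The exact automation is internal, but a minimal sketch of a few of these defaults, expressed as AWS CLI calls under the assumption that you run them with appropriate credentials (the account IDs, emails, and names below are placeholders), could look like the following:

# Create the child account (run from the organization's management account);
# move it to the target OU afterwards with `aws organizations move-account`.
aws organizations create-account \
  --email new-team@example.com \
  --account-name "team-sandbox"

# Inside the new account: block public S3 access at the account level.
aws s3control put-public-access-block \
  --account-id 111122223333 \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Turn on EBS encryption by default for the Region and enable Security Hub.
aws ec2 enable-ebs-encryption-by-default --region us-east-1
aws securityhub enable-security-hub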

Deploying all these guardrails in any new accounts removes the need for manual setup and intervention. This gives application developers the needed freedom to stop worrying about infrastructure and access provisioning while giving them a higher speed to value.

Using these technical and procedural cloud adoption constructs, we have been able to reduce application onboarding time. This has led to quicker delivery of business capabilities, with the application teams focusing only on what differentiates their business rather than repeatedly building undifferentiated work products. It has also led to the creation of mature building blocks over time for use by the application teams. Using these building blocks, the teams are also modernizing applications by iteratively replacing old application blocks.

Conclusion

In summary, we are able to deliver better business outcomes and differentiated customer experience by:

  • Building common asks as reusable and automated enterprise assets and improving the overall enterprise-wide maturity by indexing on and growing these assets.
  • Depending on an experienced team to deliver baseline operational controls and guardrails.
  • Improving our security posture with higher-level and managed AWS security services instead of rebuilding everything from the ground up.
  • Using the Cloud Business Office to improve funneling of common asks. This helps the next team on AWS to benefit from a readily available set of approved services and application blueprints.

We will continue to build on and mature these reusable building blocks by using AWS services and new feature releases.

 

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

Debugging with Amazon CloudWatch Synthetics and AWS X-Ray

Post Syndicated from Nizar Tyrewalla original https://aws.amazon.com/blogs/devops/debugging-with-amazon-cloudwatch-synthetics-and-aws-x-ray/

Today, AWS X-Ray launches support for Amazon CloudWatch Synthetics, enabling developers to trace end-to-end requests from configurable scripts called “canaries”. These canaries run test scripts to monitor web endpoints and APIs using modular, lightweight tests that run 24×7, once per minute. They continuously capture the behavior and availability of the endpoint or URL being monitored, reporting what end customers are seeing. Customers get alerted immediately in case of failures and can tie them back to the root cause using traces. These canaries provide a complete picture of all the services invoked in the path. With this feature, developers and DevOps engineers can correlate failures in the application with the impact on end customers, determine the root cause of the failures, and identify the upstream and downstream services impacted.

Overview

X-Ray helps developers and DevOps engineers analyze and debug distributed applications, such as applications built with microservice architecture. Using X-Ray, you can understand how your application and its underlying services are performing in order to identify and troubleshoot the root causes of performance issues and errors. X-Ray helps you debug and triage distributed applications, wherever those applications are running, and whether the architecture is serverless, containers, Amazon EC2, on-premises, or a mixture of all of the above.

Amazon CloudWatch Synthetics is a fully managed synthetic monitoring service that allows developers and DevOps engineers to monitor their application endpoints and URLs using configurable scripts called “canaries” that run 24×7. Canaries alert you as soon as something does not work as expected, as defined by your script. CloudWatch Synthetics canaries can be customized to check for availability, latency, transactions, broken or dead links, step-by-step task completions, page load errors, load latencies for UI assets, complex wizard flows, or checkout flows in your applications.

CloudWatch Synthetics supports End-User Experience Monitoring (EUEM—Synthetic Monitoring), Web Services Monitoring (REST APIs), URL Monitoring, and Website Content Monitoring (protection from unauthorized changes in your websites from phishing, code injection, and cross-site scripting). Canary traffic can continually verify your customers’ experiences even when you don’t have any customer traffic. Canaries can discover issues as soon as your customers do.

Use cases

Here are some of the use cases for which developers and DevOps engineers can use AWS X-Ray with Amazon CloudWatch Synthetics.

Determine if there is a problem

You can determine if there is a problem with your CloudWatch Synthetics canaries by setting CloudWatch alarms based on certain thresholds. You can also detect an increase in errors, faults, throttling rates, or slow response times within your X-Ray service map.

Developers can set up thresholds on canaries in the Amazon CloudWatch Synthetics console, which creates CloudWatch alarms to surface any issues. These thresholds are managed centrally to help you get alerted based on the granularity of the test run. You can then use Amazon Simple Notification Service (SNS) or CloudWatch Events for your alerts. For example, you can set a notification when the canary fails 5 times in 5 runs if the canary runs once per minute. You can further analyze and triage the HAR file, screenshots, and logs in the CloudWatch Synthetics console.
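
As a minimal sketch, a similar threshold can be expressed directly with the AWS CLI. The CloudWatchSynthetics namespace, SuccessPercent metric, and CanaryName dimension below are the standard Synthetics metric names as I understand them; the canary name and SNS topic ARN are placeholders.

# Alarm when the canary's success rate drops below 90% over a five-minute period.
aws cloudwatch put-metric-alarm \
  --alarm-name my-canary-success-rate \
  --namespace CloudWatchSynthetics \
  --metric-name SuccessPercent \
  --dimensions Name=CanaryName,Value=my-canary \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 90 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:my-alerts-topic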

Enable thresholds in CloudWatch Synthetics

Canary thresholds in Amazon CloudWatch

Screenshots in Amazon CloudWatch Synthetics console

 

 

HAR file in Amazon CloudWatch Synthetics console

Developers can also use the new synthetic canary client nodes of type Client::Synthetics in the AWS X-Ray console to pinpoint which synthetic canaries are experiencing issues with errors, faults, or throttling rates for the selected time frame. Select the synthetic canary node to view the response time distribution of the entire request. Zoom into a specific time range and dive deep into analyzing trends using X-Ray Analytics, as shown in the following screenshot of a service map with a synthetic canary client node.

X-Ray Service Map with Synthetics node showing errors

 

 

Find the root cause of ongoing failures

You can find the root cause of ongoing failures in upstream and downstream services to reduce the mean time to resolution.

Developers can view the end-to-end path of requests within the X-Ray service map to easily understand the services invoked, as well as the upstream and downstream services. You can also use trace maps for individual traces to dissect each request and determine which service results in the most latency or is causing an error, as shown in the following diagrams.

 

X-Ray Service Map with end to end path of the request invoked by Synthetic canaries

X-Ray Trace Map to view individual request invoked by Synthetic canaries

Once you receive a CloudWatch alarm for failures in a synthetic canary, it becomes critical to determine the root cause of the issue and reduce the impact on end users. Developers can use X-Ray Analytics to address this. X-Ray performs statistical modeling on trace data to determine the probable root cause of an issue. As you can see in the following screenshots, the synthetic tests for the console “www” running on AWS Elastic Beanstalk are failing because of a throughput capacity exception from the DynamoDB table, which impacts the product microservice. This can be quickly determined in the X-Ray Analytics console. Developers can also determine which of their canary tests are causing the issue using the already created X-Ray annotation “Annotation.aws:canary_arn”.

 

Analyze root cause of the issues in Synthetic canaries using X-Ray Analytics

 

Root cause of the issue using X-Ray Analytics

 

Identify performance bottlenecks and trends

You can identify performance bottlenecks and trends seen by your Synthetics canaries’ traces over time.

View trends in the performance of your website or endpoint by using the continuous traffic from your synthetic canaries to populate a response time histogram over a period of time. Understand the p50, p90, and p95 percentiles for requests originating from Synthetics tests, and determine the proportion of failures and the time series activity to pinpoint the time of occurrence.

Response time histogram and time series activity for Synthetic canaries

 

Compare latency, error/fault rates, and end-user experience with Synthetic canaries

You can compare latency and error/fault rates and end-user experiences with the Synthetic canaries.

Using X-Ray Analytics, developers can compare any two trends or collections of traces. For example, in the below diagram, we are comparing the experience of end users using the system (shown as the blue trend line) to what the Synthetics tests are reporting (shown as the green trend line), starting around 3:00 AM. We can easily identify that the Synthetics tests are reporting more 5xx errors compared to end users.

Compare end-user experience with Synthetics canaries using X-Ray Analytics

 

Identify if you have enough canary coverage for all the APIs and URLs

When debugging synthetic canaries, developers and DevOps engineers want to understand whether they have enough coverage for their APIs and URLs and whether their Synthetic canaries have adequate tests. Using X-Ray Analytics, developers can compare the experience of Synthetic canaries with that of the rest of their end users. The screenshot below shows a blue trend line for canaries and a green trend line for the rest of the users. You can also identify that two out of the three URLs don’t have canary tests.

Identify which URLs and APIs have canaries running.

Slice and dice the X-Ray service map to focus only on Synthetics tests and its path to downstream services using X-Ray groups

Developers can have multiple APIs and applications running within an account. At that point it becomes critical to focus on a certain set of workflows, such as the Synthetics tests for application “www” running on AWS Elastic Beanstalk, as shown below. Developers can create an X-Ray group using the filter expression “edge(id(name: “www”, type: “client::Synthetics”), id(name: “www”, type: “AWS::ElasticBeanstalk::Environment”))” to focus only on traces generated by their Synthetics tests for “www”.
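
As a sketch, the same group can also be created from the AWS CLI with that filter expression; the group name below is an assumption.

aws xray create-group \
  --group-name Synthetics-www \
  --filter-expression 'edge(id(name: "www", type: "client::Synthetics"), id(name: "www", type: "AWS::ElasticBeanstalk::Environment"))'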

Create X-Ray Groups for Synthetics canaries

 

Getting Started

Getting started with Synthetics is easy. Enable X-Ray tracing for your APIs, web application endpoints, and URLs being monitored by Synthetics canaries. Visit the documentation to learn more about getting started using AWS X-Ray with Amazon CloudWatch Synthetics.
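
For example, for a canary created with CloudWatch Synthetics, tracing can be turned on by setting ActiveTracing in the canary's run configuration. The canary name below is a placeholder, and it's worth confirming the exact parameter shape with aws synthetics update-canary help.

# Enable X-Ray active tracing for an existing canary (sketch).
aws synthetics update-canary \
  --name my-canary \
  --run-config ActiveTracing=true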

Conclusion

In this post, I outlined how you can use AWS X-Ray to debug and triage canaries running in Amazon CloudWatch Synthetics. We looked at some of the use cases that enable developers and DevOps engineers to reduce time to resolution by viewing the end-to-end path of the request, determining the root cause of the issue, and understanding which upstream or downstream services are impacted.

Running AWS commands from Slack using AWS Chatbot

Post Syndicated from Ilya Bezdelev original https://aws.amazon.com/blogs/devops/running-aws-commands-from-slack-using-aws-chatbot/

DevOps teams widely use Slack channels as communication hubs where team members interact—both with one another and with the systems they operate. Chatbots help facilitate these interactions, delivering important notifications and relaying commands from users back to systems. Many teams even prefer that operational events and notifications come through Slack channels. This allows the entire team to see notifications and act on them through commands to chatbots.

Earlier this year, AWS announced a beta of AWS Chatbot that enabled DevOps teams to receive AWS notifications in Slack channels and Amazon Chime chat rooms. Customers have used AWS Chatbot to stay on top of their operational, security, CI/CD, and cost alerts.

Today, we introduced a new feature that enables DevOps teams to run AWS commands and actions from Slack. You can retrieve diagnostic information, invoke AWS Lambda functions, and create support cases right from your Slack channels, so your team can collaborate and respond to events faster. AWS Chatbot supports commands using the already familiar AWS Command Line Interface syntax that you can use from Slack on desktop or mobile devices.

AWS Chatbot commands diagram

In addition to running commands, you can retrieve Amazon CloudWatch Logs by simply clicking the Show logs button on CloudWatch Alarms notifications in Slack. AWS Chatbot supports actions for showing logs for AWS Lambda and Amazon API Gateway.

The scope of AWS Chatbot’s permissions in your account is defined by an IAM role that you can create using policy templates in the AWS Chatbot console or by specifying custom IAM policies with granular permissions that meet your needs. If you already use AWS Chatbot for sending notifications to Slack, you must create a new IAM role or update the existing one with additional permissions to be able to run commands.
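
If you manage the role outside the console, a rough sketch with the AWS CLI might look like the following; the role name is a placeholder, and the combination shown (the ReadOnlyAccess and AWSSupportAccess managed policies plus a small inline policy) only approximates the read-only, support, and Lambda-invoke policy templates described above.

# Attach AWS managed policies for read-only and support commands.
aws iam attach-role-policy \
  --role-name AWSChatbotCommandsRole \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
aws iam attach-role-policy \
  --role-name AWSChatbotCommandsRole \
  --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess

# Add an inline policy so the bot can run Lambda invoke commands.
aws iam put-role-policy \
  --role-name AWSChatbotCommandsRole \
  --policy-name ChatbotLambdaInvoke \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["lambda:InvokeFunction","lambda:InvokeAsync"],"Resource":"*"}]}'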

AWS Chatbot is available free of charge and you only pay for the AWS resources you use, such as CloudWatch Logs Insights, which is used for querying logs.

Walkthrough

In this post, I will walk you through the configuration steps to set up AWS Chatbot in a Slack channel and show how to get help and run commands using the bot.

As part of this tutorial, we will do the following:

  1. Configure AWS Chatbot in a Slack channel
  2. Test AWS Chatbot in Slack and get help
  3. Show CloudWatch Alarms in Slack
  4. Describe AWS resources in Slack
  5. Invoke a Lambda function from Slack
  6. Retrieve logs for a Lambda function
  7. Create an AWS Support case

Prerequisites

To follow along with this example, you need an AWS account, as well as a Slack channel to configure with AWS Chatbot.

1. Configure AWS Chatbot in a Slack channel

In the AWS Chatbot console’s home page, choose Slack in the Chat client dropdown and choose Configure client.

The setup wizard redirects you to the Slack OAuth 2.0 page. In the top-right corner, select the Slack workspace to configure and choose Allow. Your Slack workspace installs the AWS Slack App, and the AWS account that you logged in with is now authorized to communicate with your Slack workspace.

Slack OAuth 2.0 flow

Slack redirects you from here to the Configure Slack Channel page. Select the channel in which you want to run commands. You can either select a public channel from the dropdown list or paste the URL or ID of a private channel.

For private Slack channels, find the URL of the channel by opening the context (right-click) menu on the channel name in the left sidebar in Slack, and choosing Copy link.

After you choose the Slack channel, under Permissions, choose Create an IAM role using a template. Type a role name in the Role name textbox. In the Policy templates dropdown, choose Read-only command permissions, Lambda-invoke command permissions, and AWS Support command permissions. AWS Chatbot will create an IAM role that it will assume to run commands from the selected Slack channel. You can see the permissions granted to AWS Chatbot or modify them in the IAM console. Learn more about permissions in AWS Chatbot documentation.

Permissions in AWS Chatbot console

Finally, if you also want to receive notifications, such as CloudWatch Alarms or AWS Budgets, select SNS topics that those notifications are published to.

After you choose Configure, the configuration completes.

2. Test AWS Chatbot in Slack and get help

To test AWS Chatbot, open the Slack channel that you configured in step 1 and type /invite @aws to invite AWS Chatbot to the channel.

Type @aws help to get help on using AWS Chatbot.

Example of a help message

3. Show CloudWatch Alarms in Slack

When you have an operational event or want to check in on your application’s health, you can use AWS Chatbot to show details about CloudWatch Alarms in your account.

Type @aws cloudwatch describe-alarms --region us-east-1 to see all alarms in the US East (N. Virginia) Region. The bot returns an image with CloudWatch alarms and metric trends, as well as the standard output of the CloudWatch DescribeAlarms API call.

Example of @aws cloudwatch describe-alarm command

You can modify the command to only include alarms in the ALARM state by adding the --state ALARM parameter: @aws cloudwatch describe-alarms --state ALARM. There is no need to specify the Region this time because AWS Chatbot remembers it from the previous command execution.

4. Describe AWS resources in Slack

You can use AWS Chatbot to look up information about your resources.

Example 1: You can use AWS Chatbot to look up timeout and memory size parameters for a Lambda function.

If you don’t know the exact function name, you can first run @aws lambda list-functions, locate the function, and then run @aws lambda get-function --function-name FUNCTION_NAME.

Note that for commands that have only one required parameter, for example, function-name for lambda get-function, you can omit the parameter name: @aws lambda get-function FUNCTION_NAME.

Example of @aws lambda get-function command

Example 2: To get an output value from a CloudFormation stack, you can run @aws cloudformation describe-stacks --stack-name STACK_NAME.

The example below shows a CloudFormation output for an API Gateway endpoint.

Example of getting a value from a CloudFormation stack using AWS Chatbot.

5. Invoke a Lambda function from Slack

To trigger a workflow or a runbook from Slack, you can invoke a Lambda function by running @aws lambda invoke FUNCTION_NAME. AWS Chatbot will ask for a confirmation.

Example of an @aws lambda invoke command.

After you choose Yes, AWS Chatbot will invoke the function. Once the function invocation completes, AWS Chatbot will show the output of the Invoke call.

Example of an @aws lambda invoke command.

If you want to provide inputs to your function, you can add them with a --payload parameter, for example, @aws lambda invoke MyAwesomeLambda --payload {"key":"value"}.

6. Retrieve logs for a Lambda function

You can quickly access logs for Lambda invocations using the new AWS Chatbot action buttons on CloudWatch Alarm notifications in Slack.

To get started, first configure Slack notifications for CloudWatch Alarms for a Lambda function via AWS Chatbot. Then, make your function fail so that the CloudWatch alarm goes into the ALARM state.

Example of an Amazon CloudWatch Alarm notification with Show Logs buttons.

Choose Show logs to access the Lambda execution logs. AWS Chatbot will show the first 30 log entries starting from the beginning of the alarm evaluation period.

Example of logs returned by the Show logs action.

Choose Show error logs to filter results to only log entries containing “error”, “exception”, or “fail”.

AWS Chatbot uses Amazon CloudWatch Logs Insights to query for logs. The response contains a deep link to the CloudWatch Logs Insights console, where you can continue the log dive.
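
If you want to continue the same log dive from the command line, a minimal sketch of an equivalent Logs Insights query might look like this; the log group name is a placeholder, and the date commands assume GNU date.

# Query the last 30 minutes of a Lambda function's logs for error-like entries.
aws logs start-query \
  --log-group-name /aws/lambda/MyAwesomeLambda \
  --start-time "$(date -d '30 minutes ago' +%s)" \
  --end-time "$(date +%s)" \
  --query-string 'fields @timestamp, @message | filter @message like /error/ or @message like /exception/ or @message like /fail/ | sort @timestamp desc | limit 30'

# Retrieve the results using the query ID returned by the previous command.
aws logs get-query-results --query-id <query-id>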

7. Create an AWS Support case

To complete this step, you need to have a subscription to an AWS Support plan.

To create an AWS Support case from Slack, type @aws support create-case and follow the AWS Chatbot prompts to provide all the required parameters. For example, to provide a subject, type @aws subject SUBJECT STRING.

Example of @aws support create-case command.

When you finish providing required parameters, AWS Chatbot will ask you to confirm creation of the case. After you choose Yes, AWS Chatbot will return the Support case ID.

Example of @aws support create-case command.

Note that file attachments are not currently supported in AWS Chatbot.

Conclusion

Running AWS commands from Slack using AWS Chatbot expands the toolkit your team uses to respond to operational events and interact with AWS. In this post, I walked you through some of the use cases where AWS Chatbot helped reduce the time to recovery while also increasing transparency within DevOps teams.

About the author

Ilya Bezdelev (AWS)

Ilya Bezdelev is the Principal Product Manager for AWS User Experience, where he focuses on conversational interfaces. He cares about making DevOps teams more effective and helping them minimize the mean time to recovery using collaborative ChatOps on AWS.

 

 

 

Test Reports with AWS CodeBuild

Post Syndicated from Muhammad Mansoor original https://aws.amazon.com/blogs/devops/test-reports-with-aws-codebuild/

AWS CodeBuild has launched a new feature called Reports. This feature allows you to view the reports generated by functional or integration tests. The reports can be in the JUnit XML or Cucumber JSON format. You can view metrics such as Pass Rate %, Test Run Duration, and the number of Passed versus Failed/Error test cases in one location. Builders can use any testing framework as long as the reports are generated in the supported formats.

You can see the test reports created by CodeBuild in the respective Report Group, where they are stored for 30 days. For longer retention, you can store the reports in an Amazon S3 bucket. Each test report is further broken down by individual test cases.
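
For example, an existing report group can be pointed at an S3 bucket from the AWS CLI; the ARN, bucket, and path below are placeholders, and it's worth confirming the export-config field names with aws codebuild update-report-group help.

# Export raw report data for a report group to S3 (sketch).
aws codebuild update-report-group \
  --arn arn:aws:codebuild:us-east-1:111122223333:report-group/my-project-SurefireReports \
  --export-config '{"exportConfigType":"S3","s3Destination":{"bucket":"my-report-archive","path":"codebuild-reports","packaging":"ZIP","encryptionDisabled":true}}'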

Getting started with CodeBuild Reports

To store and view the reports of your unit tests, you need to add a new section called reports to your buildspec.yml file. CodeBuild creates a new report group under this name or uses the existing report group if one already exists. The following sample buildspec.yml file generates test reports from JUnit tests using Surefire and stores them in the report group named <project-name>-SurefireReports, where <project-name> is the name of the CodeBuild project.

version: 0.2

env:
  variables:
    JAVA_HOME: "/usr/lib/jvm/java-8-openjdk-amd64"
phases:
  install:
    runtime-versions:
      java: corretto8
  build:
    commands:
      - echo Build started on `date`
      - mvn surefire-report:report #Running this task to execute unit tests and generate report.
reports: #New
  SurefireReports: # CodeBuild will create a report group called "SurefireReports".
    files: #Store all of the files
      - '**/*'
    base-directory: 'target/surefire-reports' # Location of the reports 

You can specify the unit tests in either the build or the post_build phase of the buildspec.yml file.

CodeBuild needs additional AWS Identity and Access Management (IAM) permissions to create test reports. These permissions are already included in the predefined AWS managed policies used by CodeBuild. To add the permissions to the existing CodeBuild projects, you need to modify the policy. Use the following IAM policy to add permissions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Resource": "arn:aws:codebuild:your-region:your-aws-account-id:report-group/my-project-*", 
            "Effect": "Allow",
            "Action": [
                "codebuild:CreateReportGroup",
                "codebuild:CreateReport",
                "codebuild:UpdateReport",
                "codebuild:BatchPutTestCases"
            ]
        }
    ]
}
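
One way to attach this statement to an existing CodeBuild service role is with the AWS CLI; the role and policy names below are placeholders, and the JSON above is assumed to be saved as report-permissions.json.

aws iam put-role-policy \
  --role-name my-codebuild-service-role \
  --policy-name CodeBuildTestReportPermissions \
  --policy-document file://report-permissions.json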

Once your unit tests start to execute as part of the CodeBuild project, you see the reports and the trends for the report group along with the metrics. Reports also show the pass rate and average report duration, as well as the time taken by each individual test.

Multiple CodeBuild projects can use the same report group. This feature is helpful if you test your code separately (such as for microservices and APIs) and want to see all of the reports in one place. Other examples include running integration tests across different projects and using one report group to view the results of the tests.

Apart from the aggregates, you can see the metrics captured by each report separately. For each report, you can see the breakdown of individual unit tests, status of the tests, duration, and messages from the tests. The summary section shows you the overall Passed/Failed test count and pass rate along with the duration taken by each test/report, as shown in the following screenshot.

Trends in Test Reports

Conclusion

This blog post showed how to use the Test Reports feature in CodeBuild. Developers can use this feature to see previously executed test results without leaving the AWS console. For teams that rely on Test Driven Development (TDD) techniques, test reports provide valuable insights, such as highlighting the number of tests with Passed, Failed/Error, or Unknown results.

For further information, see the following resources:

Integrating CodePipeline with on-premises Bitbucket Server

Post Syndicated from Alex Rosa original https://aws.amazon.com/blogs/devops/integrating-codepipeline-with-on-premises-bitbucket-server/

This blog post demonstrates how to integrate AWS CodePipeline with an on-premises Bitbucket Server. If you want to integrate with Bitbucket Cloud, see Integrating Git with AWS CodePipeline. The AWS Lambda function provided can get the source code from a Bitbucket Server repository whenever the user sends a new code push, and store it in a designated Amazon Simple Storage Service (Amazon S3) bucket.

The Bitbucket Server integration uses webhooks configured in the Bitbucket repository. Webhooks are ideal for this case and avoid the need for performing frequent polling to check for changes in the repository.

Some security protections are available with this solution:

  • The Amazon S3 bucket has encryption enabled using SSE-AES, and every object created is encrypted by default.
  • The Lambda function accepts only events signed by the Bitbucket Server.
  • All environment variables used by the Lambda function are encrypted at rest using AWS KMS.

Overview

During the creation of the CloudFormation stack, you can select either Amazon API Gateway or Elastic Load Balancing to communicate with the Lambda function. The following diagram shows how the integration works.

 

This diagram explains the solution flow, from a user code push to the Bitbucket server, to the CodePipeline trigger, and what happens in between.

 

  1. The user pushes code to the Bitbucket repository.
  2. Based on that user action, the Bitbucket server generates a new webhook event and sends it to Elastic Load Balancing or API Gateway based on which endpoint type you selected during AWS CloudFormation stack creation.
  3. API Gateway or Elastic Load Balancing forwards the request to the Lambda function, which checks the message signature using the secret configured in the webhook. If the signature is valid, then the Lambda function moves to the next step.
  4. The Lambda function calls the Bitbucket server API and requests that it generate a ZIP package with the content of the branch modified by the user in Step 1.
  5. The Lambda function sends the ZIP package to the Amazon S3 bucket.
  6. CodePipeline is triggered when it detects a new or updated file in the Amazon S3 bucket path.

Requirements

Before starting the solution setup, make sure you have:

  • An Amazon S3 bucket available to store the Lambda function setup files
  •  NPM or Yarn to install the package dependencies
  • AWS CLI

Setup

Follow these steps for setup.

Creating a personal token on the Bitbucket server

Create a personal token on the Bitbucket server that the Lambda function uses to access the repository.

  1. Log in to the Bitbucket server.
  2. Choose your user avatar, then choose Manage Account.
  3. On the Account screen, choose Personal access tokens.
  4. Choose Create a token.
  5. Fill out the form with the token name. In the Permissions section, leave Read for Projects and Repositories as is.
  6. Choose Create to finish.

Launch a CloudFormation stack

Using the steps below, you upload the Lambda function and Lambda layer ZIP files to an Amazon S3 bucket and launch the AWS CloudFormation stack to create the resources in your AWS account.

1. Clone the Git repository containing the solution source code:

git clone https://github.com/aws-samples/aws-codepipeline-bitbucket-integration.git

2. Install the NodeJS packages with npm:

cd code
npm install
cd ..

3. Prepare the packages for deployment.

aws cloudformation package --template-file ./infra/infra.yaml --s3-bucket your_bucket_name --output-template-file package.yaml

4. Edit the AWS CloudFormation parameters file.

Open the file located at infra/parameters.json in your favorite text editor and replace the parameters as follows:

BitbucketSecret – Bitbucket webhook secret used to sign webhook events. You should define the secret and use the same value here and in the Bitbucket server webhook.
BitbucketServerUrl – URL of your Bitbucket Server, such as https://server:port.
BitbucketToken – Bitbucket server personal token used by the Lambda function to access the Bitbucket API.
EndpointType – Select the type of endpoint to integrate with the Lambda function. It can be the Application Load Balancer or the API Gateway.
LambdaSubnets – Subnets where the Lambda function runs.
LBCIDR – CIDR allowed to communicate with the Load Balancer. It should allow the Bitbucket server IP address. Leave it blank if you are using the API Gateway endpoint type.
LBSubnets – Subnets where the Application Load Balancer runs. Leave it blank if you are using the API Gateway endpoint type.
LBSSLCertificateArn – SSL Certificate to associate with the Application Load Balancer. Leave it blank if you are using the API Gateway endpoint type.
S3BucketCodePipelineName – Amazon S3 bucket name that this stack creates to store the Bitbucket repository content.
VPCID – VPC ID where the Application Load Balancer and the Lambda function run.
WebProxyHost – Hostname of your proxy server used by the Lambda function to access the Bitbucket server, such as myproxy.mydomain.com. If you don’t need a web proxy, leave it blank.
WebProxyPort – Port of your proxy server used by the Lambda function to access the Bitbucket server, such as 8080. If you don’t need a web proxy leave it blank.

5. Create the CloudFormation stack:

aws cloudformation create-stack --stack-name CodePipeline-Bitbucket-Integration --template-body file://package.yaml --parameters file://infra/parameters.json --capabilities CAPABILITY_NAMED_IAM

Creating a webhook on the Bitbucket Server

Next, create the webhook on Bitbucket server to notify the Lambda function of push events to the repository:

  1. Log into the Bitbucket server and navigate to the repository page.
  2. Choose Repository settings.
  3. Select Webhook.
  4. Choose Create webhook.
  5. Fill out the form with the name of the webhook, such as CodePipeline.
  6. Fill out the URL field with the API Gateway or Load Balancer URL. To obtain this URL, choose the Outputs tab of the AWS CloudFormation stack.
  7. Fill out the Secret field with the same value used in the AWS CloudFormation stack.
  8. In the Events section, ensure Push is selected.
  9. Choose Create to finish.
  10. Repeat these steps for each repository in which you want to enable the integration.

Configuring your pipeline

Finally, change your pipeline on CodePipeline to use the Amazon S3 bucket created by the AWS CloudFormation stack as the source of your pipeline.

The Lambda function uploads the files to the Amazon S3 bucket using the following path structure:

Project Name/Repository Name/Branch Name.zip
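
For example, if the Bitbucket project is MyProject, the repository is my-repo, and the branch is master (all placeholders), you can verify the upload and then use the same key as the S3 source object key for your pipeline:

# List the objects the Lambda function created for the repository.
aws s3 ls s3://my-codepipeline-source-bucket/MyProject/my-repo/

# The pipeline's S3 source action would then point at:
#   Bucket:     my-codepipeline-source-bucket
#   Object key: MyProject/my-repo/master.zip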

Now, every time someone pushes code to the Bitbucket repository, your pipeline starts automatically.

Cleaning up

If you want to remove the integration and clean up the resources created on AWS, delete the CloudFormation stack. Run the command below to delete the stack and its associated resources.

aws cloudformation delete-stack --stack-name CodePipeline-Bitbucket-Integration 

Conclusion

This post demonstrated how to integrate your on-premises Bitbucket Server with CodePipeline.

Migration to AWS CodeCommit, AWS CodePipeline and AWS CodeBuild From GitLab

Post Syndicated from Martin Schade original https://aws.amazon.com/blogs/devops/migration-to-aws-codecommit-aws-codepipeline-and-aws-codebuild-from-gitlab/

This walkthrough shows you how to migrate multiple repositories to AWS CodeCommit from GitLab and set up a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild. Event notifications and pull requests are sent to Amazon Chime for project team member communication.

AWS CodeCommit supports all Git commands and works with existing Git tools. I can keep using my preferred development environment plugins, continuous integration/continuous delivery (CI/CD) systems, and graphical clients with AWS CodeCommit.

Over the years, the number of repositories hosted in my GitLab environment grew beyond 100, and maintaining it with patches, updates, and backups was time consuming and risky. Migrating over to AWS CodeCommit project by project manually would have been a tedious and error-prone process. I wanted to run a script to handle the AWS setup and migration of code for me.

The documentation for AWS CodeCommit has an example of how to migrate a single repository; I wanted to migrate many, though.

As part of the migration, I had a requirement to set up a CI/CD pipeline using AWS CodePipeline and send notifications on activity in the repository to Amazon Chime, which I use for communication between project members.

Overview

Component overview of migration setup for AWS CodeCommit from GitLab

The migration script calls the GitLab API to get a list of git repositories and subsequently runs

git clone --mirror <ssh-repository-url> <project-name> 

commands against the SSH endpoint of the repositories.
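
The script handles this for you, but the equivalent manual steps for a single repository are roughly the following mirror workflow (the GitLab and CodeCommit URLs are placeholders):

# Mirror-clone the repository from GitLab.
git clone --mirror ssh://git@my-gitlab-server-example.com/namespace/sample-project.git sample-project
cd sample-project

# Push all branches and tags to the new CodeCommit repository.
git push ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/sample-project --mirror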

For every GitLab repository, a CloudFormation template creates an AWS CodeCommit repository and the AWS CodePipeline and AWS CodeBuild resources. If an Amazon Chime webhook is configured, the Lambda function that posts to Amazon Chime is also created.

One S3 bucket for artifacts is also set up with the first AWS CodeCommit repository and shared across all other AWS CodeCommit and AWS CodePipeline resources.

The migration script can be executed on any system that can communicate with the existing GitLab environment through SSH and the GitLab API, can reach AWS endpoints, and has permissions to create AWS CloudFormation stacks, AWS IAM roles and policies, AWS Lambda functions, and AWS CodeCommit and AWS CodePipeline resources.

To pull all the projects from GitLab without needing to define them previously, a GitLab personal access token is used.

You can configure the migration for user-specific GitLab projects, repositories of specific groups, or individual projects, or do a full migration of all projects.

For AWS CodeCommit, CodePipeline, and CodeBuild, I follow best practices and use CloudFormation templates that allow me to automate the creation of resources.

The Amazon Chime notifications are set up using a serverless Lambda function triggered by CloudWatch Events rules and are optional.

Walkthrough

Requirements

I wrote and tested the solution in Python 3.6 and assume pip and git are installed. Python 2 is not supported.

The GitLab version that we migrated off of and tested against was 10.5. I expect the script to work fine against other versions that support REST calls as well, but didn’t test it against those.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  1. An AWS account
  2. An EC2 instance running Linux with access to your GitLab environment, or a laptop or desktop running macOS or Linux. The solution has not been tested on Windows/Cygwin
  3. Git installed
  4. AWS CLI installed.

Setup

  1. Run a pip install on a command line: pip install gitlab-to-codecommit-migration
  2. Create a personal access token in GitLab (instructions)
  3. Configure ssh-key based access for your user in GitLab (Create and add your SSH public key in GitLab Docs)
  4. Setup your AWS account for CodeCommit following (Setup Steps for SSH Connections to AWS CodeCommit Repositories on Linux, macOS, or Unix). You can use the same SSH key for both, GitLab and AWS.
  5. Setup your ~/.ssh/config to have one entry for the GitLab server and one for the CodeCommit environment. Example:
    Host my-gitlab-server-example.com
      IdentityFile ~/.ssh/<your-private-key-name>
    
    Host git-codecommit.*.amazonaws.com
      User APKEXAMPLEEXAMPLE-replace-with-your-user
      IdentityFile ~/.ssh/<your-private-key-name>

    This way the git client uses the key for both domains and the correct user. Make sure to use the SSH key ID and not the AWS Access key ID.

  6. Configure your AWS Command Line Interface (AWS CLI) environment. This environment helps execute the CloudFormation template creation part of the script. For setup instructions, see Configuring the AWS CLI
  7. When executing the script on a remote server on AWS or in your data center, use a terminal multiplexer like tmux
  8. If you migrate more than 33 repositories, you should check the CloudWatch Events limit, which has a default of 100 (see https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/cloudwatch_limits_cwe.html; the link to increase the limit is on the same page). The setup uses CloudWatch Events rules to trigger the pipeline (one rule) and notifications (two rules) to Amazon Chime, for a total of three CloudWatch Events rules per pipeline.
  9. For even larger migrations of more than 200 repos you should check CloudFormation limits, which default to max 200 (aws cloudformation describe-account-limits), CodePipeline has a limit of 300 and CodeCommit has a default limit of 1000, same as the CodeBuild limit of 1000. All the limits can be increased through a support ticket and the link to create it is on the limits page in the documentation.

Migrate

After you have set up the environment, I recommend to test the migration with one sample project. On a command line, type

gitlab-to-codecommit --gitlab-access-token youraccesstokenhere --gitlab-url https://yourgitlab.yourdomain.com --repository-names namespace/sample-project

It will take around 30 seconds for the CloudFormation template to create the AWS CodeCommit repository and the AWS CodePipeline and deploy the Lambda function. While deploying or when you are interested in the setup you can check the state in the AWS Management Console in the CloudFormation service section and look at the template.

Example screenshot

AWS CloudFormation stack creation output for migration stack

Time it takes to push the code depends on the size of your repository. Once you see this running successful you can continue to push all or a subset of projects.


gitlab-to-codecommit --gitlab-access-token youraccesstokenhere --gitlab-url https://gitlab.yourdomain.com --all

I also included a script to set repositories to read-only in GitLab, because once you have migrated to CodeCommit, it is a good way to prevent users from still pushing to the old remote in GitLab.


gitlab-set-read-only --gitlab-access-token youraccesstokenhere --gitlab-url https://gitlab.yourdomain.com --all

Cleaning up

To avoid incurring future charges for test environments, delete the resources by deleting the account-setup CloudFormation stack and the stack for the repository you created.

The CloudFormation template has a DeletionPolicy: Retain for the CodeCommit repository to avoid accidentally deleting the code when deleting the CloudFormation stack. If you want to remove the CodeCommit repository as well at some point, you can change the default behavior or delete the repository through the API, the CLI, or the console. During testing, I would sometimes fail the deployment of a template because I didn’t delete the CodeCommit repository after deleting the CloudFormation stack. For migration purposes, you will not run into any issues and will not delete a CodeCommit repository by mistake when deleting a CloudFormation stack.

To delete the repository, use the AWS Management Console and select the AWS CodeCommit service. Then select the repository and choose Delete.

Example screenshot

Delete AWS CodeCommit repository from AWS Management Console

Conclusion

This blog post showed how to migrate repositories from GitLab to AWS CodeCommit and set up a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild.

The source code is available at https://github.com/aws-samples/gitlab-to-codecommit-migration

Please create issues or pull requests on the GitHub repository when you have additional requirements or use cases.

Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy

Post Syndicated from Noha Ghazal original https://aws.amazon.com/blogs/devops/setting-up-a-ci-cd-pipeline-by-integrating-jenkins-with-aws-codebuild-and-aws-codedeploy/

In this post, I explain how to use the Jenkins open-source automation server to deploy AWS CodeBuild artifacts with AWS CodeDeploy, creating a functioning CI/CD pipeline. When properly implemented, the CI/CD pipeline is triggered by code changes pushed to your GitHub repo, automatically fed into CodeBuild, then the output is deployed on CodeDeploy.

Solution overview

The functioning pipeline creates a fully managed build service that compiles your source code. It then produces code artifacts that can be used by CodeDeploy to deploy to your production environment automatically.

The deployment workflow starts by placing the application code on the GitHub repository. To automate this scenario, I added source code management to the Jenkins project under the Source Code section. I chose the GitHub option, which by design clones a copy from the GitHub repo content in the Jenkins local workspace directory.

In the second step of my automation procedure, I enabled a trigger for the Jenkins server using the “Poll SCM” option. This option makes Jenkins check the configured repository for any new commits or code changes at a specified frequency. In this testing scenario, I configured the trigger to poll every two minutes. The automated Jenkins deployment process works as follows:

  1. Jenkins checks for any new changes on GitHub every two minutes.
  2. Change determination:
    1. If Jenkins finds no changes, Jenkins exits the procedure.
    2. If it does find changes, Jenkins clones all the files from the GitHub repository to the Jenkins server workspace directory.
  3. The File Operation plugin deletes all the files cloned from GitHub. This keeps the Jenkins workspace directory clean.
  4. The AWS CodeBuild plugin zips the files and sends them to a predefined Amazon S3 bucket location then initiates the CodeBuild project, which obtains the code from the S3 bucket. The project then creates the output artifact zip file, and stores that file again on the S3 bucket.
  5. The HTTP Request plugin downloads the CodeBuild output artifacts from the S3 bucket.
    I edited the S3 bucket policy to allow access from the Jenkins server IP address. See the following example policy:

    {
      "Version": "2012-10-17",
      "Id": "S3PolicyId1",
      "Statement": [
        {
          "Sid": "IPAllow",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::examplebucket/*",
          "Condition": {
             "IpAddress": {"aws:SourceIp": "x.x.x.x/x"}  <--- IP of the Jenkins server
          }
        }
      ]
    }
    
    

    This policy enables the HTTP request plugin to access the S3 bucket. This plugin doesn’t use the IAM instance profile or the AWS access keys (access key ID and secret access key).

  6. The output artifact is a compressed ZIP file. The CodeDeploy plugin, by design, requires the files to be unzipped so that it can zip them itself and send them to the S3 bucket for the CodeDeploy deployment. For that, I used the File Operation plugin to perform the following:
    1. Unzip the CodeBuild zipped artifact output in the Jenkins root workspace directory. At this point, the workspace directory should include the original zip file downloaded from the S3 bucket from Step 5 and the files extracted from this archive.
    2. Delete the original .zip file, and leave only the source bundle contents for the deployment.
  7. The CodeDeploy plugin selects and zips all workspace directory files. This plugin uses the CodeDeploy application name, deployment group name, and deployment configurations that you configured to initiate a new CodeDeploy deployment. The CodeDeploy plugin then uploads the newly zipped file according to the S3 bucket location provided to CodeDeploy as a source code for its new deployment operation.

Walkthrough

In this post, I walk you through the following steps:

  • Creating resources to build the infrastructure, including the Jenkins server, CodeBuild project, and CodeDeploy application.
  • Accessing and unlocking the Jenkins server.
  • Creating a project and configuring the CodeDeploy Jenkins plugin.
  • Testing the whole CI/CD pipeline.

Create the resources

In this section, I show you how to launch an AWS CloudFormation template that creates the following resources:

  • Amazon S3 bucket—Stores the GitHub repository files and the CodeBuild artifact application file that CodeDeploy uses.
  • S3 bucket policy—Allows the Jenkins server access to the S3 bucket.
  • JenkinsRole—An IAM role and instance profile for the Amazon EC2 instance for use as a Jenkins server. This role allows Jenkins on the EC2 instance to access the S3 bucket to write files and access to create CodeDeploy deployments.
  • CodeDeploy application and CodeDeploy deployment group.
  • CodeDeploy service role—An IAM role to enable CodeDeploy to read the tags applied to the instances or the EC2 Auto Scaling group names associated with the instances.
  • CodeDeployRole—An IAM role and instance profile for the EC2 instances of CodeDeploy. This role has permissions to write files to the S3 bucket created by this template and to create deployments in CodeDeploy.
  • CodeBuildRole—An IAM role to be used by CodeBuild to access the S3 bucket and create the build projects.
  • Jenkins server—An EC2 instance running Jenkins.
  • CodeBuild project—This is configured with the S3 bucket and S3 artifact.
  • Auto Scaling group—Contains EC2 instances running Apache and the CodeDeploy agent fronted by an Elastic Load Balancer.
  • Auto Scaling launch configurations—For use by the Auto Scaling group.
  • Security groups—For the Jenkins server, the load balancer, and the CodeDeploy EC2 instances.

 

  1. Create the CloudFormation stack (for example, in the AWS Frankfurt Region) by launching the provided template.
  2. Choose Next and provide the following values on the Specify Details page:
    • For Stack name, name your stack as you prefer.
    • For CodedeployInstanceType, keep the default of t2.medium.
      To check the instance types supported in your AWS Region, see Supported Regions.
    • For InstanceCount, keep the default of 3, to launch three EC2 instances for CodeDeploy.
    • For JenkinsInstanceType, keep the default of t2.medium.
    • For KeyName, choose an existing EC2 key pair in your AWS account. Use this to connect by using SSH to the Jenkins server and the CodeDeploy EC2 instances. Make sure that you have access to the private key of this key pair.
    • For PublicSubnet1, choose a public subnet from which the load balancer, Jenkins server, and CodeDeploy web servers launch.
    • For PublicSubnet2, choose a public subnet from which the load balancers and CodeDeploy web servers launch.
    • For VpcId, choose the VPC for the public subnets you used in PublicSubnet1 and PublicSubnet2.
    • For YourIPRange, enter the CIDR block of the network from which you connect to the Jenkins server using HTTP and SSH. If your local machine has a static public IP address, go to https://www.whatismyip.com/ to find your IP address, and then enter your IP address followed by /32. If you don’t have a static IP address (or aren’t sure if you have one), enter 0.0.0.0/0. Then, any address can reach your Jenkins server.
  3. Choose Next.
  4. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box.
  5. Choose Create and wait for the CloudFormation stack status to change to CREATE_COMPLETE. This takes approximately 6–10 minutes.
  6. Check the resulting values on the Outputs tab. You need them later.
  7. Browse to the ELBDNSName value from the Outputs tab, verifying that you can see the Sample page. You should see a congratulatory message.
  8. Your Jenkins server should be ready to deploy.

Access and unlock your Jenkins server

In this section, I discuss how to access, unlock, and customize your Jenkins server.

  1. Copy the JenkinsServerDNSName value from the Outputs tab of the CloudFormation stack, and paste it into your browser.
  2. To unlock the Jenkins server, SSH to the server using the IP address and key pair, following the instructions from Unlocking Jenkins.
  3. As the root user, cat the log file (/var/log/jenkins/jenkins.log) and copy the automatically generated alphanumeric password (between the two sets of asterisks). Then, use the password to unlock your Jenkins server.
  4. On the Customize Jenkins page, choose Install suggested plugins.

  5. Wait until Jenkins installs all the suggested plugins. When the process completes, you should see the check marks alongside all of the installed plugins.
  6. On the Create First Admin User page, enter a user name, password, full name, and email address of the Jenkins user.
  7. Choose Save and continue, Save and finish, and Start using Jenkins.
    After you install all the needed Jenkins plugins along with their required dependencies, the Jenkins server restarts. This step should take about two minutes. After Jenkins restarts, refresh the page. Your Jenkins server should be ready to use.

Create a project and configure the CodeDeploy Jenkins plugin

Now, to create the project in Jenkins, you need to configure the required Jenkins plugins.

  1. Sign in to Jenkins with the user name and password that you created earlier, and choose Manage Jenkins, then Manage Plugins.
  2. On the Available tab, search for and select the following plugins, then choose Install without restart:
    • AWS CodeDeploy
    • AWS CodeBuild
    • Http Request
    • File Operations
  3. Select the Restart Jenkins when installation is complete and no jobs are running check box.
    Jenkins takes a couple of minutes to download the plugins along with their dependencies, and then restarts.
  4. Log in, then choose New Item, Freestyle project.
  5. Enter a name for the project (for example, CodeDeployApp), and choose OK.
  6. On the project configuration page, under Source Code Management, choose Git. For Repository URL, enter the URL of your GitHub repository.
  7. For Build Triggers, select the Poll SCM check box. In the Schedule, for testing enter H/2 * * * *. This entry tells Jenkins to poll GitHub every two minutes for updates.
  8. Under Build Environment, select the Delete workspace before build starts check box. Each Jenkins project has a dedicated workspace directory. This option allows you to wipe out your workspace directory with each new Jenkins build, to keep it clean.
  9. Under Build, add a build step and choose AWS CodeBuild. Under AWS Configuration, choose Manually specify access and secret keys and provide the keys.
  10. From the CloudFormation stack Outputs tab, copy the AWS CodeBuild project name (myProjectName) and paste it in the Project Name field. Also, set the Region that you are using and choose Use Jenkins source.
    It is a best practice to store AWS credentials for CodeBuild in the native Jenkins credential store. For more information, see the Jenkins AWS CodeBuild Plugin wiki.
  11. To make sure that all files cloned from the GitHub repository are deleted, choose Add build step and select the File Operation plugin, then choose Add and select File Delete. Under the File Delete operation, in Include File Pattern, type an asterisk (*).
  12. Under Build, configure the following:
    1. Choose Add a Build step.
    2. Choose HTTP Request.
    3. Copy the S3 bucket name from the CloudFormation stack Outputs tab and append it, along with the name of the zip file codebuild-artifact.zip, to http://s3-eu-central-1.amazonaws.com/ as the value for HTTP Plugin URL.
      Example: (http://s3-eu-central-1.amazonaws.com/mybucketname/codebuild-artifact.zip)
    4. For Ignore SSL errors?, choose Yes.
  13. Under HTTP Request, choose Advanced and leave the default values for Authorization, Headers, and Body. Under Response, for Output response to file, enter the codebuild-artifact.zip file name.
  14. Add the two build steps for the File Operations plugin, in the following order:
    1. Unzip action: This build step unzips the codebuild-artifact.zip file and places the contents in the root workspace directory.
    2. File Delete action: This build step deletes the codebuild-artifact.zip file, leaving only the source bundle contents for deployment.
  15. Under Post-build Actions, choose Add post-build action and select the Deploy an application to AWS CodeDeploy check box.
  16. Enter the following values from the Outputs tab of your CloudFormation stack and leave the other settings at their default (blank):
    • For AWS CodeDeploy Application Name, enter the value of CodeDeployApplicationName.
    • For AWS CodeDeploy Deployment Group, enter the value of CodeDeployDeploymentGroup.
    • For AWS CodeDeploy Deployment Config, enter CodeDeployDefault.OneAtATime.
    • For AWS Region, choose the Region where you created the CodeDeploy environment.
    • For S3 Bucket, enter the value of S3BucketName.
      The CodeDeploy plugin uses the Include Files option to filter the files based on specific file names existing in your current Jenkins deployment workspace directory. The plugin zips specified files into one file. It then sends them to the location specified in the S3 Bucket parameter for CodeDeploy to download and use in the new deployment.
      In the optional Include Files field, I used ** so that all files in the workspace directory get zipped.
  17. Choose Deploy Revision. This option registers the newly created revision to your CodeDeploy application and gets it ready for deployment.
  18. Select the Wait for deployment to finish? check box. This option allows you to view the CodeDeploy deployments logs and events on your Jenkins server console output.
    Now that you have created a project, you are ready to test deployment.

Testing the whole CI/CD pipeline

To test the whole solution, put an application on your GitHub repository. You can download the sample from here.

The following screenshot shows an application tree containing the application source files, including text and binary files, executables, and packages:

In this example, the application files are the templates directory, test_app.py file, and web.py file.

The appspec.yml file is the main application specification file telling CodeDeploy how to deploy your application. CodeDeploy uses the AppSpec file to manage each deployment as a series of lifecycle event “hooks”, as defined in the file. For information about how to create a well-formed AppSpec file, see AWS CodeDeploy AppSpec File Reference.

The buildspec.yml file is a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build. You can include a build spec as part of the source code, or you can define a build spec when you create a build project. For more information, see How AWS CodeBuild Works.

The scripts folder contains the scripts that you would like to run during the CodeDeploy LifecycleHooks execution with respect to your application requirements. For more information, see Plan a Revision for AWS CodeDeploy.

To test this solution, perform the following steps:

  1. Unzip the application files and push them to your GitHub repository by running the following Git commands from the path where you placed your sample application:
    $ git add -A
    
    $ git commit -m 'Your first application'
    
    $ git push
  2. On the Jenkins server dashboard, wait two minutes for the previously configured project trigger to fire. You should then see a new build taking place.
  3. In the Jenkins server Console Output page, check the build events and review the steps performed by each Jenkins plugin. You can also review the CodeDeploy deployment in detail.

On completion, Jenkins should report that you have successfully deployed a web application. You can also use your ELBDNSName value to confirm that the deployed application is running successfully.
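
If you prefer to verify the endpoint from a script instead of a browser, a minimal Python sketch like the following performs the same check; the host name below is a placeholder that you would replace with your own ELBDNSName output value.

import urllib.request

# Placeholder: replace with the ELBDNSName value from the CloudFormation Outputs tab.
ELB_DNS_NAME = "my-load-balancer-123456789.eu-central-1.elb.amazonaws.com"

def check_deployment(host):
    """Return True if the load balancer answers with HTTP 200."""
    with urllib.request.urlopen("http://{}/".format(host), timeout=10) as response:
        print("HTTP status:", response.status)
        return response.status == 200

if __name__ == "__main__":
    check_deployment(ELB_DNS_NAME)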

Conclusion

In this post, I outlined how you can use a Jenkins open-source automation server to deploy CodeBuild artifacts with CodeDeploy. I showed you how to construct a functioning CI/CD pipeline with these tools. I walked you through how to build the deployment infrastructure and automatically deploy application version changes from GitHub to your production environment.

Hopefully, you have found this post informative and the proposed solution useful. As always, AWS welcomes your feedback and comments.

About the Author


 

Noha Ghazal is a Cloud Support Engineer at Amazon Web Services. She is a subject matter expert for AWS CodeDeploy. In her role, she enjoys supporting customers with their CodeDeploy and other DevOps configurations. Outside of work she enjoys drawing portraits, fishing, and playing video games.

 

 

Learn about AWS Services & Solutions – September AWS Online Tech Talks

Post Syndicated from Jenny Hang original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-september-aws-online-tech-talks/

Learn about AWS Services & Solutions – September AWS Online Tech Talks

AWS Tech Talks

Join us this September to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

 

Compute:

September 23, 2019 | 11:00 AM – 12:00 PM PT – Build Your Hybrid Cloud Architecture with AWS – Learn about the extensive range of services AWS offers to help you build a hybrid cloud architecture best suited for your use case.

September 26, 2019 | 1:00 PM – 2:00 PM PT – Self-Hosted WordPress: It’s Easier Than You Think – Learn how you can easily build a fault-tolerant WordPress site using Amazon Lightsail.

October 3, 2019 | 11:00 AM – 12:00 PM PT – Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances – Get an overview of T3 instances, understand what workloads are ideal for them, and understand how the T3 credit system works so that you can lower your EC2 instance costs today.

 

Containers:

September 26, 2019 | 11:00 AM – 12:00 PM PT – Develop a Web App Using Amazon ECS and AWS Cloud Development Kit (CDK) – Learn how to build your first app using CDK and AWS container services.

 

Data Lakes & Analytics:

September 26, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Provisioning Amazon MSK Clusters and Using Popular Apache Kafka-Compatible Tooling – Learn best practices on running Apache Kafka production workloads at a lower cost on Amazon MSK.

 

Databases:

September 25, 2019 | 1:00 PM – 2:00 PM PT – What’s New in Amazon DocumentDB (with MongoDB compatibility) – Learn what’s new in Amazon DocumentDB, a fully managed MongoDB compatible database service designed from the ground up to be fast, scalable, and highly available.

October 3, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Enterprise-Class Security, High-Availability, and Scalability with Amazon ElastiCache – Learn about new enterprise-friendly Amazon ElastiCache enhancements like customer managed key and online scaling up or down to make your critical workloads more secure, scalable and available.

 

DevOps:

October 1, 2019 | 9:00 AM – 10:00 AM PT – CI/CD for Containers: A Way Forward for Your DevOps Pipeline – Learn how to build CI/CD pipelines using AWS services to get the most out of the agility afforded by containers.

 

Enterprise & Hybrid:

September 24, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: How to Monitor and Manage Your AWS Costs – Learn how to visualize and manage your AWS cost and usage in this virtual hands-on workshop.

October 2, 2019 | 1:00 PM – 2:00 PM PT – Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services – Learn how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.

 

IoT:

September 25, 2019 | 9:00 AM – 10:00 AM PT – Complex Monitoring for Industrial with AWS IoT Data Services – Learn how to solve your complex event monitoring challenges with AWS IoT Data Services.

 

Machine Learning:

September 23, 2019 | 9:00 AM – 10:00 AM PT – Training Machine Learning Models Faster – Learn how to train machine learning models quickly and with a single click using Amazon SageMaker.

September 30, 2019 | 11:00 AM – 12:00 PM PT – Using Containers for Deep Learning Workflows – Learn how containers can help address challenges in deploying deep learning environments.

October 3, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League – Join DeClercq Wentzel, Senior Product Manager for AWS DeepRacer, for a presentation on the basics of machine learning and how to build a reinforcement learning model that you can use to join the AWS DeepRacer League.

 

AWS Marketplace:

September 30, 2019 | 9:00 AM – 10:00 AM PT – Advancing Software Procurement in a Containerized World – Learn how to deploy applications faster with third-party container products.

 

Migration:

September 24, 2019 | 11:00 AM – 12:00 PM PT – Application Migrations Using AWS Server Migration Service (SMS) – Learn how to use AWS Server Migration Service (SMS) for automating application migration and scheduling continuous replication, from your on-premises data centers or Microsoft Azure to AWS.

 

Networking & Content Delivery:

September 25, 2019 | 11:00 AM – 12:00 PM PT – Building Highly Available and Performant Applications using AWS Global Accelerator – Learn how to build highly available and performant architectures for your applications with AWS Global Accelerator, now with source IP preservation.

September 30, 2019 | 1:00 PM – 2:00 PM PT – AWS Office Hours: Amazon CloudFront – Just getting started with Amazon CloudFront and Lambda@Edge? Get answers directly from our experts during AWS Office Hours.

 

Robotics:

October 1, 2019 | 11:00 AM – 12:00 PM PT – Robots and STEM: AWS RoboMaker and AWS Educate Unite! – Come join members of the AWS RoboMaker and AWS Educate teams as we provide an overview of our education initiatives and walk you through the newly launched RoboMaker Badge.

 

Security, Identity & Compliance:

October 1, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Running Active Directory on AWS – Learn how to deploy Active Directory on AWS and start migrating your Windows workloads.

 

Serverless:

October 2, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Amazon EventBridge – Learn how to optimize event-driven applications, and use rules and policies to route, transform, and control access to these events that react to data from SaaS apps.

 

Storage:

September 24, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Amazon S3 Data Lake with S3 Storage Classes and Management Tools – Learn how to use the Amazon S3 Storage Classes and management tools to better manage your data lake at scale and to optimize storage costs and resources.

October 2, 2019 | 11:00 AM – 12:00 PM PT – The Great Migration to Cloud Storage: Choosing the Right Storage Solution for Your Workload – Learn more about AWS storage services and identify which service is the right fit for your business.

 

 

Building and testing polyglot applications using AWS CodeBuild

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/building-and-testing-polyglot-applications-using-aws-codebuild/

Prakash Palanisamy, Solutions Architect

Microservices are becoming the new normal, and it’s natural to use multiple different programming languages for different microservices in the same application. This blog post explains how easy it is to build polyglot applications, test them, and package them for deployment using a single AWS CodeBuild project.

CodeBuild now supports polyglot builds using runtime-versions. With CodeBuild, it is possible to specify multiple runtimes in the buildspec file as part of the install phase. Runtimes can be composed of different major versions of the same programming language, or different programming languages altogether. For a complete list of supported runtime versions and how they must be specified in the buildspec file, see the build specification reference for CodeBuild.

This post provides an example of a microservices application comprised of three different microservices written using different programming languages. The complete code is available in the GitHub repository aws-codebuild-polyglot-application.

  • A microservice providing a greeting message (microservice-greeting) is written in Python.
  • A microservice obtaining users’ details (microservice-name) from an Amazon DynamoDB table is written using Node.js.
  • Another microservice that makes API calls to the above-mentioned name and greeting services and provides a personalized greeting (microservice-webapp) is written in Java.

All of these microservices are deployed as serverless functions in AWS Lambda and fronted by an API interface using Amazon API Gateway. To enable automated packaging and deployment, use AWS Serverless Application Model (AWS SAM) and the AWS SAM CLI. The complete application is deployed locally using DynamoDB Local and the sam local command. Automated UI testing uses the built-in headless browsers in the standard CodeBuild containers. Because the Docker runtime is used to run DynamoDB Local and sam local, privileged mode must be enabled in the CodeBuild project.
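
If the project you create doesn't already have this flag set, or if you manage the build project with the AWS SDK rather than the console, privileged mode can be enabled as in the following boto3 sketch; the project name, image, and compute type here are placeholders, not values from the sample.

import boto3

codebuild = boto3.client("codebuild")

# Placeholder project name and environment values. privilegedMode grants the
# Docker access that DynamoDB Local and `sam local` need inside the build container.
codebuild.update_project(
    name="polyglot-application-build",
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:2.0",
        "computeType": "BUILD_GENERAL1_SMALL",
        "privilegedMode": True,
    },
)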

The example buildspec file contains the following code:

version: 0.2

env:
  variables:
    ARTIFACT_BUCKET: "polyglot-codebuild-artifact-eu-west-1"

phases:
  install:
    runtime-versions:	
      nodejs: 10
      java: corretto8
      python: 3.7
      docker: 18
    commands:
      - pip3 install --upgrade aws-sam-cli selenium pylint
  pre_build:
    commands:
      - python --version
      - pylint $CODEBUILD_SRC_DIR/microservices-greeting/greeting_handler.py
      - own_ip=`hostname -i`
      - sed -ie "s/CODEBUILD_IP/$own_ip/g" $CODEBUILD_SRC_DIR/env-webapp.json
  build:
    commands:
      - node --version
      - cd $CODEBUILD_SRC_DIR/microservices-name
      - npm install
      - npm run build
      - java -version
      - cd $CODEBUILD_SRC_DIR/microservices-webapp
      - mvn clean package -Plambda
      - cd $CODEBUILD_SRC_DIR
      - |
        sam package --template-file greeting-sam.yaml \
        --output-template-file greeting-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
      - |
        sam package --template-file name-sam.yaml \
        --output-template-file name-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
      - |
        sam package --template-file webapp-sam.yaml \
        --output-template-file webapp-sam-packaged.yaml \
        --s3-bucket $ARTIFACT_BUCKET
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - echo "Starting local DynamoDB instance"
      - docker network create codebuild-local
      - |
        docker run -d -p 8000:8000 --network codebuild-local \
        --name dynamodb amazon/dynamodb-local
      - |
        aws dynamodb create-table --endpoint-url http://127.0.0.1:8000 \
        --table-name NamesTable \
        --attribute-definitions AttributeName=Id,AttributeType=S \
        --key-schema AttributeName=Id,KeyType=HASH \
        --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
      - |
        aws dynamodb --endpoint-url http://127.0.0.1:8000 batch-write-item \
        --cli-input-json file://ddb_names.json
      - echo "Starting APIs locally..."
      - |
        nohup sam local start-api --template greeting-sam.yaml \
        --port 3000 --host 0.0.0.0 &> greeting.log &
      - |
        nohup sam local start-api --template name-sam.yaml \
        --port 3001 --host 0.0.0.0 --env-vars env.json \
        --docker-network codebuild-local &> name.log &
      - |
        nohup sam local start-api --template webapp-sam.yaml \
        --port 3002 --host 0.0.0.0 \
        --env-vars env-webapp.json &> webapp.log &
      - cd $CODEBUILD_SRC_DIR/website
      - nohup python3 -m http.server 8080 &>webserver.log &
      - sleep 20
      - echo "Starting headless UI testing..."
      - python $CODEBUILD_SRC_DIR/tests/testsuite.py
artifacts:
  files:
    - greeting-sam-packaged.yaml
    - name-sam-packaged.yaml
    - webapp-sam-packaged.yaml

In the env section, an environment variable named ARTIFACT_BUCKET is defined and initialized. It contains the name of the S3 bucket that the sam package command uses when it generates the AWS SAM templates. This variable can be overridden either in the build project or while running the build. For more information about the order of precedence, see Run a Build (Console).
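
For example, one way to override the variable for a single run is to pass an environment variable override when starting the build. The following boto3 sketch assumes a project name and bucket name purely for illustration.

import boto3

codebuild = boto3.client("codebuild")

# Start a one-off build that overrides ARTIFACT_BUCKET for this run only.
# The project and bucket names are placeholders.
codebuild.start_build(
    projectName="polyglot-application-build",
    environmentVariablesOverride=[
        {
            "name": "ARTIFACT_BUCKET",
            "value": "my-sam-artifact-bucket",
            "type": "PLAINTEXT",
        }
    ],
)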

In the install phase, there are two sequences: runtime-versions and commands. As part of the runtime versions, four runtimes are specified: three programming language runtimes (Node.js, Java, and Python), which are required to build the microservices, and the Docker runtime, which is used to deploy the application locally using the sam local command. In the commands sequence, the required packages are installed and updated.

In the pre_build phase, use pylint to lint the Python-based microservice, and get the IP address of the CodeBuild container, which is used later when running the application locally.

In the build phase, use the command sequence to build individual microservices based on their programming language. Use the sam package command to create the Lambda deployment package and generate an updated AWS SAM template.

In the post_build phase, start a DynamoDB Local container as a daemon, create a table in that database, and insert the user details into that table. After that, start a local instance of the greeting, name, and web app microservices using the sam local command. The repository also contains a simple static website that can make API calls to these microservices and provide a user-friendly response. This website is hosted locally using Python’s built-in HTTP server. After the application starts, execute the automated UI testing by running the testsuite.py script. This test script uses Selenium WebDriver. It runs UI tests on headless Firefox and Chrome browsers to validate whether the local deployment of the application works as expected.
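
The actual tests live in the repository's testsuite.py; the following is only a minimal sketch of what a headless browser check against the locally hosted site could look like, with the URL and the expected page content assumed for illustration.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Headless Chrome is available in the standard CodeBuild images.
options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("--no-sandbox")

driver = webdriver.Chrome(options=options)
try:
    # Assumed URL of the static website started by the buildspec.
    driver.get("http://127.0.0.1:8080/")
    body_text = driver.find_element(By.TAG_NAME, "body").text
    # The expected text is an assumption for this sketch.
    assert "greeting" in body_text.lower(), "expected a greeting on the page"
    print("UI check passed")
finally:
    driver.quit()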

If all the phases complete successfully, the updated AWS SAM template files are stored as artifacts, based on the specification in the artifacts section.

The repository also contains an AWS CloudFormation template that creates a pipeline on AWS CodePipeline with AWS CodeCommit as the source. It builds the application using the previously mentioned buildspec in CodeBuild, and deploys the application using AWS CloudFormation.

Conclusion

In this post, you learned how to use the runtime-versions capability in CodeBuild to build a polyglot application. You also learned about automated UI testing using the built-in headless browsers in CodeBuild. With the polyglot build capabilities of CodeBuild, you can build applications developed in multiple programming languages using a single build project, instead of creating and maintaining multiple build projects. And because all the dependent components are built as part of the same project, it is also easier to test the integration between components.

Applying Netflix DevOps Patterns to Windows

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/applying-netflix-devops-patterns-to-windows-2a57f2dbbf79?source=rss----2615bd06b42e---4

Baking Windows with Packer

By Justin Phelps and Manuel Correa

Customizing Windows images at Netflix was a manual, error-prone, and time consuming process. In this blog post, we describe how we improved the methodology, which technologies we leveraged, and how this has improved service deployment and consistency.

Artisan Crafted Images

In the Netflix full cycle DevOps culture the team responsible for building a service is also responsible for deploying, testing, infrastructure, and operation of that service. A key responsibility of Netflix engineers is identifying gaps and pain points in the development and operation of services. Though the majority of our services run on Linux Amazon Machine Images (AMIs), there are still many services critical to the Netflix Playback Experience running on Windows Elastic Compute Cloud (EC2) instances at scale.

We looked at our process for creating a Windows AMI and discovered it was error-prone and full of toil. First, an engineer would launch an EC2 instance and wait for the instance to come online. Once the instance was available, the engineer would use a remote administration tool like RDP to login to the instance to install software and customize settings. This image was then saved as an AMI and used in an Auto Scale Group to deploy a cluster of instances. Because this process was time consuming and painful, our Windows instances were usually missing the latest security updates from Microsoft.

Last year, we decided to improve the AMI baking process. The challenges with service management included:

  • Stale documentation
  • OS Updates
  • High cognitive overhead
  • A lack of continuous testing

Scaling Image Creation

Our existing AMI baking tool, Aminator, does not support Windows, so we had to leverage other tools. We had several goals in mind when trying to improve the baking methodology:

Configuration as Code

The first part of our new Windows baking solution is Packer. Packer allows you to describe your image customization process as a JSON file. We make use of the amazon-ebs Packer builder to launch an EC2 instance. Once online, Packer uses WinRM to copy files and run PowerShell scripts against the instance. If all of the configuration steps are successful then Packer saves a new AMI. The configuration file, referenced scripts, and artifact dependency definitions all live in an internal git repository. We now have the software and instance configuration as code. This means changes can be tracked and reviewed like any other code change.

Packer requires specific information for your baking environment and extensive AWS IAM permissions. In order to simplify the use of Packer for our software developers, we bundled Netflix-specific AWS environment information and helper scripts. Initially, we did this with a git repository and Packer variable files. There was also a special EC2 instance where Packer was executed as Jenkins jobs. This setup was better than manually baking images but we still had some ergonomic challenges. For example, it became cumbersome to ensure users of Packer received updates.

The last piece of the puzzle was finding a way to package our software for installation on Windows. This would allow for reuse of helper scripts and infrastructure tools without requiring every user to copy that solution into their Packer scripts. Ideally, this would work similarly to how applications are packaged in the Aminator process. We solved this by leveraging Chocolatey, the package manager for Windows. Chocolatey packages are created and then stored in an internal artifact repository. This repository is added as a source for the choco install command. This means we can create and reuse packages that help integrate Windows into the Netflix ecosystem.

Leverage Spinnaker for Continuous Delivery

Flow chart showing how Docker image inheritance is used in the creation of a Windows AMI.
The Base Dockerfile allows updates of Packer, helper scripts, and environment configuration to propagate through the entire Windows Baking process.

To make the baking process more robust we decided to create a Docker image that contains Packer, our environment configuration, and helper scripts. Downstream users create their own Docker images based on this base image. This means we can update the base image with new environment information and helper scripts, and users get these updates automatically. With their new Docker image, users launch their Packer baking jobs using Titus, our container management system. The Titus job produces a property file as part of a Spinnaker pipeline. The resulting property file contains the AMI ID and is consumed by later pipeline stages for deployment. Running the bake in Titus removed the single EC2 instance limitation, allowing for parallel execution of the jobs.

Now each change in the infrastructure is tested, canaried, and deployed like any other code change. This process is automated via a Spinnaker pipeline:

Screenshot of an example Spinnaker pipeline showing Docker image, Windows AMI, Canary Analysis, and Deployment stages.
Example Spinnaker pipeline showing the bake, canary, and deployment stages.

In the canary stage, Kayenta is used to compare metrics between a baseline (current AMI) and the canary (new AMI). The canary stage will determine a score based on metrics such as CPU, threads, latency, and GC pauses. If this score is within a healthy threshold the AMI is deployed to each environment. Running a canary for each change and testing the AMI in production allows us to capture insights around impact on Windows updates, script changes, tuning web server configuration, among others.

Eliminate Toil

Automating these tedious operational tasks allows teams to move faster. Our engineers no longer have to manually update Windows, Java, Tomcat, IIS, and other services. We can easily test server tuning changes, software upgrades, and other modifications to the runtime environment. Every code and infrastructure change goes through the same testing and deployment pipeline.

Reaping the Benefits

Changes that used to require hours of manual work are now easy to modify, test, and deploy. Other teams can quickly deploy secure and reproducible instances in an automated fashion. Services are more reliable, testable, and documented. Changes to the infrastructure are now reviewed like any other code change. This removes unnecessary cognitive load and documents tribal knowledge. Removing toil has allowed the team to focus on other features and bug fixes. All of these benefits reduce the risk of a customer-affecting outage. Adopting the Immutable Server pattern for Windows using Packer and Chocolatey has paid big dividends.


Applying Netflix DevOps Patterns to Windows was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Integrating AWS X-Ray with AWS App Mesh

Post Syndicated from Ignacio Riesgo original https://aws.amazon.com/blogs/compute/integrating-aws-x-ray-with-aws-app-mesh/

This post is contributed by Lulu Zhao | Software Development Engineer II, AWS

 

AWS X-Ray helps developers and DevOps engineers quickly understand how an application and its underlying services are performing. When it’s integrated with AWS App Mesh, the combination makes for a powerful analytical tool.

X-Ray helps to identify and troubleshoot the root causes of errors and performance issues. It’s capable of analyzing and debugging distributed applications, including those based on a microservices architecture. It offers insights into the impact and reach of errors and performance problems.

In this post, I demonstrate how to integrate it with App Mesh.

Overview

App Mesh is a service mesh based on the Envoy proxy that makes it easy to monitor and control microservices. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high application availability.

With App Mesh, it’s easy to maintain consistent visibility and network traffic control for services built across multiple types of compute infrastructure. App Mesh configures each service to export monitoring data and implements consistent communications control logic across your application.

A service mesh is like a communication layer for microservices. All communication between services happens through the mesh. Customers use App Mesh to configure a service mesh that contains virtual services, virtual nodes, virtual routers, and corresponding routes.

However, it’s challenging to visualize the way that request traffic flows through the service mesh while attempting to identify latency and other types of performance issues. This is particularly true as the number of microservices increases.

It’s in exactly this area where X-Ray excels. To show a detailed workflow inside a service mesh, I implemented a tracing extension called X-Ray tracer inside Envoy. With it, I ensure that I’m tracing all inbound and outbound calls that are routed through Envoy.

Traffic routing with color app

The following example shows how X-Ray works with App Mesh. I used the Color App, a simple demo application, to showcase traffic routing.

This app has two Go applications instrumented with the AWS X-Ray Go SDK: color-gateway and color-teller. The color-gateway application is exposed to external clients and responds to http://service-name:port/color, which retrieves the color from color-teller. I deployed the Color App using Amazon ECS. This image illustrates how color-gateway routes traffic into a virtual router and then into separate nodes using color-teller.

 

The following image shows client interactions with App Mesh in an X-Ray service map after requests have been made to the color-gateway and to color-teller.

Integration

There are two types of service nodes:

  • AWS::AppMesh::Proxy is generated by the X-Ray tracing extension inside Envoy.
  • AWS::ECS::Container is generated by the AWS X-Ray Go SDK.

The service graph arrows show the request workflow, which you may find helpful as you try to understand the relationships between services.

To send Envoy-generated segments into X-Ray, install the X-Ray daemon. The following code example shows the ECS task definition used to install the daemon into the container.

{
    "name": "xray-daemon",
    "image": "amazon/aws-xray-daemon",
    "user": "1337",
    "essential": true,
    "cpu": 32,
    "memoryReservation": 256,
    "portMappings": [
        {
            "hostPort": 2000,
            "containerPort": 2000,
            "protocol": "udp"
        }
    ]
}

After the Color app successfully launched, I made a request to color-gateway to fetch a color.

  • First, the Envoy proxy appmesh/colorgateway-vn in front of default-gateway received the request and routed it to the server default-gateway.
  • Then, default-gateway made a request to server default-colorteller-white to retrieve the color.
  • Instead of directly calling the color-teller server, the request went to the default-gateway Envoy proxy and the proxy routed the call to color-teller.

That’s the advantage of using the Envoy proxy. Envoy is a self-contained process that is designed to run in parallel with all application servers. All of the Envoy proxies form a transparent communication mesh through which each application sends and receives messages to and from localhost while remaining unaware of the broader network topology.

For App Mesh integration, the X-Ray tracer records the mesh name and virtual node name values and injects them into the segment JSON document. Here is an example:

"aws": {
    "app_mesh": {
        "mesh_name": "appmesh",
        "virtual_node_name": "colorgateway-vn"
    }
},

To enable X-Ray tracing through App Mesh inside Envoy, you must set two environment variable configurations:

  • ENABLE_ENVOY_XRAY_TRACING
  • XRAY_DAEMON_PORT

The first one enables X-Ray tracing using 127.0.0.1:2000 as the default daemon endpoint to which generated segments are sent. If the daemon you installed listens on a different port, you can specify a port value to override the default X-Ray daemon port by using the second configuration.
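
If you manage your task definitions with the AWS SDK instead of raw JSON, the two settings go into the Envoy container's environment. The following boto3 sketch is pared down for illustration: the image URI and family name are placeholders, and the rest of the App Mesh-specific configuration is omitted.

import boto3

ecs = boto3.client("ecs")

# Illustrative Envoy container definition showing only the X-Ray settings
# discussed above; the image URI is a placeholder for your Envoy image.
envoy_container = {
    "name": "envoy",
    "image": "<your-appmesh-envoy-image>",
    "essential": True,
    "memoryReservation": 256,
    "environment": [
        {"name": "ENABLE_ENVOY_XRAY_TRACING", "value": "1"},
        # Only needed if your X-Ray daemon listens on a non-default port.
        {"name": "XRAY_DAEMON_PORT", "value": "2000"},
    ],
}

ecs.register_task_definition(
    family="colorgateway",
    containerDefinitions=[envoy_container],
)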

Conclusion

Currently, AWS X-Ray supports SDKs written in multiple languages (including Java, Python, Go, .NET, .NET Core, Node.js, and Ruby) to help you implement your services. For more information, see Getting Started with AWS X-Ray.
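
As a quick, illustrative sketch of what instrumenting one of your own services looks like with the Python SDK (the service name, segment name, and annotation here are arbitrary examples, not part of the Color App):

from aws_xray_sdk.core import xray_recorder

# Point the recorder at the local X-Ray daemon; both values are illustrative.
xray_recorder.configure(service="color-gateway-py", daemon_address="127.0.0.1:2000")

segment = xray_recorder.begin_segment("fetch_color")
try:
    segment.put_annotation("mesh_name", "appmesh")
    # ... call the downstream color-teller service here ...
finally:
    xray_recorder.end_segment()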

Implementing GitFlow Using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy

Post Syndicated from Ashish Gore original https://aws.amazon.com/blogs/devops/implementing-gitflow-using-aws-codepipeline-aws-codecommit-aws-codebuild-and-aws-codedeploy/

This blog post shows how AWS customers who use a GitFlow branching model can model their merge and release process by using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. This post provides a framework, AWS CloudFormation templates, and AWS CLI commands.

Before we begin, we want to point out that GitFlow isn’t something that we practice at Amazon because it is incompatible with the way we think about CI/CD. Continuous integration means that every developer is regularly merging changes back to master (at least once per day). As we’ll explain later, GitFlow involves creating multiple levels of branching off of master where changes to feature branches are only periodically merged all the way back to master to trigger a release. Continuous delivery requires the capability to get every change into production quickly, safely, and sustainably. Research by groups such as DORA has shown that teams that practice CI/CD get features to customers more quickly, are able to recover from issues more quickly, experience fewer failed deployments, and have higher employee satisfaction.

Despite our differing view, we recognize that our customers have requirements that might make branching models like GitFlow attractive (or even mandatory). For this reason, we want to provide information that helps them use our tools to automate merge and release tasks and get as close to CI/CD as possible. With that disclaimer out of the way, let’s dive in!

When Linus Torvalds introduced Git version control in 2005, it really changed the way developers thought about branching and merging. Before Git, these tasks were scary and mostly avoided. As the tools became more mature, branching and merging became both cheap and simple. They are now part of the daily development workflow. In 2010, Vincent Driessen introduced GitFlow, which became an extremely popular branch and release management model. It introduced the concept of a develop branch as the mainline integration and the well-known master branch, which is always kept in a production-ready state. Both master and develop are permanent branches, but GitFlow also recommends short-lived feature, hotfix, and release branches, like so:

GitFlow guidelines:

  • Use development as a continuous integration branch.
  • Use feature branches to work on multiple features.
  • Use release branches to work on a particular release (multiple features).
  • Use hotfix branches off of master to push a hotfix.
  • Merge to master after every release.
  • Master contains production-ready code.

Now that you have some background, let’s take a look at how we can implement this model using services that are part of AWS Developer Tools: AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. In this post, we assume you are familiar with these AWS services. If you aren’t, see the links in the Reference section before you begin. We also assume that you have installed and configured the AWS CLI.

Throughout the post, we use the popular GitFlow tool. It’s written on top of Git and automates the process of branch creation and merging. The tool follows the GitFlow branching model guidelines. You don’t have to use this tool. You can use Git commands instead.

For simplicity, production-like pipelines that have approval or testing stages have been omitted, but they can easily fit into this model. Also, in an ideal production scenario, you would keep Dev and Prod accounts separate.

AWS Developer Tools and GitFlow

Let’s take a look at how can we model AWS CodePipeline with GitFlow. The idea is to create a pipeline per branch. Each pipeline has a lifecycle that is tied to the branch. When a new, short-lived branch is created, we create the pipeline and required resources. After the short-lived branch is merged into develop, we clean up the pipeline and resources to avoid recurring costs.

The following would be permanent and would have same lifetime as the master and develop branches:

  • AWS CodeCommit master/develop branch
  • AWS CodeBuild project across all branches
  • AWS CodeDeploy application across all branches
  • AWS CloudFormation stack (EC2 instance) for master (prod) and develop (stage)

The following would be temporary and would have the same lifetime as the short-lived branches:

  • AWS CodeCommit feature/hotfix/release branch
  • AWS CodePipeline per branch
  • AWS CodeDeploy deployment group per branch
  • AWS CloudFormation stack (EC2 instance) per branch

Here’s how it would look:

Basic guidelines (assuming EC2/on-premises):

  • Each branch has an AWS CodePipeline.
  • AWS CodePipeline is configured with AWS CodeCommit as the source provider, AWS CodeBuild as the build provider, and AWS CodeDeploy as the deployment provider.
  • AWS CodeBuild is configured with AWS CodePipeline as the source.
  • Each AWS CodePipeline has an AWS CodeDeploy deployment group that uses the Name tag to deploy.
  • A single Amazon S3 bucket is used as the artifact store, but you can choose to keep separate buckets based on repo.

 

Step 1: Use the following AWS CloudFormation templates to set up the required roles and environment for master and develop, including the commit repo, VPC, EC2 instance, CodeBuild, CodeDeploy, and CodePipeline.

$ aws cloudformation create-stack --stack-name GitFlowEnv \
--template-url https://s3.amazonaws.com/devops-workshop-0526-2051/git-flow/aws-devops-workshop-environment-setup.template \
--capabilities CAPABILITY_IAM

$ aws cloudformation create-stack --stack-name GitFlowCiCd \
--template-url https://s3.amazonaws.com/devops-workshop-0526-2051/git-flow/aws-pipeline-commit-build-deploy.template \
--capabilities CAPABILITY_IAM \
--parameters ParameterKey=MainBranchName,ParameterValue=master ParameterKey=DevBranchName,ParameterValue=develop

Here is how the pipelines should appear in the CodePipeline console:

Step 2: Push the contents to the AWS CodeCommit repo.

Download https://s3.amazonaws.com/gitflowawsdevopsblogpost/WebAppRepo.zip. Unzip the file, clone the repo, and then commit and push the contents to CodeCommit – WebAppRepo.

Step 3: Run git flow init in the repo to initialize the branches.

$ git flow init

Assume you need to start working on a new feature and create a branch.

$ git flow feature start <branch>

Step 4: Update the stack to create another pipeline for feature-x branch.

$ aws cloudformation update-stack --stack-name GitFlowCiCd \
--template-url https://s3.amazonaws.com/devops-workshop-0526-2051/git-flow/aws-pipeline-commit-build-deploy-update.template \
--capabilities CAPABILITY_IAM \
--parameters ParameterKey=MainBranchName,ParameterValue=master ParameterKey=DevBranchName,ParameterValue=develop ParameterKey=FeatureBranchName,ParameterValue=feature-x

When you’re done, you should see the feature-x branch in the CodePipeline console. It’s ready to build and deploy. To test, make a change to the branch and view the pipeline in action.

After you have confirmed the branch works as expected, use the finish command to merge changes into the develop branch.

$ git flow feature finish <feature>

After the changes are merged, update the AWS CloudFormation stack to remove the branch. This will help you avoid charges for resources you no longer need.

$ aws cloudformation update-stack --stack-name GitFlowCiCd \
--template-url https://s3.amazonaws.com/devops-workshop-0526-2051/git-flow/aws-pipeline-commit-build-deploy.template \
--capabilities CAPABILITY_IAM \
--parameters ParameterKey=MainBranchName,ParameterValue=master ParameterKey=DevBranchName,ParameterValue=develop

The steps for the release and hotfix branches are the same.

End result: Pipelines and deployment groups

You should end up with pipelines that look like this.

Next steps

If you take the CLI commands and wrap them in your own custom bash script, you can use GitFlow and the script to quickly set up and tear down pipelines and resources for short-lived branches. This helps you avoid being charged for resources you no longer need. Alternatively, you can write a scheduled Lambda function that, based on creation date, deletes the short-lived pipelines on a regular basis.
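
As a minimal sketch of the second option, the following Lambda handler deletes pipelines for short-lived branches that are older than a threshold. The age limit and branch-name prefixes are assumptions you would adapt to your setup, and if your pipelines are created by CloudFormation (as in this post), you would delete or update the corresponding stacks instead of calling delete_pipeline directly.

import datetime

import boto3

MAX_AGE_DAYS = 14  # assumed threshold
SHORT_LIVED_PREFIXES = ("feature-", "release-", "hotfix-")  # assumed naming convention

codepipeline = boto3.client("codepipeline")

def handler(event, context):
    """Delete pipelines for short-lived branches older than MAX_AGE_DAYS."""
    now = datetime.datetime.now(datetime.timezone.utc)
    paginator = codepipeline.get_paginator("list_pipelines")
    for page in paginator.paginate():
        for pipeline in page["pipelines"]:
            name = pipeline["name"]
            age = now - pipeline["created"]
            if name.startswith(SHORT_LIVED_PREFIXES) and age.days > MAX_AGE_DAYS:
                print("Deleting pipeline {} (created {})".format(name, pipeline["created"]))
                codepipeline.delete_pipeline(name=name)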

Summary

In this blog post, we showed how AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy can be used to model GitFlow. We hope you can use the information in this post to improve your CI/CD strategy, specifically to get your developers working in feature/release/hotfixes branches and to provide them with an environment where they can collaborate, test, and deploy changes quickly.

References

Improve Build Performance and Save Time Using Local Caching in AWS CodeBuild

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/improve-build-performance-and-save-time-using-local-caching-in-aws-codebuild/

AWS CodeBuild now supports local caching, which makes it possible for you to persist intermediate build artifacts locally on the build host so that they are available for reuse in subsequent build runs.

Your build project can use one of two types of caching: Amazon S3 or local. In this blog post, we will discuss how to use the local caching feature.

Local caching stores a cache on a build host. The cache is available to that build host only for a limited time and until another build is complete. For example, when you are dealing with large Java projects, compilation might take a long time. You can speed up subsequent builds by using local caching. This is a good option for large intermediate build artifacts because the cache is immediately available on the build host.

Local caching increases build performance for:

  • Projects with a large, monolithic source code repository.
  • Projects that generate and reuse many intermediate build artifacts.
  • Projects that build large Docker images.
  • Projects with many source dependencies.

To use local caching

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2. Choose Create project.

3. In Project configuration, enter a name and description for the build project.

4. In Source, for Source provider, choose the source code provider type. In this example, we use an AWS CodeCommit repository name.

5. For Environment image, choose Managed image or Custom image, as appropriate. For environment type, choose Linux or Windows Server. Specify a runtime, runtime version, and service role for your project.

6. Configure the buildspec file for your project.

7. In Artifacts, expand Additional Configuration. For Cache type, choose Local, as shown here.

Local caching supports the following caching modes:

Source cache mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored. No changes are required in the buildspec file.

Docker layer cache mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.

Note

  • You can use a Docker layer cache in the Linux environment only.
  • The privileged flag must be set so that your project has the required Docker permissions.
  • You should consider the security implications before you use a Docker layer cache.

Custom cache mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other two local cache modes. If you use a custom cache:

  • Only directories can be specified for caching. You cannot specify individual files.
  • Symlinks are used to reference cached directories.
  • Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.

To use source cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Source cache, as shown here.

To use Docker layer cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Docker layer cache, as shown here.

Under Privileged, select Enable this flag if you want to build Docker images or want your builds to get elevated privileges. This grants elevated privileges to the Docker process running on the build host.

To use custom cache mode

In your buildspec file, specify the cache path, as shown here.

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Custom cache, as shown here.


version: 0.2
phases:
  pre_build:
    commands:
      - echo "Enter pre_build commands"
  build:
    commands:
      - echo "Enter build commands"
      
cache:
  paths:
    - '/root/.m2/**/*'
    - '/root/.npm/**/*'
    - 'build/**/*'
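
The console steps above can also be scripted. The following boto3 sketch switches an existing build project to local caching with all three modes; the project name is a placeholder, and in practice you would keep only the modes your builds need.

import boto3

codebuild = boto3.client("codebuild")

# Enable local caching on an existing project; the project name is a placeholder.
codebuild.update_project(
    name="my-java-build-project",
    cache={
        "type": "LOCAL",
        "modes": [
            "LOCAL_SOURCE_CACHE",
            "LOCAL_DOCKER_LAYER_CACHE",
            "LOCAL_CUSTOM_CACHE",
        ],
    },
)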

Conclusion

We hope you find the information in this post helpful. If you have feedback, please leave it in the Comments section below. If you have questions, start a new thread on the AWS CodeBuild forum or contact AWS Support.


Using Git with AWS CodeCommit Across Multiple AWS Accounts

Post Syndicated from Steve Engledow original https://aws.amazon.com/blogs/devops/using-git-with-aws-codecommit-across-multiple-aws-accounts/

I use AWS CodeCommit to host all of my private Git repositories. My repositories are split across several AWS accounts for different purposes: personal projects, internal projects at work, and customer projects.

The CodeCommit documentation shows you how to configure and clone a repository from one place, but in this blog post I want to share how I manage my Git configuration across multiple AWS accounts.

Background

First, I have profiles configured for each of my AWS environments. I connect to some of them using IAM user credentials and others by using cross-account roles.

I intentionally do not have any credentials associated with the default profile. That way I must always be sure I have selected a profile before I run any AWS CLI commands.

Here’s an anonymized copy of my ~/.aws/config file:

[profile personal]
region = eu-west-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile work]
region = us-east-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile customer]
region = eu-west-2
source_profile = work
role_arn = arn:aws:iam::123456789012:role/CrossAccountPowerUser

If I am doing some work in one of those accounts, I run export AWS_PROFILE=work and use the AWS CLI as normal.

The problem

I use the Git credential helper so that the Git client works seamlessly with CodeCommit. However, because I use different profiles for different repositories, my use case is a little more complex than the average.

In general, to use the credential helper, all you need to do is place the following options into your ~/.gitconfig file, like this:

[credential]
    helper = !aws codecommit credential-helper $@
    UseHttpPath = true

I could make this work across accounts by setting the appropriate value for AWS_PROFILE before I use Git in a repository, but there is a much neater way to deal with this situation using a feature released in Git version 2.13, conditional includes.

A solution

First, I separate my work into different folders. My ~/code/ directory looks like this:

code
    personal
        repo1
        repo2
    work
        repo3
        repo4
    customer
        repo5
        repo6

Using this layout, each folder that is directly underneath the code folder has different requirements in terms of configuration for use with CodeCommit.

Solving this has two parts; first, I create a .gitconfig file in each of the three folder locations. The .gitconfig files contain any customization (specifically, configuration for the credential helper) that I want in place while I work on projects in those folders.

For example:

[user]
    # Use a custom email address
    email = [email protected]

[credential]
    # Note the use of the --profile switch
    helper = !aws --profile work codecommit credential-helper $@
    UseHttpPath = true

I also make sure to specify the AWS CLI profile to use in the .gitconfig file, which means that, when I am working in that folder, I don’t need to set AWS_PROFILE before I run git push, etc.

Secondly, to make use of these folder-level .gitconfig files, I need to reference them in my global Git configuration at ~/.gitconfig.

This is done through the includeIf section. For example:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

This example specifies that if I am working with a Git repository that is located anywhere under ~/code/personal/, Git should load additional configuration from ~/code/personal/.gitconfig. That additional file specifies the appropriate credential helper invocation with the corresponding AWS CLI profile selected as detailed earlier.

The contents of the new file are treated as if they are inserted into the main .gitconfig file at the location of the includeIf section. This means that the included configuration will only override any configuration specified earlier in the config.
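
Putting it all together, the conditional include entries in my global ~/.gitconfig end up looking something like this sketch, with one entry for each top-level folder in the layout above:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

[includeIf "gitdir:~/code/work/"]
    path = ~/code/work/.gitconfig

[includeIf "gitdir:~/code/customer/"]
    path = ~/code/customer/.gitconfig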

I hope you find this approach useful. If you have any questions or feedback, please feel free to leave them in the comments.

Learn about AWS Services & Solutions – February 2019 AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-february-2019-aws-online-tech-talks/

AWS Tech Talks

Join us this February to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Application Integration

February 20, 2019 | 11:00 AM – 12:00 PM PT – Customer Showcase: Migration & Messaging for Mission Critical Apps with S&P Global Ratings – Learn how S&P Global Ratings meets the high availability and fault tolerance requirements of their mission critical applications using Amazon MQ.

AR/VR

February 28, 2019 | 1:00 PM – 2:00 PM PT – Build AR/VR Apps with AWS: Creating a Multiplayer Game with Amazon Sumerian – Learn how to build real-world augmented reality, virtual reality and 3D applications with Amazon Sumerian.

Blockchain

February 18, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Managed Blockchain – Explore the components of blockchain technology, discuss use cases, and do a deep dive into capabilities, performance, and key innovations in Amazon Managed Blockchain.

Compute

February 25, 2019 | 9:00 AM – 10:00 AM PT – What’s New in Amazon EC2 – Learn about the latest innovations in Amazon EC2, including new instance types, related technologies, and consumption options that help you optimize running your workloads for performance and cost.

February 27, 2019 | 1:00 PM – 2:00 PM PT – Deploy and Scale Your First Cloud Application with Amazon Lightsail – Learn how to quickly deploy and scale your first multi-tier cloud application using Amazon Lightsail.

Containers

February 19, 2019 | 9:00 AM – 10:00 AM PT – Securing Container Workloads on AWS Fargate – Explore the security controls and best practices for securing containers running on AWS Fargate.

Data Lakes & Analytics

February 18, 2019 | 1:00 PM – 2:00 PM PT – Amazon Redshift Tips & Tricks: Scaling Storage and Compute Resources – Learn about the tools and best practices Amazon Redshift customers can use to scale storage and compute resources on-demand and automatically to handle growing data volume and analytical demand.

Databases

February 18, 2019 | 9:00 AM – 10:00 AM PT – Building Real-Time Applications with Redis – Learn about Amazon’s fully managed Redis service and how it makes it easier, simpler, and faster to build real-time applications.

February 21, 2019 | 1:00 PM – 2:00 PM PT – Introduction to Amazon DocumentDB (with MongoDB Compatibility) – Get an introduction to Amazon DocumentDB (with MongoDB compatibility), a fast, scalable, and highly available document database that makes it easy to run, manage, and scale MongoDB workloads.

DevOps

February 20, 2019 | 1:00 PM – 2:00 PM PT – Fireside Chat: DevOps at Amazon with Ken Exner, GM of AWS Developer Tools – Join our fireside chat with Ken Exner, GM of Developer Tools, to learn about Amazon’s DevOps transformation journey and latest practices and tools that support the current DevOps model.

End-User Computing

February 28, 2019 | 9:00 AM – 10:00 AM PT – Enable Your Remote and Mobile Workforce with Amazon WorkLink – Learn about Amazon WorkLink, a new, fully-managed service that provides your employees secure, one-click access to internal corporate websites and web apps using their mobile phones.

Enterprise & Hybrid

February 26, 2019 | 1:00 PM – 2:00 PM PT – The Amazon S3 Storage Classes – For cloud ops professionals, by cloud ops professionals. Wallace and Orion will tackle your toughest AWS hybrid cloud operations questions in this live Office Hours tech talk.

IoT

February 26, 2019 | 9:00 AM – 10:00 AM PT – Bring IoT and AI Together – Learn how to bring intelligence to your devices with the intersection of IoT and AI.

Machine Learning

February 19, 2019 | 1:00 PM – 2:00 PM PT – Getting Started with AWS DeepRacer – Learn about the basics of reinforcement learning, what’s under the hood and opportunities to get hands on with AWS DeepRacer and how to participate in the AWS DeepRacer League.

February 20, 2019 | 9:00 AM – 10:00 AM PT – Build and Train Reinforcement Models with Amazon SageMaker RL – Learn about Amazon SageMaker RL to use reinforcement learning and build intelligent applications for your businesses.

February 21, 2019 | 11:00 AM – 12:00 PM PT – Train ML Models Once, Run Anywhere in the Cloud & at the Edge with Amazon SageMaker Neo – Learn about Amazon SageMaker Neo where you can train ML models once and run them anywhere in the cloud and at the edge.

February 28, 2019 | 11:00 AM – 12:00 PM PT – Build your Machine Learning Datasets with Amazon SageMaker Ground Truth – Learn how customers are using Amazon SageMaker Ground Truth to build highly accurate training datasets for machine learning quickly and reduce data labeling costs by up to 70%.

Migration

February 27, 2019 | 11:00 AM – 12:00 PM PT – Maximize the Benefits of Migrating to the Cloud – Learn how to group and rationalize applications and plan migration waves in order to realize the full set of benefits that cloud migration offers.

Networking

February 27, 2019 | 9:00 AM – 10:00 AM PT – Simplifying DNS for Hybrid Cloud with Route 53 Resolver – Learn how to enable DNS resolution in hybrid cloud environments using Amazon Route 53 Resolver.

Productivity & Business Solutions

February 26, 2019 | 11:00 AM – 12:00 PM PT – Transform the Modern Contact Center Using Machine Learning and Analytics – Learn how to integrate Amazon Connect and AWS machine learning services, such as Amazon Lex, Amazon Transcribe, and Amazon Comprehend, to quickly process and analyze thousands of customer conversations and gain valuable insights.

Serverless

February 19, 2019 | 11:00 AM – 12:00 PM PT – Best Practices for Serverless Queue Processing – Learn the best practices of serverless queue processing, using Amazon SQS as an event source for AWS Lambda.

Storage

February 25, 2019 | 11:00 AM – 12:00 PM PT – Introducing AWS Backup: Automate and Centralize Data Protection in the AWS Cloud – Learn about this new, fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on-premises.

How to Use Cross-Account ECR Images in AWS CodeBuild for Your Build Environment

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/how-to-use-cross-account-ecr-images-in-aws-codebuild-for-your-build-environment/

AWS CodeBuild now makes it possible for you to access Docker images from any Amazon Elastic Container Registry repository in another account as the build environment. With this feature, AWS CodeBuild allows you to pull any image from a repository to which you have been granted resource-level permissions.

In this blog post, we will show you how to provision a build environment using an image from another AWS account.

Here is a quick overview of the services used in our example:

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It provides a fully preconfigured build platform for most popular programming languages and build tools, including Apache Maven, Gradle, and more.

Amazon Elastic Container Registry (Amazon ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

We will use a sample Docker image in an Amazon ECR image repository in AWS account B. The CodeBuild project in AWS account A will pull the images from the Amazon ECR image repository in AWS account B.

Prerequisites:

To get started you need:

·       Two AWS accounts (AWS account A and AWS account B).

·       In AWS account B, an Amazon ECR image repository containing the images that you would like to use for your build environment. If you do not have an image repository and a sample image, see Docker Sample in the AWS CodeBuild User Guide.

·       In AWS account A, an AWS CodeCommit repository with a buildspec.yml file and sample code.

·       Permissions on the Amazon ECR image repository in AWS account B that allow AWS CodeBuild to pull the repository’s Docker image into the build environment. Use the following steps to grant them.

To grant CodeBuild permissions to pull the Docker image into the build environment

1.     Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.

2.     Choose the name of the repository you created.

3.     On the Permissions tab, choose Edit JSON policy.

4.     Apply the following policy and save.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CodeBuildAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<arn of the service role>"  
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
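
If you prefer to apply this policy from the AWS CLI (using credentials for AWS account B), the equivalent call is roughly the following; the repository name is a placeholder and the policy document is assumed to be saved locally as policy.json:

aws ecr set-repository-policy \
    --repository-name your-repository-name \
    --policy-text file://policy.json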

To use an image from account B and set up a build project in account A

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2. Choose Create project.

3. In Project configuration, enter a name and description for the build project.

4. In Source, for Source provider, choose the source code provider type. In this example, we use the AWS CodeCommit repository name.

5.  For Environment, we will pull the Docker image from AWS account B and use it to provision the build environment that builds your artifacts. To configure the build environment, choose Custom Image. For Image registry, choose Amazon ECR. For ECR account, choose Other ECR account.

6.  In Amazon ECR repository URI, enter the URI for the image repository from AWS account B and then choose Create build project.

7. Go to the build project you just created, and choose Start build. The build execution will download the source code from the AWS CodeCommit repository and provision the build environment using the image retrieved from the image registry.
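
If you script this step instead of using the console, a build can also be kicked off with a single CLI call; the project name below is a placeholder:

aws codebuild start-build --project-name cross-account-ecr-build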

Next steps

Now that you have seen how to use cross-account ECR images, you can integrate a build step in AWS CodePipeline and use the build environment to create artifacts and deploy your application. To integrate a build step in your pipeline, see Working with Deployments in AWS CodeDeploy in the AWS CodeDeploy User Guide.

If you have any feedback, please leave it in the Comments section below. If you have questions, please start a thread on the AWS CodeBuild forum or contact AWS Support.

How to Use Docker Images from a Private Registry for Your Build Environment

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/how-to-use-docker-images-from-a-private-registry-in-aws-codebuild-for-your-build-environment/

AWS CodeBuild now supports using a Docker image that is stored in a private registry as your runtime environment. Previously, the service supported the use of Docker images from public Docker Hub or Amazon ECR only.

In this blog post, we will show you how to use a Docker image from a private registry to create the AWS CodeBuild runtime environment. The credentials for the private registry are stored in AWS Secrets Manager.

Here is an overview of the services used in our example:

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It provides a fully preconfigured build platform for most popular programming languages and build tools, including Apache Maven, Gradle, and more.

Docker Hub repositories allow you to share container images with your team, customers, or the Docker community at large.

AWS Secrets Manager protects secrets required to access your applications, services, and IT resources. The service makes it possible for you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.

Prerequisites:

To get started you need:

·       A private Docker registry, or a Docker Hub account with a private repository.

·       A Secrets Manager secret that stores your Docker Hub credentials. The credentials are used to access your private repository.

·       An AWS account to create an AWS CodeBuild project.

·       A service role created in IAM that grants access to your Secrets Manager secret.

·       An AWS CodeCommit repository set up in your AWS account with a buildspec.yml file and sample code.

Create a private registry

If you do not have a private registry, follow the steps in the documentation on the Docker website. Alternatively, you can execute the following commands in a terminal to pull an image, get its ID, and push it to a new repository.

docker pull amazonlinux                                    # pull a public image to use as a sample
docker images amazonlinux --format "{{.ID}}"               # get the image ID
docker tag image-id your-username/repository-name:latest   # tag the image for your private repository
docker login                                               # log in to Docker Hub
docker push your-username/repository-name                  # push the image to the private repository

Create a basic secret in AWS Secrets Manager

In AWS Secrets Manager, a basic secret is one with a minimum of metadata and a single encrypted secret value. The one version that’s stored in the secret is automatically labeled AWSCURRENT.

To create a basic secret

1.     Open the AWS Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.

2.     Choose Store a new secret.

3.     In the Select a secret type section, specify the kind of secret that you want to create by choosing Other type of secrets, and then enter a user name and password to access your private registry.

4.     In Secret key/value, create one key-value pair for your Docker Hub user name and one key-value pair for your Docker Hub password.

5.     For Secret name, enter a name, such as dockerhub. You can enter an optional description to help you remember that this is a secret for Docker Hub.

6.     Leave Disable automatic rotation selected because the keys correspond to your Docker Hub credentials.

7.     Review your settings, and then choose Store secret.
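
If you would rather create the secret from the AWS CLI, a minimal sketch looks like the following; the key names and values are placeholders and should match whatever your build configuration expects:

aws secretsmanager create-secret \
    --name dockerhub \
    --description "Docker Hub credentials for CodeBuild" \
    --secret-string '{"username": "your-username", "password": "your-password"}'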

Create an AWS CodeBuild project to pull Docker images from a private registry

Note

If your private registry is in your VPC, it must have public internet access. AWS CodeBuild cannot pull an image from a private IP address in a VPC.

To use a Docker image from a private registry in your AWS CodeBuild project

1.     Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2.     Choose Create project.

3.     In Project configuration, for Project name, enter a name and description for the build project.

4.     In Source, for Source provider, choose the source code provider type. In this example, we are using the name of an AWS CodeCommit repository.

5. We will pull the Docker image from a private registry and use the image to create the build environment to build artifacts. To configure the build environment, in Environment, choose Custom image. For Environment type, choose Linux or Windows. For Custom image type, choose Other location, and then enter the image location and the ARN or name of your Secrets Manager credentials.

6. Go to the build project you just created, and choose Start build. The build execution will download the source code from the AWS CodeCommit repository and provision the build environment using the image retrieved from the registry.
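
If you create the build project programmatically instead of through the console, the private-registry credentials are passed in the environment’s registryCredential setting. Here is a sketch with placeholder names, ARNs, and image location:

aws codebuild create-project --name private-registry-build \
    --source type=CODECOMMIT,location=https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo \
    --artifacts type=NO_ARTIFACTS \
    --service-role arn:aws:iam::111111111111:role/CodeBuildServiceRole \
    --environment '{"type": "LINUX_CONTAINER",
                    "computeType": "BUILD_GENERAL1_SMALL",
                    "image": "your-username/repository-name:latest",
                    "imagePullCredentialsType": "SERVICE_ROLE",
                    "registryCredential": {"credential": "arn:aws:secretsmanager:us-east-1:111111111111:secret:dockerhub", "credentialProvider": "SECRETS_MANAGER"}}'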

Conclusion

Using the above guidelines, you can now provision a build environment using Docker images from a private registry.

Next steps:

Now that you have seen how to use Docker images to provision build environments from a private registry, you can integrate a build step in AWS CodePipeline and use the build environment to create artifacts and deploy your application. To integrate a build step in your pipeline, see Working with Deployments in AWS CodeDeploy in the AWS CodeDeploy User Guide.

If you have feedback, please leave it in the Comments section below. If you have questions, please start a thread on the AWS CodeBuild forum or contact AWS Support.