Tag Archives: Developer Tools

DevOps at re:Invent 2019!

Post Syndicated from Matt Dwyer original https://aws.amazon.com/blogs/devops/devops-at-reinvent-2019/

re:Invent 2019 is fast approaching (NEXT WEEK!) and we here at the AWS DevOps blog wanted to take a moment to highlight DevOps-focused presentations, share some tips from experienced re:Invent pros, and highlight a few sessions that still have availability for pre-registration. We’ve broken down the track into one overarching leadership session and four topic areas: (a) architecture, (b) culture, (c) software delivery/operations, and (d) AWS tools, services, and CLI.

In total there will be 145 DevOps track sessions, stretched over 5 days, and divided into four distinct session types:

  • Sessions (34) are one-hour presentations delivered by AWS experts and customer speakers who share their expertise and use cases
  • Workshops (20) are two-hour-and-fifteen-minute, hands-on sessions where you work in teams to solve problems using AWS services
  • Chalk Talks (41) are interactive whiteboarding sessions with a smaller audience. They typically begin with a 10–15-minute presentation delivered by an AWS expert, followed by 45–50 minutes of Q&A
  • Builders Sessions (50) are one-hour, small-group sessions with six customers and one AWS expert, who is there to help, answer questions, and provide guidance

Select DevOps-focused sessions have been highlighted below. If you want to view and/or register for any session, including Keynotes, builders’ fairs, and demo theater sessions, you can access the event catalog using your re:Invent registration credentials.

Reserve your seat for AWS re:Invent activities today >>

re:Invent TIP #1: Identify topics you are interested in before attending re:Invent and reserve a seat. We hold space in sessions, workshops, and chalk talks for walk-ups; however, if you want to get into a popular session, be prepared to wait in line!

Please see below for select sessions, workshops, and chalk talks that will be conducted during re:Invent.

LEADERSHIP SESSION DELIVERED BY KEN EXNER, DIRECTOR AWS DEVELOPER TOOLS

[Session] Leadership Session: Developer Tools on AWS (DOP210-L) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Ken Exner – Director, AWS Dev Tools, Amazon Web Services
Speaker 2: Kyle Thomson – SDE3, Amazon Web Services

Join Ken Exner, GM of AWS Developer Tools, as he shares the state of developer tooling on AWS, as well as the future of development on AWS. Ken uses insight from his position managing Amazon’s internal tooling to discuss Amazon’s practices and patterns for releasing software to the cloud. Additionally, Ken provides insight and updates across many areas of developer tooling, including infrastructure as code, authoring and debugging, automation and release, and observability. Throughout this session Ken will recap recent launches and show demos for some of the latest features.

re:Invent TIP #2: Leadership Sessions are a topic area’s State of the Union, where AWS leadership will share the vision and direction for a given topic at AWS re:Invent.

(a) ARCHITECTURE

[Session] Amazon’s approach to failing successfully (DOP208-R; DOP208-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Becky Weiss – Senior Principal Engineer, Amazon Web Services

Welcome to the real world, where things don’t always go your way. Systems can fail despite being designed to be highly available, scalable, and resilient. These failures, if used correctly, can be a powerful lever for gaining a deep understanding of how a system actually works, as well as a tool for learning how to avoid future failures. In this session, we cover Amazon’s favorite techniques for defining and reviewing metrics—watching the systems before they fail—as well as how to do an effective postmortem that drives both learning and meaningful improvement.

[Session] Improving resiliency with chaos engineering (DOP309-R; DOP309-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Olga Hall – Senior Manager, Tech Program Management
Speaker 2: Adrian Hornsby – Principal Evangelist, Amazon Web Services

Failures are inevitable. Regardless of the engineering efforts put into building resilient systems and handling edge cases, sometimes a case beyond our reach turns a benign failure into a catastrophic one. Therefore, we should test and continuously improve our system’s resilience to failures to minimize impact on a user’s experience. Chaos engineering is one of the best ways to achieve that. In this session, you learn how Amazon Prime Video has implemented chaos engineering into its regular testing methods, helping it achieve increased resiliency.

[Session] Amazon’s approach to security during development (DOP310-R; DOP310-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Colm MacCarthaigh – Senior Principal Engineer, Amazon Web Services

At AWS we say that security comes first—and we really mean it. In this session, hear about how AWS teams both minimize security risks in our products and respond to security issues proactively. We talk through how we integrate security reviews, penetration testing, code analysis, and formal verification into the development process. Additionally, we discuss how AWS engineering teams react quickly and decisively to new security risks as they emerge. We also share real-life firefighting examples and the lessons learned in the process.

[Session] Amazon’s approach to building resilient services (DOP342-R; DOP342-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Marc Brooker – Senior Principal Engineer, Amazon Web Services

One of the biggest challenges of building services and systems is predicting the future. Changing load, business requirements, and customer behavior can all change in unexpected ways. In this talk, we look at how AWS builds, monitors, and operates services that handle the unexpected. Learn how to make your own services handle a changing world, from basic design principles to patterns you can apply today.

re:Invent TIP #3: Not sure where to spend your time? Let an AWS Hero give you some pointers. AWS Heroes are prominent AWS advocates who are passionate about sharing AWS knowledge with others. They have written guides to help attendees find relevant activities by providing recommendations based on specific demographics or areas of interest.

(b) CULTURE

[Session] Driving change and building a high-performance DevOps culture (DOP207-R; DOP207-R1)

Speaker: Mark Schwartz – Enterprise Strategist, Amazon Web Services

When it comes to digital transformation, every enterprise is different. There is often a person or group with a vision, knowledge of good practices, a sense of urgency, and the energy to break through impediments. They may be anywhere in the organizational structure: high, low, or—in a typical scenario—somewhere in middle management. Mark Schwartz, an enterprise strategist at AWS and the author of “The Art of Business Value” and “A Seat at the Table: IT Leadership in the Age of Agility,” shares some of his research into building a high-performance culture by driving change from every level of the organization.

[Session] Amazon’s approach to running service-oriented organizations (DOP301-R; DOP301-R1; DOP301-R2)

Speaker: Andy Troutman – Director AWS Developer Tools, Amazon Web Services

Amazon’s “two-pizza teams” are famously small teams that support a single service or feature. Each of these teams has the autonomy to build and operate their service in a way that best supports their customers. But how do you coordinate across tens, hundreds, or even thousands of two-pizza teams? In this session, we explain how Amazon coordinates technology development at scale by focusing on strategies that help teams coordinate while maintaining autonomy to drive innovation.

re:Invent TIP #4: The maximum number of 60-minute activities you can attend during re:Invent is 24! These activities (sessions, chalk talks, and builders sessions) will usually make up the bulk of your agenda.

(c) SOFTWARE DELIVERY AND OPERATIONS

[Session] Strategies for securing code in the cloud and on premises (DOP320-R; DOP320-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker 1: Craig Smith – Senior Solutions Architect
Speaker 2: Lee Packham – Solutions Architect

Some people prefer to keep their code and tooling on premises, though this can create headaches and slow teams down. Others prefer keeping code off of laptops that can be misplaced. In this session, we walk through the alternatives and recommend best practices for securing your code in cloud and on-premises environments. We demonstrate how to use services such as Amazon WorkSpaces to keep code secure in the cloud. We also show how to connect tools such as Amazon Elastic Container Registry (Amazon ECR) and AWS CodeBuild with your on-premises environments so that your teams can go fast while keeping your data off of the public internet.

[Session] Deploy your code, scale your application, and lower Cloud costs using AWS Elastic Beanstalk (DOP326) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Prashant Prahlad – Sr. Manager

You can effortlessly convert your code into web applications without having to worry about provisioning and managing AWS infrastructure, applying patches and updates to your platform, or using a variety of tools to monitor the health of your application. In this session, we show how anyone, not just professional developers, can use AWS Elastic Beanstalk in various scenarios: from an administrator moving a Windows .NET workload into the cloud, to a developer building a containerized enterprise app as a Docker image, to a data scientist deploying a machine learning model, all without the need to understand or manage the infrastructure details.

[Session] Amazon’s approach to high-availability deployment (DOP404-R; DOP404-R1) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Peter Ramensky – Senior Manager

Continuous-delivery failures can lead to reduced service availability and bad customer experiences. To maximize the rate of successful deployments, Amazon’s development teams implement guardrails in the end-to-end release process to minimize deployment errors, with a goal of achieving zero deployment failures. In this session, learn the continuous-delivery practices that we invented that help raise the bar and prevent costly deployment failures.

[Session] Introduction to DevOps on AWS (DOP209-R; DOP209-R1)

Speaker 1: Jonathan Weiss – Senior Manager
Speaker 2: Sebastien Stormacq – Senior Technical Evangelist

How can you accelerate the delivery of new, high-quality services? Are you able to experiment and get feedback quickly from your customers? How do you scale your development team from 1 to 1,000? To answer these questions, it is essential to leverage some key DevOps principles and use CI/CD pipelines so you can iterate on and quickly release features. In this talk, we walk you through the journey of a single developer building a successful product and scaling their team and processes to hundreds or thousands of deployments per day. We also walk you through best practices and using AWS tools to achieve your DevOps goals.

[Workshop] DevOps essentials: Introductory workshop on CI/CD practices (DOP201-R; DOP201-R1; DOP201-R2; DOP201-R3)

Speaker 1: Leo Zhadanovsky – Principal Solutions Architect
Speaker 2: Karthik Thirugnanasambandam – Partner Solutions Architect

In this session, learn how to effectively leverage various AWS services to improve developer productivity and reduce the overall time to market for new product capabilities. We demonstrate a prescriptive approach to incrementally adopt and embrace some of the best practices around continuous integration and delivery using AWS developer tools and third-party solutions, including AWS CodeCommit, AWS CodeBuild, Jenkins, AWS CodePipeline, AWS CodeDeploy, AWS X-Ray, and AWS Cloud9. We also highlight some best practices and productivity tips that can help make your software release process fast, automated, and reliable.

[Workshop] Implementing GitFlow with AWS tools (DOP202-R; DOP202-R1; DOP202-R2)

Speaker 1: Amit Jha – Sr. Solutions Architect
Speaker 2: Ashish Gore – Sr. Technical Account Manager

Utilizing short-lived feature branches is the development method of choice for many teams. In this workshop, you learn how to use AWS tools to automate merge-and-release tasks. We cover high-level frameworks for how to implement GitFlow using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. You also get an opportunity to walk through a prebuilt example and examine how the framework can be adopted for individual use cases.

[Chalk Talk] Generating dynamic deployment pipelines with AWS CDK (DOP311-R; DOP311-R1; DOP311-R2)

Speaker 1: Flynn Bundy – AppDev Consultant
Speaker 2: Koen van Blijderveen – Senior Security Consultant

In this session we dive deep into dynamically generating deployment pipelines that deploy across multiple AWS accounts and Regions. Using the power of the AWS Cloud Development Kit (AWS CDK), we demonstrate how to simplify and abstract the creation of deployment pipelines to suit a range of scenarios. We highlight how AWS CodePipeline—along with AWS CodeBuild, AWS CodeCommit, and AWS CodeDeploy—can be structured together with the AWS deployment framework to get the most out of your infrastructure and application deployments.

[Chalk Talk] Customize AWS CloudFormation with open-source tools (DOP312-R; DOP312-R1; DOP312-E)

Speaker 1: Luis Colon – Senior Developer Advocate
Speaker 2: Ryan Lohan – Senior Software Engineer

In this session, we showcase some of the best open-source tools available for AWS CloudFormation customers, including conversion and validation utilities. Get a glimpse of the many open-source projects that you can use as you create and maintain your AWS CloudFormation stacks.

[Chalk Talk] Optimizing Java applications for scale on AWS (DOP314-R; DOP314-R1; DOP314-R2)

Speaker 1: Sam Fink – SDE II
Speaker 2: Kyle Thomson – SDE3

Executing at scale in the cloud can require more than the conventional best practices. During this talk, we offer a number of different Java-related tools you can add to your AWS tool belt to help you more efficiently develop Java applications on AWS—as well as strategies for optimizing those applications. We adapt the talk on the fly to cover the topics that interest the group most, including more easily accessing Amazon DynamoDB, handling high-throughput uploads to and downloads from Amazon Simple Storage Service (Amazon S3), troubleshooting Amazon ECS services, working with local AWS Lambda invocations, optimizing the Java SDK, and more.

[Chalk Talk] Securing your CI/CD tools and environments (DOP316-R; DOP316-R1; DOP316-R2)

Speaker: Leo Zhadanovsky – Principal Solutions Architect

In this session, we discuss how to configure security for AWS CodePipeline, deployments in AWS CodeDeploy, builds in AWS CodeBuild, and git access with AWS CodeCommit. We discuss AWS Identity and Access Management (IAM) best practices, to allow you to set up least-privilege access to these services. We also demonstrate how to ensure that your pipelines meet your security and compliance standards with the CodePipeline AWS Config integration, as well as manual approvals. Lastly, we show you best-practice patterns for integrating security testing of your deployment artifacts inside of your CI/CD pipelines.

[Chalk Talk] Amazon’s approach to automated testing (DOP317-R; DOP317-R1; DOP317-R2)

Speaker 1: Carlos Arguelles – Principal Engineer
Speaker 2: Charlie Roberts – Senior SDET

Join us for a session about how Amazon uses testing strategies to build a culture of quality. Learn Amazon’s best practices around load testing, unit testing, integration testing, and UI testing. We also discuss what parts of testing are automated and how we take advantage of tools, and share how we strategize to fail early to ensure minimum impact to end users.

[Chalk Talk] Building and deploying applications on AWS with Python (DOP319-R; DOP319-R1; DOP319-R2)

Speaker 1: James Saryerwinnie – Senior Software Engineer
Speaker 2: Kyle Knapp – Software Development Engineer

In this session, hear from core developers of the AWS SDK for Python (Boto3) as we walk through the design of sample Python applications. We cover best practices in using Boto3 and look at other libraries to help build these applications, including AWS Chalice, a serverless microframework for Python. Additionally, we discuss testing and deployment strategies to manage the lifecycle of your applications.

[Chalk Talk] Deploying AWS CloudFormation StackSets across accounts and Regions (DOP325-R; DOP325-R1)

Speaker 1: Mahesh Gundelly – Software Development Manager
Speaker 2: Prabhu Nakkeeran – Software Development Manager

AWS CloudFormation StackSets can be a critical tool to efficiently manage deployments of resources across multiple accounts and regions. In this session, we cover how AWS CloudFormation StackSets can help you ensure that all of your accounts have the proper resources in place to meet security, governance, and regulation requirements. We also cover how to make the most of the latest functionalities and discuss best practices, including how to plan for safe deployments with minimal blast radius for critical changes.

[Chalk Talk] Monitoring and observability of serverless apps using AWS X-Ray (DOP327-R; DOP327-R1; DOP327-R2)

Speaker 1 (R, R1, R2): Shengxin Li – Software Development Engineer
Speaker 2 (R, R1): Sirirat Kongdee – Solutions Architect
Speaker 3 (R2): Eric Scholz – Solutions Architect, Amazon

Monitoring and observability are essential parts of DevOps best practices. You need monitoring to debug and trace unhandled errors, performance bottlenecks, and customer impact in the distributed nature of a microservices architecture. In this chalk talk, we show you how to integrate the AWS X-Ray SDK to your code to provide observability to your overall application and drill down to each service component. We discuss how X-Ray can be used to analyze, identify, and alert on performance issues and errors and how it can help you troubleshoot application issues faster.

[Chalk Talk] Optimizing deployment strategies for speed & safety (DOP341-R; DOP341-R1; DOP341-R2)

Speaker: Karan Mahant – Software Development Manager, Amazon

Modern application development moves fast and demands continuous delivery. However, the greatest risk to an application’s availability can occur during deployments. Join us in this chalk talk to learn about deployment strategies for web servers and for Amazon EC2, container-based, and serverless architectures. Learn how you can optimize your deployments to increase productivity during development cycles and mitigate common risks when deploying to production by using canary and blue/green deployment strategies. Further, we share our learnings from operating production services at AWS.

[Chalk Talk] Continuous integration using AWS tools (DOP216-R; DOP216-R1; DOP216-R2)

Speaker: Richard Boyd – Sr Developer Advocate, Amazon Web Services

Today, more teams are adopting continuous-integration (CI) techniques to enable collaboration, increase agility, and deliver a high-quality product faster. Cloud-based development tools such as AWS CodeCommit and AWS CodeBuild can enable teams to easily adopt CI practices without the need to manage infrastructure. In this session, we showcase best practices for continuous integration and discuss how to effectively use AWS tools for CI.

re:Invent TIP #5: If you’re traveling to another session across campus, give yourself at least 60 minutes!

(d) AWS TOOLS, SERVICES, AND CLI

[Session] Best practices for authoring AWS CloudFormation (DOP302-R; DOP302-R1)

Speaker 1: Olivier Munn – Sr Product Manager Technical, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

Incorporating infrastructure as code into software development practices can help teams and organizations improve automation and throughput without sacrificing quality and uptime. In this session, we cover multiple best practices for writing, testing, and maintaining AWS CloudFormation template code. You learn about IDE plug-ins, reusability, testing tools, modularizing stacks, and more. During the session, we also review sample code that showcases some of the best practices in a way that lends more context and clarity.

[Chalk Talk] Using AWS tools to author and debug applications (DOP215-R; DOP215-R1; DOP215-R2) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Fabian Jakobs – Principal Engineer, Amazon Web Services

Every organization wants its developers to be faster and more productive. AWS Cloud9 lets you create isolated cloud-based development environments for each project and access them from a powerful web-based IDE anywhere, anytime. In this session, we demonstrate how to use AWS Cloud9 and provide an overview of IDE toolkits that can be used to author application code.

[Session] Migrating .Net frameworks to the cloud (DOP321) — SPACE AVAILABLE! REGISTER TODAY!

Speaker: Robert Zhu – Principal Technical Evangelist, Amazon Web Services

Learn how to migrate your .NET application to AWS with minimal steps. In this demo-heavy session, we share best practices for migrating a three-tiered application on ASP.NET and SQL Server to AWS. Throughout the process, you get to see how AWS Toolkit for Visual Studio can enable you to fully leverage AWS services such as AWS Elastic Beanstalk, modernizing your application for more agile and flexible development.

[Session] Deep dive into AWS Cloud Development Kit (DOP402-R; DOP402-R1)

Speaker 1: Elad Ben-Israel – Principal Software Engineer, Amazon Web Services
Speaker 2: Jason Fulghum – Software Development Manager, Amazon Web Services

The AWS Cloud Development Kit (AWS CDK) is a multi-language, open-source framework that enables developers to harness the full power of familiar programming languages to define reusable cloud components and provision applications built from those components using AWS CloudFormation. In this session, you develop an AWS CDK application and learn how to quickly assemble AWS infrastructure. We explore the AWS Construct Library and show you how easy it is to configure your cloud resources, manage permissions, connect event sources, and build and publish your own constructs.

[Session] Introduction to the AWS CLI v2 (DOP406-R; DOP406-R1)

Speaker 1: James Saryerwinnie – Senior Software Engineer, Amazon Web Services
Speaker 2: Kyle Knapp – Software Development Engineer, Amazon Web Services

The AWS Command Line Interface (AWS CLI) is a command-line tool for interacting with AWS services and managing your AWS resources. We’ve taken all of the lessons learned from AWS CLI v1 (launched in 2013), and have been working on AWS CLI v2—the next major version of the AWS CLI—for the past year. AWS CLI v2 includes features such as improved installation mechanisms, a better getting-started experience, interactive workflows for resource management, and new high-level commands. Come hear from the core developers of the AWS CLI about how to upgrade and start using AWS CLI v2 today.

[Session] What’s new in AWS CloudFormation (DOP408-R; DOP408-R1; DOP408-R2)

Speaker 1: Jing Ling – Senior Product Manager, Amazon Web Services
Speaker 2: Luis Colon – Senior Developer Advocate, Amazon Web Services

AWS CloudFormation is one of the most widely used AWS tools, enabling infrastructure as code, deployment automation, repeatability, compliance, and standardization. In this session, we cover the latest improvements and best practices for AWS CloudFormation customers in particular, and for seasoned infrastructure engineers in general. We cover new features and improvements that span many use cases, including programmability options, cross-region and cross-account automation, operational safety, and additional integration with many other AWS services.

[Workshop] Get hands-on with Python/boto3 with no or minimal Python experience (DOP203-R; DOP203-R1; DOP203-R2)

Speaker 1: Herbert-John Kelly – Solutions Architect, Amazon Web Services
Speaker 2: Carl Johnson – Enterprise Solutions Architect, Amazon Web Services

Learning a programming language can seem like a huge investment. However, solving strategic business problems using modern technology approaches, like machine learning and big-data analytics, often requires some understanding. In this workshop, you learn the basics of using Python, one of the most popular programming languages that can be used for small tasks like simple operations automation, or large tasks like analyzing billions of records and training machine-learning models. You also learn about and use the AWS SDK (software development kit) for Python, called boto3, to write a Python program running on and interacting with resources in AWS.

[Workshop] Building reusable AWS CloudFormation templates (DOP304-R; DOP304-R1; DOP304-R2)

Speaker 1: Chelsey Salberg – Front End Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

AWS CloudFormation gives you an easy way to define your infrastructure as code, but are you using it to its full potential? In this workshop, we take real-world architecture from a sandbox template to production-ready reusable code. We start by reviewing an initial template, which you update throughout the session to incorporate AWS CloudFormation features, like nested stacks and intrinsic functions. By the end of the workshop, expect to have a set of AWS CloudFormation templates that demonstrate the same best practices used in AWS Quick Starts.

[Workshop] Building a scalable serverless application with AWS CDK (DOP306-R; DOP306-R1; DOP306-R2; DOP306-R3)

Speaker 1: David Christiansen – Senior Partner Solutions Architect, Amazon Web Services
Speaker 2: Daniele Stroppa – Solutions Architect, Amazon Web Services

Dive into AWS and build a web application with the AWS Mythical Mysfits tutorial. In this workshop, you build a serverless application using AWS Lambda, Amazon API Gateway, and the AWS Cloud Development Kit (AWS CDK). Through the tutorial, you get hands-on experience using AWS CDK to model and provision a serverless distributed application infrastructure, you connect your application to a backend database, and you capture and analyze data on user behavior. Other AWS services that are utilized include Amazon Kinesis Data Firehose and Amazon DynamoDB.

[Chalk Talk] Assembling an AWS CloudFormation authoring tool chain (DOP313-R; DOP313-R1; DOP313-R2)

Speaker 1: Nathan McCourtney – Sr System Development Engineer, Amazon Web Services
Speaker 2: Dan Blanco – Developer Advocate, Amazon Web Services

In this session, we provide a prescriptive tool chain and methodology to improve your coding productivity as you create and maintain AWS CloudFormation stacks. We cover authoring recommendations from editors and plugins, to setting up a deployment pipeline for your AWS CloudFormation code.

[Chalk Talk] Build using JavaScript with AWS Amplify, AWS Lambda, and AWS Fargate (DOP315-R; DOP315-R1; DOP315-R2)

Speaker 1: Trivikram Kamat – Software Development Engineer, Amazon Web Services
Speaker 2: Vinod Dinakaran – Software Development Manager, Amazon Web Services

Learn how to build applications with AWS Amplify on the front end and AWS Fargate and AWS Lambda on the backend, and protocols (like HTTP/2), using the JavaScript SDKs in the browser and Node.js. Leverage the AWS SDK for JavaScript’s modular NPM packages in resource-constrained environments, and benefit from the built-in async features to run your Node.js and mobile applications, and SPAs, at scale.

[Chalk Talk] Scaling CI/CD adoption using AWS CodePipeline and AWS CloudFormation (DOP318-R; DOP318-R1; DOP318-R2)

Speaker 1: Andrew Baird – Principal Solutions Architect, Amazon Web Services
Speaker 2: Neal Gamradt – Applications Architect, WarnerMedia

Enabling CI/CD across your organization through repeatable patterns and infrastructure-as-code templates can unlock development speed while encouraging best practices. The SEAD Architecture team at WarnerMedia helps encourage CI/CD adoption across their company. They do so by creating and maintaining easily extensible infrastructure-as-code patterns for creating new services and deploying to them automatically using CI/CD. In this session, learn about the patterns they have created and the lessons they have learned.

re:Invent TIP #6: There are lots of extra activities at re:Invent. Expect your evenings to fill up onsite! Check out the peculiar programs, including board games, bingo, arts & crafts, and ‘80s sing-alongs.

Notifying 3rd Party Services of CodeBuild State Changes

Post Syndicated from Nick Lee original https://aws.amazon.com/blogs/devops/notifying-3rd-party-services-of-codebuild-state-changes/

It is often useful to notify other systems of the build status of a code change, such as by creating release tickets in your project-tracking software when a build succeeds, or posting a message to your team’s chat solution. A previous blog post showed you how to integrate AWS Lambda and Amazon SNS to extend AWS CodeCommit to send email notifications for file changes. This blog post shows you how to integrate AWS Lambda, AWS Systems Manager Parameter Store, Amazon DynamoDB and Amazon CloudWatch Events to extend AWS CodeBuild by adding webhook functionality, allowing you to make authenticated API calls to 3rd-party services in response to CodeBuild state changes. It also provides an example of how to use this solution to create an issue in JIRA, a popular issue and project-tracking software solution, in response to a CodeBuild build status change.

Some of the services used include:

  • Amazon DynamoDB: a fully-managed key-value and document database that delivers single-digit millisecond performance at any scale. This solution uses it as a registry for webhook receivers, and takes advantage of its on-demand capacity mode so that you only pay for the resources you consume.
  • AWS Lambda: a popular serverless service that lets you run code without provisioning or managing servers. This solution uses a Lambda function to query DynamoDB for a list of webhook receivers and to notify those receivers of CodeBuild build status changes.
  • Amazon CloudWatch Events: a service that delivers a near real-time stream of system events, allowing you to detect changes to your AWS resources and set up rules to respond to those changes (for example, by invoking a Lambda function in response to build notifications).
  • AWS Systems Manager Parameter Store: a secure, hierarchical storage solution which can be used to store items such as configuration data, passwords, database strings, and license codes. This solution uses SSM Parameter Store to store the HTTP endpoints for 3rd party providers and custom headers, rather than storing them in plaintext in DynamoDB.

To help you quickly deploy the solution, I have made it available as an AWS CloudFormation template. AWS CloudFormation is a management tool that provides a common language to describe and provision all of the infrastructure resources in AWS.

Overview

The following diagram shows how this solution uses AWS services to invoke 3rd-party services in response to CodeBuild state changes.

An overview of the workflow for this solution, showing CodeBuild publishing to CloudWatch Events which invokes the Lambda to notify the 3rd party service.

CodeBuild publishes several useful CloudWatch events, which can notify you of build state changes and build phase transitions. By setting up a CloudWatch event rule, you can detect when a CodeBuild job enters a specific state. In this solution, I create a CloudWatch event rule which captures CodeBuild state changes for all AWS CodeBuild projects in an account, then invokes a Lambda function to handle these change notifications. When this Lambda function is triggered, the following steps are executed:

  1. Query the CodeBuildWebhooks DynamoDB table to find any registered webhook receivers for the CodeBuild project which triggered the event rule.
  2. For each registered receiver:
    1. Obtain the HTTP endpoint and any custom headers from SSM Parameter Store. Some headers/endpoints may be considered sensitive, so the solution stores them in SSM Parameter Store as SecureStrings where needed. The DynamoDB records reference the relevant SSM Parameter by name. The parameter names must all be prefixed with /webhooks/ in order for the webhook Lambda function to access them.
    2. After obtaining the URL for the webhook receiver from SSM Parameter Store, the Lambda function checks if the record from DynamoDB contains a custom HTTP body template. If so, it loads this template, substituting any placeholder values. If no custom template is found, a default template is used.
    3. Finally, the HTTP request is sent with the processed body template.

I use Python and Boto 3 to implement this function. The full source code is published on GitHub. You can find it in the aws-codebuild-webhooks repository.
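To make the flow concrete, here is a minimal, illustrative sketch of such a handler rather than the published function itself: the table and attribute names mirror the item.json registration example later in this post, while the WEBHOOKS_TABLE environment variable, the default template, and the lack of error handling are simplifying assumptions.

import json
import os
import urllib.request

import boto3

dynamodb = boto3.resource("dynamodb")
ssm = boto3.client("ssm")

# Assumed fallback body used when a receiver has no custom template registered.
DEFAULT_TEMPLATE = '{"text": "AWS CodeBuild project $PROJECT is now $STATUS"}'

def handler(event, context):
    # CloudWatch Events delivers the project name and build status in the event detail.
    project = event["detail"]["project-name"]
    status = event["detail"]["build-status"]

    table = dynamodb.Table(os.environ.get("WEBHOOKS_TABLE", "CodeBuildWebhooks"))
    for receiver in table.scan()["Items"]:  # small registry; a query would scale better
        if receiver["project"] != project or status not in receiver.get("statuses", []):
            continue

        # Resolve the endpoint and any custom headers from SSM Parameter Store.
        url = ssm.get_parameter(Name=receiver["hook_url_param_name"],
                                WithDecryption=True)["Parameter"]["Value"]
        headers = {"Content-Type": "application/json"}
        if "hook_headers_param_name" in receiver:
            headers.update(json.loads(ssm.get_parameter(
                Name=receiver["hook_headers_param_name"],
                WithDecryption=True)["Parameter"]["Value"]))

        # Use the registered body template if present, substituting placeholders.
        body = receiver.get("template", DEFAULT_TEMPLATE)
        body = body.replace("$PROJECT", project).replace("$STATUS", status)

        request = urllib.request.Request(url, data=body.encode("utf-8"),
                                         headers=headers, method="POST")
        urllib.request.urlopen(request)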

Getting started

The following sections describe the steps to deploy and use the solution.

Deploying the Solution

There is an AWS CloudFormation template, template.yaml, in the source code which uses the AWS Serverless Application Model to define required components of this solution. For convenience, I have made it available as a one-click launch template:

When launching the stack, the default behavior is to expect SSM parameters to be encrypted using the AWS managed CMK for SSM (AWS KMS). However, you can input a different Key ID as the value for the SSMKeyId parameter if required. The above launch stack button deploys the solution in us-east-1; however, links for other Regions are available on the solution’s GitHub page.

The template deploys:

  • A Lambda function and associated IAM role for sending HTTP requests
  • A DynamoDB table for registering webhook receivers
  • A CloudWatch event rule for triggering the Lambda function in response to CodeBuild events (an example event pattern is sketched after this list)
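For reference, a rule that captures build state changes for every CodeBuild project in the account uses an event pattern along these lines (this reflects the documented CodeBuild event shape rather than an excerpt from the template):

{
  "source": ["aws.codebuild"],
  "detail-type": ["CodeBuild Build State Change"]
}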

The Lambda function code demonstrates how to make authenticated HTTP requests to 3rd party services. You can extend the sample code to add in additional features such as deployment of the Lambda function in a VPC to access private resources like an on-premises Jira.

Example: Creating a Ticket in Jira

To demonstrate the solution, I set up a webhook to create a bug ticket in Jira whenever a build fails. In order to follow this example, you need to install and configure the AWS CLI.

First off, I store the Jira URL securely in the SSM parameter store:

aws ssm put-parameter --cli-input-json '{
  "Name": "/webhooks/jira-issues-webhook-url",
  "Value": "https://<my-jira-server>/rest/api/latest/issue/",
  "Type": "String",
  "Description": "Jira issues Rest API URL"
}'

For this sample, I use basic authentication with the JIRA Rest API. After following Jira’s instructions to generate a BASE64-encoded authorization string, I store the headers as a JSON string in SSM:

aws ssm put-parameter --cli-input-json '{
  "Name": "/webhooks/jira-basic-auth-headers",
  "Value": "{\"Authorization\": \"Basic <base64 encoded useremail:api_token>\"}",
  "Type": "SecureString",
  "Description": "Jira basic auth headers for CodeBuild webhooks"
}'

For more authentication options, consult the Jira docs.

Now I need to register the webhook receiver in my CodeBuildWebhooks DynamoDB table. In order to make requests to the JIRA REST API, my Lambda function must supply a JSON string containing a payload accepted by the Jira API for creating issues. To do this, I save the following JSON as item.json in my current working directory:

{
  "project": {"S": "MyCodeBuildProject"},
  "hook_url_param_name": {"S": "/webhooks/jira-issues-webhook-url"},
  "hook_headers_param_name": {"S": "/webhooks/jira-basic-auth-headers"},
  "statuses": {"L": [{"S": "FAILED"}]},
  "template": {"S": "{\"fields\": {\"project\":{\"id\": \"10000\"},\"summary\": \"$PROJECT build failing\",\"description\": \"AWS CodeBuild project $PROJECT latest build $STATUS\",\"issuetype\":{\"id\": \"10004\"}}}"}
}

In the template, my project ID is 10000 and the bug issue type is 10004. You can obtain this information from your JIRA instance by invoking the “createmeta” API.
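For example, reusing the basic auth header stored above, you can look up these IDs with a request such as the following (the endpoint path follows the same URL format used earlier; adjust it for your Jira version):

curl -H "Authorization: Basic <base64 encoded useremail:api_token>" \
  "https://<my-jira-server>/rest/api/latest/issue/createmeta"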

Finally, I register the webhook receiver in my CodeBuildWebhooks DynamoDB table, referencing the JSON file I just created:

aws dynamodb put-item --table-name CodeBuildWebhooks --item file://item.json

That’s it! The next time my CodeBuild project fails, an issue is created in JIRA for someone to action, as shown in the following screenshot:

A Jira Kanban board showing the newly created issue for the failing CodeBuild project

You could extend this example to populate other Jira fields such as Labels, Components, or Assignee.

Cleanup

To remove the resources created as part of this blog post, first delete the stack:

aws cloudformation delete-stack --stack-name aws-codebuild-webhooks

Then delete the parameters from SSM Parameter Store:

aws ssm delete-parameters --names /webhooks/jira-issues-webhook-url /webhooks/jira-basic-auth-headers

Conclusion

In this blog post, I showed you how to use an AWS CloudFormation template to quickly build a sample solution that can help you integrate AWS CodeBuild with other 3rd party tools via AWS Lambda.

The CloudFormation template used in this post and Lambda function can be found in the aws-codebuild-webhooks GitHub repository, along with other examples.

If you have questions or other feedback about this example, please open an issue or submit a pull request.

About the Author

Nick Lee is part of the AWS Solution Builders team in the UK. Nick works with the AWS Solution Architecture community to create standardized tools, code samples, demonstrations and quick starts.

 

 

A simpler deployment experience with AWS SAM CLI

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/a-simpler-deployment-experience-with-aws-sam-cli/

The AWS Serverless Application Model (SAM) CLI provides developers with a local tool for managing serverless applications on AWS. The command line tool allows developers to initialize and configure applications, debug locally using IDEs like Visual Studio Code or JetBrains WebStorm, and deploy to the AWS Cloud.

On November 25, we announced improvements to the deployment process using the SAM CLI. These improvements allow users to deploy serverless applications with less manual setup, fewer repeated steps, and shorter CLI commands.

To install the latest version of the AWS SAM CLI, please refer to the installation section of the AWS SAM page.

What’s new?

Amazon S3 bucket management

Previously, developers had to manually create and manage an Amazon S3 bucket to host deployment artifacts for each desired Region. With this latest release, the SAM CLI automatically creates a Region-specific bucket via AWS CloudFormation, based on your local AWS credentials. If you deploy an application to a Region where no bucket exists, a new managed bucket is created in the new Region.

Minimized deployment commands

Before this update, a minimal deployment process would look like this:

sam package --s3-bucket my-regional-bucket --output-template-file out.yaml
sam deploy --template-file out.yaml --capabilities CAPABILITY_IAM --stack-name MyStackName

This series of commands was required at every deployment. With this latest update to SAM CLI, the package and deployment commands have been combined. The syntax is now:

sam deploy

The guided deployment

How does SAM CLI know where to deploy and what to name the application? The answer to this is found in the “guided deployment.” This is an interactive version of the deployment process that collects and saves information needed to deploy the application.

If sam deploy is running and cannot find the required information for deployment, the process errors out, recommending that the guided deployment process be run. To use the guided process:

sam deploy -g (or sam deploy --guided)

SAM guided deploy

Once the information is collected, it is saved in the application as the samconfig.toml file. Subsequent calls to sam deploy use the existing data to deploy. If you update a setting between deployments, run the sam deploy -g command again to update the stored values.
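For reference, a samconfig.toml written by the guided deployment looks roughly like the following; the values shown here are illustrative and will reflect whatever you entered at the prompts:

version = 0.1
[default.deploy.parameters]
stack_name = "sam-app"
s3_bucket = "aws-sam-cli-managed-default-samclisourcebucket-xic3fipuh9n9"
s3_prefix = "sam-app"
region = "us-west-2"
confirm_changeset = true
capabilities = "CAPABILITY_IAM"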

Frequently asked questions

How many buckets are created?

When you run the sam deploy -g command with provided values, SAM checks the account for an existing SAM deployment bucket in that Region. This Regional bucket is created via CloudFormation by SAM as an artifact repository for all applications for the current account in the current Region. For a root level account, there is only a single bucket per Region that contains deployed SAM serverless applications.

What if the Region is changed for the application?

If you change the Region in samconfig.toml before running sam deploy, the process errors out because the selected deployment Region does not match the Region of the artifacts bucket stored in the samconfig.toml file. The error also occurs if you use the --region flag with a Region different from the one in the samconfig.toml file. To change the Region for a deployment, use the sam deploy -g option to update the Region. SAM verifies that a bucket for the new Region exists, or creates one automatically.

What if the samconfig.toml file is deleted?

If the samconfig.toml file is deleted, SAM treats the application as new. We recommend that you use the -g flag to reconfigure the application.

What about backwards compatibility?

If you are using SAM for a non-interactive deployment, it is possible to pass all required information as parameters. For example, for a continuous integration/continuous delivery (CI/CD) pipeline:

SAM deploy values
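In other words, the whole deployment can be expressed as a single non-interactive command, for example (the values here match the older-style commands shown next, and --no-confirm-changeset skips the interactive confirmation):

sam deploy \
  --stack-name sam-app \
  --s3-bucket aws-sam-cli-managed-default-samclisourcebucket-xic3fipuh9n9 \
  --capabilities CAPABILITY_IAM \
  --region us-west-2 \
  --no-confirm-changeset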

This same deployment is achieved using the older process with the following commands:

sam package --s3-bucket aws-sam-cli-managed-default-samclisourcebucket-xic3fipuh9n9 --output-template-file out.yaml
sam deploy --template-file out.yaml --capabilities CAPABILITY_IAM --stack-name sam-app --region us-west-2

The package command still exists in the latest version of the SAM CLI for backwards compatibility with existing CI/CD processes.

Updated user experience

Along with a streamlined process for deploying applications, the new version of the SAM CLI brings an improved user interface. This provides developers with more feedback and validation choices. First, during the deployment process, all deployment parameters are displayed:

SAM deploy values

Once the changeset is created, the developer is presented with all the proposed changes.

SAM change-set report

Developers also have the option to confirm the changes, or cancel the deployment. This option is a setting in the samconfig.toml file that can be turned on or off as needed.

SAM change-set prompt

As the changeset is applied, the console displays the changes being made in the AWS Cloud.

SAM deploy status

Finally, the resulting output is displayed.

Conclusion

By streamlining the deployment process, removing the need to manage an S3 bucket, and providing clear deployment feedback and data, the latest version of the SAM CLI makes serverless development easier for developers.

Happy coding and deploying!

Test Reports with AWS CodeBuild

Post Syndicated from Muhammad Mansoor original https://aws.amazon.com/blogs/devops/test-reports-with-aws-codebuild/

AWS CodeBuild has launched a new feature called Reports. This feature allows you to view the reports generated by functional or integration tests. The reports can be in the JUnit XML or Cucumber JSON format. You can view metrics such as Pass Rate %, Test Run Duration, and the number of Passed versus Failed/Error test cases in one location. Builders can use any testing framework as long as the reports are generated in the supported formats.

You can see the test reports created by CodeBuild in the respective Report Group, where they are stored for 30 days. For longer retention, you can store the reports in an Amazon S3 bucket. Each test report is further broken down by individual test cases.

Getting started with CodeBuild Reports

To store and view the reports of your unit tests, you need to add a new section called reports to your buildspec.yml file. CodeBuild creates a new report group under this name or uses the existing report group if one exists already. The following sample buildspec.yml file generates test reports from JUnit tests using Surefire and stores them in the report group named <project-name>-SurefireReports, where <project-name> is the name of the CodeBuild project.

version: 0.2

env:
  variables:
    JAVA_HOME: "/usr/lib/jvm/java-8-openjdk-amd64"
phases:
  install:
    runtime-versions:
      java: corretto8
  build:
    commands:
      - echo Build started on `date`
      - mvn surefire-report:report #Running this task to execute unit tests and generate report.
reports: #New
  SurefireReports: # CodeBuild will create a report group called "SurefireReports".
    files: #Store all of the files
      - '**/*'
    base-directory: 'target/surefire-reports' # Location of the reports 

You can specify the unit tests in either the build or the post_build phase of the buildspec.yml file.

CodeBuild needs additional AWS Identity and Access Management (IAM) permissions to create test reports. These permissions are already included in the predefined AWS managed policies used by CodeBuild. To add the permissions to existing CodeBuild projects, you need to modify the policy attached to the project's service role. Use the following IAM policy to add the permissions; an example of attaching it with the AWS CLI follows the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codebuild:CreateReportGroup",
                "codebuild:CreateReport",
                "codebuild:UpdateReport",
                "codebuild:BatchPutTestCases"
            ],
            "Resource": "arn:aws:codebuild:your-region:your-aws-account-id:report-group/my-project-*"
        }
    ]
}
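As a sketch, you could attach this as an inline policy to your CodeBuild project's service role with a command like the following; the role name, policy name, and file name are placeholders:

aws iam put-role-policy \
  --role-name <your-codebuild-service-role> \
  --policy-name CodeBuildTestReportPermissions \
  --policy-document file://report-permissions.json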

Once your unit tests start to execute as part of the CodeBuild project, you see the reports and the trends for the report group along with the metrics. Reports also show the pass rate and average report duration, as well as the time taken by each individual test.

Multiple CodeBuild projects can use the same report group. This feature is helpful if you test your code separately (such as for micro services and APIs) and want to see all of the reports in one place. Other examples include running integration tests across different projects and using one report group to view the results of the test.
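To share a report group across projects, you can use the report group's ARN as the key in the reports section instead of a plain name, along the following lines (the group name here is hypothetical, and the ARN format matches the IAM policy shown earlier):

reports:
  arn:aws:codebuild:your-region:your-aws-account-id:report-group/SharedIntegrationTests:
    files:
      - '**/*'
    base-directory: 'target/surefire-reports'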

Apart from the aggregates, you can see the metrics captured by each report separately. For each report, you can see the breakdown of individual unit tests, status of the tests, duration, and messages from the tests. The summary section shows you the overall Passed/Failed test count and pass rate along with the duration taken by each test/report, as shown in the following screenshot.

Trends in Test Reports

Conclusion

This blog post showed how to use the Test Reports feature of CodeBuild. Developers can use this feature to see previously executed test results without leaving the AWS console. For teams that rely on Test Driven Development (TDD) techniques, test reports provide valuable insights, such as highlighting the number of tests with Passed, Failed/Error, or Unknown results.

For further information, see the test reporting topics in the AWS CodeBuild documentation.

Creating CI/CD pipelines for ASP.NET 4.x with AWS CodePipeline and AWS Elastic Beanstalk

Post Syndicated from Kirk Davis original https://aws.amazon.com/blogs/devops/creating-ci-cd-pipelines-for-asp-net-4-x-with-aws-codepipeline-and-aws-elastic-beanstalk/

By Kirk Davis, Specialized Solutions Architect, Microsoft Platform team

As customers migrate ASP.NET (on .NET Framework) applications to AWS, many choose to deploy these apps with AWS Elastic Beanstalk, which provides a managed .NET platform to deploy, scale, and update the apps. Customers often ask how to create CI/CD pipelines for these ASP.NET 4.x (.NET Framework) apps without needing to set up or manage Jenkins instances or other infrastructure.

You can easily create these pipelines using AWS CodePipeline as the orchestrator, AWS CodeBuild for performing builds, and AWS CodeCommit, GitHub, or other systems for source control. This blog post demonstrates how to set up a simplified CI/CD pipeline that you could expand on later to include unit tests, using a CodeCommit Git repository for source control.

Creating a project and adding a buildspec.yml file

The first step in setting up this simplified CI/CD pipeline is to create a project and add a buildspec.yml file.

Creating or choosing an ASP.NET web application (.NET Framework)

First, either create a new ASP.NET Web Application (.NET Framework) project or choose an existing application to use. You can choose MVC, Web API, or even Web Forms project types based on ASP.NET 4.x. Whichever type you choose, make sure it builds and runs locally.

To set up your first CodePipeline for an ASP.NET (.NET Framework) application, you may wish to use a simple app that doesn’t require databases or other resources and which consists of a single project. The following screenshot shows the project type to choose when you create a new project in Visual Studio 2019.

Visual Studio 2019's Create New Project dialog window showing "ASP.NET Web Application (.NET Framework)" project type selected.

Visual Studio Create New Project dialog

Adding the project to CodeCommit

Next, add your project to a CodeCommit Git repository. You can either create a new repository in the CodeCommit web console and then add your new or legacy application to it by following the steps in the CodeCommit documentation or create the new repository from within Visual Studio’s Team Explorer by taking advantage of AWS Toolkit for Visual Studio’s integration with CodeCommit.

If you wish to use Team Explorer to create and interact with the CodeCommit Git repository for your project, follow Step 2 in the Integrate Visual Studio with AWS CodeCommit documentation to create the connection, and then follow the steps under Create a CodeCommit Repository from Visual Studio in the same section. Alternatively, you can work with Git from the command line.

You can reduce the number of files being stored in Git by adding a .gitignore file specific to .NET projects using Visual Studio’s Team Explorer:

  1. Choose the Home icon in the Team Explorer toolbar.
  2. Choose Settings, then Repository Settings.
  3. Choose the Add option for Ignore file under Ignore & Attributes Files, as shown in the following screenshot.
Visual Studio's Team Explorer - Repository Settings pane, showing the Add link for Ignore and Attribute Files.

Team Explorer – Repository Settings

After adding a .gitignore file and optionally connecting Visual Studio to CodeCommit, push your code up to the remote in CodeCommit using either git push or Team Explorer. After pushing your changes, you can use the CodeCommit management console in your browser to verify that all your files are there.
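If you work from the command line, a minimal sequence looks like this (the branch is assumed to be master, which is also the branch used later when creating the pipeline):

git add .
git commit -m "Add ASP.NET application and .gitignore"
git push origin master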

Adding a buildspec.yml file to your project

CodeBuild, which does the actual compilation, essentially launches a container using a docker image you specify, then runs a series of commands to install any required software and perform the actual build or tests that you want. Finally, it takes whatever output files you specify—artifacts—and uploads them in a .zip file to Amazon S3 for the next stage of the CodePipeline pipeline. The commands that CodeBuild executes in the container are specified in a buildspec.yml file, which is part of the source code of your project. You can also add it directly to the CodeBuild configuration, but it’s more convenient to edit and track in source control. When running CodeBuild with Windows containers, the default shell for these commands is PowerShell.

Add a plain text file to the root of your ASP.NET project named buildspec.yml and then open the file in an editor. Ensure you add the file to your project to easily find and edit it later. For details on the structure and contents of buildspec.yml files, refer to the CodeBuild documentation.

You can use the following sample buildspec.yml file and simply replace the values for PROJECT and DOTNET_FRAMEWORK with the name and .NET Framework target version for your project.

version: 0.2

env:
  variables:
    PROJECT: AspNetMvcSampleApp
    DOTNET_FRAMEWORK: 4.6.1
phases:
  build:
    commands:
      - nuget restore
      - msbuild $env:PROJECT.csproj /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release /p:DeployIisAppPath="Default Web Site" /p:PackageAsSingleFile=false /p:OutDir=C:\codebuild\artifacts\ /t:Package
artifacts:
  files:
    - '**/*'
  base-directory: 'C:\codebuild\artifacts\_PublishedWebsites\${env:PROJECT}_Package\Archive\'

Walkthrough of the buildspec commands

Looking at the buildspec.yml file above, you can see that the only phase defined for this sample application is build. If you need to perform some action either before or after the build, you can add pre_build and post_build phases.

The first command executed in the build phase is nuget restore to download any NuGet packages your project references. Then, MSBuild kicks off the build itself. Using the /t:Package parameter generates the web deployment folder structure that Elastic Beanstalk expects for ASP.NET Framework applications, and includes the archive.xml, parameters.xml, and systemInfo.xml files.

By default, the output of this type of build is a .zip file. However, when used in conjunction with CodePipeline, CodeBuild always zips up the artifact files that you specify, even if they’re already zipped. To avoid this double zipping, use the /p:PackageAsSingleFile=false parameter, which outputs the folder structure in a folder called Archive instead. The /p:OutDir parameter specifies where MSBuild should write the files. This example uses C:\codebuild\artifacts\.

Finally, in the artifacts node, specify which files (or artifacts) CodeBuild should compress and provide to CodePipeline. The sample above includes all the files (the ‘**/*’) in the C:\codebuild\artifacts\_PublishedWebsites\${env:PROJECT}_Package\Archive\ folder, in which ${env:PROJECT} is automatically replaced by the value of the variable for the project name specified at the top of the file.

After you finish editing the buildspec.yml file, commit and push your changes to ensure the file is in your CodeCommit Git repository.

Create an Elastic Beanstalk application and initial deployment

The CodePipeline deployment provider for Elastic Beanstalk deploys to an existing Elastic Beanstalk application environment. So before you build out your pipeline, manually deploy your application and create the destination application and environment in Elastic Beanstalk. The easiest way to do this is using the AWS Toolkit for Visual Studio. If you don’t have it installed, use the Visual Studio Extensions tool to search for aws and install the toolkit.

Once it’s installed, open your project in Visual Studio, right-click the project node in the Solutions Explorer pane, and choose Publish to AWS Elastic Beanstalk. This launches the publish wizard.

For step-by-step instructions on using the publishing wizard, see Deploy a Traditional ASP.NET Application to Elastic Beanstalk.

Once the publish wizard has finished deploying to Elastic Beanstalk, you should see the URL in the Elastic Beanstalk environment pane in Visual Studio, as shown in the following screenshot.

Alternately, you can navigate to the Elastic Beanstalk management console in your browser, select your application and environment, and see the URL in the environment dashboard. Verify that your application is viewable in your browser.

The AWS Toolkit for Visual Studio's Elastic Beanstalk deployment pane, with the environment URL circled.

AWS Toolkit – Elastic Beanstalk Environment

Creating the CI/CD pipeline

Next, create the CodePipeline pipeline.

Adding the source stage

Now that your source code is in CodeCommit, and you have an existing Elastic Beanstalk app, create your pipeline:

  1. In your browser, navigate to the CodePipeline management console.
  2. Choose Create pipeline and give your pipeline a name. To keep things simple, you might want to use the same name as your CodeCommit repo.
  3. Choose Next.
  4. Under Source, choose CodeCommit.
  5. Select your repository name from the drop-down, and choose the branch you wish to use. If you haven’t added any branches, your only choice will be the master branch.

Creating the build stage

Next, create the build stage:

  1. After choosing Next, select AWS CodeBuild as the build provider.
  2. Select your region, then choose Create project, which will open CodeBuild in another browser window.
  3. In the CodeBuild window, you can optionally assign your build project a name and description.
  4. Under Environment, select the Custom image option, and select Windows as the environment type.
  5. For building ASP.NET 4.x (.NET Framework) web projects, it’s easiest to start out with Microsoft’s .NET Framework SDK Docker image, which Microsoft hosts on its container registry.
    Select Other registry, and use mcr.microsoft.com/dotnet/framework/sdk:[version-tag] as the registry URL. Replace version-tag with the .NET Framework version. For .NET Framework 4.x, the most likely options are 4.7.1, 4.7.2, or 4.8. This example uses mcr.microsoft.com/dotnet/framework/sdk:4.7.2.

For details about the .NET Framework SDK container image, see the container image page on Docker Hub. The SDK includes the Visual Studio Build Tools, the NuGet CLI, and ASP.NET Web Targets.

Next, choose a group name for Amazon CloudWatch logs under Logs (near the bottom of the page). This will output detailed build logs for each build to CloudWatch. Leave the rest of the settings as they are.

Then choose Continue to CodePipeline to save the CodeBuild configuration and return to the CodePipeline wizard’s Add build stage step. Ensure your newly created build project is specified in Project name, then choose Next.

Adding the deploy stage

In the Add deploy stage step:

  1. Select AWS Elastic Beanstalk as the Deploy provider.
  2. Select your region.
  3. In the Application name field, select the Elastic Beanstalk application you previously deployed.
  4. Select the environment you previously deployed and choose Next.
  5. Review all your settings and choose Create pipeline.

Testing out the pipeline

To test out your pipeline, make an easily visible change to your application’s code, such as adding some text to the home page. Then, commit your changes and push.
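
For example, after editing the home page you might run:

git add .
git commit -m "Update home page text"
git push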

Within a few moments, the Source stage in your pipeline should move to in progress, followed by the Build stage. It can take 10 minutes or more for the build stage to complete, and then the Deploy stage should finish quickly.

After the Deploy stage status changes to Succeeded, choose AWS Elastic Beanstalk in that stage in the pipeline view, as shown in the following screenshot, to navigate to your Elastic Beanstalk application.

Select the environment to which you’re deploying and select the URL. You should see that your changes are now live.

After a successful build and deploy, your pipeline should appear as it does in the following screenshot.

Screenshot: a sample CodePipeline pipeline with all stages showing a successful build and deploy.

Conclusion

In this blog post, I showed you how to create a simple CI/CD pipeline for ASP.NET 4.x web applications, built with the .NET Framework, using AWS services including CodeCommit, CodePipeline, CodeBuild and Elastic Beanstalk. You can extend this pipeline with additional build actions for things like unit tests, or by adding manual approval steps.

We welcome your feedback.

Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy

Post Syndicated from Noha Ghazal original https://aws.amazon.com/blogs/devops/setting-up-a-ci-cd-pipeline-by-integrating-jenkins-with-aws-codebuild-and-aws-codedeploy/

In this post, I explain how to use the Jenkins open-source automation server to deploy AWS CodeBuild artifacts with AWS CodeDeploy, creating a functioning CI/CD pipeline. When properly implemented, the CI/CD pipeline is triggered by code changes pushed to your GitHub repo, which are automatically fed into CodeBuild, and the output is then deployed with CodeDeploy.

Solution overview

The functioning pipeline creates a fully managed build service that compiles your source code. It then produces code artifacts that can be used by CodeDeploy to deploy to your production environment automatically.

The deployment workflow starts by placing the application code in the GitHub repository. To automate this scenario, I added source code management to the Jenkins project under the Source Code section. I chose the GitHub option, which clones a copy of the GitHub repository content into the Jenkins local workspace directory.

In the second step of my automation procedure, I enabled a trigger for the Jenkins server using the “Poll SCM” option. This option makes Jenkins check the configured repository for any new commits/code changes at a specified frequency. In this testing scenario, I configured the trigger to run every two minutes. The automated Jenkins deployment process works as follows:

  1. Jenkins checks for any new changes on GitHub every two minutes.
  2. Change determination:
    1. If Jenkins finds no changes, Jenkins exits the procedure.
    2. If it does find changes, Jenkins clones all the files from the GitHub repository to the Jenkins server workspace directory.
  3. The File Operation plugin deletes all the files cloned from GitHub. This keeps the Jenkins workspace directory clean.
  4. The AWS CodeBuild plugin zips the files and sends them to a predefined Amazon S3 bucket location then initiates the CodeBuild project, which obtains the code from the S3 bucket. The project then creates the output artifact zip file, and stores that file again on the S3 bucket.
  5. The HTTP Request plugin downloads the CodeBuild output artifacts from the S3 bucket.
    I edited the S3 bucket policy to allow access from the Jenkins server IP address. See the following example policy:

    {
      "Version": "2012-10-17",
      "Id": "S3PolicyId1",
      "Statement": [
        {
          "Sid": "IPAllow",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::examplebucket/*",
          "Condition": {
            "IpAddress": {"aws:SourceIp": "x.x.x.x/x"}
          }
        }
      ]
    }

    In the aws:SourceIp value, x.x.x.x/x is the IP address range (in CIDR notation) of the Jenkins server.


    This policy enables the HTTP request plugin to access the S3 bucket. This plugin doesn’t use the IAM instance profile or the AWS access keys (access key ID and secret access key).

  6. The output artifact is a compressed .zip file. The CodeDeploy plugin requires the files to be unzipped first, because it zips the workspace contents itself before sending them to the S3 bucket for the CodeDeploy deployment. For that, I used the File Operation plugin to perform the following:
    1. Unzip the CodeBuild zipped artifact output in the Jenkins root workspace directory. At this point, the workspace directory should include the original zip file downloaded from the S3 bucket from Step 5 and the files extracted from this archive.
    2. Delete the original .zip file, and leave only the source bundle contents for the deployment.
  7. The CodeDeploy plugin selects and zips all workspace directory files. This plugin uses the CodeDeploy application name, deployment group name, and deployment configuration that you configured to initiate a new CodeDeploy deployment. The CodeDeploy plugin then uploads the newly zipped file to the S3 bucket location you provided, which CodeDeploy uses as the source for its new deployment operation.

Walkthrough

In this post, I walk you through the following steps:

  • Creating resources to build the infrastructure, including the Jenkins server, CodeBuild project, and CodeDeploy application.
  • Accessing and unlocking the Jenkins server.
  • Creating a project and configuring the CodeDeploy Jenkins plugin.
  • Testing the whole CI/CD pipeline.

Create the resources

In this section, I show you how to launch an AWS CloudFormation template, a tool that creates the following resources:

  • Amazon S3 bucket—Stores the GitHub repository files and the CodeBuild artifact application file that CodeDeploy uses.
  • IAM S3 bucket policy—Allows the Jenkins server access to the S3 bucket.
  • JenkinsRole—An IAM role and instance profile for the Amazon EC2 instance used as the Jenkins server. This role allows Jenkins on the EC2 instance to write files to the S3 bucket and to create CodeDeploy deployments.
  • CodeDeploy application and CodeDeploy deployment group.
  • CodeDeploy service role—An IAM role to enable CodeDeploy to read the tags applied to the instances or the EC2 Auto Scaling group names associated with the instances.
  • CodeDeployRole—An IAM role and instance profile for the EC2 instances of CodeDeploy. This role has permissions to write files to the S3 bucket created by this template and to create deployments in CodeDeploy.
  • CodeBuildRole—An IAM role to be used by CodeBuild to access the S3 bucket and create the build projects.
  • Jenkins server—An EC2 instance running Jenkins.
  • CodeBuild project—This is configured with the S3 bucket and S3 artifact.
  • Auto Scaling group—Contains EC2 instances running Apache and the CodeDeploy agent fronted by an Elastic Load Balancer.
  • Auto Scaling launch configurations—For use by the Auto Scaling group.
  • Security groups—For the Jenkins server, the load balancer, and the CodeDeploy EC2 instances.

 

  1. To create the CloudFormation stack (for example, in the AWS Frankfurt Region), use the provided launch link for the template.
  2. Choose Next and provide the following values on the Specify Details page:
    • For Stack name, name your stack as you prefer.
    • For CodedeployInstanceType, keep the default of t2.medium.
      To check the instance types supported in each AWS Region, see Supported Regions.
    • For InstanceCount, keep the default of 3, to launch three EC2 instances for CodeDeploy.
    • For JenkinsInstanceType, keep the default of t2.medium.
    • For KeyName, choose an existing EC2 key pair in your AWS account. Use this to connect by using SSH to the Jenkins server and the CodeDeploy EC2 instances. Make sure that you have access to the private key of this key pair.
    • For PublicSubnet1, choose a public subnet from which the load balancer, Jenkins server, and CodeDeploy web servers launch.
    • For PublicSubnet2, choose a public subnet from which the load balancers and CodeDeploy web servers launch.
    • For VpcId, choose the VPC for the public subnets you used in PublicSubnet1 and PublicSubnet2.
    • For YourIPRange, enter the CIDR block of the network from which you connect to the Jenkins server using HTTP and SSH. If your local machine has a static public IP address, go to https://www.whatismyip.com/ to find your IP address, and then enter your IP address followed by /32. If you don’t have a static IP address (or aren’t sure if you have one), enter 0.0.0.0/0. Then, any address can reach your Jenkins server.
  3. Choose Next.
  4. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box.
  5. Choose Create and wait for the CloudFormation stack status to change to CREATE_COMPLETE. This takes approximately 6–10 minutes.
  6. Check the resulting values on the Outputs tab. You need them later.
  7. Browse to the ELBDNSName value from the Outputs tab, verifying that you can see the Sample page. You should see a congratulatory message.
  8. Your Jenkins server should be ready to deploy.

Access and unlock your Jenkins server

In this section, I discuss how to access, unlock, and customize your Jenkins server.

  1. Copy the JenkinsServerDNSName value from the Outputs tab of the CloudFormation stack, and paste it into your browser.
  2. To unlock the Jenkins server, SSH to the server using the IP address and key pair, following the instructions from Unlocking Jenkins.
  3. Use the root user to cat the log file (/var/log/jenkins/jenkins.log) and copy the automatically generated alphanumeric password (between the two sets of asterisks). Then, use the password to unlock your Jenkins server, as shown in the following screenshots.
  4. On the Customize Jenkins page, choose Install suggested plugins.

  5. Wait until Jenkins installs all the suggested plugins. When the process completes, you should see the check marks alongside all of the installed plugins.
  6. On the Create First Admin User page, enter a user name, password, full name, and email address of the Jenkins user.
  7. Choose Save and continue, Save and finish, and Start using Jenkins.
    After you install all the needed Jenkins plugins along with their required dependencies, the Jenkins server restarts. This step should take about two minutes. After Jenkins restarts, refresh the page. Your Jenkins server should be ready to use.

Create a project and configure the CodeDeploy Jenkins plugin

Now, to create the project in Jenkins, you need to configure the required Jenkins plugins.

  1. Sign in to Jenkins with the user name and password that you created earlier, then choose Manage Jenkins and Manage Plugins.
  2. From the Available tab, search for and select the following plugins, then choose Install without restart:
    AWS CodeDeploy
    AWS CodeBuild
    Http Request
    File Operations
  3. Select Restart Jenkins when installation is complete and no jobs are running.
    Jenkins takes a couple of minutes to download the plugins along with their dependencies, and then restarts.
  4. Log in, then choose New Item and select Freestyle project.
  5. Enter a name for the project (for example, CodeDeployApp), and choose OK.
  6. On the project configuration page, under Source Code Management, choose Git. For Repository URL, enter the URL of your GitHub repository.
  7. For Build Triggers, select the Poll SCM check box. In Schedule, for testing, enter H/2 * * * *. This entry tells Jenkins to poll GitHub for updates every two minutes.
  8. Under Build Environment, select the Delete workspace before build starts check box. Each Jenkins project has a dedicated workspace directory. This option allows you to wipe out your workspace directory with each new Jenkins build, to keep it clean.
  9. Under Build, choose Add build step and select AWS CodeBuild. Under AWS Configuration, choose Manually specify access and secret keys and provide the keys.
  10. From the CloudFormation stack Outputs tab, copy the AWS CodeBuild project name (myProjectName) and paste it in the Project Name field. Also, set the Region that you are using and choose Use Jenkins source.
    It is a best practice to store AWS credentials for CodeBuild in the native Jenkins credential store. For more information, see the Jenkins AWS CodeBuild Plugin wiki.
  11. To make sure that all files cloned from the GitHub repository are deleted, choose Add build step and select File Operation, then choose Add and select File Delete. Under the File Delete operation, in Include File Pattern, type an asterisk (*).
  12. Under Build, configure the following:
    1. Choose Add a Build step.
    2. Choose HTTP Request.
    3. Copy the S3 bucket name from the CloudFormation stack Outputs tab and append it to http://s3-eu-central-1.amazonaws.com/, followed by the name of the zip file, codebuild-artifact.zip, as the value for the HTTP Plugin URL.
      Example: (http://s3-eu-central-1.amazonaws.com/mybucketname/codebuild-artifact.zip)
    4. For Ignore SSL errors?, choose Yes.
  13. Under HTTP Request, choose Advanced and leave the default values for Authorization, Headers, and Body. Under Response, for Output response to file, enter the codebuild-artifact.zip file name.
  14. Add the two build steps for the File Operations plugin, in the following order:
    1. Unzip action: This build step unzips the codebuild-artifact.zip file and places the contents in the root workspace directory.
    2. File Delete action: This build step deletes the codebuild-artifact.zip file, leaving only the source bundle contents for deployment.
  15. Under Post-build Actions, choose Add post-build action and select Deploy an application to AWS CodeDeploy.
  16. Enter the following values from the Outputs tab of your CloudFormation stack and leave the other settings at their default (blank):
    • For AWS CodeDeploy Application Name, enter the value of CodeDeployApplicationName.
    • For AWS CodeDeploy Deployment Group, enter the value of CodeDeployDeploymentGroup.
    • For AWS CodeDeploy Deployment Config, enter CodeDeployDefault.OneAtATime.
    • For AWS Region, choose the Region where you created the CodeDeploy environment.
    • For S3 Bucket, enter the value of S3BucketName.
      The CodeDeploy plugin uses the Include Files option to filter the files based on specific file names existing in your current Jenkins deployment workspace directory. The plugin zips specified files into one file. It then sends them to the location specified in the S3 Bucket parameter for CodeDeploy to download and use in the new deployment.
      As shown below, in the optional Include Files field, I used (**) so all files in the workspace directory get zipped.
  17. Choose Deploy Revision. This option registers the newly created revision to your CodeDeploy application and gets it ready for deployment.
  18. Select the Wait for deployment to finish? check box. This option allows you to view the CodeDeploy deployment logs and events in your Jenkins server console output.
    Now that you have created a project, you are ready to test deployment.

Testing the whole CI/CD pipeline

To test the whole solution, put an application on your GitHub repository. You can download the sample from here.

The following screenshot shows an application tree containing the application source files, including text and binary files, executables, and packages:

In this example, the application files are the templates directory, test_app.py file, and web.py file.

The appspec.yml file is the main application specification file that tells CodeDeploy how to deploy your application. CodeDeploy uses the AppSpec file to manage each deployment as a series of lifecycle event “hooks”, as defined in the file. For information about how to create a well-formed AppSpec file, see AWS CodeDeploy AppSpec File Reference.
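
As a point of reference, a minimal appspec.yml for a simple web application might look like the following. The destination path and hook scripts here are illustrative, not necessarily those used by the sample application:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_application.sh
      timeout: 300
      runas: root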

The buildspec.yml file is a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build. You can include a build spec as part of the source code, or you can define a build spec when you create a build project. For more information, see How AWS CodeBuild Works.
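
Its overall shape is the familiar buildspec structure. A minimal sketch (the actual commands in the sample application will differ) looks like this:

version: 0.2
phases:
  build:
    commands:
      - echo "Build started on $(date)"
      # project-specific build or test commands go here
artifacts:
  files:
    - '**/*'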

The scripts folder contains the scripts that you would like to run during the CodeDeploy LifecycleHooks execution with respect to your application requirements. For more information, see Plan a Revision for AWS CodeDeploy.

To test this solution, perform the following steps:

  1. Unzip the application files and push them to your GitHub repository by running the following git commands from the path where you placed your sample application:
    $ git add -A
    
    $ git commit -m 'Your first application'
    
    $ git push
  2. On the Jenkins server dashboard, wait for two minutes until the previously set project trigger starts working. After the trigger starts working, you should see a new build taking place.
  3. In the Jenkins server Console Output page, check the build events and review the steps performed by each Jenkins plugin. You can also review the CodeDeploy deployment in detail, as shown in the following screenshot:

On completion, Jenkins should report that you have successfully deployed a web application. You can also use your ELBDNSName value to confirm that the deployed application is running successfully.

Conclusion

In this post, I outlined how you can use a Jenkins open-source automation server to deploy CodeBuild artifacts with CodeDeploy. I showed you how to construct a functioning CI/CD pipeline with these tools. I walked you through how to build the deployment infrastructure and automatically deploy application version changes from GitHub to your production environment.

Hopefully, you have found this post informative and the proposed solution useful. As always, AWS welcomes all feedback and comments.

About the Author


Noha Ghazal is a Cloud Support Engineer at Amazon Web Services. She is a subject matter expert for AWS CodeDeploy. In her role, she enjoys supporting customers with their CodeDeploy and other DevOps configurations. Outside of work she enjoys drawing portraits, fishing, and playing video games.

 

 

Improve Your App Testing With Amplify Console’s Pull Requests Previews and Cypress Testing

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/improve-your-app-testing-with-amplify-consoles-pull-requests-previews-and-cypress-testing/

Amplify Console allows developers to easily configure a Git-based workflow for continuous deployment and hosting of fullstack serverless web apps. Fullstack serverless apps comprise backend resources such as GraphQL APIs, Data and File Storage, Authentication, or Analytics, integrated with a frontend framework such as React, Gatsby, or Angular. You can read more about the Amplify Console in a previous article I wrote.

Today, we are announcing the ability to create preview URLs and to run end-to-end tests on pull requests before releasing code to production.

Pull Request previews
You can now configure Amplify Console to deploy your application to a unique URL every time a developer submits a pull request to your Git repository. The preview URL is completely different from the one used by the production site. You can see how changes look before merging the pull request into the main branch of your code repository, which triggers a new release in the Amplify Console. For fullstack apps with backend environments provisioned via the Amplify CLI, every pull request spins up an ephemeral backend that is deleted when the pull request is closed. You can test changes in complete isolation from the production environment. Amplify Console creates backend infrastructure for pull requests on private Git repositories only, to avoid incurring extra costs from unsolicited pull requests.

To learn how it works, let’s start a web application with a cloud-based authentication backend, and deploy it on Amplify Console. I first create a React application (check here to learn how to install React).

npx create-react-app amplify-console-demo                                                
cd amplify-console-demo

I initialize the Amplify environment (learn how to install the Amplify CLI first). I add a cloud-based authentication backend powered by Amazon Cognito. I accept all the default answers proposed by the Amplify CLI.

npm install aws-amplify aws-amplify-react
amplify init
amplify add auth
amplify push

I then modify src/App.js to add the front end authentication user interface. The code is available in the AWS Amplify documentation. Once ready, I start the local development server to test the application locally.

npm run start

I point my browser to http://localhost:8080 to verify the scaffolding (the below screenshot is taken from my AWS Cloud9 development environment). I click Create account to create a user, verify the SignUp flow, and authenticate to the app.

After signing up, I see the application page.

There are two important details to note. First, I am using a private GitHub repository. Amplify Console only creates backend infrastructure on pull requests for private repositories, to avoid creating unnecessary infrastructure for unsolicited pull requests. Second, the Amplify Console build process looks for dependencies in package-lock.json only. This is why I added the Amplify packages with npm and not with yarn.

When I am happy with my app, I push the code to a GitHub repo (let’s assume I already did git remote add origin ...).

git add amplify
git commit -am "initial commit"
git push origin master

The next step consists of configuring Amplify Console to build and deploy my app on every git commit. I log in to the Amplify Console, click Connect App, choose GitHub as the repository, and click Continue (the first time I do this, I need to authenticate on GitHub, using my GitHub username and password).

I select my repository and the branch I want to use as source:

Amplify Console detects the type of project and proposes a build file. I select the name of my environment (dev). The first time I use Amplify Console, I follow the instructions to create a new service role. This role authorizes Amplify Console to access AWS backend services on my behalf.

I click Next. I review the settings and click Save and Deploy. After a few seconds or minutes, my application is ready. I can point my browser to the deployment URL and verify the app is working correctly.

Now, let's enable previews for pull requests. Click Preview on the left menu and Enable Previews. To enable the previews, Amplify Console requires an app to be installed in my GitHub account. I follow the instructions provided by the console to configure my GitHub account. Once it's set up, I select a branch and click Manage to enable or disable the pull request previews. (At any time, I can uninstall the Amplify app from my GitHub account by visiting the Applications section of my GitHub account's settings.)

Now that the mechanism is in place, let’s create a pull request.

I edit App.js directly on GitHub. I customize the withAuthenticator component to change the color of the Sign In button from orange to green. I save the changes and I create a pull request.

On the Pull Request detail page, I click Show all checks to get the status of the Amplify Console test. I see AWS Amplify Console Web Preview in progress. Amplify Console creates a full backend environment to test the pull request, to build and to deploy the frontend.

Eventually, I see All checks have passed and a green mark. I click Details to get the preview URL. In case of an error, you can see the detailed log file of the build phase in the Amplify Console.

I can also check the status of the preview in the Amplify Console.

I point my browser to the preview URL to test my change. I can see the green Sign In button instead of the orange one.

When I try to authenticate using the username and password I created previously, I receive a User does not exist error message because this preview URL points to a different backend than the main application. I can see two Cognito user pools in the Cognito console, one for each environment.

I can control who can access the preview URL using similar access control settings that I use for the main URL.

When I am happy with the proposed changes, I merge the pull request on GitHub to trigger a new build and to deploy the change to the production environment. Amplify Console deletes the preview environment upon merging. The ephemeral backend environment created for the pull request also gets deleted.

Cypress testing
In addition to previewing changes before merging them to the main branch, we also added the capability to run end-to-end tests during your build process. You can use your favorite test framework to add unit or end-to-end tests to your application and automatically run the tests during the build phase. When you use the Cypress test framework, Amplify Console detects the tests in your source tree and automatically adds the testing phase in your application build process.

Only builds that pass all tests are pushed down your pipeline to the deployment phase. You can learn more about this and follow the step-by-step instructions we posted a few weeks ago.
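
As a rough sketch of what this looks like in the build specification, the spec gains a test section along the following lines. The exact keys and commands should be taken from the step-by-step post and the Amplify documentation; the commands below, such as starting the app with pm2, are assumptions for a typical React project:

test:
  phases:
    preTest:
      commands:
        - npm ci
        - npm install -g pm2 wait-on
        - pm2 start npm -- start
        - wait-on http://localhost:3000
    test:
      commands:
        - npx cypress run
    postTest:
      commands:
        - pm2 kill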

These two additions to Amplify Console allow you to gain higher confidence in the robustness of your pipeline and the quality of the code delivered to your production environment.

Availability
Web previews are available in all Regions where AWS Amplify Console is available today, at no additional cost on top of the regular Amplify Console pricing. With the AWS Free Usage Tier, you can get started for free. Upon sign up, new AWS customers receive 1,000 build minutes per month for the build and deploy feature, and 15 GB served per month and 5 GB data storage per month for the hosting.

— seb

NoSQL Workbench for Amazon DynamoDB – Available in Preview

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/nosql-workbench-for-amazon-dynamodb-available-in-preview/

I am always impressed by the flexibility of Amazon DynamoDB, providing our customers a fully-managed key-value and document database that can easily scale from a few requests per month to millions of requests per second.

The DynamoDB team released so many great features recently, from on-demand capacity, to support for native ACID transactions. Here’s a great recap of other recent DynamoDB announcements such as global tables, point-in-time recovery, and instant adaptive capacity. DynamoDB now encrypts all customer data at rest by default.

However, switching mindset from a relational database to NoSQL is not that easy. Last year we had two amazing talks at re:Invent that can help you understand how DynamoDB works, and how you can use it for your use cases:

To help you even further, we are introducing today in preview NoSQL Workbench for Amazon DynamoDB, a free, client-side application available for Windows and macOS to help you design and visualize your data model, run queries on your data, and generate the code for your application!

The three main capabilities provided by the NoSQL Workbench are:

  • Data modeler — to build new data models, adding tables and indexes, or to import, modify, and export existing data models.
  • Visualizer — to visualize data models based on their applications access patterns, with sample data that you can add manually or import via a SQL query.
  • Operation builder — to define and execute data-plane operations or generate ready-to-use sample code for them.

To see how this new tool can simplify working with DynamoDB, let’s build an application to retrieve information on customers and their orders.

Using the NoSQL Workbench
In the Data modeler, I start by creating a CustomerOrders data model, and I add a table, CustomerAndOrders, to hold my customer data and the information on their orders. You can use this tool to create a simple data model where customers and orders are in two distinct tables, each one with their own primary keys. There would be nothing wrong with that. Here I’d like to show how this tool can also help you use more advanced design patterns. By having the customer and order data in a single table, I can construct queries that return all the data I need with a single interaction with DynamoDB, speeding up the performance of my application.

As partition key, I use the customerId. This choice provides an even distribution of data across multiple partitions. The sort key in my data model will be an overloaded attribute, in the sense that it can hold different data depending on the item:

  • A fixed string, for example customer, for the items containing the customer data.
  • The order date, written using ISO 8601 strings such as 20190823, for the items containing orders.

By overloading the sort key with these two possible values, I am able to run a single query that returns the customer data and the most recent orders. For this reason, I use a generic name for the sort key. In this case, I use sk.

Apart from the partition key and the optional sort key, DynamoDB has a flexible schema, and the other attributes can be different for each item in a table. However, with this tool I have the option to describe in the data model all the possible attributes I am going to use for a table. In this way, I can check later that all the access patterns I need for my application work well with this data model.

For this table, I add the following attributes:

  • customerName and customerAddress, for the items in the table containing customer data.
  • orderId and deliveryAddress, for the items in the table containing order data.

I am not adding an orderDate attribute, because for this data model the value will be stored in the sk sort key. For a real production use case, you would probably have many more attributes to describe your customers and orders, but I am trying to keep things simple enough here to show what you can do, without getting lost in details.

Another access pattern for my application is to be able to get a specific order by ID. For that, I add a global secondary index to my table, with orderId as partition key and no sort key.

I add the table definition to the data model, and move on to the Visualizer. There, I update the table by adding some sample data. I add data manually, but I could import a few rows from a table in a MySQL database, for example to simplify a NoSQL migration from a relational database.

Now, I visualize my data model with the sample data to have a better understanding of what to expect from this table. For example, if I select a customerId, and I query for all the orders greater than a specific date, I also get the customer data at the end, because the string customer, stored in the sk sort key, is always greater than any date written in ISO 8601 syntax.

In the Visualizer, I can also see how the global secondary index on the orderId works. Interestingly, items without an orderId are not part of this index, so I get only 4 of the 6 items that are part of my sample data. This happens because DynamoDB writes a corresponding index entry only if the index key value is present in the item. If the key doesn’t appear in every table item, the index is said to be sparse. Sparse indexes are useful for queries over a subsection of a table.

I now commit my data model to DynamoDB. This step creates server-side resources such as tables and global secondary indexes for the selected data model, and loads the sample data. To do so, I need AWS credentials for an AWS account. I have the AWS Command Line Interface (CLI) installed and configured in the environment where I am using this tool, so I can just select one of my named profiles.

I move to the Operation builder, where I see all the tables in the selected AWS Region. I select the newly created CustomerAndOrders table to browse the data and build the code for the operations I need in my application.

In this case, I want to run a query that, for a specific customer, selects all orders more recent than a date I provide. As we saw previously, the overloaded sort key would also return the customer data as the last item. The Operation builder can help you use the full syntax of DynamoDB operations, for example adding conditions and child expressions. In this case, I add the condition to only return orders where the deliveryAddress contains Seattle.

I have the option to execute the operation on the DynamoDB table, but this time I want to use the query in my application. To generate the code, I select between Python, JavaScript (Node.js), or Java.

You can use the Operation builder to generate the code for all the access patterns that you plan to use with your application, using all the advanced features that DynamoDB provides, including ACID transactions.
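
For example, for the query above, the JavaScript (Node.js) option produces code that is roughly equivalent to the following DocumentClient call. The customer ID and date values are placeholders, and the exact generated code may differ:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: 'CustomerAndOrders',
  // The overloaded sk sort key holds either an order date or the string 'customer'
  KeyConditionExpression: 'customerId = :c AND sk > :d',
  // Additional condition: only orders delivered in Seattle
  FilterExpression: 'contains(deliveryAddress, :city)',
  ExpressionAttributeValues: {
    ':c': 'customer-1234', // placeholder customer ID
    ':d': '20190801',      // placeholder order date
    ':city': 'Seattle'
  }
};

docClient.query(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data.Items);
});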

Available Now
You can find how to set up NoSQL Workbench for Amazon DynamoDB (Preview) for Windows and macOS here.

We welcome your suggestions in the DynamoDB discussion forum. Let us know what you build with this new tool and how we can help you more!

Amplify Console – Hosting for Fullstack Serverless Web Apps

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amplify-console-hosting-for-fullstack-serverless-web-apps/

AWS Amplify Console is a fullstack web app hosting service, with continuous deployment from your preferred source code repository. Amplify Console was introduced in November 2018 at AWS re:Invent. Since then, the team has been listening to customer feedback and iterating quickly to release several new features; here is a short recap.

Instant Cache Invalidation
Amplify Console allows you to host single-page web apps or static sites with serverless backends via a content delivery network, or CDN. A CDN is a network of distributed servers that cache files at edge locations across the world, enabling low-latency distribution of your web file assets.

Previously, updating content on the CDN required manually invalidating the cache and waiting 15-20 minutes for changes to propagate globally. To make frequent updates, developers found workarounds such as setting lower time-to-live (TTL) values on asset headers, which enables faster updates but adversely impacts performance. Now, you no longer have to make a tradeoff between faster deployments and faster performance. On every code commit to your repository, the Amplify Console builds and deploys changes to the CDN that are viewable immediately in the browser.

“Deploy To Amplify Console” Button

Deploy To Amplify Console

When publishing your project source code on GitHub, you can make it easy for other developers to build and deploy your application by providing a “Deploy To Amplify Console” button in the Readme document. Clicking that button opens Amplify Console and proposes a three-step process to deploy your code.

You can test this yourself with these example projects and have a look at the documentation. Adding a button to your own code repository is as easy as adding this line in your Readme document (be sure to replace the username and repository name in the GitHub URL):

[![amplifybutton](https://oneclick.amplifyapp.com/button.svg)](https://console.aws.amazon.com/amplify/home#/deploy?repo=https://github.com/username/repository)

Manual Deploy
I think it is a good idea to version control everything, including a simple web site where you are the only developer. But just in case you do not want to use a source code repository as the source for your deployment, Amplify Console allows you to deploy a zip file, a local folder on your laptop, an Amazon S3 bucket, or any HTTPS URL, such as a shared repository on Dropbox.

When creating a new Amplify Console project, select the Deploy without Git provider option. Then choose your source (your laptop, Amazon S3, or an HTTPS URL).

AWS CloudFormation Integration
Developers love automation. Deploying code or infrastructure is no different: you must ensure your infrastructure deployments are automated and repeatable. AWS CloudFormation allows you to automate the creation of infrastructure in the cloud based on a YAML or JSON description. Amplify Console added three new resource types to AWS CloudFormation:

  • AWS::Amplify::App
  • AWS::Amplify::Branch
  • AWS::Amplify::Domain

These allow you, respectively, to create a new Amplify Console app, to define the Git branch, and to define the DNS domain name to use.

AWS CloudFormation connects to your source code repository to add a webhook to it. You need to include your GitHub personal access token to allow this to happen; this blog post has all the details. Remember not to hardcode credentials (or OAuth tokens) into your CloudFormation templates; use parameters instead.
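
As a sketch of how the pieces fit together, a template could look like the following. I am keeping the properties to a minimum here, and you should check the exact property names in the AWS::Amplify resource reference before using it:

Parameters:
  GitHubOAuthToken:
    Type: String
    NoEcho: true

Resources:
  AmplifyApp:
    Type: AWS::Amplify::App
    Properties:
      Name: my-amplify-app
      Repository: https://github.com/username/repository
      OauthToken: !Ref GitHubOAuthToken

  AmplifyBranch:
    Type: AWS::Amplify::Branch
    Properties:
      AppId: !GetAtt AmplifyApp.AppId
      BranchName: master

  AmplifyDomain:
    Type: AWS::Amplify::Domain
    Properties:
      AppId: !GetAtt AmplifyApp.AppId
      DomainName: example.com
      SubDomainSettings:
        - Prefix: www
          BranchName: master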

Deploy Multiple Git Branches
We believe your CI/CD tools must adapt to your team workflow, not the other way around. Amplify Console supports branch pattern deployments, allowing you to automatically deploy branches that match a specific pattern without any extra configuration. Pattern matching is based on regular expressions.

When you want to test a new feature, you typically create a new branch in Git. Amplify Console and the Amplify CLI are now detecting this and will provision a separate backend and hosting infrastructure for your serverless app.

To enable branch detection, use the left menu, click on General > Edit and turn on Branch Autodetection:

Custom HTTP Headers
You can customize Amplify Console to send customized HTTP response headers. Response headers can be used for debugging, security, or informational purposes. To add your custom headers, you select App Settings > Build Settings and then edit the buildspec. For example, to enforce TLS transport and prevent XSS attacks, you can add the following headers:

customHeaders:
  - pattern: '**/*'
    headers:
      - key: 'Strict-Transport-Security'
        value: 'max-age=31536000; includeSubDomains'
      - key: 'X-Frame-Options'
        value: 'SAMEORIGIN'
      - key: 'X-XSS-Protection'
        value: '1; mode=block'
      - key: 'X-Content-Type-Options'
        value: 'nosniff'
      - key: 'Content-Security-Policy'
        value: "default-src 'self'"

The documentation has more details.

Custom Containers for Build
Last but not least, we made several changes to the build environment. Amplify Console uses AWS CodeBuild behind the scenes. The default build container image is now based on Amazon Linux 2 and has the AWS Serverless Application Model (SAM) CLI pre-installed. If, for whatever reason, you want to use your own container for the build, you can configure Amplify Console to do so. Select App Settings > Build Settings:

And then edit the build image setting

There are a few requirements on the container image: it has to have cURL, Git, OpenSSH, and, if you are building Node.js projects, Node and npm. As usual, the details are in the documentation.

Each of these new features has been driven by your feedback, so please continue to tell us what is important to you by submitting feedback, and expect to see more changes coming in the second part of the year and beyond.

— seb

New – Local Mocking and Testing with the Amplify CLI

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-local-mocking-and-testing-with-the-amplify-cli/

The open source Amplify Framework provides a set of libraries, user interface (UI) components, and a command line interface (CLI) that make it easier to add sophisticated cloud features to your web or mobile apps by provisioning backend resources using AWS CloudFormation.

A comment I often get when talking with our customers is that, when you are adding new features or fixing bugs, it is important to iterate as fast as possible and get quick feedback from your actions. How can we improve their development experience?

Well, last week the Amplify team launched the new Predictions category, to let you quickly add machine learning capabilities to your web or mobile app. Today, they are doing it again. I am very happy to share that you can now use the Amplify CLI to mock some of the most common cloud services it provides, and test your application 100% locally!

By mocking here I mean that instead of using the actual backend component, an API in the case of cloud services, a local, simplified emulation of that API is available instead. This emulation provides the basic functionality that you need for testing during development, but not the full behavior you’d get from the production service.

With this new mocking capability you can test your changes quickly, without the need of provisioning or updating the cloud resources you are using at every step. In this way, you can set up unit and integration tests that can be executed rapidly, without affecting your cloud backend. Depending on the architecture of your app, you can set up automatic testing in your CI/CD pipeline without provisioning backend resources.

This is really useful when editing AWS AppSync resolver mapping templates, written in Apache Velocity Template Language (VTL), which take your requests as input, and output a JSON document containing the instructions for the resolver. You can now have immediate feedback on your edits, and test if your resolvers work as expected without having to wait for a deployment for every update.
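
For example, a typical AppSync request mapping template for a DynamoDB GetItem resolver looks like this (a standard template shape, shown here just to illustrate the kind of file you iterate on with mocking enabled):

{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
    }
}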

For this first release, the Amplify CLI can mock several categories locally; in this post I use the GraphQL API (AppSync with DynamoDB) and S3 storage.

API Mocking
Let’s do a quick overview of what you can do. For example, let’s create a sample app that helps people store and share the location of those nice places that allow you to refill your reusable water bottle and reduce plastic waste.

To install the Amplify CLI, I need Node.js (version >= 8.11.x) and npm (version >= 5.x):

npm install -g @aws-amplify/cli
amplify configure

Amplify supports lots of different frameworks; for this example, I am using React, and I start with a sample app (npx requires npm >= 5.2.x):

npx create-react-app refillapp
cd refillapp

I use the Amplify CLI to initialize the project and add an API. The Amplify CLI is interactive, asking you questions that drive the configuration of your backend. In this case, when asked, I select to add a GraphQL API.

amplify init
amplify add api

During the creation of the API, I edit the GraphQL schema, and define a RefillLocation in this way:

type RefillLocation @model {
  id: ID!
  name: String!
  description: String
  streetAddress: String!
  city: String!
  stateProvinceOrRegion: String
  zipCode: String!
  countryCode: String!
}

The fields that have an exclamation mark ! at the end are mandatory. The other fields are optional, and can be omitted when creating a new object.

The @model you see in the first line is a directive that uses the GraphQL Transform to define top-level object types in your API that are backed by DynamoDB, and to generate for you all the necessary CRUDL (create, read, update, delete, and list) queries and mutations, as well as the subscriptions to be notified of such mutations.

Now, I would normally need to run amplify push to configure and provision the backend resources required by the project (AppSync and DynamoDB in this case). But to get a quick feedback, I use the new local mocking capability running this command:

amplify mock

Alternatively, I can use the amplify mock api command to specifically mock just my GraphQL API. It would be the same at this stage, but it can be handy when using more than one mocking capability at a time.

The output of the mock command gives you some information on what it does, and what you can do, including the AppSync Mock endpoint:

GraphQL schema compiled successfully.

Edit your schema at /MyCode/refillapp/amplify/backend/api/refillapp/schema.graphql or place .graphql files in a directory at /MyCode/refillapp/amplify/backend/api/refillapp/schema

Creating table RefillLocationTable locally

Running GraphQL codegen

✔ Generated GraphQL operations successfully and saved at src/graphql

AppSync Mock endpoint is running at http://localhost:20002

I keep the mock command running in a terminal window to get feedback of possible errors in my code. For example, when I edit a VTL template, the Amplify CLI recognizes that immediately, and generates the updated code for the resolver. In case of a mistake, I get an error from the running mock command.

The AppSync Mock endpoint gives me access to my API and to a web interface for testing it.

I can now run GraphQL queries, mutations, and subscriptions locally for my API, using a web interface. For example, to create a new RefillLocation I build the mutation visually, like this:

To get the list of the RefillLocation objects in a city, I build the query using the same web interface, and run it against the local DynamoDB storage:
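
The query I build is equivalent to something like the following; the operation and filter names follow the standard conventions of the API generated by the @model directive:

query RefillLocationsInCity {
  listRefillLocations(filter: { city: { eq: "Seattle" } }) {
    items {
      id
      name
      streetAddress
      zipCode
    }
  }
}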

When I am confident that my data model is correct, I start building the frontend code of my app, editing the App.js file of my React app, and add functionalities that I can immediately test, thanks to local mocking.

To add the Amplify Framework to my app, including the React extensions, I use Yarn:

yarn add aws-amplify
yarn add aws-amplify-react

Now, using the Amplify Framework library, I can write code like this to run a GraphQL operation:

import API, { graphqlOperation } from '@aws-amplify/api';
import { createRefillLocation } from './graphql/mutations';

const refillLocation = {
  name: "My Favorite Place",
  streetAddress: "123 Here or There",
  zipCode: "12345",
  city: "Seattle",
  countryCode: "US"
};

await API.graphql(graphqlOperation(createRefillLocation, { input: refillLocation }));

Storage Mocking
I now want to add a new feature to my app, to let users upload and share pictures of a RefillLocation. To do so, I add the Storage category to the configuration of my project and select “Content” to use S3:

amplify add storage

Using the Amplify Framework library, I can now, straight from the browser, put, get, or remove objects from S3 using the following syntax:

import Storage from '@aws-amplify/storage';

Storage.put(name, file, {
  level: 'public'
})
.then(result => console.log(result))
.catch(err => console.log(err));

Storage.get(file, {
  level: 'public'
})
.then(result => {
  console.log(result);
  this.setState({ imageUrl: result });
  fetch(result);
})
.catch(err => alert(err));

All those interactions with S3 are marked as public, because I want my users to share their pictures with each other publicly, but the Amplify Framework supports different access levels, such as private, protected, and public. You can find more information on this in the File Access Levels section of the Amplify documentation.

Since S3 storage is supported by this new mocking capability, I use again amplify mock to test my whole application locally, including the backend used by my GraphQL API (AppSync and DynamoDB) and my content storage (S3).

If I want to test only part of my application locally, I can use amplify mock api or amplify mock storage to have only the GraphQL API, or the S3 storage, mocked locally.

Available Now
There are lots of other features that I didn’t have time to cover in this post, the best way to learn is to be curious and get hands on! You can start using Amplify by following the get-started tutorial.

Here you can find a great walkthrough of the features, and a description of how we collaborated with the open source community for this release.

Being able to mock and test your application locally can help you build and refine your ideas faster, let us know what you think in the Amplify CLI GitHub repository.

Danilo

Amplify Framework Update – Quickly Add Machine Learning Capabilities to Your Web and Mobile Apps

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amplify-framework-update-quickly-add-machine-learning-capabilities-to-your-web-and-mobile-apps/

At AWS, we want to put machine learning in the hands of every developer. For example, we have pre-trained AI services for areas such as computer vision and language that you can use without any expertise in machine learning. Today we are taking another step in that direction with the addition of a new Predictions category to the Amplify Framework. In this way, you can add and configure AI/ML use cases for your web or mobile application with a few lines of code!

AWS Amplify consists of a development framework and developer services that make it super easy to build mobile and web applications on AWS. The open-source Amplify Framework provides an opinionated set of libraries, user interface (UI) components, and a command line interface (CLI) to build a cloud backend and integrate it with your web or mobile apps. Amplify leverages a core set of AWS services organized into categories, including storage, authentication & authorization, APIs (GraphQL and REST), analytics, push notifications, chat bots, and AR/VR.

Using the Amplify Framework CLI, you can interactively initialize your project with amplify init. Then, you can go through your storage (amplify add storage) and user authentication & authorization (amplify add auth) options.

Now, you can also use amplify add predictions to configure your app to:

  • Identify text, entities, and labels in images using Amazon Rekognition, or identify text in scanned documents to get the contents of fields in forms and information stored in tables using Amazon Textract.
  • Convert text into a different language using Amazon Translate, text to speech using Amazon Polly, and speech to text using Amazon Transcribe.
  • Interpret text to find the dominant language, the entities, the key phrases, the sentiment, or the syntax of unstructured text using Amazon Comprehend.

You can select to have each of the above actions available only to authenticated users of your app, or also for guest, unauthenticated users. Based on your inputs, Amplify configures the necessary permissions using AWS Identity and Access Management (IAM) roles and Amazon Cognito.

Let’s see how Predictions works for a web application. For example, to identify text in an image using Amazon Rekognition directly from the browser, you can use the following JavaScript syntax and pass a file object:

Predictions.identify({
  text: {
    source: file,
    format: "PLAIN" // "PLAIN" uses Amazon Rekognition
  }
}).then((result) => {...})

If the image is stored on Amazon S3, you can change the source to link to the S3 bucket selected when adding storage to this project. You can also change the format to analyze a scanned document using Amazon Textract. Here’s how to extract text from a form in a document stored on S3:

Predictions.identify({
  text: {
    source: { key: "my/image" },
    format: "FORM" // "FORM" or "TABLE" use Amazon Textract
  }
}).then((result) => {...})

Here’s an example of how to interpret text using all the pre-trained capabilities of Amazon Comprehend:

Predictions.interpret({
  text: {
    source: {
      text: "text to interpret",
    },
    type: "ALL"
  }
}).then((result) => {...})

To convert text to speech using Amazon Polly, using the language and the voice selected when adding the prediction, and play it back in the browser, you can use the following code:

Predictions.convert({
  textToSpeech: {
    source: {
      text: "text to generate speech"
    }
  }
}).then(result => {
  var audio = new Audio();
  audio.src = result.speech.url;
  audio.play();
})

Available Now
You can start building your next web or mobile app using Amplify today by following the get-started tutorial here, and give us your feedback in the Amplify Framework GitHub repository.

There are lots of other options and features available in the Predictions category of the Amplify Framework. Please see this walkthrough on the AWS Mobile Blog for an in-depth example of building a machine-learning powered app.

It has never been easier to add machine learning functionalities to a web or mobile app, please let me know what you’re going to build next.

Danilo

AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-cloud-development-kit-cdk-typescript-and-python-are-now-generally-available/

Managing your Infrastructure as Code provides great benefits and is often a stepping stone for a successful application of DevOps practices. In this way, instead of relying on manually performed steps, both administrators and developers can automate provisioning of compute, storage, network, and application services required by their applications using configuration files.

For example, defining your Infrastructure as Code makes it possible to:

  • Keep infrastructure and application code in the same repository
  • Make infrastructure changes repeatable and predictable across different environments, AWS accounts, and AWS regions
  • Replicate production in a staging environment to enable continuous testing
  • Replicate production in a performance test environment that you use just for the time required to run a stress test
  • Release infrastructure changes using the same tools as code changes, so that deployments include infrastructure updates
  • Apply software development best practices to infrastructure management, such as code reviews, or deploying small changes frequently

Configuration files used to manage your infrastructure are traditionally implemented as YAML or JSON text files, but in this way you’re missing most of the advantages of modern programming languages. Specifically with YAML, it can be very difficult to detect a file truncated while transferring to another system, or a missing line when copying and pasting from one template to another.

Wouldn’t it be better if you could use the expressive power of your favorite programming language to define your cloud infrastructure? For this reason, we introduced last year in developer preview the AWS Cloud Development Kit (CDK), an extensible open-source software development framework to model and provision your cloud infrastructure using familiar programming languages.

I am super excited to share that the AWS CDK for TypeScript and Python is generally available today!

With the AWS CDK you can design, compose, and share your own custom components that incorporate your unique requirements. For example, you can create a component setting up your own standard VPC, with its associated routing and security configurations. Or a standard CI/CD pipeline for your microservices using tools like AWS CodeBuild and CodePipeline.

Personally I really like that by using the AWS CDK, you can build your application, including the infrastructure, in your IDE, using the same programming language and with the support of autocompletion and parameter suggestion that modern IDEs have built in, without having to do a mental switch between one tool, or technology, and another. The AWS CDK makes it really fun to quickly code up your AWS infrastructure, configure it, and tie it together with your application code!

How the AWS CDK works
Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an S3 bucket or an SNS topic, a static website, or even a complex, multi-stack application that spans multiple AWS accounts and regions. To foster reusability, constructs can include other constructs. You compose constructs together into stacks, which you can deploy into an AWS environment, and apps, a collection of one or more stacks.
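
To make the construct–stack–app relationship concrete, here is a minimal sketch in TypeScript (assuming the @aws-cdk/core and @aws-cdk/aws-s3 modules are installed; the names are illustrative, not from the examples below):

import cdk = require('@aws-cdk/core');
import s3 = require('@aws-cdk/aws-s3');

// A stack groups constructs into a single deployable unit.
class MyStorageStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // A single construct: an S3 bucket with default settings.
    new s3.Bucket(this, 'MyBucket');
  }
}

// An app is a collection of one or more stacks.
const app = new cdk.App();
new MyStorageStack(app, 'MyStorageStack');
app.synth();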

How to use the AWS CDK
We continuously add new features based on the feedback of our customers. That means that when creating an AWS resource, you often have to specify many options and dependencies. For example, if you create a VPC you have to think about how many Availability Zones (AZs) to use and how to configure subnets to give private and public access to the resources that will be deployed in the VPC.

To make it easy to define the state of AWS resources, the AWS Construct Library exposes the full richness of many AWS services with sensible defaults that you can customize as needed. In the case above, the VPC construct creates public and private subnets for all the AZs in the VPC by default, using 3 AZs if not specified.
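
For example, to keep the default subnet layout but limit the VPC to two Availability Zones, a sketch in TypeScript (inside a stack, assuming the @aws-cdk/aws-ec2 module) could look like this:

import ec2 = require('@aws-cdk/aws-ec2');

// Override only what you need; the construct still creates public and
// private subnets in each selected Availability Zone by default.
const vpc = new ec2.Vpc(this, 'MyCustomVpc', {
  maxAzs: 2
});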

For creating and managing CDK apps, you can use the AWS CDK Command Line Interface (CLI), a command-line tool that requires Node.js and can be installed quickly with:

npm install -g aws-cdk

After that, you can use the CDK CLI with different commands:

  • cdk init to initialize a new CDK project in the current directory, in one of the supported programming languages
  • cdk synth to print the CloudFormation template for this app
  • cdk deploy to deploy the app in your AWS Account
  • cdk diff to compare what is in the project files with what has been deployed

Just run cdk to see more of the available commands and options.

You can easily include the CDK CLI in your deployment automation workflow, for example using Jenkins or AWS CodeBuild.
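
As a rough sketch of what that could look like in AWS CodeBuild (assuming a Node.js based CDK project whose dependencies are declared in package.json), a buildspec might install the CDK CLI and deploy non-interactively:

version: 0.2

phases:
  install:
    commands:
      - npm install -g aws-cdk
      - npm install
  build:
    commands:
      - cdk synth
      # --require-approval never skips the interactive security prompt in CI
      - cdk deploy --require-approval never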

Let’s use the AWS CDK to build two sample projects, using different programming languages.

An example in TypeScript
For the first project I am using TypeScript to define the infrastructure:

cdk init app --language=typescript

Here’s a simplified view of what I want to build, without going into the details of the public/private subnets in the VPC. There is an online frontend writing messages to a queue, and an asynchronous backend consuming messages from the queue:

Inside the stack, the following TypeScript code defines the resources I need, and their relations:

  • First I define the VPC and an Amazon ECS cluster in that VPC. By using the defaults provided by the AWS Construct Library, I don’t need to specify any parameter here.
  • Then I use an ECS pattern that in a few lines of code sets up an Amazon SQS queue and an ECS service running on AWS Fargate to consume the messages in that queue.
  • The ECS pattern library provides higher-level ECS constructs which follow common architectural patterns, such as load balanced services, queue processing, and scheduled tasks.
  • A Lambda function has the name of the queue, created by the ECS pattern, passed as an environment variable and is granted permissions to send messages to the queue.
  • The code of the Lambda function and the Docker image are passed as assets. Assets allow you to bundle files or directories from your project and use them with Lambda or ECS.
  • Finally, an Amazon API Gateway endpoint provides an HTTPS REST interface to the function.
const myVpc = new ec2.Vpc(this, "MyVPC");

const myCluster = new ecs.Cluster(this, "MyCluster", {
  vpc: myVpc
});

const myQueueProcessingService = new ecs_patterns.QueueProcessingFargateService(
  this, "MyQueueProcessingService", {
    cluster: myCluster,
    memoryLimitMiB: 512,
    image: ecs.ContainerImage.fromAsset("my-queue-consumer")
  });

const myFunction = new lambda.Function(
  this, "MyFrontendFunction", {
    runtime: lambda.Runtime.NODEJS_10_X,
    timeout: Duration.seconds(3),
    handler: "index.handler",
    code: lambda.Code.asset("my-front-end"),
    environment: {
      QUEUE_NAME: myQueueProcessingService.sqsQueue.queueName
    }
  });

myQueueProcessingService.sqsQueue.grantSendMessages(myFunction);

const myApi = new apigateway.LambdaRestApi(
  this, "MyFrontendApi", {
    handler: myFunction
  });

I find this code very readable and easier to maintain than the corresponding JSON or YAML. By the way, cdk synth in this case outputs more than 800 lines of plain CloudFormation YAML.

An example in Python
For the second project I am using Python:

cdk init app --language=python

I want to build a Lambda function that is executed every 10 minutes:

When you initialize a CDK project in Python, a virtualenv is set up for you. You can activate the virtualenv and install your project requirements with:

source .env/bin/activate

pip install -r requirements.txt

Note that Python autocompletion may not work with some editors, like Visual Studio Code, if you don’t start the editor from an active virtualenv.

Inside the stack, here’s the Python code defining the Lambda function and the CloudWatch Event rule to invoke the function periodically as target:

myFunction = aws_lambda.Function(
    self, "MyPeriodicFunction",
    code=aws_lambda.Code.asset("src"),
    handler="index.main",
    timeout=core.Duration.seconds(30),
    runtime=aws_lambda.Runtime.PYTHON_3_7,
)

myRule = aws_events.Rule(
    self, "MyRule",
    schedule=aws_events.Schedule.rate(core.Duration.minutes(10)),
)
myRule.add_target(aws_events_targets.LambdaFunction(myFunction))

Again, this is easy to understand even if you don’t know the details of AWS CDK. For example, durations include the time unit and you don’t have to wonder if they are expressed in seconds, milliseconds, or days. The output of cdk synth in this case is more than 90 lines of plain CloudFormation YAML.

Available Now
There is no charge for using the AWS CDK; you pay only for the AWS resources it deploys.

To quickly get hands-on with the CDK, start with this awesome step-by-step online tutorial!

More examples of CDK projects, using different programming languages, are available in this repository:

https://github.com/aws-samples/aws-cdk-examples

You can find more information on writing your own constructs here.

The AWS CDK is open source and we welcome your contribution to make it an even better tool:

https://github.com/awslabs/aws-cdk

Check out our source code on GitHub, start building your infrastructure today using TypeScript or Python, or try different languages in developer preview, such as C# and Java, and give us your feedback!

Improve Build Performance and Save Time Using Local Caching in AWS CodeBuild

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/improve-build-performance-and-save-time-using-local-caching-in-aws-codebuild/

AWS CodeBuild now supports local caching, which makes it possible for you to persist intermediate build artifacts locally on the build host so that they are available for reuse in subsequent build runs.

Your build project can use one of two types of caching: Amazon S3 or local. In this blog post, we will discuss how to use the local caching feature.

Local caching stores a cache on a build host. The cache is available to that build host only for a limited time and until another build is complete. For example, when you are dealing with large Java projects, compilation might take a long time. You can speed up subsequent builds by using local caching. This is a good option for large intermediate build artifacts because the cache is immediately available on the build host.

Local caching increases build performance for:

  • Projects with a large, monolithic source code repository.
  • Projects that generate and reuse many intermediate build artifacts.
  • Projects that build large Docker images.
  • Projects with many source dependencies.

To use local caching

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2. Choose Create project.

3. In Project configuration, enter a name and description for the build project.

4. In Source, for Source provider, choose the source code provider type. In this example, we use an AWS CodeCommit repository name.

5. For Environment image, choose Managed image or Custom image, as appropriate. For environment type, choose Linux or Windows Server. Specify a runtime, runtime version, and service role for your project.

6. Configure the buildspec file for your project.

7. In Artifacts, expand Additional Configuration. For Cache type, choose Local, as shown here.

Local caching supports the following caching modes:

Source cache mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored. No changes are required in the buildspec file.

Docker layer cache mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.

Note

  • You can use a Docker layer cache in the Linux environment only.
  • The privileged flag must be set so that your project has the required Docker permissions.
  • You should consider the security implications before you use a Docker layer cache.

Custom cache mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other two local cache modes. If you use a custom cache:

  • Only directories can be specified for caching. You cannot specify individual files.
  • Symlinks are used to reference cached directories.
  • Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.

To use source cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Source cache, as shown here.

To use Docker layer cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Docker layer cache, as shown here.

Under Privileged, select Enable this flag if you want to build Docker images or want your builds to get elevated privileges. This grants elevated privileges to the Docker process running on the build host.

To use custom cache mode

In your buildspec file, specify the cache path, as shown here.

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Custom cache, as shown here.


version: 0.2
phases:
  pre_build:
    commands:
      - echo "Enter pre_build commands"
  build:
    commands:
      - echo "Enter build commands"
      
cache:
  paths:
    - '/root/.m2/**/*'
    - '/root/.npm/**/*'
    - 'build/**/*'
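
If you prefer to configure caching outside the console, the same settings can also be applied with the AWS CLI. Here is a sketch (the project name my-project is a placeholder):

aws codebuild update-project --name my-project \
    --cache '{"type": "LOCAL", "modes": ["LOCAL_SOURCE_CACHE", "LOCAL_DOCKER_LAYER_CACHE", "LOCAL_CUSTOM_CACHE"]}'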

Conclusion

We hope you find the information in this post helpful. If you have feedback, please leave it in the Comments section below. If you have questions, start a new thread on the AWS CodeBuild forum or contact AWS Support.


Validating AWS CodeCommit Pull Requests with AWS CodeBuild and AWS Lambda

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/devops/validating-aws-codecommit-pull-requests-with-aws-codebuild-and-aws-lambda/

Thanks to Jose Ferraris and Flynn Bundy for this great post about how to validate AWS CodeCommit pull requests with AWS CodeBuild and AWS Lambda. Both are DevOps Consultants from the AWS Professional Services’ EMEA team.

You can help ensure a high level of code quality and avoid merging code that does not integrate with previous changes by testing proposed code changes in pull requests before they are allowed to be merged. In this blog post, we’ll show you how to set up this kind of validation using AWS CodeCommit, AWS CodeBuild, and AWS Lambda. In addition, we’ll show you how to set up a pipeline to automatically build your tested, approved, and merged code changes using AWS CodePipeline.

When we talk with customers and partners, we find that they are in different stages in the adoption of DevOps methodologies such as Continuous Integration and Continuous Deployment (CI/CD). However, one of the main requirements we see is a strong emphasis on automation of delivering resources in a safe, secure, and repeatable manner. One of the fundamental principles of CI/CD is aimed at keeping everyone on the team in sync about changes happening in the codebase. With this in mind, it’s important to fail fast and fail early within a CI/CD workflow to ensure that potential issues are caught before making their way into production.

To do this, we can use services such as AWS CodeBuild for running our tests, along with AWS CodeCommit to store our source code. One of the ways we can “fail fast” is to validate pull requests with tests to see how they will integrate with the current master branch of a repository when first opened in AWS CodeCommit. By running our tests against the proposed changes prior to merging them into the master branch, we can ensure a high level of quality early on, catch any potential issues, and boost the confidence of the developer in relation to their changes. In this way, you can start validating your pull requests in AWS CodeCommit by utilizing AWS Lambda and AWS CodeBuild to automatically trigger builds and tests of your development branches.

We can also use services such as AWS CodePipeline for visualizing and creating our pipeline, and automatically building and deploying merged code that has met the validation bar for pull requests.

The following diagram shows the workflow of a pull request. The AWS CodeCommit repository contains two branches, the master branch that contains approved code, and the development branch, where changes to the code are developed. In this workflow, a pull request is created with the new code in the development branch, which the developer wants to merge into the master branch. The creation of the pull request is an event detected by AWS CloudWatch. This event will start two separate actions:
• It triggers an AWS Lambda function that will post an automated comment to the pull request that indicates a build to test the changes is about to begin.
• It also triggers an AWS CodeBuild project that will build and validate those changes.

When the build completes, AWS CloudWatch detects that event. Another AWS Lambda function posts an automated comment to the pull request with the results of the build and a link to the build logs. Based on this automated testing, the developer who opened the pull request can update the code to address any build failures, and then update the pull request with those changes. Those updates will be built, and the build results are then posted to the pull request as a comment.
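
For reference, a CloudWatch Events rule that reacts to these pull request events uses an event pattern along these lines (a sketch; you can match additional fields, such as the repository, as needed):

{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Pull Request State Change"],
  "detail": {
    "event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"]
  }
}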

Let’s show how this works in a specific example project. This project has its own set of tasks defined in the build specification file that will execute and validate this specific pull request. The buildspec.yml for our example AWS CloudFormation template contains the following code:

version: 0.2

phases:
  install:
    commands:
      - pip install cfn-lint
  build:
    commands:
      - cfn-lint --template ./template.yaml --regions $AWS_REGION
      - aws cloudformation validate-template --template-body file://$(pwd)/template.yaml
artifacts:
  files:
    - '*'

In this example we install cfn-lint, which performs various checks against our template, and we also run the AWS CloudFormation validate-template command via the AWS CLI.

Once the code included in the pull request has been built, AWS CloudWatch detects the build complete event and passes along the outcome to a Lambda function that will update the specific commit with a comment that notifies the users of the results. It also includes a link to build logs in AWS CodeBuild. This process repeats any time the pull request is updated. For example, if an initial pull request was opened but failed the set of tests associated with the project, the developer might fix the code and make an update to the currently opened pull request. This will in turn trigger the function to run again and update the comments section with the test results.
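
The comment-posting step relies on the CodeCommit PostCommentForPullRequest API. A minimal sketch in Python with boto3, assuming the incoming event detail carries the pull request ID, repository name, and commit IDs (the functions in the linked repository are more complete), could look like this:

import boto3

codecommit = boto3.client('codecommit')

def handler(event, context):
    # Assumed fields from the CloudWatch Events payload for a pull request event.
    detail = event['detail']
    codecommit.post_comment_for_pull_request(
        pullRequestId=detail['pullRequestId'],
        repositoryName=detail['repositoryNames'][0],
        beforeCommitId=detail['destinationCommit'],
        afterCommitId=detail['sourceCommit'],
        content='A build to validate this pull request has started.'
    )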

Testing and validating pull requests before they can be merged into production code is a common approach and a best practice when working with CI/CD. Once the pull request is approved and merged into the production branch, it is also a good CI/CD practice to automatically build, test, and deploy that code. This is why we’ve structured this into two different AWS CloudFormation stacks (both can be found in our GitHub repository). One contains a base layer template that contains the resources you would only need to create once, in this case the AWS Lambda functions that test and update pull requests. The second stack includes an example of a CI/CD pipeline defined in AWS CloudFormation that imports the resources from the base layer stack.

We start by creating our base layer, which creates the Lambda functions and sets up AWS IAM roles that the functions will use to interact with the various AWS services. Once this stack is in place, we can add one or more pipeline stacks which import some of the values from the base layer. The pipeline will automatically build any changes merged into the master branch of the repository. Once any pipeline stack is complete, we have an AWS CodeCommit repository, AWS CodeBuild project, and an AWS CodePipeline pipeline set up and ready for deployment.

We can now push some code into our repository on the master branch to trigger a run-through of our pipeline.

In this example we will use the following AWS CloudFormation template. This template creates a single Amazon S3 bucket. This template will be the artifact that we push through our CI/CD pipeline and deploy to our stages.

AWSTemplateFormatVersion: '2010-09-09'
Description: 'A sample CloudFormation template that we can use to validate in our pipeline'
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'

Once this code is tested and approved in a pull request, it will be merged into the production branch as part of the pull request approve and merge process. This will automatically start our pipeline in AWS CodePipeline, and will run through to the stages defined for it. For example:

Now we can make some changes to our code base in the development branch and open a pull request. First, edit the file to make a typo in our CloudFormation template so we can test the validation.

AWSTemplateFormatVersion: '2010-09-09'
Metadata: 
  License: Apache-2.0
Description: 'A sample CloudFormation template that we can use to validate in our pipeline'
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket1'

Notice that we changed the S3 bucket to be AWS::S3::Bucket1. This doesn’t exist, so cfn-lint will return a failure when it attempts to validate the template.

Now push this change into our development branch in the AWS CodeCommit repository and open the pull request against the production (master) branch.

From there, navigate to the comments section of the pull request. You should see a status update that the pull request is currently building.

Once the build is complete, you should see feedback on the outcome of the build and its results given to us as a comment.

Choose the Logs link to view details about the failure. We can see that we were able to catch an error related to linting rules failing.

We can remedy this and update our pull request with the updated code. Upon doing so, we can see another build has been kicked off by looking at the comments of the pull request. Once this has been completed we can confirm that our pull request has been validated as desired and our tests have passed.

Once this pull request is approved and merged to master, this will start our pipeline in AWS CodePipeline, which will take this code change through the specified stages.

How to Use Cross-Account ECR Images in AWS CodeBuild for Your Build Environment

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/how-to-use-cross-account-ecr-images-in-aws-codebuild-for-your-build-environment/

AWS CodeBuild now makes it possible for you to access Docker images from any Amazon Elastic Container Registry repository in another account as the build environment. With this feature, AWS CodeBuild allows you to pull any image from a repository to which you have been granted resource-level permissions.

In this blog post, we will show you how to provision a build environment using an image from another AWS account.

Here is a quick overview of the services used in our example:

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It provides a fully preconfigured build platform for most popular programming languages and build tools, including Apache Maven, Gradle, and more.

Amazon Elastic Container Registry (Amazon ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

We will use a sample Docker image in an Amazon ECR image repository in AWS account B. The CodeBuild project in AWS account A will pull the images from the Amazon ECR image repository in AWS account B.

Prerequisites:

To get started you need:

  • Two AWS accounts (AWS account A and AWS account B).
  • In AWS account B, an Amazon ECR image repository and the images you would like to use for your build environment. If you do not have an image repository and a sample image, see Docker Sample in the AWS CodeBuild User Guide.
  • In AWS account A, an AWS CodeCommit repository with a buildspec.yml file and sample code.
  • Permissions on your Amazon ECR image repository that allow AWS CodeBuild to pull the repository’s Docker image into the build environment, which you set up in the following steps.

To grant CodeBuild permissions to pull the Docker image into the build environment

1.     Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.

2.     Choose the name of the repository you created.

3.     On the Permissions tab, choose Edit JSON policy.

4.     Apply the following policy and save.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CodeBuildAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<arn of the service role>"  
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
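
One way to apply this policy from the command line is with the ECR set-repository-policy command; a sketch, assuming the repository is named my-base-image and the policy above is saved as ecr-policy.json:

aws ecr set-repository-policy --repository-name my-base-image --policy-text file://ecr-policy.json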

To use an image from account B and set up a build project in account A

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2. Choose Create project.

3. In Project configuration, enter a name and description for the build project.

4. In Source, for Source provider, choose the source code provider type. In this example, we use the AWS CodeCommit repository name.

5.  For Environment, we will pull the Docker image from AWS account B and use the image to create the build environment to build artifacts. To configure the build environment, choose Custom Image. For Image registry, choose Amazon ECR. For ECR account, choose Other ECR account.

6.  In Amazon ECR repository URI, enter the URI for the image repository from AWS account B and then choose Create build project.

7. Go to the build project you just created, and choose Start build. The build execution will download the source code from the AWS CodeCommit repository and provision the build environment using the image retrieved from the image registry.

Next steps

Now that you have seen how to use cross-account ECR images, you can integrate a build step in AWS CodePipeline and use the build environment to create artifacts and deploy your application. To integrate a build step in your pipeline, see Working with Deployments in AWS CodeDeploy in the AWS CodeDeploy User Guide.

If you have any feedback, please leave it in the Comments section below. If you have questions, please start a thread on the AWS CodeBuild forum or contact AWS Support.

How to Use Docker Images from a Private Registry for Your Build Environment

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/how-to-use-docker-images-from-a-private-registry-in-aws-codebuild-for-your-build-environment/

AWS CodeBuild now supports using a Docker image that is stored in a private registry as your runtime environment. Previously, the service supported the use of Docker images from public Docker Hub or Amazon ECR only.

In this blog post, we will show you how to use a Docker image from a private registry to create the AWS CodeBuild runtime environment. The credentials for the private registry are stored in AWS Secrets Manager.

Here is an overview of the services used in our example:

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It provides a fully preconfigured build platform for most popular programming languages and build tools, including Apache Maven, Gradle, and more.

Docker Hub repositories allow you to share container images with your team, customers, or the Docker community at large.

AWS Secrets Manager protects secrets required to access your applications, services, and IT resources. The service makes it possible for you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.

Prerequisites:

To get started you need:

  • A private repository or account.
  • A Secrets Manager secret that stores your Docker Hub credentials. The credentials are used to access your private repository.
  • An AWS account to create an AWS CodeBuild project.
  • A service role created in IAM that grants access to your Secrets Manager secret.
  • An AWS CodeCommit repository set up in your AWS account with a buildspec.yml file and sample code.

Create a private registry

If you do not have a private registry, follow the steps in the documentation on the Docker website. Alternatively, you can execute the following commands in a terminal to pull an image, get its ID, and push it to a new repository.

docker pull amazonlinux

docker images amazonlinux --format {{.ID}}

docker tag image-id your-username/repository-name:latest

docker login

docker push your-username/repository-name

Create a basic secret in AWS Secrets Manager

In AWS Secrets Manager, a basic secret is one with a minimum of metadata and a single encrypted secret value. The one version that’s stored in the secret is automatically labeled AWSCURRENT.

To create a basic secret

1.     Open the AWS Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.

2.     Choose Store a new secret.

3.     In the Select a secret type section, specify the kind of secret that you want to create by choosing Other type of secrets, and then enter a user name and password to access your private registry.

4.     In Secret key/value, create one key-value pair for your Docker Hub user name and one key-value pair for your Docker Hub password.

5.     For Secret name, enter a name, such as dockerhub. You can enter an optional description to help you remember that this is a secret for Docker Hub.

6.     Leave Disable automatic rotation selected because the keys correspond to your Docker Hub credentials.

7.     Review your settings, and then choose Store secret.
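
The same secret can also be created from the command line; a sketch, assuming the credentials are stored under the username and password keys (replace the placeholder values with your own):

aws secretsmanager create-secret --name dockerhub \
    --description "Docker Hub credentials for CodeBuild" \
    --secret-string '{"username": "your-username", "password": "your-password"}'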

Create an AWS CodeBuild project to pull Docker images from a private registry

Note

If your private registry is in your VPC, it must have public internet access. AWS CodeBuild cannot pull an image from a private IP address in a VPC.

To use a Docker image from a private registry in your AWS CodeBuild project

1.     Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2.     Choose Create project.

3.     In Project configuration, for Project name, enter a name and description for the build project.

4.     In Source, for Source provider, choose the source code provider type. In this example, we are using the name of an AWS CodeCommit repository.

5. We will pull the Docker image from a private registry and use the image to create the build environment to build artifacts. To configure the build environment, in Environment, choose Custom image. For Environment type, choose Linux or Windows. For Custom image type, choose Other location, and then enter the image location and the ARN or name of your Secrets Manager credentials.

6. Go to the build project you just created, and choose Start build. The build execution will download the source code from the AWS CodeCommit repository and provision the build environment using the image retrieved from the registry.

Conclusion

Using the above guidelines, you can now provision a build environment using Docker images from a private registry.

Next steps:

Now that you have seen how to use Docker images to provision build environments from a private registry, you can integrate a build step in AWS CodePipeline and use the build environment to create artifacts and deploy your application. To integrate a build step in your pipeline, see Working with Deployments in AWS CodeDeploy in the AWS CodeDeploy User Guide.

If you have feedback, please leave it in the Comments section below. If you have questions, please start a thread on the AWS CodeBuild forum or contact AWS Support.


New – AWS Toolkits for PyCharm, IntelliJ (Preview), and Visual Studio Code (Preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-aws-toolkits-for-pycharm-intellij-preview-and-visual-studio-code-preview/

Software developers have their own preferred tools. Some use powerful editors, others Integrated Development Environments (IDEs) that are tailored for specific languages and platforms. In 2014 I created my first AWS Lambda function using the editor in the Lambda console. Now, you can choose from a rich set of tools to build and deploy serverless applications. For example, the editor in the Lambda console was greatly enhanced last year when AWS Cloud9 was released. For .NET applications, you can use the AWS Toolkit for Visual Studio and AWS Tools for Visual Studio Team Services.

AWS Toolkits for PyCharm, IntelliJ, and Visual Studio Code

Today, we are announcing the general availability of the AWS Toolkit for PyCharm. We are also announcing the developer preview of the AWS Toolkits for IntelliJ and Visual Studio Code, which are under active development in GitHub. These open source toolkits will enable you to easily develop serverless applications, including a full create, step-through debug, and deploy experience in the IDE and language of your choice, be it Python, Java, Node.js, or .NET.

For example, using the AWS Toolkit for PyCharm you can:

These toolkits are distributed under the open source Apache License, Version 2.0.

Installation

Some features use the AWS Serverless Application Model (SAM) CLI. You can find installation instructions for your system here.

The AWS Toolkit for PyCharm is available via the IDEA Plugin Repository. To install it, in the Settings/Preferences dialog, click Plugins, search for “AWS Toolkit”, use the checkbox to enable it, and click the Install button. You will need to restart your IDE for the changes to take effect.

The AWS Toolkits for IntelliJ and Visual Studio Code are currently in developer preview and under active development. You are welcome to build and install these from the GitHub repositories:

Building a Serverless application with PyCharm

After installing AWS SAM CLI and AWS Toolkit, I create a new project in PyCharm and choose SAM on the left to create a serverless application using the AWS Serverless Application Model. I call my project hello-world in the Location field. Expanding More Settings, I choose which SAM template to use as the starting point for my project. For this walkthrough, I select the “AWS SAM Hello World”.

In PyCharm you can use credentials and profiles from your AWS Command Line Interface (CLI) configuration. You can change AWS region quickly if you have multiple environments.
The AWS Explorer shows Lambda functions and AWS CloudFormation stacks in the selected AWS region. Starting from a CloudFormation stack, you can see which Lambda functions are part of it.

The function handler is in the app.py file. After I open the file, I click on the Lambda icon on the left of the function declaration to have the option to run the function locally or start a local step-by-step debugging session.

First, I run the function locally. I can configure the payload of the event that is provided as input for the local invocation, starting from the event templates provided for most services, such as Amazon API Gateway, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and so on. You can use a file for the payload, or select the share checkbox to make it available to other team members. The function is executed locally, but here you can choose the credentials and the region to be used if the function is calling other AWS services, such as Amazon Simple Storage Service (S3) or Amazon DynamoDB.

A local container is used to emulate the Lambda execution environment. This function is implementing a basic web API, and I can check that the result is in the format expected by the API Gateway.
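
Under the hood, the toolkit drives the AWS SAM CLI, so you can reproduce roughly the same local invocation from a terminal; a sketch, where HelloWorldFunction and event.json are placeholders from the generated project:

sam local invoke HelloWorldFunction --event event.json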

After that, I want to get more information on what my code is doing. I set a breakpoint and start a local debugging session. I use the same input event as before. Again, you can choose the credentials and region for the AWS services used by the function.

I step over the HTTP request in the code to inspect the response in the Variables tab. Here you have access to all local variables, including the event and the context provided as input to the function.

After that, I resume the program to reach the end of the debugging session.

Now I am confident enough to deploy the serverless application by right-clicking on the project (or the SAM template file). I can create a new CloudFormation stack, or update an existing one. For now, I create a new stack called hello-world-prod. For example, you can have a stack for production, and one for testing. I select an S3 bucket in the region to store the package used for the deployment. If your template has parameters, here you can set up the values used by this deployment.

After a few minutes, the stack creation is complete and I can run the function in the cloud with a right-click in the AWS Explorer. Here there is also the option to jump to the source code of the function.

As expected, the result of the remote invocation is the same as the local execution. My serverless application is in production!

Using these toolkits, developers can test locally to find problems before deployment, change the code of their application or the resources they need in the SAM template, and update an existing stack, quickly iterating until they reach their goal. For example, they can add an S3 bucket to store images or documents, or a DynamoDB table to store their users, or change the permissions used by their functions.

I am really excited by how much faster and easier it is to build your ideas on AWS. Now you can use your preferred environment to accelerate even further. I look forward to seeing what you will do with these new tools!

Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/devops/build-a-continuous-delivery-pipeline-for-your-container-images-with-amazon-ecr-as-source/

Today, we are launching support for Amazon Elastic Container Registry (Amazon ECR) as a source provider in AWS CodePipeline. You can now initiate an AWS CodePipeline pipeline update by uploading a new image to Amazon ECR. This makes it easier to set up a continuous delivery pipeline and use the AWS Developer Tools for CI/CD.

You can use Amazon ECR as a source if you’re implementing a blue/green deployment with AWS CodeDeploy from the AWS CodePipeline console. For more information about using the Amazon Elastic Container Service (Amazon ECS) console to implement a blue/green deployment without CodePipeline, see Implement Blue/Green Deployments for AWS Fargate and Amazon ECS Powered by AWS CodeDeploy.

This post shows you how to create a complete, end-to-end continuous deployment (CD) pipeline with Amazon ECR and AWS CodePipeline. It walks you through setting up a pipeline to build your images when the upstream base image is updated.

Prerequisites

To follow along, you must have these resources in place:

  • A source control repository with your base image Dockerfile and a Docker image repository to store your image. In this walkthrough, we use a simple Dockerfile for the base image:
    FROM alpine:3.8

    RUN apk update

    RUN apk add nodejs
  • A source control repository with your application Dockerfile and source code and a Docker image repository to store your image. For the application Dockerfile, we use our base image and then add our application code:
    FROM 012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image

    ENV PORT=80

    EXPOSE $PORT

    COPY app.js /app/

    CMD ["node", "/app/app.js"]

This walkthrough uses AWS CodeCommit for the source control repositories and Amazon ECR  for the Docker image repositories. For more information, see Create an AWS CodeCommit Repository in the AWS CodeCommit User Guide and Creating a Repository in the Amazon Elastic Container Registry User Guide.

Note: The source control repositories and image repositories must be created in the same AWS Region.

Set up IAM service roles

In this walkthrough you use AWS CodeBuild and AWS CodePipeline to build your Docker images and push them to Amazon ECR. Both services use Identity and Access Management (IAM) service roles to make calls to Amazon ECR API operations. The service roles must have a policy that provides permissions to make these Amazon ECR calls. The following procedure helps you attach the required permissions to the CodeBuild service role.

To create the CodeBuild service role

  1. Follow these steps to use the IAM console to create a CodeBuild service role.
  2. On step 10, make sure to also add the AmazonEC2ContainerRegistryPowerUser policy to your role.

CodeBuild service role policies

Create a build specification file for your base image

A build specification file (or build spec) is a collection of build commands and related settings, in YAML format, that AWS CodeBuild uses to run a build. Add a buildspec.yml file to your source code repository to tell CodeBuild how to build your base image. The example build specification used here does the following:

  • Pre-build stage:
    • Sign in to Amazon ECR.
    • Set the repository URI to your ECR image and add an image tag with the first seven characters of the Git commit ID of the source.
  • Build stage:
    • Build the Docker image and tag the image with latest and the Git commit ID.
  • Post-build stage:
    • Push the image with both tags to your Amazon ECR repository.
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG

To add a buildspec.yml file to your source repository

  1. Open a text editor and then copy and paste the build specification above into a new file.
  2. Replace the REPOSITORY_URI value (012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image) with your Amazon ECR repository URI (without any image tag) for your Docker image. Replace base-image with the name for your base Docker image.
  3. Commit and push your buildspec.yml file to your source repository.
    git add .
    git commit -m "Adding build specification."
    git push

Create a build specification file for your application

Add a buildspec.yml file to your source code repository to tell CodeBuild how to build your source code and your application image. The example build specification used here does the following:

  • Pre-build stage:
    • Sign in to Amazon ECR.
    • Set the repository URI to your ECR image and add an image tag based on the CodeBuild build ID.
  • Build stage:
    • Build the Docker image and tag the image with latest and the build ID-based image tag.
  • Post-build stage:
    • Push the image with both tags to your ECR repository.
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=012345678910.dkr.ecr.us-east-1.amazonaws.com/hello-world
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
artifacts:
  files:
    - imageDetail.json

To add a buildspec.yml file to your source repository

  1. Open a text editor and then copy and paste the build specification above into a new file.
  2. Replace the REPOSITORY_URI value (012345678910.dkr.ecr.us-east-1.amazonaws.com/hello-world) with your Amazon ECR repository URI (without any image tag) for your Docker image. Replace hello-world with the container name in your service’s task definition that references your Docker image.
  3. Commit and push your buildspec.yml file to your source repository.
    git add .
    git commit -m "Adding build specification."
    git push

Create a continuous deployment pipeline for your base image

Use the AWS CodePipeline wizard to create your pipeline stages:

  1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  2. On the Welcome page, choose Create pipeline.
    If this is your first time using AWS CodePipeline, an introductory page appears instead of Welcome. Choose Get Started Now.
  3. On the Step 1: Name page, for Pipeline name, type the name for your pipeline and choose Next step. For this walkthrough, the pipeline name is base-image.
  4. On the Step 2: Source page, for Source provider, choose AWS CodeCommit.
    1. For Repository name, choose the name of the AWS CodeCommit repository to use as the source location for your pipeline.
    2. For Branch name, choose the branch to use, and then choose Next step.
  5. On the Step 3: Build page, choose AWS CodeBuild, and then choose Create project.
    1. For Project name, choose a unique name for your build project. For this walkthrough, the project name is base-image.
    2. For Operating system, choose Ubuntu.
    3. For Runtime, choose Docker.
    4. For Version, choose aws/codebuild/docker:17.09.0.
    5. For Service role, choose Existing service role, choose the CodeBuild service role you’ve created earlier, and then clear the Allow AWS CodeBuild to modify this service role so it can be used with this build project box.
    6. Choose Continue to CodePipeline.
    7. Choose Next.
  6. On the Step 4: Deploy page, choose Skip and acknowledge the pop-up warning.
  7. On the Step 5: Review page, review your pipeline configuration, and then choose Create pipeline.

Base image pipeline

Create a continuous deployment pipeline for your application image

The execution of the application image pipeline is triggered by changes to the application source code and changes to the upstream base image. You first create a pipeline, and then edit it to add a second source stage.
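
Behind the scenes, the Amazon ECR source action is driven by image push events delivered through CloudWatch Events. A sketch of an event pattern that matches a successful push to the base image repository (illustrative only; the rule CodePipeline creates for you may differ):

{
  "source": ["aws.ecr"],
  "detail-type": ["ECR Image Action"],
  "detail": {
    "action-type": ["PUSH"],
    "result": ["SUCCESS"],
    "repository-name": ["base-image"],
    "image-tag": ["latest"]
  }
}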

    1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
    2. On the Welcome page, choose Create pipeline.
    3. On the Step 1: Name page, for Pipeline name, type the name for your pipeline, and then choose Next step. For this walkthrough, the pipeline name is hello-world.
    4. For Service role, choose Existing service role, and then choose the CodePipeline service role you modified earlier.
    5. On the Step 2: Source page, for Source provider, choose Amazon ECR.
      1. For Repository name, choose the name of the Amazon ECR repository to use as the source location for your pipeline. For this walkthrough, the repository name is base-image.

Amazon ECR source configuration

  1. On the Step 3: Build page, choose AWS CodeBuild, and then choose Create project.
    1. For Project name, choose a unique name for your build project. For this walkthrough, the project name is hello-world.
    2. For Operating system, choose Ubuntu.
    3. For Runtime, choose Docker.
    4. For Version, choose aws/codebuild/docker:17.09.0.
    5. For Service role, choose Existing service role, choose the CodeBuild service role you’ve created earlier, and then clear the Allow AWS CodeBuild to modify this service role so it can be used with this build project box.
    6. Choose Continue to CodePipeline.
    7. Choose Next.
  2. On the Step 4: Deploy page, choose Skip and acknowledge the pop-up warning.
  3. On the Step 5: Review page, review your pipeline configuration, and then choose Create pipeline.

The pipeline will fail, because it is missing the application source code. Next, you edit the pipeline to add an additional action to the source stage.

  1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  2. On the Welcome page, choose your pipeline from the list. For this walkthrough, the pipeline name is hello-world.
  3. On the pipeline page, choose Edit.
  4. On the Editing: hello-world page, in Edit: Source, choose Edit stage.
  5. Choose the existing source action, and choose the edit icon.
    1. Change Output artifacts to BaseImage, and then choose Save.
  6. Choose Add action, and then enter a name for the action (for example, Code).
    1. For Action provider, choose AWS CodeCommit.
    2. For Repository name, choose the name of the AWS CodeCommit repository for your application source code.
    3. For Branch name, choose the branch.
    4. For Output artifacts, specify SourceArtifact, and then choose Save.
  7. On the Editing: hello-world page, choose Save and acknowledge the pop-up warning.

Application image pipeline

Test your end-to-end pipeline

Your pipeline should have everything for running an end-to-end native AWS continuous deployment. Now, test its functionality by pushing a code change to your base image repository.

  1. Make a change to your configured source repository, and then commit and push the change.
  2. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  3. Choose your pipeline from the list.
  4. Watch the pipeline progress through its stages. As the base image is built and pushed to Amazon ECR, see how the second pipeline is triggered, too. When the execution of your pipeline is complete, your application image is pushed to Amazon ECR, and you are now ready to deploy your application. For more information about continuously deploying your application, see Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment in the AWS CodePipeline User Guide.

Conclusion

In this post, we showed you how to create a complete, end-to-end continuous deployment (CD) pipeline with Amazon ECR and AWS CodePipeline. You saw how to initiate an AWS CodePipeline pipeline update by uploading a new image to Amazon ECR. Support for Amazon ECR in AWS CodePipeline makes it easier to set up a continuous delivery pipeline and use the AWS Developer Tools for CI/CD.

How to Test and Debug AWS CodeDeploy Locally Before You Ship Your Code

Post Syndicated from Kirankumar Chandrashekar original https://aws.amazon.com/blogs/devops/how-to-test-and-debug-aws-codedeploy-locally-before-you-ship-your-code/

AWS CodeDeploy is a powerful service for automating deployments to Amazon EC2, AWS Lambda, and on-premises servers. However, it can take some effort to get complex deployments up and running or to identify the error in your application when something goes wrong.

When I set up new deployments or debug existing ones, I like to test and debug locally for these reasons:

  • To speed up the iteration process.
  • To isolate potential issues.
  • To validate code.

You can test application code packages on any machine that has the CodeDeploy agent installed before you deploy it through the service. Likewise, to debug locally, you just need to install the CodeDeploy agent on any machine, including your local server or EC2 instance.

In this blog post, I will walk you through the steps to validate and debug a sample application package using the codedeploy-local command. You can find the sample package in this GitHub repository.

Prerequisites

Install the CodeDeploy agent on any supported instance type. For information, see Use the AWS CodeDeploy Agent to Validate a Deployment Package on a Local Machine in the AWS CodeDeploy User Guide.

Step 1

Verify the CodeDeploy agent is installed and ready for local testing. By default, codedeploy-local is installed in the following locations:

On Amazon Linux, RHEL, or Ubuntu Server:

/opt/codedeploy-agent/bin/codedeploy-local

On Windows Server:

C:\ProgramData\Amazon\CodeDeploy\bin

For simplicity, I am creating an alias for /opt/codedeploy-agent/bin/codedeploy-local as codedeploy-local so I don’t have to type the absolute path every time. This is optional.

alias codedeploy-local='sudo /opt/codedeploy-agent/bin/codedeploy-local'

When I execute the codedeploy-local command on the Linux terminal, I get the following response from the agent, which indicates that the agent is installed:

[ec2-user@ip-xx-xx-xx-xx ~]$ codedeploy-local 
ERROR: Expecting appspec file at location /home/ec2-user/appspec.yml but it is not found there. Please either run the CLI from within a directory containing the appspec.yml file or specify a bundle location containing an appspec.yml file in its root directory

If you receive an error that the codedeploy-local command is not available or the package was not found, go back to the prerequisites and install the agent.

Step 2
To test the sample application package using the codedeploy-local command, I have to make sure that the application package is available on the local machine. The sample package I am testing here is an Apache (httpd)-based application.

Use wget to download the package to the local machine.

wget https://s3.amazonaws.com/aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip

Now that the sample package is available locally, I can either unzip the package or use the zip file for testing with the codedeploy-local command.

To test the zip file (archive) package (SampleApp_Linux.zip) with the codedeploy-local command, use the -l or --bundle-location option along with the -t or --type option as shown:

On Linux server:

codedeploy-local --bundle-location /home/ec2-user/CodeDeployPackage/SampleApp_Linux.zip -t zip --deployment-group my-deployment-group

On Windows server:

codedeploy-local --bundle-location C:/path/to/local/bundle.zip --type zip --deployment-group my-deployment-group

To test the unarchived (extracted) package, either change the directory (cd) to its top-level directory or provide the absolute path to the application package.

The package can be executed by providing the absolute path to the content as shown here:

codedeploy-local --bundle-location /path/to/local/bundle/directory

Or by changing the directory (cd) to the location of the unarchived package and executing the following command:

codedeploy-local

Executing the codedeploy-local command in the directory where the sample package is unzipped shows whether the deployment was successful or failed.

Here is a successful deployment execution and result:

[ec2-user@ip-xx-xx-xx-xx CodeDeployPackage]$ ls -a
.  ..  appspec.yml  index.html  LICENSE.txt  SampleApp_Linux.zip  scripts

[ec2-user@ip-xx-xx-xx-xx CodeDeployPackage]$ codedeploy-local
Starting to execute deployment from within folder /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local
See the deployment log at /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/logs/scripts.log for more details
AppSpec file valid. Local deployment successful

Step 3

Check the codedeploy-local logs and the deployment archive.

In the previous step, I was able to see that the local deployment was successful. The output included:

  • The log location.
  • The location where the deployment-archive was uploaded. It will be used as a staging directory for that deployment.

Because the --deployment-group (-g) option was not provided, a local deployment group folder was created in the following location:

/opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local

The following shows the listing of the files in the codedeploy-local deployment directory for a deployment:

[ec2-user@ip-xx-xx-xx-xx ~]$ ls /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local
deployment-archive  logs

[ec2-user@ip-xx-xx-xx-xx deployment-archive]$ ls -a /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/deployment-archive/
.  ..  appspec.yml  index.html  LICENSE.txt  SampleApp_Linux.zip  scripts

[ec2-user@ip-xx-xx-xx-xx deployment-archive]$ ls -a /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/logs
.  ..  scripts.log

In the directory path generated for each deployment, default-local-deployment-group  is the name of the deployment group and d-H3OZK261S-local is the deployment ID.

The scripts.log shows the execution logs for the codedeploy-local command for a deployment group and deployment ID. Here is an example of a scripts.log that shows the execution of each lifecycle event defined in the appspec.yml:

[ec2-user@ip-xx-xx-xx-xx deployment-archive]$ cat /opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-H3OZK261S-local/logs/scripts.log
2018-03-13 23:02:37 LifecycleEvent - ApplicationStop
2018-03-13 23:02:37 Script - scripts/stop_server
2018-03-13 23:02:37 [stdout]Stopping httpd: [  OK  ]
2018-03-13 23:02:37 LifecycleEvent - BeforeInstall
2018-03-13 23:02:37 Script - scripts/install_dependencies
2018-03-13 23:02:37 [stdout]Loaded plugins: priorities, update-motd, upgrade-helper
2018-03-13 23:02:37 [stdout]Package httpd-2.2.34-1.16.amzn1.x86_64 already installed and latest version
2018-03-13 23:02:37 [stdout]Nothing to do
2018-03-13 23:02:37 Script - scripts/start_server
2018-03-13 23:02:37 [stdout]Starting httpd: [  OK  ]

There is another log file in this location that comes in handy when deploying the code on the local machine:

/var/log/aws/codedeploy-agent/codedeploy-local.log

You can enable verbose logging in the codedeploy-agent configuration file by setting the parameter :verbose: to true.

By default, the location of the configuration file is:

Amazon Linux, RHEL, or Ubuntu Server instances

/etc/codedeploy-agent/conf/codedeployagent.yml

Windows Server

C:/ProgramData/Amazon/CodeDeploy/conf.yml
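For example, on an Amazon Linux instance you could enable verbose logging with a quick sketch like the following. It assumes the default configuration file still contains a :verbose: false entry and that the agent is managed with the service command; you can just as easily edit the file by hand.

sudo sed -i 's/^:verbose: false/:verbose: true/' /etc/codedeploy-agent/conf/codedeployagent.yml
sudo service codedeploy-agent restart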

Other features for debugging issues locally with codedeploy-local

The codedeploy-local command has other features that you can use to debug and troubleshoot issues.

Override the lifecycle hooks mentioned in the appspec.yml file

You can use codedeploy-local to override the lifecycle hooks provided in the appspec.yml. In this example, only the ApplicationStop lifecycle hook defined in the appspec.yml file will be executed. All other hooks will be ignored.

codedeploy-local -e ApplicationStop

In the same way, you can override the order in which the CodeDeploy agent executes multiple lifecycle hooks. This can help you verify and adjust the sequence before the deployment is performed on a server. For more information, see the AppSpec ‘hooks’ Section in the AWS CodeDeploy User Guide.

For example, this command executes the BeforeInstall lifecycle hook first and then executes the ApplicationStop lifecycle hook.

codedeploy-local -e BeforeInstall,ApplicationStop

Execute scripts specifically for codedeploy-local

If there are scripts that are used for local testing only and not required for the CodeDeploy deployment, then you can use the $DEPLOYMENT_GROUP_NAME variable, which has a value equal to LocalFleet.

Here are other environment variables and their values:

$APPLICATION_NAME: The location of the deployment package (for example, /home/ec2-user/CodeDeployPackage)

$DEPLOYMENT_ID: Unique per deployment (for example, d-LTVP5L6YY-local)

$DEPLOYMENT_GROUP_ID: The name of the deployment group. When the -g option is used for the command, this value will be passed. For example, in codedeploy-local -g testing, this value is testing. If this option is not set, the value of this environment variable is default-local-deployment-group

$LIFECYCLE_EVENT: The lifecycle hook that echoed this environment variable (for example, ApplicationStop)
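Building on the $DEPLOYMENT_GROUP_NAME check described above, here is a minimal sketch of a lifecycle hook script that runs extra steps only during codedeploy-local deployments. The seed-test-data.sh helper is hypothetical; substitute whatever local-only setup you need.

#!/bin/bash
# LocalFleet is the deployment group name that codedeploy-local assigns.
if [ "$DEPLOYMENT_GROUP_NAME" = "LocalFleet" ]; then
    echo "Local deployment $DEPLOYMENT_ID detected; running local-only setup"
    ./scripts/seed-test-data.sh    # hypothetical local-only helper
else
    echo "Server deployment; skipping local-only setup"
fi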

Override the CodeDeploy agent configuration

You can override the CodeDeploy agent configuration and use your own configuration file from a custom location. This makes it possible to test multiple configurations with local deployments by using the -c, --agent-configuration-file option when executing the codedeploy-local command. For the options you can set, see AWS CodeDeploy Agent Configuration Reference in the AWS CodeDeploy User Guide.

By default, configuration files are stored in the following locations:

On Amazon Linux, RHEL, or Ubuntu Server:

/etc/codedeploy-agent/conf/codedeployagent.yml

On Windows Server:

C:/ProgramData/Amazon/CodeDeploy/conf.yml

Using a custom configuration helps when verbose logging is required for package testing. You can do this by using the -c or --agent-configuration-file option, without changing the default configuration file. Here is an example that shows the use of this option:

codedeploy-local -e BeforeInstall,ApplicationStop -c /<local-path>/

For example, on Amazon Linux, RHEL, or Ubuntu Server instances, when the config file is in /etc/codedeployagent.yml, the command is:

codedeploy-local -e BeforeInstall,ApplicationStop -c /etc/codedeployagent.yml

For example, on Windows Server instances, when the config file is in C:/ProgramData/conf.yml, the command is:

codedeploy-local -e BeforeInstall,ApplicationStop -c C:/ProgramData/conf.yml

Point to an application package in an S3 bucket or GitHub repository

If the application package is stored in an S3 bucket or GitHub repository, codedeploy-local can be executed without first downloading the file onto the local machine. You can do this by using the -l, --bundle-location and -t, --type options with the codedeploy-local command.

Here is an example for deploying a sample application package located in an S3 bucket:

codedeploy-local -l s3://aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip -t zip

Here is an example for deploying a sample application package from a public GitHub repository:

codedeploy-local --bundle-location https://api.github.com/repos/awslabs/aws-codedeploy-sample-tomcat/zipball/master --type zip

If you use GitHub, make sure that the application package with the appspec.yml is in the root of the repository. If these contents are in a subfolder path, download the package to the local instance or server and then:

  • Execute codedeploy-local from the directory where the file exists.

-OR-

  • Use the -t, --type option with the value directory and -l, --bundle-location with the local path (see the example after this list).
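For the second option, the command might look like the following sketch; the local path is just an example.

codedeploy-local --bundle-location /home/ec2-user/aws-codedeploy-sample-tomcat --type directory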

Troubleshooting common errors using codedeploy-local

The codedeploy-local command can be used to detect if the appspec.yml is in valid YAML format. If the format is invalid, you get the following error:

/usr/share/ruby/vendor_ruby/2.0/psych.rb:205:in `parse': (<unknown>): mapping values are not allowed in this context at line 10 column 13 (Psych::SyntaxError)
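If you want to check the YAML syntax on its own before running a deployment, one quick option is to parse the file with Ruby, which is already installed as a dependency of the CodeDeploy agent. This is only a sketch:

ruby -ryaml -e 'YAML.load_file("appspec.yml")' && echo "appspec.yml parses as valid YAML"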

If there is an invalid lifecycle hook in the appspec.yml file, the deployment fails with this error:

ERROR: appspec.yml file contains unknown lifecycle events: ["BeforeInstall1"]

The name of a lifecycle hook is case-sensitive. The following error is returned because the BeforeInstall lifecycle hook was entered as Beforeinstall:

ERROR: appspec.yml file contains unknown lifecycle events: ["Beforeinstall"]

If there is any error in the scripts provided for execution in any lifecycle hooks (for example, a problem in the BeforeInstall script), the execution logs show something like this:

codedeploy-local -g testing
Starting to execute deployment from within folder /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local
Your local deployment failed while trying to execute your script at /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local/deployment-archive/scripts/install_dependencies
See the deployment log at /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local/logs/scripts.log for more details

For the preceding error, when you look at the logs in the deployment directory for the deployment group, you will see something like this:

cat /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local/logs/scripts.log
2018-03-21 03:34:04 LifecycleEvent - ApplicationStop
2018-03-21 03:34:04 Script - scripts/stop_server
2018-03-21 03:34:04 [stdout]LocalFleet
2018-03-21 03:34:04 [stdout]/home/ec2-user/CodeDeployPackage
2018-03-21 03:34:04 [stdout]d-6UBAIVVSK-local
2018-03-21 03:34:04 [stdout]testing
2018-03-21 03:34:04 [stdout]ApplicationStop
2018-03-21 03:34:04 [stdout]Stopping httpd: [  OK  ]
2018-03-21 03:34:04 LifecycleEvent - BeforeInstall
2018-03-21 03:34:04 Script - scripts/install_dependencies
2018-03-21 03:34:04 [stdout]Loaded plugins: priorities, update-motd, upgrade-helper
2018-03-21 03:34:04 [stdout]No package httpd1 available.
2018-03-21 03:34:04 [stderr]Error: Nothing to do

This log snippet shows that the install_dependencies script had a package called httpd1 that is not available for installation.
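When a scripts.log grows long, a quick way to surface only the error output is to filter for the [stderr] marker; the path below is from the example above.

grep '\[stderr\]' /opt/codedeploy-agent/deployment-root/testing/d-6UBAIVVSK-local/logs/scripts.log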

If the appspec.yml is not found in the root of the application package, you will see an error like this:

/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:213:in `parse_app_spec': The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-BE59ORH9I-local/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/default-local-deployment-group/d-BE59ORH9I-local/deployment-archive/appspec.yml". Consult the AWS CodeDeploy Appspec documentation for more information at http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html (RuntimeError)
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:100:in `initialize'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:147:in `new'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:147:in `block (3 levels) in map'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:146:in `each'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:146:in `block (2 levels) in map'
    from /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/command_executor.rb:68:in `execute_command'
    from /opt/codedeploy-agent/lib/aws/codedeploy/local/deployer.rb:85:in `block in execute_events'
    from /opt/codedeploy-agent/lib/aws/codedeploy/local/deployer.rb:84:in `each'
    from /opt/codedeploy-agent/lib/aws/codedeploy/local/deployer.rb:84:in `execute_events'
    from /opt/codedeploy-agent/bin/codedeploy-local:117:in `<main>'

Conclusion

The codedeploy-local command can be used to validate and debug an application package for deployments to Amazon EC2 instances or on-premises servers. With codedeploy-local, you can test and fix errors on a local machine during the code development phase. CodeDeploy local deployments also make it possible to change the order of the lifecycle hooks, so you can restructure the appspec.yml and add commands on the fly.

How to Run Headless Front-End Tests with AWS Cloud9 and AWS CodeBuild

Post Syndicated from Eric Z. Beard original https://aws.amazon.com/blogs/devops/how-to-run-headless-front-end-tests-with-aws-cloud9-and-aws-codebuild/

Automated testing is a critical component to a well-designed software development lifecycle. When you test front-end applications, you often use a browser in combination with testing frameworks. A headless browser is one that is used on a server that does not normally need to run visual applications. In this blog post, I will show you how to configure AWS Cloud9 and AWS CodeBuild to support testing an Angular application with the headless version of Chrome. AWS Cloud9 has deep integration with services such as AWS Lambda, and the environment is easily accessible anywhere, from any internet-connected device.

AWS Cloud9

By default, Cloud9 runs on an Amazon EC2 instance that is managed for you. You can also run it on any Linux machine that is accessible through SSH.

First, create a Cloud9 environment.

  1. Sign in to the AWS Management Console, scroll down to Developer Tools, and choose Cloud9.
  2. On the following page, choose Create Environment.
  3. Enter a name for your environment and then choose Next Step.
  4. On the following page, leave the defaults for the time being and click Next Step.
  5. On the following page, choose Create Environment.

It might take a few minutes for your environment to initialize. Behind the scenes, an EC2 instance is created for you in the region you have currently selected in the console. In the environment, press Alt-T to bring up a bash terminal tab. For the remaining steps in this post, you will enter commands into this tab.
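If you prefer to script your setup, you can also create an environment from the AWS CLI. This is only a sketch; the name, description, and instance type are placeholders.

aws cloud9 create-environment-ec2 \
    --name headless-test-env \
    --description "Environment for headless front-end tests" \
    --instance-type t2.micro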

There is a lot to take in if this is your first time using Cloud9. If you need help getting set up or want to learn more, see the Cloud9 User Guide.

Install and configure Angular

The first thing we will do in our new environment is to install and configure an Angular application.

  1. Upgrade Node to the latest version supported by AWS Lambda. (At the time of this writing, that’s 8.10.)
    nvm install 8.10
  2. Install the Angular CLI using npm, the Node Package Manager. Install it as a global package with the -g option so that it is available to run from anywhere in your environment.
    npm install -g @angular/cli
  3. Use the Angular CLI to create an Angular application.
    ng new my-app
    cd my-app/
  4. Run the application to make sure everything is working as expected. To preview a running application in Cloud9, the app must run on port 8080, 8081, or 8082 on localhost. With Angular, you must also disable the default host header check.
    ng serve --port 8080 --host localhost --disable-host-check

     

    On the toolbar, next to Run, choose Preview and then choose Preview Running Application. You should see the default Angular welcome page in the preview pane.

  5. Press Ctrl-C to stop serving, and then, from the my-app directory, try to test your application.
    ng test --watch=false

    That obviously doesn’t work the way you would expect it to on a regular workstation. The testing framework can’t find Chrome because we are running on a headless EC2 instance. To start addressing the problem, first install a package called Puppeteer as a development dependency in your application.

    I’d like to give credit here to Alex Bainter, a software developer who wrote a comprehensive blog post about replacing PhantomJS with headless Chromium and Karma. His post was extremely helpful to me when I had to figure this out for the first time.

  6. Install Puppeteer and its dependencies.
    npm i -D puppeteer
    npm i -D @angular-devkit/build-angular
  7. You can get a good look at the missing Chrome libraries by running the ldd command on the binary that comes with Puppeteer.
    cd node_modules/puppeteer/.local-chromium/linux-564778/chrome-linux/

    (By the time you read this post, the version number in that path will probably be different. Look in the puppeteer/.local-chromium directory to see what it is for your installation.)

    ldd chrome | grep not

    You should see output that looks like this:

    libXcursor.so.1 => not found
    libXdamage.so.1 => not found
    libXfixes.so.3 => not found
    libcups.so.2 => not found
    libXss.so.1 => not found
    libXrandr.so.2 => not found
    libpangocairo-1.0.so.0 => not found
    libpango-1.0.so.0 => not found
    libcairo.so.2 => not found
    libatk-1.0.so.0 => not found
    libatk-bridge-2.0.so.0 => not found
    libgtk-3.so.0 => not found
    libgdk-3.so.0 => not found
    libgdk_pixbuf-2.0.so.0 => not found

 

Install headless Chrome

Now comes the tricky part. Installing headless Chrome on an Amazon Linux EC2 instance is no simple task. One strategy is to install the various dependencies by compiling from source, but the chain of dependencies for Chrome, which includes gtk+ and glib, soon gets out of hand. I found another blogger who solved the problem by borrowing from the CentOS and Fedora package repositories. Thanks to Yuanyi for this part of the solution.

  1. Install yum packages to cover basic dependencies.
    sudo yum install -y libXcursor libXdamage libcups libXss libXrandr \
        cups-libs dbus-glib libXinerama cairo cairo-gobject pango
  2. Borrow packages from CentOS and Fedora.
    sudo rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/atk-2.22.0-3.el7.x86_64.rpm
    sudo rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/at-spi2-atk-2.22.0-2.el7.x86_64.rpm
    sudo rpm -ivh --nodeps http://mirror.centos.org/centos/7/os/x86_64/Packages/at-spi2-core-2.22.0-1.el7.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/g/GConf2-3.2.6-7.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libXScrnSaver-1.2.2-6.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libxkbcommon-0.3.1-1.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libwayland-client-1.2.0-3.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/l/libwayland-cursor-1.2.0-3.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/os/Packages/g/gtk3-3.10.4-1.fc20.x86_64.rpm
    sudo rpm -ivh --nodeps http://dl.fedoraproject.org/pub/archive/fedora/linux/releases/16/Fedora/x86_64/os/Packages/gdk-pixbuf2-2.24.0-1.fc16.x86_64.rpm
  3. Edit src/karma.conf.js to require Puppeteer and set the CHROME_BIN environment variable. Here is the full content of that file after the changes.
    const puppeteer = require('puppeteer');
    process.env.CHROME_BIN = puppeteer.executablePath();

    module.exports = function (config) {
        config.set({
            basePath: '',
            frameworks: ['jasmine', '@angular-devkit/build-angular'],
            plugins: [
                require('karma-jasmine'),
                require('karma-chrome-launcher'),
                require('karma-jasmine-html-reporter'),
                require('karma-coverage-istanbul-reporter'),
                require('@angular-devkit/build-angular/plugins/karma')
            ],
            client: {
                clearContext: false // leave Jasmine Spec Runner output visible in browser
            },
            coverageIstanbulReporter: {
                reports: ['html', 'lcovonly'],
                fixWebpackSourcePaths: true
            },
            angularCli: {
                environment: 'dev'
            },
            reporters: ['progress', 'kjhtml'],
            port: 8080,
            colors: true,
            logLevel: config.LOG_INFO,
            autoWatch: true,
            browsers: ['ChromeHeadlessNoSandbox'],
            customLaunchers: {
                ChromeHeadlessNoSandbox: {
                    base: 'ChromeHeadless',
                    flags: ['--no-sandbox']
                }
            },
            singleRun: false
        });
    };
  4. Make a small adjustment to your test specification in src/app/app.component.spec.ts so that it is checking for the title in the test called "should render title in a h1 tag". Then, from the my-app directory, run ng test again.
    ng test --watch=false

If you see that green SUCCESS indicator, then you have done it! You installed Angular and created an application, installed Puppeteer, and by filling in the missing libraries for Chrome, you made it possible to run headless Chrome tests in Cloud9!

AWS CodeBuild

The next piece of the puzzle is your CI/CD pipeline. When a developer checks in new code, you want to test that code with a continuous integration tool like AWS CodeBuild. With CodeBuild, the problem related to headless Chrome is slightly different than it was with Cloud9, because the default build environment for Node apps is an Ubuntu image. You still need to install Chromium and its dependencies, but Ubuntu packages make it easier.

  1. Navigate to the CodeBuild console and create a new build project. Give it a name and configure the source repository. You will need to store your code for this exercise with one of the supported source providers so that CodeBuild knows where to find it when you start a build. Since you are already logged in to the AWS console, AWS CodeCommit is a good option, but you could also choose Amazon S3, Bitbucket, or GitHub.
  2. Configure the build environment. For Operating system, choose Ubuntu. For Runtime, choose Node.js. You can specify your own container image for the build, but the buildspec.yml described in step 3 works out of the box with the default image.
  3. For the build specification, provide the following buildspec.yml file in the root directory of the source code repository.
    
    version: 0.1
    phases:
      install:
        commands:
    
          # Install the Angular CLI
          - npm install -g @angular/cli
    
          # Install puppeteer as a dev dependency
          - npm i -D puppeteer
          - npm i -D @angular-devkit/build-angular
    
          # Print out missing libs
          - echo "Missing Libs" && ldd ./node_modules/puppeteer/.local-chromium/linux-549031/chrome-linux/chrome | grep not || true
    
          # Update the package lists
          - apt-get update

          # Upgrade installed packages
          - apt-get upgrade -y
    
          # Install apt-transport-https
          - apt-get install -y apt-transport-https
    
          # Use apt to install the Chrome dependencies
          - apt-get install -y libxcursor1
          - apt-get install -y libgtk-3-dev
          - apt-get install -y libxss1
          - apt-get install -y libasound2
          - apt-get install -y libnspr4
          - apt-get install -y libnss3
          - apt-get install -y libx11-xcb1
    
          # Print out missing libs
          - echo "Missing Libs" && ldd ./node_modules/puppeteer/.local-chromium/linux-549031/chrome-linux/chrome | grep not || true
    
          # Install project dependencies
          - npm install
    
      pre_build:
        commands:
          - echo "Nothing to pre_build"
    
      build:
        commands:
    
          - printenv 
    
          # Build the project
          - ng build
    
          # Run headless Chrome tests
          - ng test --watch=false
          - printenv
    
      post_build:
        commands:
    
          - printenv
    
          # Deploy the project to S3
    
          - if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "1" ]; then aws s3 sync --delete dist/ "s3://${BUCKET_NAME}"; else echo "Skipping aws sync"; fi
    
    artifacts:
      files:
        - src/*
    
    

    Feel free to remove those ldd and printenv statements, but it is worth taking a look at the output to get a better understanding of what is going on with the build.

  4. Specify the location for artifacts. This step isn’t required, but it makes it possible to incorporate the build project into AWS CodePipeline.
  5. Expand Advanced Settings and configure an environment variable named BUCKET_NAME for the website bucket; this is the ${BUCKET_NAME} referenced in the post_build phase of the buildspec.
  6. Configure bucket permissions. CodeBuild can’t write to the S3 buckets unless you give the service explicit permissions to do so; this is one of the most common causes of build failures for projects that involve S3. Choose Continue and Save to create the build project, then navigate to the IAM console, search for the CodeBuild service role that was just created for you, and add the following policy as an inline policy to give the role access to those buckets.
    
    {
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    			"Sid": "VisualEditor0",
    			"Effect": "Allow",
    			"Action": "s3:*",
    			"Resource": [
    				"arn:aws:s3:::YOUR_BUCKET_FOR_ARTIFACTS",
    				"arn:aws:s3:::YOUR_BUCKET_FOR_ARTIFACTS/*"
    			]
    		},
    		{
    			"Sid": "VisualEditor1",
    			"Effect": "Allow",
    			"Action": "s3:*",
    			"Resource": [
    				"arn:aws:s3:::YOUR_BUCKET_FOR_THE_WEBSITE",
    				"arn:aws:s3:::YOUR_BUCKET_FOR_THE_WEBSITE/*"
    			]
    		}
    	]
    }
    
    
  7. You should now be able to start the build and see that the compiled website has been copied to your S3 bucket after the build is complete.
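If you would rather trigger builds from the command line, a sketch like the following starts a build and then lists the objects synced to the website bucket; the project and bucket names are placeholders.

aws codebuild start-build --project-name my-angular-app
aws s3 ls "s3://YOUR_BUCKET_FOR_THE_WEBSITE/" --recursive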

 

Alternative Cloud9 installation using SSH and Ubuntu

You can run the Cloud9 IDE from a Linux machine that you create, rather than letting Cloud9 provision it for you. Create a Cloud9 environment and choose Connect and run in remote server. For more information about this type of setup, see Creating an SSH Environment in the AWS Cloud9 User Guide.

After you have configured the environment, the work you have to do is much simpler than on the Amazon Linux instance, because there are Ubuntu packages that install the required dependencies. Follow the instructions earlier in this post until you get to the “Install headless Chrome” section. Issue this command:

sudo apt install -y libxcursor1 libgtk-3-dev libxss1 libasound2 libnspr4 libnss3

You don’t need to borrow from any of the CentOS or Fedora repositories.
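As a quick sanity check after installing those packages, you can repeat the earlier ldd test from your application directory. The linux-<revision> folder name depends on your Puppeteer version, so the wildcard below is just a convenience.

ldd node_modules/puppeteer/.local-chromium/linux-*/chrome-linux/chrome | grep not || echo "No missing libraries"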

Make changes to karma.conf.js as described earlier and you should then be ready to test your application.

 

Summary

You are now able to run headless integration tests using Cloud9 by installing Puppeteer and filling in the required Chrome dependencies. You can also extend this to the container image used to test your application with CodeBuild. Automated testing is vital to a trustworthy DevOps pipeline, and Cloud9 opens up new possibilities for developers of all types, including front-end developers.

Happy coding! –EZB