Tag Archives: AWS CodeDeploy

AWS Developer Tools Expands Integration to Include GitHub

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/aws-developer-tools-expands-integration-to-include-github/

AWS Developer Tools is a set of services that include AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Together, these services help you securely store and maintain version control of your application’s source code and automatically build, test, and deploy your application to AWS or your on-premises environment. These services are designed to enable developers and IT professionals to rapidly and safely deliver software.

As part of our continued commitment to extend the AWS Developer Tools ecosystem to third-party tools and services, we’re pleased to announce AWS CodeStar and AWS CodeBuild now integrate with GitHub. This will make it easier for GitHub users to set up a continuous integration and continuous delivery toolchain as part of their release process using AWS Developer Tools.

In this post, I will walk through integrating GitHub with AWS CodeStar and triggering builds in AWS CodeBuild from GitHub pull requests.

Prerequisites:

You’ll need an AWS account, a GitHub account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CodeStar, AWS CodeBuild, AWS CodePipeline, Amazon EC2, and Amazon S3.

 

Integrating GitHub with AWS CodeStar

AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. Its unified user interface helps you easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, so you can start releasing code faster.

When AWS CodeStar launched in April of this year, it used AWS CodeCommit as the hosted source repository. You can now choose between AWS CodeCommit or GitHub as the source control service for your CodeStar projects. In addition, your CodeStar project dashboard lets you centrally track GitHub activities, including commits, issues, and pull requests. This makes it easy to manage project activity across the components of your CI/CD toolchain. Adding the GitHub dashboard view will simplify development of your AWS applications.

In this section, I will show you how to use GitHub as the source provider for your CodeStar projects. I’ll also show you how to work with recent commits, issues, and pull requests in the CodeStar dashboard.

Sign in to the AWS Management Console and from the Services menu, choose CodeStar. In the CodeStar console, choose Create a new project. You should see the Choose a project template page.

CodeStar Project

Choose an option by programming language, application category, or AWS service. I am going to choose the Ruby on Rails web application that will be running on Amazon EC2.

On the Project details page, you’ll now see the GitHub option. Type a name for your project, and then choose Connect to GitHub.

Project details

You’ll see a message requesting authorization to connect to your GitHub repository. When prompted, choose Authorize, and then type your GitHub account password.

Authorize

This connects your GitHub identity to AWS CodeStar through OAuth. You can always review your settings by navigating to your GitHub application settings.

Installed GitHub Apps

You’ll see AWS CodeStar is now connected to GitHub:

Create project

You can choose a public or private repository. GitHub offers free accounts for users and organizations working on public and open source projects and paid accounts that offer unlimited private repositories and optional user management and security features.

In this example, I am going to choose the public repository option. Edit the repository description, if you like, and then choose Next.

Review your CodeStar project details, and then choose Create Project. On the Choose an Amazon EC2 Key Pair page, select your key pair, and then choose Create Project.

Key Pair

On the Review project details page, you’ll see Edit Amazon EC2 configuration. Choose this link to configure instance type, VPC, and subnet options. AWS CodeStar requires a service role to create and manage AWS resources and IAM permissions. This role will be created for you when you select the AWS CodeStar would like permission to administer AWS resources on your behalf check box.

Choose Create Project. It might take a few minutes to create your project and resources.

Review project details

When you create a CodeStar project, you’re added to the project team as an owner. If this is the first time you’ve used AWS CodeStar, you’ll be asked to provide the following information, which will be shown to others:

  • Your display name.
  • Your email address.

This information is used in your AWS CodeStar user profile. User profiles are not project-specific, but they are limited to a single AWS region. If you are a team member in projects in more than one region, you’ll have to create a user profile in each region.

User settings

Choose Next. AWS CodeStar will create a GitHub repository with your configuration settings (for example, https://github.com/biyer/ruby-on-rails-service).

When you integrate your integrated development environment (IDE) with AWS CodeStar, you can continue to write and develop code in your preferred environment. The changes you make will be included in the AWS CodeStar project each time you commit and push your code.

IDE

After setting up your IDE, choose Next to go to the CodeStar dashboard. Take a few minutes to familiarize yourself with the dashboard. You can easily track progress across your entire software development process, from your backlog of work items to recent code deployments.

Dashboard

After the application deployment is complete, choose the endpoint that will display the application.

Pipeline

This is what you’ll see when you open the application endpoint:

The Commit history section of the dashboard lists the commits made to the Git repository. If you choose the commit ID or the Open in GitHub option, you are taken directly to that commit in your GitHub repository.

Commit history

Your AWS CodeStar project dashboard is where you and your team view the status of your project resources, including the latest commits to your project, the state of your continuous delivery pipeline, and the performance of your instances. This information is displayed on tiles that are dedicated to a particular resource. To see more information about any of these resources, choose the details link on the tile. The console for that AWS service will open on the details page for that resource.

Issues

You can also filter issues based on their status and the assigned user.

Filter

AWS CodeBuild Now Supports Building GitHub Pull Requests

CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can use prepackaged build environments to get started quickly or you can create custom build environments that use your own build tools.

We recently announced support for GitHub pull requests in AWS CodeBuild. This functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild. You can use the AWS CodeBuild or AWS CodePipeline consoles to run AWS CodeBuild. You can also automate the running of AWS CodeBuild by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the AWS CodeBuild Plugin for Jenkins.
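
For example, once a project exists, a build can be started from the AWS CLI with a single command (the project name below is just a placeholder):

aws codebuild start-build --project-name my-github-project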

AWS CodeBuild

In this section, I will show you how to trigger a build in AWS CodeBuild with a pull request from GitHub through webhooks.

Open the AWS CodeBuild console at https://console.aws.amazon.com/codebuild/. Choose Create project. If you already have a CodeBuild project, you can choose Edit project, and then follow along. CodeBuild can connect to AWS CodeCommit, S3, Bitbucket, and GitHub to pull source code for builds. For Source provider, choose GitHub, and then choose Connect to GitHub.

Configure

After you’ve successfully linked GitHub and your CodeBuild project, you can choose a repository in your GitHub account. CodeBuild also supports connections to any public repository. You can review your settings by navigating to your GitHub application settings.

GitHub Apps

On Source: What to Build, for Webhook, select the Rebuild every time a code change is pushed to this repository check box.

Note: You can select this option only if, under Repository, you chose Use a repository in my account.

Source

In Environment: How to build, for Environment image, select Use an image managed by AWS CodeBuild. For Operating system, choose Ubuntu. For Runtime, choose Base. For Version, choose the latest available version. For Build specification, you can provide a collection of build commands and related settings, in YAML format (buildspec.yml) or you can override the build spec by inserting build commands directly in the console. AWS CodeBuild uses these commands to run a build. In this example, the output is the string “hello.”
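
For reference, a minimal buildspec.yml that produces the “hello” output described above might look like this (a simple sketch, not the exact file used in the console):

version: 0.2
phases:
  build:
    commands:
      - echo "hello"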

Environment

On Artifacts: Where to put the artifacts from this build project, for Type, choose No artifacts. (This is also the type to choose if you are just running tests or pushing a Docker image to Amazon ECR.) You also need an AWS CodeBuild service role so that AWS CodeBuild can interact with dependent AWS services on your behalf. Unless you already have a role, choose Create a role, and for Role name, type a name for your role.

Artifacts

In this example, leave the advanced settings at their defaults.

If you expand Show advanced settings, you’ll see options for customizing your build, including:

  • A build timeout.
  • A KMS key to encrypt all the artifacts that the builds for this project will use.
  • Options for building a Docker image.
  • Elevated permissions during your build action (for example, accessing Docker inside your build container to build a Dockerfile).
  • Resource options for the build compute type.
  • Environment variables (built-in or custom). For more information, see Create a Build Project in the AWS CodeBuild User Guide.
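
To illustrate the last item, environment variables can be declared directly in the buildspec or pulled from Parameter Store at build time. Here is a minimal sketch with hypothetical names:

version: 0.2
env:
  variables:
    STAGE: dev                          # plain-text variable
  parameter-store:
    DB_PASSWORD: /CodeBuild/dbPassword  # resolved from Parameter Store at build time
phases:
  build:
    commands:
      - echo "Building for stage $STAGE"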

Advanced settings

You can use the AWS CodeBuild console to create a parameter in Amazon EC2 Systems Manager. Choose Create a parameter, and then follow the instructions in the dialog box. (In that dialog box, for KMS key, you can optionally specify the ARN of an AWS KMS key in your account. Amazon EC2 Systems Manager uses this key to encrypt the parameter’s value during storage and decrypt during retrieval.)
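
If you prefer the AWS CLI, the equivalent of that dialog box is roughly the following (the parameter name, value, and KMS key ARN are placeholders):

aws ssm put-parameter --name /CodeBuild/dbPassword --value MySecretValue --type SecureString --key-id arn:aws:kms:us-east-1:111111111111:key/example-key-id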

Create parameter

Choose Continue. On the Review page, either choose Save and build or choose Save to run the build later.

Choose Start build. When the build is complete, the Build logs section should display detailed information about the build.

Logs

To demonstrate a pull request, I will fork the repository as a different GitHub user, make commits to the forked repo, check in the changes to a newly created branch, and then open a pull request.

Pull request

As soon as the pull request is submitted, you’ll see CodeBuild start executing the build.

Build

GitHub sends an HTTP POST payload to the webhook’s configured URL (highlighted here), which CodeBuild uses to download the latest source code and execute the build phases.

Build project

If you expand the Show all checks option for the GitHub pull request, you’ll see that CodeBuild has completed the build, all checks have passed, and a deep link is provided in Details, which opens the build history in the CodeBuild console.

Pull request

Summary:

In this post, I showed you how to use GitHub as the source provider for your CodeStar projects and how to work with recent commits, issues, and pull requests in the CodeStar dashboard. I also showed you how you can use GitHub pull requests to automatically trigger a build in AWS CodeBuild — specifically, how this functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild.


About the author:

Balaji Iyer is an Enterprise Consultant for the Professional Services Team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly scalable distributed systems, serverless architectures, large scale migrations, operational security, and leading strategic AWS initiatives. Before he joined Amazon, Balaji spent more than a decade building operating systems, big data analytics solutions, mobile services, and web applications. In his spare time, he enjoys experiencing the great outdoors and spending time with his family.

 

Skill up on how to perform CI/CD with AWS Developer Tools

Post Syndicated from Chirag Dhull original https://aws.amazon.com/blogs/devops/skill-up-on-how-to-perform-cicd-with-aws-devops-tools/

This is a guest post from Paul Duvall, CTO of Stelligent, a division of HOSTING.

I co-founded Stelligent, a technology services company that provides DevOps Automation on AWS as a result of my own frustration in implementing all the “behind the scenes” infrastructure (including builds, tests, deployments, etc.) on software projects on which I was developing software. At Stelligent, we have worked with numerous customers looking to get software delivered to users quicker and with greater confidence. This sounds simple but it often consists of properly configuring and integrating myriad tools including, but not limited to, version control, build, static analysis, testing, security, deployment, and software release orchestration. What some might not realize is that there’s a new breed of build, deploy, test, and release tools that help reduce much of the undifferentiated heavy lifting of deploying and releasing software to users.

 
I’ve been using AWS since 2009, and I, along with many at Stelligent, have worked with the AWS service teams as part of the AWS Developer Tools betas that are now generally available (including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy). I’ve combined the experience we’ve had with customers with this specialized knowledge of the AWS Developer and Management Tools to provide a unique course that shows multiple ways to use these services to deliver software to users quicker and with confidence.

 
In DevOps Essentials on AWS, you’ll learn how to accelerate software delivery and speed up feedback loops by using AWS Developer Tools to automate infrastructure and deployment pipelines for applications running on AWS. The course demonstrates solutions for various DevOps use cases for Amazon EC2, AWS OpsWorks, AWS Elastic Beanstalk, AWS Lambda (Serverless), and Amazon ECS (Containers), while defining infrastructure as code and covering AWS Developer Tools, including AWS CodeStar, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.

 
In this course, you see me use the AWS Developer and Management Tools to create comprehensive continuous delivery solutions for a sample application using many types of AWS service platforms. You can run the exact same sample and/or fork the GitHub repository (https://github.com/stelligent/devops-essentials) and extend or modify the solutions. I’m excited to share how you can use AWS Developer Tools to create these solutions for your customers as well. There’s also an accompanying website for the course (http://www.devopsessentialsaws.com/) that I use in the video to walk through the course examples which link to resources located in GitHub or Amazon S3. In this course, you will learn how to:

  • Use AWS Developer and Management Tools to create a full-lifecycle software delivery solution
  • Use AWS CloudFormation to automate the provisioning of all AWS resources
  • Use AWS CodePipeline to orchestrate the deployments of all applications
  • Use AWS CodeCommit while deploying an application onto EC2 instances using AWS CodeBuild and AWS CodeDeploy
  • Deploy applications using AWS OpsWorks and AWS Elastic Beanstalk
  • Deploy an application using Amazon EC2 Container Service (ECS) along with AWS CloudFormation
  • Deploy serverless applications that use AWS Lambda and API Gateway
  • Integrate all AWS Developer Tools into an end-to-end solution with AWS CodeStar

To learn more, see DevOps Essentials on AWS video course on Udemy. For a limited time, you can enroll in this course for $40 and save 80%, a $160 saving. Simply use the code AWSDEV17.

 
Stelligent, an AWS Partner Network Advanced Consulting Partner, holds the AWS DevOps Competency and more than 100 AWS technical certifications. To stay updated on DevOps best practices, visit www.stelligent.com.

Automating Blue/Green Deployments of Infrastructure and Application Code using AMIs, AWS Developer Tools, & Amazon EC2 Systems Manager

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/devops/bluegreen-infrastructure-application-deployment-blog/

Previous DevOps blog posts have covered a number of use cases for infrastructure and application deployment automation.

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You can use one AMI to launch as many instances as you need. It is a security best practice to customize and harden your base AMI with required operating system updates, and, if you are using AWS native services for continuous security monitoring and operations, you are strongly encouraged to bake agents such as those for Amazon EC2 Systems Manager (SSM), Amazon Inspector, CodeDeploy, and CloudWatch Logs into the base AMI. A customized and hardened AMI is often referred to as a “golden AMI.” The use of golden AMIs to create EC2 instances in your AWS environment allows for fast and stable application deployment and scaling, secure application stack upgrades, and versioning.

In this post, using the DevOps automation capabilities of Systems Manager and the AWS Developer Tools (CodePipeline, CodeDeploy, CodeCommit, CodeBuild), I will show you how to use AWS CodePipeline to orchestrate end-to-end blue/green deployments of a golden AMI and application code. Systems Manager Automation is a powerful security feature for enterprises that want to mature their DevSecOps practices.

Here are the high-level phases and primary services covered in this use case:

 

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/Bluegreen-AMI-Application-Deployment-blog.

This sample will create a pipeline in AWS CodePipeline with the building blocks to support blue/green deployments of infrastructure and application code. The sample includes a custom Lambda step in the pipeline that executes Systems Manager Automation to build a golden AMI and update the Auto Scaling group with the golden AMI ID for every rollout of new application code. This guarantees that every new application deployment runs on a fully patched and customized AMI, and it automates the rollout of a hardened AMI with every new version of the application.
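
Under the hood, the Lambda step kicks off a Systems Manager Automation execution, which you could also start manually from the AWS CLI along these lines (the document name and source AMI parameter are placeholders for the ones created by the sample):

aws ssm start-automation-execution --document-name GoldenAMIAutomationDoc --parameters sourceAMIid=ami-0abc1234def567890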

 

 

We will build and run this sample in three parts.

Part 1: Setting up the AWS developer tools and deploying a base web application

Part 1 of the AWS CloudFormation template creates the initial Java-based web application environment in a VPC. It also creates all the required components of Systems Manager Automation, CodeCommit, CodeBuild, and CodeDeploy to support the blue/green deployments of the infrastructure and application resulting from ongoing code releases.

Part 1 of the AWS CloudFormation stack creates these resources:

After Part 1 of the AWS CloudFormation stack creation is complete, go to the Outputs tab and click the Elastic Load Balancing link. You will see the following home page for the base web application:

Make sure you have all the outputs from the Part 1 stack handy. You need to supply them as parameters in Part 3 of the stack.

Part 2: Setting up your CodeCommit repository

In this part, you will commit and push your sample application code into the CodeCommit repository created in Part 1. To access the initial Git commands to clone the empty repository to your local machine, click Connect to go to the AWS CodeCommit console. Make sure you have the IAM permissions required to access AWS CodeCommit from the command line interface (CLI).

After you’ve cloned the repository locally, download the sample application files from the part2 folder of the Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following commands to commit and push the files to the CodeCommit repository:

git add .
git commit -a -m "add all files from the AWS Java Tomcat CodeDeploy application"
git push

After all the files are pushed successfully, the repository should look like this:

 

Part 3: Setting up CodePipeline to enable blue/green deployments

Part 3 of the AWS CloudFormation template creates the pipeline in AWS CodePipeline and all the required components.

a) Source: The pipeline is triggered by any change to the CodeCommit repository.

b) BuildGoldenAMI: This Lambda step executes the Systems Manager Automation document to build the golden AMI. After the golden AMI is successfully created, a new launch configuration that references the new AMI is applied to the Auto Scaling group of the application deployment group. You can watch the progress of the automation in the EC2 console, under Systems Manager > Automations.

c) Build: This step uses the application build spec file to build the application build artifact. Here are the CodeBuild execution steps and their status:

d) Deploy: This step clones the Auto Scaling group, launches the new instances with the new AMI, deploys the application changes, reroutes the traffic from the elastic load balancer to the new instances and terminates the old Auto Scaling group. You can see the execution steps and their status in the CodeDeploy console.

After the CodePipeline execution is complete, you can access the application by clicking the Elastic Load Balancing link. You can find it in the output of Part 1 of the AWS CloudFormation template. Any subsequent commit to the application code in the CodeCommit repository triggers the pipeline and deploys the infrastructure and code with an updated AMI.
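
To check pipeline progress without opening the console, you can also query it from the AWS CLI (the pipeline name below is a placeholder for whatever Part 3 of the stack created in your account):

aws codepipeline get-pipeline-state --name my-bluegreen-pipeline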

 

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Developer Tools forum.


About the author

 

Ramesh Adabala is a Solutions Architect on the Southeast Enterprise Solution Architecture team at Amazon Web Services.

Launch – .NET Core Support in AWS CodeStar and AWS CodeBuild

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-net-core-support-in-aws-codestar-and-aws-codebuild/

A few months ago, I introduced the AWS CodeStar service, which allows you to quickly develop, build, and deploy applications on AWS. AWS CodeStar helps development teams to increase the pace of releasing applications and solutions while reducing some of the challenges of building great software.

When the CodeStar service launched in April, it was released with several project templates for Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda using five different programming languages: JavaScript, Java, Python, Ruby, and PHP. Each template provisions the underlying AWS Code Services and configures an end-to-end continuous delivery pipeline for the targeted application using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.

As I have participated in AWS Summits around the world discussing AWS CodeStar, many of you have expressed interest in the availability of .NET templates in CodeStar and in using CodeStar to deploy .NET applications. Therefore, it is with great pleasure and excitement that I announce that you can now develop, build, and deploy cross-platform .NET Core applications with the AWS CodeStar and AWS CodeBuild services.

AWS CodeBuild has added the ability to build and deploy .NET Core application code to both Amazon EC2 and AWS Lambda. This new CodeBuild capability has enabled the addition of two new project templates in AWS CodeStar for .NET Core applications. These new project templates enable you to deploy .NET Core applications to Amazon EC2 Linux instances, and they provide everything you need to get started quickly, including .NET Core sample code and a full software development toolchain.

Of course, I can’t wait to try out the new addition to the project templates within CodeStar and the updated .NET application build options with CodeBuild. For my test scenario, I will use CodeStar to create, build, and deploy my .NET Core ASP.NET web application on EC2. Then, I will extend my ASP.NET application by creating a .NET Lambda function to be compiled and deployed with CodeBuild as a part of my application’s pipeline. This Lambda function can then be called and used within my ASP.NET application to extend the functionality of my web application.

So, let’s get started!

First, I’ll log into the CodeStar console and start a new CodeStar project. I am presented with the option to select a project template.


Right now, I would like to focus on building .NET Core projects, so I’ll filter the project templates by selecting C# in the Programming Languages section. Now, CodeStar shows me only the new .NET Core project templates that I can use to build web applications and services with ASP.NET Core.

I think I’ll use the ASP.NET Core web application project template for my first CodeStar .NET Core application. As you can see by the project template information display, my web application will be deployed on Amazon EC2, which signifies to me that my .NET Core code will be compiled and packaged using AWS CodeBuild and deployed to EC2 using the AWS CodeDeploy service.


My hunch about the services is confirmed on the next screen when CodeStar shows the AWS CodePipeline and the AWS services that will be configured for my new project. I’ll name this web application project, ASPNetCore4Tara, and leave the default Project ID that CodeStar generates from the project name. Yes, I know that this is one of the goofiest names I could ever come up with, but, hey, it will do for this test project so I’ll go ahead and click the Next button. I should mention that you have the option to edit your Amazon EC2 configuration for your project on this screen before CodeStar starts configuring and provisioning the services needed to run your application.

Since my ASP.NET Core web application will be deployed to an Amazon EC2 instance, I will need to choose an Amazon EC2 key pair so that I can SSH into the instance. For my ASPNetCore4Tara project, I will use an existing Amazon EC2 key pair I have previously used for launching my other EC2 instances. However, if I were creating this project and did not have an EC2 key pair, or if I didn’t have access to the .pem file (private key file) for an existing key pair, I would first have to visit the EC2 console and create a new EC2 key pair for my project. This is important because, without the EC2 key pair and its associated .pem file, I would not be able to log in to my EC2 instance.

With my EC2 key pair selected and the confirmation that I have the related private key file checked, I am ready to click the Create Project button.


After CodeStar completes the creation of the project and the provisioning of the project-related AWS services, I am ready to view the CodeStar sample application from the application endpoint displayed in the CodeStar dashboard. This sample application should be familiar to you if you have been working with the CodeStar service or if you had an opportunity to read the blog post about the AWS CodeStar service launch. I’ll click the link underneath Application Endpoints to view the sample ASP.NET Core web application.

Now I’ll go ahead and clone the generated project and connect my Visual Studio IDE to the project repository. I am going to make some changes to the application, and since AWS CodeBuild now supports .NET Core builds and deployments to both Amazon EC2 and AWS Lambda, I will alter my build specification file to account for the changes to my web application, which will include the use of the Lambda function. Don’t worry if you are not familiar with how to clone the project and connect it to the Visual Studio IDE; CodeStar provides in-console, step-by-step instructions to assist you.

First things first, I will open up the Visual Studio IDE and connect to the AWS CodeCommit repository provisioned for my ASPNetCore4Tara project. It is important to note that the Visual Studio 2017 IDE is required for .NET Core projects in AWS CodeStar and that the AWS Toolkit for Visual Studio 2017 must be installed before connecting your project repository to the IDE.

In order to connect to my repo within Visual Studio, I will open up Team Explorer and select the Connect link under the AWS CodeCommit option under Hosted Service Providers. I will click Ok to keep my default AWS profile toolkit credentials.

I’ll then click Clone under the Manage Connections and AWS CodeCommit hosted provider section.

Once I select my aspnetcore4tara repository in the Clone AWS CodeCommit Repository dialog, I only have to enter my HTTPS Git credentials in the Git Credentials for AWS CodeCommit dialog, and the process is complete. If you’re following along and receive a Git Credential Manager login dialog, don’t worry; just enter the same Git credentials.


My project is now connected to the aspnetcore4tara CodeCommit repository and my web application is loaded for editing. As you will notice in the screenshot below, the sample project is structured as a standard ASP.NET Core MVC web application.

With the project created, I can make changes and updates. Since I want to update this project with a .NET Lambda function, I’ll quickly start a new project in Visual Studio to author a very simple C# Lambda function to be compiled with the CodeStar project. This AWS Lambda function will be included in the CodeStar ASP.NET Core web application project.

The Lambda function I’ve created makes a call to the REST API of NASA’s popular Astronomy Picture of the Day website. The API sends back the latest planetary image and related information in JSON format. You can see the Lambda function code below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

using System.Net.Http;
using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace NASAPicOfTheDay
{
    public class SpacePic
    {
        HttpClient httpClient = new HttpClient();
        string nasaRestApi = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY";

        /// <summary>
        /// A simple function that retrieves NASA Planetary Info and 
        /// Picture of the Day
        /// </summary>
        /// <param name="context"></param>
        /// <returns>nasaResponse-JSON String</returns>
        public async Task<string> GetNASAPicInfo(ILambdaContext context)
        {
            string nasaResponse;
            
            //Call NASA Picture of the Day API
            nasaResponse = await httpClient.GetStringAsync(nasaRestApi);
            Console.WriteLine("NASA API Response");
            Console.WriteLine(nasaResponse);
            
            //Return NASA response - JSON format
            return nasaResponse; 
        }
    }
}

I’ll now publish this C# Lambda function and test it by using the Publish to AWS Lambda option provided by the AWS Toolkit for Visual Studio with the NASAPicOfTheDay project. After publishing the function, I can test it and verify that it is working correctly within Visual Studio and/or the AWS Lambda console. You can learn more about building AWS Lambda functions with C# and .NET at: http://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html
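
As a quick sanity check outside the IDE, the deployed function can also be invoked from the AWS CLI; the function name below matches the one used later in the buildspec, and the output lands in response.json:

aws lambda invoke --function-name NASAPicAPI response.json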

 

Now that I have my Lambda function completed and tested, all that is left is to update the CodeBuild buildspec.yml file within my aspnetcore4tara CodeStar project to include publishing and deploying of the Lambda function.

To accomplish this, I will create a new folder named functions and copy the folder that contains my Lambda function .NET project to my aspnetcore4tara web application project directory.

 

 

To build and publish my AWS Lambda function, I will use commands in the buildspec.yml file from the aws-lambda-dotnet tools library, which helps .NET Core developers develop AWS Lambda functions. I add a file, funcprof, to the NASAPicOfTheDay folder; it contains customized profile information for use with the aws-lambda-dotnet tools. All that is left is to update the buildspec.yml file used by CodeBuild for the ASPNetCore4Tara project build to include the packaging and deployment of the NASAPicOfTheDay AWS Lambda function. The updated buildspec.yml is as follows:

version: 0.2
env:
  variables:
    basePath: 'hold'
phases:
  install:
    commands:
      - echo set basePath for project
      - basePath=$(pwd)
      - echo $basePath
      - echo Build restore and package Lambda function using AWS .NET Tools...
      - dotnet restore functions/*/NASAPicOfTheDay.csproj
      - cd functions/NASAPicOfTheDay
      - dotnet lambda package -c Release -f netcoreapp1.0 -o ../lambda_build/nasa-lambda-function.zip
  pre_build:
    commands:
      - echo Deploy Lambda function used in ASPNET application using AWS .NET Tools. Must be in path of Lambda function build 
      - cd $basePath
      - cd functions/NASAPicOfTheDay
      - dotnet lambda deploy-function NASAPicAPI -c Release -pac ../lambda_build/nasa-lambda-function.zip --profile-location funcprof -fd 'NASA API for Picture of the Day' -fn NASAPicAPI -fh NASAPicOfTheDay::NASAPicOfTheDay.SpacePic::GetNASAPicInfo -frun dotnetcore1.0 -frole arn:aws:iam::xxxxxxxxxxxx:role/lambda_exec_role -framework netcoreapp1.0 -fms 256 -ft 30  
      - echo Lambda function is now deployed - Now change directory back to Base path
      - cd $basePath
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
artifacts:
  files:
    - AspNetCoreWebApplication/build_output/**/*
    - scripts/**/*
    - appspec.yml
    

That’s it! All that is left is for me to add and commit all my file additions and updates to the AWS CodeCommit Git repository provisioned for my ASPNetCore4Tara project. This kicks off the AWS CodePipeline for the project, which will now use AWS CodeBuild’s new support for .NET Core to build and deploy both the ASP.NET Core web application and the .NET AWS Lambda function.
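
From a terminal, the add, commit, and push might look like this (the commit message is illustrative):

git add .
git commit -m "Add NASAPicOfTheDay Lambda function and updated buildspec.yml"
git push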

 

Summary

The support for .NET Core in AWS CodeStar and AWS CodeBuild opens the door for .NET developers to take advantage of the benefits of continuous integration and delivery when building .NET based solutions on AWS. Read more about .NET Core support in AWS CodeStar and AWS CodeBuild, or review the product pages for AWS CodeStar and AWS CodeBuild for more information on using the services.

Enjoy building .NET projects more efficiently with Amazon Web Services using .NET Core with AWS CodeStar and AWS CodeBuild.

Tara

 

New – Introducing AWS CodeStar – Quickly Develop, Build, and Deploy Applications on AWS

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/new-aws-codestar/

It wasn’t too long ago that I was on a development team working toward completing a software project by a release deadline and facing the challenges most software teams face today in developing applications. Challenges such as new project environment setup, team member collaboration, and the day-to-day task of keeping track of the moving pieces of code, configuration, and libraries for each development build. Today, with companies’ need to innovate and get to market faster, it has become essential to make it easier and more efficient for development teams to create, build, and deploy software.

Unfortunately, many organizations face some key challenges in their quest for a more agile, dynamic software development process. The first challenge most new software projects face is the lengthy setup process that developers have to complete before they can start coding. This process may include setting up of IDEs, getting access to the appropriate code repositories, and/or identifying infrastructure needed for builds, tests, and production.

Collaboration is another challenge that most development teams may face. In order to provide a secure environment for all members of the project, teams have to frequently set up separate projects and tools for various team roles and needs. In addition, providing information to all stakeholders about updates on assignments, the progression of development, and reporting software issues can be time-consuming.

Finally, most companies desire to increase the speed of their software development and reduce the time to market by adopting best practices around continuous integration and continuous delivery. Implementing these agile development strategies may require companies to spend time in educating teams on methodologies and setting up resources for these new processes.

Now Presenting: AWS CodeStar

To help development teams ease the challenges of building software while helping to increase the pace of releasing applications and solutions, I am excited to introduce AWS CodeStar.

AWS CodeStar is a cloud service designed to make it easier to develop, build, and deploy applications on AWS by simplifying the setup of your entire development project. AWS CodeStar includes project templates for common development platforms to enable provisioning of projects and resources for coding, building, testing, deploying, and running your software project.

The key benefits of the AWS CodeStar service are:

  • Easily create new projects using templates for Amazon EC2, AWS Elastic Beanstalk, or AWS Lambda using five different programming languages: JavaScript, Java, Python, Ruby, and PHP. By selecting a template, the service will provision the underlying AWS services needed for your project and application.
  • Unified experience for access and security policies management for your entire software team. Projects are automatically configured with appropriate IAM access policies to ensure a secure application environment.
  • Pre-configured project management dashboard for tracking various activities, such as code commits, build results, deployment activity and more.
  • Running sample code to help you get up and running quickly, enabling you to use your favorite IDEs, like Visual Studio, Eclipse, or any code editor that supports Git.
  • Automated configuration of a continuous delivery pipeline for each project using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.
  • Integration with Atlassian JIRA Software for issue management and tracking directly from the AWS CodeStar console.

With AWS CodeStar, development teams can build an agile software development workflow that not only increases the speed at which teams develop and deploy software and bug fixes, but also enables developers to build software that is more in line with customers’ requests and needs.

An example of a responsive development workflow using AWS CodeStar is shown below:

Journey Into AWS CodeStar

Now that you know a little more about the AWS CodeStar service, let’s jump into using the service to set up a web application project. First, I’ll go into the AWS CodeStar console and click the Start a project button.

If you have not setup the appropriate IAM permissions, AWS CodeStar will show a dialog box requesting permission to administer AWS resources on your behalf. I will click the Yes, grant permissions button to grant AWS CodeStar the appropriate permissions to other AWS resources.

However, I received a warning that I do not have administrative permissions to AWS CodeStar as I have not applied the correct policies to my IAM user. If you want to create projects in AWS CodeStar, you must apply the AWSCodeStarFullAccess managed policy to your IAM user or have an IAM administrative user with full permissions for all AWS services.

Now that I have added the aforementioned permissions in IAM, I can use the service to create a project. To start, I simply click on the Create a new project button and I am taken to the hub of the AWS CodeStar service.

At this point, I am presented with over twenty different AWS CodeStar project templates to choose from in order to provision various environments for my software development needs. Each project template specifies the AWS Service used to deploy the project, the supported programming language, and a description of the type of development solution implemented. AWS CodeStar currently supports the following AWS Services: Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. Using preconfigured AWS CloudFormation templates, these project templates can create software development projects like microservices, Alexa skills, web applications, and more with a simple click of a button.

For my first AWS CodeStar project, I am going to build a serverless web application using Node.js and AWS Lambda using the Node.js/AWS Lambda project template.

You will notice that for this template AWS CodeStar sets up all of the tools and services you need for a development project, including an AWS CodePipeline connected with the following services: AWS CodeBuild, AWS CloudFormation, and Amazon CloudWatch. I’ll name my new AWS CodeStar project, TaraWebProject, and click Create Project.

Since this is my first time creating an AWS CodeStar, I will see a dialog that asks about the setup of my AWS CodeStar user settings. I’ll type Tara in the textbox for the Display Name and add my email address in the Email textbox. This information is how I’ll appear to others in the project.

The next step is to select how I want to edit my project code. I have decided to edit my TaraWebProject project code using the Visual Studio IDE. With Visual Studio, it will be essential for me to configure it to use the AWS Toolkit for Visual Studio 2015 to access AWS resources while editing my project code. On this screen, I am also presented with the link to the AWS CodeCommit Git repository that AWS CodeStar configured for my project.

The provisioning and tool setup for my software development project is now complete. I’m presented with the AWS CodeStar dashboard for my software project, TaraWebProject, which allows me to manage the project’s resources, including code commits, team membership and wiki, the continuous delivery pipeline, JIRA issue tracking, project status, and other applicable project resources.

What is really cool about AWS CodeStar for me is that it provides a working sample project from which I can start the development of my serverless web application. To view the sample of my new web application, I will go to the Application endpoints section of the dashboard and click the link provided.

A new browser window will open and will display the sample web application AWS CodeStar generated to help jumpstart my development. A cool feature of the sample application is that the background of the sample app changes colors based on the time of day.

Let’s now take a look at the code used to build the sample website. In order to view the code, I will go back to my TaraWebProject dashboard in the AWS CodeStar console and select the Code option from the sidebar menu.

This takes me to the tarawebproject Git repository in the AWS CodeCommit console. From here, I can manually view the code for my web application, see the commits made in the repo, compare commits or branches, and create triggers in response to my repo events.

This gives me a great starting point for developing my AWS-hosted web application. Since I opted to integrate AWS CodeStar with Visual Studio, I can update my web application by using the IDE to make code changes that will be automatically included in the TaraWebProject every time I commit to the provisioned code repository.

You will notice that on the AWS CodeStar TaraWebProject dashboard, there is a message about connecting the tools to my project repository in order to work on the code. Even though I have already selected Visual Studio as my IDE of choice, let’s click on the Connect Tools button to review the steps to connecting to this IDE.

Again, I will see a screen that allows me to choose which tool I wish to use to edit my project code: Visual Studio, Eclipse, or the command line. It is important to note that I have the option to change my IDE choice at any time while working on my development project. Additionally, I can connect to my AWS CodeCommit Git repo via HTTPS or SSH. To retrieve the appropriate repository URL for each protocol, I only need to open the Code repository URL dropdown, select HTTPS or SSH, and copy the resulting URL from the text field.
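
For example, cloning over HTTPS from a terminal would look roughly like this (the region and repository name here are assumptions based on my project; yours will differ):

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/tarawebproject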

After selecting Visual Studio, CodeStar takes me to the steps needed to integrate with Visual Studio. This includes downloading the AWS Toolkit for Visual Studio, connecting Team Explorer to AWS CodeStar via AWS CodeCommit, as well as how to push changes to the repo.

After successfully connecting Visual Studio to my AWS CodeStar project, I return to the AWS CodeStar TaraWebProject dashboard to start managing the team members working on the web application with me. First, I will select the Setup your team tile so that I can go to the Project Team page.

On my TaraWebProject Project Team page, I’ll add a team member, Jeff, by selecting the Add team member button and clicking on the Select user dropdown. Team members must be IAM users in my account, so I’ll click on the Create new IAM user link to create an IAM account for Jeff.

When the Create IAM user dialog box comes up, I will enter an IAM user name, Display name, and Email Address for the team member, in this case, Jeff Barr. There are three project roles that Jeff can be granted: Owner, Contributor, or Viewer. For the TaraWebProject application, I will grant him the Contributor project role and allow him to have remote access by selecting the Remote access checkbox. Now I will create Jeff’s IAM user account by clicking the Create button.

This brings me to the IAM console to confirm the creation of the new IAM user. After reviewing the IAM user information and the permissions granted, I will click the Create user button to complete the creation of Jeff’s IAM user account for TaraWebProject.

After successfully creating Jeff’s account, it is important that I either send Jeff’s login credentials to him in email or download the credentials .csv file, as I will not be able to retrieve these credentials again. I would need to generate new credentials for Jeff if I leave this page without obtaining his current login credentials. Clicking the Close button returns me to the AWS CodeStar console.

Now I can complete adding Jeff as a team member in the TaraWebProject by selecting the JeffBarr-WebDev IAM role and clicking the Add button.

I’ve successfully added Jeff as a team member to my AWS CodeStar project, TaraWebProject, enabling team collaboration in building the web application.

Another thing that I really enjoy about using the AWS CodeStar service is that I can monitor all of my project activity right from my TaraWebProject dashboard. I can see the application activity and any recent code commits, and track the status of project actions, such as the results of my build, any code changes, and the deployments, all from one comprehensive dashboard. AWS CodeStar ties the dashboard into Amazon CloudWatch with the Application activity section, provides data about the build and deployment status in the Continuous Deployment section with AWS CodePipeline, and shows the latest Git code commit with AWS CodeCommit in the Commit history section.

Summary

In my journey of the AWS CodeStar service, I created a serverless web application that provisioned my entire development toolchain for coding, building, testing, and deployment for my TaraWebProject software project using AWS services. Amazingly, I have yet to scratch the surface of the benefits of using AWS CodeStar to manage day-to-day software development activities involved in releasing applications.

AWS CodeStar makes it easy for you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. AWS CodeStar allows you to choose from various templates to set up projects using AWS Lambda, Amazon EC2, or AWS Elastic Beanstalk. It comes pre-configured with a project management dashboard, an automated continuous delivery pipeline, and a Git code repository using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy, allowing developers to implement modern agile software development best practices. Each AWS CodeStar project gives developers a head start in development by providing working code samples that can be used with popular IDEs that support Git. Additionally, AWS CodeStar provides out-of-the-box integration with Atlassian JIRA Software, providing a project management and issue tracking system for your software team directly from the AWS CodeStar console.

You can get started using the AWS CodeStar service for developing new software projects on AWS today. Learn more by reviewing the AWS CodeStar product page and the AWS CodeStar user guide documentation.

Tara

New – AWS Management Tools Blog

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-management-tools-blog/

The AWS Blog collection has grown over the past couple of years. As you can see from the list on the right, we now have blogs that cover a wide variety of topics and development tools. We also have blogs that are designed for those of you who read languages other than English!

The AWS Management Tools Blog is the newest member of the collection. This blog focuses on AWS tools that help you to provision, configure, monitor, track, audit, and manage the costs of your AWS and on-premises resources at scale. Topics planned for the blog include deep technical coverage of feature updates, tips and tricks, sample apps, CloudFormation templates, and an on-going discussion of use cases. Here are some of the initial posts:

You can subscribe to the blog’s RSS feed in order to make sure that you see all of this helpful new content!

Jeff;

 

Use Parameter Store to Securely Access Secrets and Config Data in AWS CodeDeploy

Post Syndicated from Chirag Dhull original https://aws.amazon.com/blogs/devops/use-parameter-store-to-securely-access-secrets-and-config-data-in-aws-codedeploy/

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. Many customers use AWS CodeDeploy to automate application deployment because it provides a uniform method for:
• Updating applications across development, staging, and production environments.
• Handling the complexity of updating applications and avoiding service downtime.

 
Visit this post on the management tools blog and learn how to simplify your AWS CodeDeploy workflows by using Parameter Store to store and reference a configuration secret.
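
The pattern described in that post boils down to storing a secret in Parameter Store and reading it back during deployment, for example from a CodeDeploy lifecycle hook script (the parameter name is a placeholder):

aws ssm get-parameter --name /MyApp/dbPassword --with-decryption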

AWS Organizations – Policy-Based Management for Multiple AWS Accounts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-organizations-policy-based-management-for-multiple-aws-accounts/

Over the years I have found that many of our customers are managing multiple AWS accounts. This situation can arise for several reasons. Sometimes they adopt AWS incrementally and organically, with individual teams and divisions making the move to cloud computing on a decentralized basis. Other companies grow through mergers and acquisitions and take on responsibility for existing accounts. Still others routinely create multiple accounts in order to meet strict guidelines for compliance or to create a very strong isolation barrier between applications, sometimes going so far as to use distinct accounts for development, testing, and production.

As these accounts proliferate, our customers find that they would like to manage them in a scalable fashion. Instead of dealing with a multitude of per-team, per-division, or per-application accounts, they have asked for a way to define access control policies that can be easily applied to all, some, or individual accounts. In many cases, these customers are also interested in additional billing and cost management, and would like to be able to control how AWS pricing benefits such as volume discounts and Reserved Instances are applied to their accounts.

AWS Organizations Emerges from Preview
To support this increasingly important use case, we are moving AWS Organizations from Preview to General Availability today. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of Organizational Units (OUs), assign each account to an OU, define policies, and then apply them to the entire hierarchy, to select OUs, or to specific accounts. You can invite existing AWS accounts to join your organization and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API.

Here are some important terms and concepts that will help you to understand Organizations (this assumes that you are the all-powerful, overall administrator of your organization’s AWS accounts, and that you are responsible for the Master account):

An Organization is a consolidated set of AWS accounts that you manage. Newly-created Organizations offer the ability to implement sophisticated, account-level controls such as Service Control Policies. This allows Organization administrators to manage lists of allowed and blocked AWS API functions and resources that place guard rails on individual accounts. For example, you could give your advanced R&D team access to a wide range of AWS services, and then be a bit more cautious with your mainstream development and test accounts. Or, on the production side, you could allow access only to AWS services that are eligible for HIPAA compliance.

Some of our existing customers use a feature of AWS called Consolidated Billing. This allows them to select a Payer Account which rolls up account activity from multiple AWS Accounts into a single invoice and provides a centralized way of tracking costs. With this launch, current Consolidated Billing customers now have an Organization that provides all the capabilities of Consolidated Billing, but by default does not have the new features (like Service Control Policies) we’re making available today. These customers can easily enable the full features of AWS Organizations. This is accomplished by first enabling the use of all AWS Organization features from the Organization’s master account and then having each member account authorize this change to the Organization. Finally, we will continue to support creating new Organizations that support only the Consolidated Billing capabilities. Customers that wish to only use the centralized billing features can continue to do so, without allowing the master account administrators to enforce the advanced policy controls on member accounts in the Organization.

An AWS account is a container for AWS resources.

The Master account is the management hub for the Organization and is also the payer account for all of the AWS accounts in the Organization. The Master account can invite existing accounts to join the Organization, and can also create new accounts.

Member accounts are the non-Master accounts in the Organization.

An Organizational Unit (OU) is a container for a set of AWS accounts. OUs can be arranged into a hierarchy that can be up to five levels deep. The top of the hierarchy of OUs is also known as the Administrative Root.

A Service Control Policy (SCP) is a set of controls that the Organization’s Master account can apply to the Organization, selected OUs, or to selected accounts. When applied to an OU, the SCP applies to the OU and to any other OUs beneath it in the hierarchy. The SCP or SCPs in effect for a member account specify the permissions that are granted to the root user for the account. Within the account, IAM users and roles can be used as usual. However, regardless of how permissive the user or the role might be, the effective set of permissions will never extend beyond what is defined in the SCP. You can use this to exercise fine-grained control over access to AWS services and API functions at the account level.

An Invitation is used to ask an AWS account to join an Organization. It must be accepted within 15 days, and can be extended via email address or account ID. Up to 20 Invitations can be outstanding at any given time. The invitation-based model allows you to start from a Master account and then bring existing accounts into the fold. When an Invitation is accepted, the account joins the Organization and all applicable policies become effective. Once the account has joined the Organization, you can move it to the proper OU.
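As a small illustration, sending an Invitation programmatically might look like the following boto3 sketch; the account ID and note text are placeholders.

import boto3

org = boto3.client('organizations')

# Invite an existing account, either by its 12-digit account ID or by email address.
org.invite_account_to_organization(
    Target={'Id': '222222222222', 'Type': 'ACCOUNT'},  # placeholder account ID
    Notes='Please join our Organization.',
)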

AWS Organizations is appropriate when you want to create strong isolation boundaries between the AWS accounts that you manage. However, keep in mind that AWS resources (EC2 instances, S3 buckets, and so forth) exist within a particular AWS account and cannot be moved from one account to another. You do have access to many different cross-account AWS features including VPC peering, AMI sharing, EBS snapshot sharing, RDS snapshot sharing, cross-account email sending, delegated access via IAM roles, cross-account S3 bucket permissions, and cross-account access in the AWS Management Console.

Like consolidated billing, AWS Organizations also provides several benefits when it comes to the use of EC2 and RDS Reserved Instances. For billing purposes, all of the accounts in the Organization are treated as if they are one account and can receive the hourly cost benefit of an RI purchased by any other account in the same Organization (in order for this benefit to be applied as expected, the Availability Zone and other attributes of the RI must match the attributes of the EC2 or RDS instance).

Creating an Organization
Let’s create an Organization from the Console, create some Organizational Units, and then create some accounts. I start by clicking on Create organization:

Then I choose ENABLE ALL FEATURES and click on Create organization:

My Organization is ready in seconds:

I can create a new account by clicking on Add account, and then selecting Create account:

Then I supply the details (the IAM role is created in the new account and grants enough permissions for the account to be customized after creation):

Here’s what the console looks like after I have created Dev, Test, and Prod accounts:

At this point all of the accounts are at the top of the hierarchy:

In order to add some structure, I click on Organize accounts, select Create organizational unit (OU), and enter a name:

I do the same for a second OU:

Then I select the Prod account, click on Move accounts, and choose the Operations OU:

Next, I move the Dev and Test accounts into the Development OU:

At this point I have four accounts (my original one plus the three that I just created) and two OUs. The next step is to create one or more Service Control Policies by clicking on Policies and selecting Create policy. I can use the Policy Generator or I can copy an existing SCP and then customize it. I’ll use the Policy Generator. I give my policy a name and make it an Allow policy:

Then I use the Policy Generator to construct a policy that allows full access to EC2 and S3, and the ability to run (invoke) Lambda functions:

Remember that this policy defines the full set of allowable actions within the account. In order to allow IAM users within the account to be able to use these actions, I would still need to create suitable IAM policies and attach them to the users (all within the member account). I click on Create policy and my policy is ready:
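For reference, an SCP of this shape can also be created through the API. Here is a minimal boto3 sketch that mirrors the policy described above; the exact document produced by the Policy Generator may differ, and the name and description are illustrative.

import json
import boto3

org = boto3.client('organizations')

# A policy document allowing full EC2 and S3 access plus invoking Lambda functions.
operations_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*", "lambda:InvokeFunction"],
        "Resource": "*",
    }],
}

org.create_policy(
    Name='OperationsPolicy',
    Description='Full EC2 and S3 access, plus Lambda invocation',
    Type='SERVICE_CONTROL_POLICY',
    Content=json.dumps(operations_policy),
)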

Then I create a second policy for development and testing. This one also allows access to AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline:

Let’s recap. I have created my accounts and placed them into OUs. I have created a policy for the OUs. Now I need to enable the use of policies, and attach the policy to the OUs. To enable the use of policies, I click on Organize accounts and select Home (this is not the same as the root because Organizations was designed to support multiple, independent hierarchies), and then click on the checkbox in the Root OU. Then I look to the right, expand the Details section, and click on Enable:

OK, now I can put all of the pieces together! I click on the Root OU to descend into the hierarchy, and then click on the checkbox in the Operations OU. Then I expand the Control Policies on the right and click on Attach policy:

Then I locate the OperationsPolicy and click on Attach:

Finally, I remove the FullAWSAccess policy:

I can also attach the DevTestPolicy to the Development OU.

All of the operations that I described above could have been initiated from the AWS Command Line Interface (CLI) or by making calls to functions such as CreateOrganization, CreateAccount, CreateOrganizationalUnit, MoveAccount, CreatePolicy, AttachPolicy, and InviteAccountToOrganization. To see the CLI in action, read Announcing AWS Organizations: Centrally Manage Multiple AWS Accounts.
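To make the API route concrete, here is a rough boto3 sketch of the same walkthrough; the email address, account ID, and policy IDs are placeholders, and create_account is asynchronous, so real automation should poll describe_create_account_status before moving the new account.

import boto3

org = boto3.client('organizations')

# Create the Organization with all features enabled (not just Consolidated Billing).
org.create_organization(FeatureSet='ALL')

# The Administrative Root is the parent for top-level OUs.
root_id = org.list_roots()['Roots'][0]['Id']

# Enable SCPs on the root (the "Enable" step in the console walkthrough).
org.enable_policy_type(RootId=root_id, PolicyType='SERVICE_CONTROL_POLICY')

# Create the two OUs used above.
ops_ou_id = org.create_organizational_unit(ParentId=root_id, Name='Operations')['OrganizationalUnit']['Id']
dev_ou_id = org.create_organizational_unit(ParentId=root_id, Name='Development')['OrganizationalUnit']['Id']

# Create a member account; poll describe_create_account_status until it succeeds.
org.create_account(Email='prod@example.com', AccountName='Prod')

# Once the account is ready, move it from the root into the Operations OU.
org.move_account(
    AccountId='111111111111',  # placeholder for the new account ID
    SourceParentId=root_id,
    DestinationParentId=ops_ou_id,
)

# Attach SCPs (created earlier with create_policy) to the OUs.
org.attach_policy(PolicyId='p-examplepolicyid', TargetId=ops_ou_id)
org.attach_policy(PolicyId='p-exampledevtest', TargetId=dev_ou_id)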

Best Practices for Use of AWS Organizations
Before I wrap up, I would like to share some best practices for the use of AWS Organizations:

Master Account – We recommend that you keep the Master Account free of any operational AWS resources (with one exception). In addition to making it easier for you to make high-quality control decisions, this practice will make it easier for you to understand the charges on your AWS bill.

CloudTrail – Use AWS CloudTrail (this is the exception) in the Master Account to centrally track all AWS usage in the Member accounts.

Least Privilege – When setting up policies for your OUs, assign as few privileges as possible.

Organizational Units – Assign policies to OUs rather than to accounts. This will allow you to maintain a better mapping between your organizational structure and the level of AWS access needed.

Testing – Test new and modified policies on a single account before scaling up.

Automation – Use the APIs and an AWS CloudFormation template to ensure that every newly created account is configured to your liking. The template can create IAM users, roles, and policies. It can also set up logging, create and configure VPCs, and so forth.
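Here is a minimal sketch of that kind of account bootstrap, assuming boto3 and a baseline CloudFormation template already stored in S3; the email address, stack name, and template URL are placeholders, and OrganizationAccountAccessRole is the default role that Organizations creates in new accounts.

import time
import boto3

org = boto3.client('organizations')
sts = boto3.client('sts')

# Create the account and wait for provisioning to finish.
request_id = org.create_account(
    Email='new-team@example.com',  # placeholder address
    AccountName='NewTeamAccount',
    RoleName='OrganizationAccountAccessRole',
)['CreateAccountStatus']['Id']

while True:
    status = org.describe_create_account_status(CreateAccountRequestId=request_id)['CreateAccountStatus']
    if status['State'] != 'IN_PROGRESS':
        break
    time.sleep(10)

account_id = status['AccountId']

# Assume the role that Organizations created in the new account.
creds = sts.assume_role(
    RoleArn=f'arn:aws:iam::{account_id}:role/OrganizationAccountAccessRole',
    RoleSessionName='account-baseline',
)['Credentials']

cfn = boto3.client(
    'cloudformation',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

# Apply a baseline template (IAM users and roles, logging, VPCs, and so forth).
cfn.create_stack(
    StackName='account-baseline',
    TemplateURL='https://s3.amazonaws.com/example-bucket/baseline.yaml',  # placeholder URL
    Capabilities=['CAPABILITY_NAMED_IAM'],
)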

Learning More
Here are some resources that will help you to get started with AWS Organizations:

Things to Know
AWS Organizations is available today in all AWS regions except China (Beijing) and AWS GovCloud (US) and is available to you at no charge (to be a bit more precise, the service endpoint is located in US East (Northern Virginia) and the SCPs apply across all relevant regions). All of the accounts must be from the same seller; you cannot mix AWS and AISPL (the local legal Indian entity that acts as a reseller for AWS services in India) accounts in the same Organization.

We have big plans for Organizations, and are currently thinking about adding support for multiple payers, control over allocation of Reserved Instance discounts, multiple hierarchies, and other control policies. As always, your feedback and suggestions are welcome.

Jeff;

Continuous Deployment to Amazon ECS using AWS CodePipeline, AWS CodeBuild, Amazon ECR, and AWS CloudFormation

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/continuous-deployment-to-amazon-ecs-using-aws-codepipeline-aws-codebuild-amazon-ecr-and-aws-cloudformation/

Thanks to my colleague John Pignata for a great blog on how to create a continuous deployment pipeline to Amazon ECS.

Delivering new iterations of software at a high velocity is a competitive advantage in today’s business environment. The speed at which organizations can deliver innovations to customers and adapt to changing markets is increasingly a pivotal attribute that can make the difference between success and failure.

AWS provides a set of flexible services designed to enable organizations to embrace the combination of cultural philosophies, practices, and tools called DevOps that increases an organization’s ability to deliver applications and services at high velocity.

In this post, I explore the DevOps practice called continuous deployment and outline a reference architecture to implement an automated deployment pipeline for applications delivered as Docker containers onto Amazon ECS using AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation.

What is continuous deployment?

Agility is often cited as a key advantage of cloud computing over the traditional delivery of IT resources. Instead of waiting weeks or months for other departments to provision a new server, developers can create new instances with a click or API call and start using them within minutes. This newfound speed and autonomy frees developers to experiment and deliver new products and features to their customers as quickly as possible.

On top of the cloud, teams are embracing DevOps practices in order to achieve a faster time-to-market, better code quality, and more reliable releases of their products and services. Continuous deployment is a DevOps practice in which new software revisions are automatically built, tested, packaged, and released to production.

Continuous deployment enables developers to ship features and fixes through an entirely automated software release process. Instead of batching up large releases over a period of weeks or months and conducting deployments manually, developers can use automation to deliver versions of their applications many times a day as new software revisions are ready for users. In the same way cloud computing abbreviates the delivery time of resources, continuous deployment reduces the release cycle of new software to your users from weeks or months to minutes.

Embracing this speed and agility has many benefits including:

  • New features and bug fixes are released to users quickly; code sitting in a source code repository does not deliver business value or benefit your customers. By releasing new software revisions as close to immediately as possible, customers start benefiting from your work more quickly and teams can get more focused feedback.
  • Change sets are smaller; large change sets create challenges in pinpointing root causes of issues, bugs, and other regressions. By releasing smaller change sets more frequently, teams can more easily attribute and correct introduced issues.
  • Automated deployment encourages best practices; as any change committed to your source code repository can be deployed immediately via automation, teams have to ensure that changes are well-tested and that their production environments are closely monitored.

How does continuous deployment work?

Continuous deployment is conducted by an automated pipeline that coordinates the activities related to software release and provides visibility into the process. During the process, a releasable artifact is built, tested, packaged, and deployed into a production environment. The releasable artifact might be an executable file, a package of script files, a container, or some other component that ultimately must be delivered to production.

AWS CodePipeline is a continuous delivery and deployment service that coordinates the building, testing, and deployment of your code each time there is a new software revision. CodePipeline provides visible, central orchestration for taking a code change and moving it through a workflow and ultimately into the hands of your users. The pipeline defines stages to retrieve code from a source code repository, build the source code into a releasable artifact, test that artifact, and deliver it to production while ensuring that these stages happen in order and are halted if a failure occurs.

While CodePipeline powers the delivery pipeline and orchestrates the process, it does not have facilities for building or testing the software itself. For these stages, CodePipeline integrates with several other tools, including AWS CodeBuild, which is a fully managed build service. CodeBuild compiles source code, runs tests, and produces software packages that are ready to deploy. That makes it ideal for the build and test stages of a continuous deployment pipeline. Out of the box, CodeBuild has native support for many different kinds of build environments, including building Docker containers.
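For illustration, a Docker-enabled CodeBuild project might be defined roughly like this with boto3; the project name, repository URL, curated image name, and role ARN are placeholders rather than values from the reference architecture.

import boto3

codebuild = boto3.client('codebuild', region_name='us-east-1')

# A build project whose environment runs in privileged mode so it can build and
# push Docker images (for example, to Amazon ECR) during the Build stage.
codebuild.create_project(
    name='sample-docker-build',  # hypothetical project name
    source={'type': 'GITHUB', 'location': 'https://github.com/example-org/example-app.git'},
    artifacts={'type': 'NO_ARTIFACTS'},
    environment={
        'type': 'LINUX_CONTAINER',
        'computeType': 'BUILD_GENERAL1_SMALL',
        'image': 'aws/codebuild/docker:1.12.1',  # illustrative curated Docker build image
        'privilegedMode': True,
    },
    serviceRole='arn:aws:iam::123456789012:role/CodeBuildServiceRole',  # placeholder role ARN
)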

Containers are a powerful mechanism for software delivery, as they allow for a predictable and reproducible environment and provide a high level of confidence that changes tested in one environment can be successfully deployed. AWS provides several services to run and manage Docker container images. Amazon ECS is a highly scalable and high performance container management service that allows you to run applications on a cluster of Amazon EC2 instances. Amazon ECR is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

Finally, CodePipeline integrates with several services to facilitate deployment, including AWS Elastic Beanstalk, AWS CodeDeploy, AWS OpsWorks, and your own custom deployment code or process using AWS Lambda or AWS CloudFormation. These deployment actions can be used to power the final step in your pipeline to push the newly built changes live onto your production environment.

Continuous deployment to Amazon ECS

Here’s a reference architecture that puts these components together to deliver a continuous deployment pipeline of Docker applications onto ECS:

This architecture demonstrates how to deploy containers onto ECS and ECR using CodePipeline to build a fully automated continuous deployment pipeline on top of AWS. This approach to continuous deployment is entirely serverless and uses managed services for the orchestration, build, and deployment of your software.

The pipeline created in the reference architecture looks like the following:

In this post, I discuss each stage in this reference architecture. What happens when a developer changes some copy on a landing page and pushes that change into the source code repository?

First, in the Source stage, the pipeline is configured with details for accessing a source code repository system. In the reference architecture, you have a sample application hosted in a GitHub repository. CodePipeline polls this repository and initiates a new pipeline execution for each new commit. In addition to GitHub, CodePipeline also supports source locations such as a Git repository in AWS CodeCommit or a versioned object stored in Amazon S3. Each new build is retrieved from the source code repository, packaged as a zip file, stored on S3, and sent to the next stage of the pipeline.

The Source stage also defines a template artifact stored on Amazon S3. This is the template that defines the deployment environment used by the deployment stage after a successful build of the application.

The Build stage uses CodeBuild to create a new Docker container image based upon the latest source code and pushes it to an ECR repository. CodePipeline also integrates with a number of third-party build systems, such as Jenkins, CloudBees, Solano CI, and TeamCity.

Finally, the Deploy stage uses CloudFormation to create a new task definition revision that points to the newly built Docker container image and updates the ECS service to use the new task definition revision. After this is done, ECS initiates a deployment by fetching the new Docker container from ECR and restarting the service.
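In the reference architecture this update is driven through CloudFormation, but the underlying ECS change amounts to registering a new task definition revision and pointing the service at it. A minimal sketch using the ECS API directly, with hypothetical family, cluster, and service names, might look like this:

import boto3

ecs = boto3.client('ecs', region_name='us-east-1')

# Register a task definition revision that references the freshly built image in ECR.
response = ecs.register_task_definition(
    family='sample-web-app',  # hypothetical task definition family
    containerDefinitions=[{
        'name': 'web',
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/sample-web-app:latest',
        'memory': 128,
        'portMappings': [{'containerPort': 80, 'hostPort': 80}],
        'essential': True,
    }],
)
new_revision_arn = response['taskDefinition']['taskDefinitionArn']

# Point the running service at the new revision; ECS then replaces tasks gradually.
ecs.update_service(
    cluster='sample-cluster',       # hypothetical cluster name
    service='sample-web-service',   # hypothetical service name
    taskDefinition=new_revision_arn,
)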

After all of the pipeline’s stages are green, you can reload the application in a web browser and see the developer’s copy changes live in production. This happened automatically without any human intervention.

This pipeline is now in production, listening for new code in the source code repository, and ready to ship any future changes that your team pushes into production. It’s also extensible, meaning that new stages can be added to include additional steps. For example, you could include a test stage to execute unit and acceptance tests to ensure the new code revision is safe to deploy to production. After it’s deployed, a notification step could be added to alert your team via email or a Slack channel that a new version is live, along with the details about the change set deployed to production.

Conclusion

We’re excited to see what kinds of applications you can deliver to your users using this approach and how it affects your product development processes. The cloud unlocks massive advantages in agility, and the ability to implement techniques like continuous deployment unlocks a significant competitive advantage.

You’ll find an AWS CloudFormation template with everything necessary to spin up your own continuous deployment pipeline at the AWS Labs EC2 Container Service – Reference Architecture: Continuous Deployment repo on GitHub. If you have any questions, feedback, or suggestions, please let us know!

AWS Webinars – January 2017 (Bonus: December Recap)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-webinars-january-2017-bonus-december-recap/

Have you had time to digest all of the announcements that we made at AWS re:Invent? Are you ready to debug with AWS X-Ray, analyze with Amazon QuickSight, or build conversational interfaces using Amazon Lex? Do you want to learn more about AWS Lambda, set up CI/CD with AWS CodeBuild, or use Polly to give your applications a voice?

January Webinars
In our continued quest to provide you with training and education resources, I am pleased to share the webinars that we have set up for January. These are free, but they do fill up and you should definitely register ahead of time. All times are PT and each webinar runs for one hour:

January 16:

January 17:

January 18:

January 19:

January 20:

December Webinar Recap
The December webinar series is already complete; here’s a quick recap with links to the recordings:

December 12:

December 13:

December 14:

December 15:

Jeff;

PS – If you want to get a jump start on your 2017 learning objectives, the re:Invent 2016 Presentations and re:Invent 2016 Videos are just a click or two away.

Now Open – AWS London Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-london-region/

Last week we launched our 15th AWS Region and today we are launching our 16th. We have expanded the AWS footprint into the United Kingdom with a new Region in London, our third in Europe. AWS customers can use the new London Region to better serve end-users in the United Kingdom and can also use it to store data in the UK.

The Details
The new London Region provides a broad suite of AWS services including Amazon CloudWatch, Amazon DynamoDB, Amazon ECS, Amazon ElastiCache, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon EMR, Amazon Glacier, Amazon Kinesis Streams, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Elastic Beanstalk, AWS Snowball, AWS Snowmobile, AWS Key Management Service (KMS), AWS Marketplace, AWS OpsWorks, AWS Personal Health Dashboard, AWS Shield Standard, AWS Storage Gateway, AWS Support API, Elastic Load Balancing, VM Import/Export, Amazon CloudFront, Amazon Route 53, AWS WAF, AWS Trusted Advisor, and AWS Direct Connect (follow the links for pricing and other information).

The London Region supports all sizes of C4, D2, M4, T2, and X1 instances.

Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

From Our Customers
Many AWS customers are getting ready to use this new Region. Here’s a very small sample:

Trainline is Europe’s number one independent rail ticket retailer. Every day more than 100,000 people travel using tickets bought from Trainline. Here’s what Mark Holt (CTO of Trainline) shared with us:

We recently completed the migration of 100 percent of our eCommerce infrastructure to AWS and have seen awesome results: improved security, 60 percent less downtime, significant cost savings and incredible improvements in agility. From extensive testing, we know that 0.3s of latency is worth more than 8 million pounds and so, while AWS connectivity is already blazingly fast, we expect that serving our UK customers from UK datacenters should lead to significant top-line benefits.

Kainos Evolve Electronic Medical Records (EMR) automates the creation, capture and handling of medical case notes and operational documents and records, allowing healthcare providers to deliver better patient safety and quality of care for several leading NHS Foundation Trusts and market leading healthcare technology companies.

Travis Perkins, the largest supplier of building materials in the UK, is implementing the biggest systems and business change in its history including the migration of its datacenters to AWS.

Just Eat is the world’s leading marketplace for online food delivery. Using AWS, Just Eat has been able to experiment faster and reduce the time to roll out new feature updates.

OakNorth, a new bank focused on lending between £1m-£20m to entrepreneurs and growth businesses, became the UK’s first cloud-based bank in May after several months of working with AWS to drive the development forward with the regulator.

Partners
I’m happy to report that we are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in the United Kingdom. Here’s a partial list:

  • AWS Premier Consulting Partners – Accenture, Claranet, Cloudreach, CSC, Datapipe, KCOM, Rackspace, and Slalom.
  • AWS Consulting Partners – Attenda, Contino, Deloitte, KPMG, LayerV, Lemongrass, Perfect Image, and Version 1.
  • AWS Technology Partners – Splunk, Sage, Sophos, Trend Micro, and Zerolight.
  • AWS Managed Service Partners – Claranet, Cloudreach, KCOM, and Rackspace.
  • AWS Direct Connect Partners – AT&T, BT, Hutchison Global Communications, Level 3, Redcentric, and Vodafone.

Here are a few examples of what our partners are working on:

KCOM is a professional services provider offering consultancy, architecture, project delivery and managed service capabilities to large UK-based enterprise businesses. The scalability and flexibility of AWS gives them a significant competitive advantage with their enterprise and public sector customers. The new Region will allow KCOM to build innovative solutions for their public sector clients while meeting local regulatory requirements.

Splunk is a member of the AWS Partner Network and a market leader in analyzing machine data to deliver operational intelligence for security, IT, and the business. They use cloud computing and big data analytics to help their customers to embrace digital transformation and continuous innovation. The new Region will provide even more companies with real-time visibility into the operation of their systems and infrastructure.

Redcentric is an NHS Digital-approved N3 Commercial Aggregator. Their work allows health and care providers such as NHS acute, emergency and mental trusts, clinical commissioning groups (CCGs), and the ISV community to connect securely to AWS. The London Region will allow health and care providers to deliver new digital services and to improve outcomes for citizens and patients.

Visit the AWS Partner Network page to read some case studies and to learn how to join.

Compliance & Connectivity
Every AWS Region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, PCI DSS Level 1, and many more. Our Cloud Compliance page includes information about these standards, along with those that are specific to the UK, including Cyber Essentials Plus.

The UK Government recognizes that local datacenters from hyperscale public cloud providers can deliver secure solutions for OFFICIAL workloads. In order to meet the special security needs of public sector organizations in the UK with respect to OFFICIAL workloads, we have worked with our Direct Connect Partners to make sure that obligations for connectivity to the Public Services Network (PSN) and N3 can be met.

Use it Today
The London Region is open for business now and you can start using it today! If you need additional information about this Region, please feel free to contact our UK team at [email protected].

Jeff;

Now Open AWS Canada (Central) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-canada-central-region/

We are growing the AWS footprint once again. Our new Canada (Central) Region is now available and you can start using it today. AWS customers in Canada and the northern parts of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Canada (Central) Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, AWS CloudTrail, Amazon CloudWatch, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon ECS, EC2 Container Registry, AWS Elastic Beanstalk, Amazon EMR, Amazon ElastiCache, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Marketplace, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, AWS Shield Standard, Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Workflow Service (SWF), AWS Storage Gateway, AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, M4, T2, and X1 instances.

As part of our ongoing focus on making cloud computing available to you in an environmentally friendly fashion, AWS data centers in Canada draw power from a grid that generates 99% of its electricity using hydropower (read about AWS Sustainability to learn more).

Well Connected
After receiving a lot of positive feedback on the network latency metrics that I shared when we launched the AWS Region in Ohio, I am happy to have a new set to share as part of today’s launch (these times represent a lower bound on latency and may change over time).

The first set of metrics are to other Canadian cities:

  • 9 ms to Toronto.
  • 14 ms to Ottawa.
  • 47 ms to Calgary.
  • 49 ms to Edmonton.
  • 60 ms to Vancouver.

The second set are to locations in the US:

  • 9 ms to New York.
  • 19 ms to Chicago.
  • 16 ms to US East (Northern Virginia).
  • 27 ms to US East (Ohio).
  • 75 ms to US West (Oregon).

Canada is also home to CloudFront edge locations in Toronto, Ontario, and Montreal, Quebec.

And Canada Makes 15
Today’s launch brings our global footprint to 15 Regions and 40 Availability Zones, with seven more Availability Zones and three more Regions coming online over the next year. As a reminder, each Region is a physical location where we have two or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

For more information about current and future AWS Regions, take a look at the AWS Global Infrastructure page.

Jeff;


AWS OpsWorks at re:Invent 2016

Post Syndicated from Daniel Huesch original https://aws.amazon.com/blogs/devops/aws-opsworks-at-reinvent-2016/

AWS re:Invent 2016 is right around the corner. Here’s an overview of where you can meet the AWS OpsWorks team and learn about the service.

DEV305 – Configuration Management in the Cloud
12/1/16 (Thursday) 11:00 AM – Venetian, Level 3, Murano 3205

To ensure that your application operates in a predictable manner in both your test and production environments, you must vigilantly maintain the configuration of your resources. By leveraging configuration management solutions, Dev and Ops engineers can define the state of their resources across their entire lifecycle. In this session, we will show you how to use AWS OpsWorks, AWS CodeDeploy, and AWS CodePipeline to build a reliable and consistent development pipeline that assures your production workloads behave in a predictable manner.

DEV305-R – [REPEAT] Configuration Management in the Cloud
12/2/16 (Friday) 9:00 AM – Venetian, Level 1, Sands 202

This is a repeat session of the talk from the previous day if you were unable to attend that one.

LD148 – Live Demo: Configuration Management with AWS OpsWorks
12/1/16 (Thursday) 4:50 PM – Venetian, Hall C, AWS Booth

Join this session at the AWS Booth for a live demo and the opportunity to meet the AWS OpsWorks service team.

AWS re:Invent is a great opportunity to talk with AWS teams. As in previous years, you will find OpsWorks team members at the AWS booth. Drop by and ask for a demo!

Didn’t register before the conference sold out? All sessions will be recorded and posted on YouTube after the conference and all slide decks will be posted on SlideShare.net.

Building a Cross-Region/Cross-Account Code Deployment Solution on AWS

Post Syndicated from BK Chaurasiya original https://aws.amazon.com/blogs/devops/building-a-cross-regioncross-account-code-deployment-solution-on-aws/

Many of our customers have expressed a desire to build an end-to-end release automation workflow solution that can deploy changes across multiple regions or different AWS accounts.

In this post, I will show you how you can easily build an automated cross-region code deployment solution using AWS CodePipeline (a continuous delivery service), AWS CodeDeploy (an automated application deployment service), and AWS Lambda (a serverless compute service). In the Taking This Further section, I will also show you how to extend what you’ve learned so that you can create a cross-account deployment solution.

We will use AWS CodeDeploy and AWS CodePipeline to create a multi-pipeline solution running in two regions (Region A and Region B). Any update to the source code in Region A will trigger validation and deployment of source code changes in the pipeline in Region A. A successful processing of source code in all of its AWS CodePipeline stages will invoke a Lambda function as a custom action, which will copy the source code into an S3 bucket in Region B. After the source code is copied into this bucket, it will trigger a similar chain of processes into the different AWS CodePipeline stages in Region B. See the following diagram.

                                                   Diagram1

This architecture follows best practices for multi-region deployments, sequentially deploying code into one region at a time upon successful testing and validation. It also lets you place controls to stop the deployment if a problem is identified with a release, which prevents a bad version from being propagated to your next environments.
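To give a sense of what the copy step involves, here is a minimal Lambda handler sketch, assuming the function is invoked by a CodePipeline Lambda action and that the destination bucket and object key are hard-coded placeholders; the function shipped in the CloudFormation template for this post may differ in its details.

import boto3

s3 = boto3.client('s3')
codepipeline = boto3.client('codepipeline')

DESTINATION_BUCKET = 'xrdeployment-sourcecode-us-east-1-123456789012'  # placeholder bucket name

def handler(event, context):
    job = event['CodePipeline.job']
    job_id = job['id']
    try:
        # The first input artifact is the zipped source produced by the Source stage.
        location = job['data']['inputArtifacts'][0]['location']['s3Location']
        s3.copy_object(
            Bucket=DESTINATION_BUCKET,
            Key='xrcodedeploy_linux.zip',
            CopySource={'Bucket': location['bucketName'], 'Key': location['objectKey']},
        )
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as err:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(err)},
        )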

This post is based on the Simple Pipeline Walkthrough in the AWS CodePipeline User Guide. I have provided an AWS CloudFormation template that automates the steps for you.

 

Prerequisites

You will need an AWS account with administrator permissions. If you don’t have an account, you can sign up for one here. You will also need sample application source code that you can download here.

We will use the CloudFormation template provided in this post to create the following resources:

  • Amazon S3 buckets to host the source code for the sample application. You can use a GitHub repository if you prefer, but you will need to change the CloudFormation template.
  • AWS CodeDeploy to deploy the sample application.
  • AWS CodePipeline with predefined stages for this setup.
  • AWS Lambda as a custom action in AWS CodePipeline. It invokes a function to copy the source code into another region or account. If you are deploying to multiple accounts, cross-account S3 bucket permissions are required.

Note: The resources created by the CloudFormation template may result in charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

Let’s Get Started

Choose your source and destination regions for a continuous delivery of your source code. In this post, we are deploying the source code to two regions: first to Region A (Oregon) and then to Region B (N. Virginia/US Standard). You can choose to extend the setup to three or more regions if your business needs require it.

Step 1: Create Amazon S3 buckets for hosting your application source code in your source and destination regions. Make sure versioning is enabled on these buckets. For more information, see these steps in the AWS CodePipeline User Guide.

For example:

xrdeployment-sourcecode-us-west-2-<AccountID>           (Source code bucket in Region A – Oregon)

xrdeployment-sourcecode-us-east-1-<AccountID>           (Source code bucket in Region B – N. Virginia/US Standard)

Note: The source code bucket in Region B is also the destination bucket in Region A. Versioning on the bucket ensures that AWS CodePipeline is executed automatically when source code is changed.
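If you would rather script this step, a minimal boto3 sketch might look like the following; the account ID is a placeholder, and note that us-east-1 buckets must be created without a LocationConstraint.

import boto3

ACCOUNT_ID = '123456789012'  # placeholder; substitute your own account ID

def create_versioned_bucket(name, region):
    s3 = boto3.client('s3', region_name=region)
    # us-east-1 is the default location and must not be passed as a LocationConstraint.
    if region == 'us-east-1':
        s3.create_bucket(Bucket=name)
    else:
        s3.create_bucket(Bucket=name, CreateBucketConfiguration={'LocationConstraint': region})
    s3.put_bucket_versioning(Bucket=name, VersioningConfiguration={'Status': 'Enabled'})

create_versioned_bucket(f'xrdeployment-sourcecode-us-west-2-{ACCOUNT_ID}', 'us-west-2')
create_versioned_bucket(f'xrdeployment-sourcecode-us-east-1-{ACCOUNT_ID}', 'us-east-1')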

Configuration Setup in Source Region A

Be sure you are in the US West (Oregon) region. You can use the drop-down menu to switch regions.

 

Step 2: In the AWS CloudFormation console, choose launch-stack to launch the CloudFormation template. All of the steps in the Simple Pipeline Walkthrough are automated when you use this template.

This template creates a custom Lambda function and AWS CodePipeline and AWS CodeDeploy resources to deploy a sample application. You can customize any of these components according to your requirements later.

On the Specify Details page, do the following:

  1. In Stack name, type a name for the stack (for example, XRDepDemoStackA).
  2. In AppName, you can leave the default, or you can type a name of up to 40 characters. Use only lowercase letters, numbers, periods, and hyphens.
  3. In InstanceCount and InstanceType, leave the default values. You might want to change them when you extend this setup for your use case.
  4. In S3SourceCodeBucket, specify the name of the S3 bucket where source code is placed (xrdeployment-sourcecode-us-west-2-<AccountID>). See step 1.
  5. In S3SourceCodeObject, specify the name of the source code zip file. The sample source code, xrcodedeploy_linux.zip, is provided for you.
  6. Choose a destination region from the drop-down list. For the steps in this blog post, choose us-east-1.
  7. In DestinationBucket, type the name of the bucket in the destination region where the source code will be copied  (xrdeployment-sourcecode-us-east-1-<AccountID>). See step 1.
  8. In KeyPairName, choose the name of Amazon EC2 key pair. This enables remote login to your instances. You cannot sign in to your instance without generating a key pair and downloading a private key file. For information about generating a key pair, see these steps.
  9. In SSHLocation, type the IP address from which you will access the resources created in this stack. This is a recommended security best practice.
  10. In TagValue, type a value that identifies the deployment stage in the target deployment (for example, Alpha).
  11. Choose Next.

 

Diagram3

(Optional) On the Options page, in Key, type Name. In Value, type a name that will help you easily identify the resources created in this stack. This name will be used to tag all of the resources created by the template. These tags are helpful if you want to use or modify these resources later on. Choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. (It will.) Review the other settings, and then choose Create.

                                                Diagram4

It will take several minutes for CloudFormation to create the resources on your behalf. You can watch the progress messages on the Events tab in the console.

When the stack has been created, you will see a CREATE_COMPLETE message in the Status column on the Overview tab.

                  Diagram5

Configuration Setup in Destination Region B

Step 3: We now need to create AWS resources in Region B. Use the drop-down menu to switch to US East (N. Virginia).

In the AWS CloudFormation console, choose launch-stack to launch the CloudFormation template.

On the Specify Details page, do the following:

  1. In Stack name, type a name for the stack (for example, XRDepDemoStackB).
  2. In AppName, you can leave the default, or you can type a name of up to 40 characters. Use only lowercase letters, numbers, periods, and hyphens.
  3. In InstanceCount and InstanceType, leave the default values. You might want to change them when you extend this setup for your use case.
  4. In S3SourceCodeBucket, specify the name of the S3 bucket where the source code is placed (xrdeployment-sourcecode-us-east-1-<AccountID>). This is the same as the DestinationBucket in step 2.
  5. In S3SourceCodeObject, specify the name of the source code zip file. The sample source code (xrcodedeploy_linux.zip) is provided for you.
  6. From the DestinationRegion drop-down list, choose none.
  7. In DestinationBucket, type none. This is our final destination region for this setup.
  8. In the KeyPairName, choose the name of the EC2 key pair.
  9. In SSHLocation, type the IP address from which you will access the resources created in this stack.
  10. In TagValue, type a value that identifies the deployment stage in the target deployment (for example, Beta).

Repeat the steps in the Configuration Setup in Source Region A until the CloudFormation stack has been created. You will see a CREATE_COMPLETE message in the Status column of the console.

So What Just Happened?

We have created an EC2 instance in both regions. These instances are running a sample web application. We have also configured AWS CodeDeploy deployment groups and created a pipeline where source changes propagate to AWS CodeDeploy groups in both regions. AWS CodeDeploy deploys a web page to each of the Amazon EC2 instances in the deployment groups. See the diagram at the beginning of this post.

The pipelines in both regions will start automatically as they are created. You can view your pipelines in the AWS CodePipeline console. You’ll find a link to AWS CodePipeline on the Outputs section of your CloudFormation stack.

                                  Diagram7

Note: Your pipeline will fail during its first automatic execution because we haven’t placed source code into the S3SourceCodeBucket in the source region (Region A).

Step 4: Download the sample source code file, xrcodedeploy_linux.zip, from this link and place it in the source code S3 bucket for Region A. This will kick off AWS CodePipeline.

Step 5: Watch the progress of your pipeline in the source region (Region A) as it completes the actions configured for each of its stages and invokes a custom Lambda action that copies the source code into Region B. Then watch the progress of your pipeline in Region B (final destination region) after the pipeline succeeds in the source region (Region A). The pipeline in the destination region (Region B) should kick off automatically as soon as AWS CodePipeline in the source region (Region A) completes execution.

When each stage is complete, it turns from blue (in progress) to green (success).

                                                              Diagram8

Congratulations! You just created a cross-region deployment solution using AWS CodePipeline, AWS CodeDeploy, and AWS Lambda. You can place a new version of source code in your S3 bucket and watch it progress through AWS CodePipeline in all the regions.

Step 6: Verify your deployment. When Succeeded is displayed for the pipeline status in the final destination region, view the deployed application:

  1. In the status area for the Beta stage in the final destination region, choose Details. The details of the deployment will appear in the AWS CodeDeploy console. You can also pick any other stage in other regions.
  2. In the Deployment Details section, in Instance ID, choose the instance ID of any of the successfully deployed instances.
  3. In the Amazon EC2 console, on the Description tab, in Public DNS, copy the address, and then paste it into the address bar of your web browser. The web page opens the sample web application that was built for you.

                             Diagram9

Taking This Further

  1. Using the CloudFormation template provided to you in this post, you can extend the setup to three regions.
  2. So far we have deployed code in two regions within one AWS account. There may be a case where your environments exist in different AWS accounts. For example, assume a scenario in which:
  • You have your development environment running in Region A in AWS Account A.
  • You have your QA environment running in Region B in AWS Account B.
  • You have a staging or production environment running in Region C in AWS Account C.

 

Diagram10

You will need to configure cross-account permissions on your destination S3 bucket and delegate these permissions to a role assumed by the Lambda function in the source account. Without these permissions, the Lambda function in AWS CodePipeline will not be able to copy the source code into the destination S3 bucket. (See the lambdaS3CopyRole in the CloudFormation template provided with this post.)

Create the following bucket policy on the destination bucket in Account B:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateS3Access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account A ID>:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<destination bucket>/*",
        "arn:aws:s3:::<destination bucket>"
      ]
    }
  ]
}

 

Repeat this step as you extend the setup to additional accounts.

Create your CloudFormation stacks in Account B and Account C (follow steps 2 and 3 in these accounts, respectively) and your pipeline will execute sequentially.

You can use another code repository solution like AWS CodeCommit or GitHub as your source and target repositories.

Wrapping Up

After you’ve finished exploring your pipeline and its associated resources, you can do the following:

  • Extend the setup. Add more stages to your pipeline in AWS CodePipeline.
  • Delete the stack in AWS CloudFormation, which deletes the pipeline, its resources, and the stack itself.

This is the option to choose if you no longer want to use the pipeline or any of its resources. Cleaning up resources you’re no longer using is important because you don’t want to continue to be charged.

To delete the CloudFormation stack:

  1. Delete the Amazon S3 buckets used as the artifact store in AWS CodePipeline in the source and destination regions. Although these buckets were created as part of the CloudFormation stacks, Amazon S3 does not allow CloudFormation to delete buckets that contain objects. To delete them, open the Amazon S3 console, select the buckets you created in this setup, and then delete them. For more information, see Delete or Empty a Bucket (a short cleanup sketch follows this list).
  2. Follow the steps to delete a stack in the AWS CloudFormation User Guide.
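As referenced in step 1, here is a short cleanup sketch, assuming placeholder bucket names; it removes every object (and object version, if versioning is enabled) before deleting each bucket.

import boto3

s3 = boto3.resource('s3')

# Placeholder names; substitute the artifact store buckets created in each region.
for name in ['codepipeline-artifacts-region-a', 'codepipeline-artifacts-region-b']:
    bucket = s3.Bucket(name)
    bucket.object_versions.delete()  # removes all versions and delete markers
    bucket.delete()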

 

I would like to thank my colleagues Raul Frias, Asif Khan and Frank Li for their contributions to this post.

Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/building-end-to-end-continuous-delivery-and-deployment-pipelines-in-aws-and-teamcity/

By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity’s AWS CodePipeline plugin. We will use AWS CloudFormation to set up and configure the end-to-end infrastructure and application stacks. The pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity’s continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!

1. Continuous integration server setup using TeamCity

Choose launch-stack to launch an AWS CloudFormation stack that sets up a TeamCity server. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials. This stack provides an automated way to set up a TeamCity server based on the instructions here. You can download the template used for this setup from here.

The CloudFormation template does the following:

  1. Installs and configures the TeamCity server and its dependencies in Linux.
  2. Installs the AWS CodePipeline plugin for TeamCity.
  3. Installs a sample application with build configurations.
  4. Installs PHP meta-runners required to build the sample application.
  5. Redirects TeamCity port 8111 to 80.

Choose the AWS region where the TeamCity server will be hosted. For this demo, choose US East (N. Virginia).

Select a region

On the Select Template page, choose Next.

On the Specify Details page, do the following:

  1. In Stack name, enter a name for the stack. The name must be unique in the region in which you are creating the stack.
  2. In InstanceType, choose the instance type that best fits your requirements. The default value is t2.medium.

Note: The default instance type exceeds what’s included in the AWS Free Tier. If you use t2.medium, there will be charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

  3. In KeyName, choose the name of your Amazon EC2 key pair.
  4. In SSHLocation, enter the IP address range that can be used to connect through SSH to the EC2 instance. SSH and HTTP access is limited to this IP address range.

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to any single domain or, if you are representing a larger IP address space, use the correct CIDR block notation.

Specify Details

Choose Next.

Although it’s optional, on the Options page, type TeamCityServer for the instance name. This is the name used in the CloudFormation template for the stack. It’s a best practice to name your instance, because it makes it easier to identify or modify resources later on.

Choose Next.

On the Review page, choose Create. It will take several minutes for AWS CloudFormation to create the resources for you.

Review

When the stack has been created, you will see a CREATE_COMPLETE message on the Overview tab in the Status column.

Events

You have now successfully created a TeamCity server. To access the server, on the EC2 Instance page, choose Public IP for the TeamCityServer instance.

Public DNS

On the TeamCity First Start page, choose Proceed.

TeamCity First Start

Although an internal database based on the HSQLDB database engine can be used for evaluation, TeamCity strongly recommends that you use an external database as a back-end TeamCity database in a production environment. An external database provides better performance and reliability. For more information, see the TeamCity documentation.

On the Database connection setup page, choose Proceed.

Database connection setup

The TeamCity server will start, which can take several minutes.

TeamCity is starting

Review and Accept the TeamCity License Agreement, and then choose Continue.

Next, create an Administrator account. Type a user name and password, and then choose Create Account.

You can navigate to the demo project from Projects in the top-left corner.

Projects

Note: You can create a project from a repository URL (the option used in this demo), or you can connect to your managed Git repositories, such as GitHub or BitBucket. The demo app used in this example can be found here.

We have already created a sample project configuration. Under Build, choose Edit Settings, and then review the settings.

Demo App

Choose Build Step: PHP – PHPUnit.

Build Step

The fields on the Build Step page are already configured.

Build Step

Choose Run to start the build.

Run Test

To review the tests that are run as part of the build, choose Tests.

Build

Build

You can view any build errors by choosing Build log from the same drop-down list.

Now that we have a successful build, we will use AWS CodeDeploy to set up a continuous deployment pipeline.

2. Continuous deployment using AWS CodeDeploy

Choose launch-stack to launch an AWS CloudFormation stack that will use AWS CodeDeploy to set up a sample deployment pipeline. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

You can download the master template used for this setup from here. The template nests two CloudFormation templates to execute all dependent stacks cohesively.

  1. Template 1 creates a fleet of up to three EC2 instances (with a base operating system of Windows or Linux), associates an instance profile, and installs the AWS CodeDeploy agent. The CloudFormation template can be downloaded from here.
  2. Template 2 creates an AWS CodeDeploy deployment group and then installs a sample application. The CloudFormation template can be downloaded from here.

Choose the same AWS region you used when you created the TeamCity server (US East (N. Virginia)).

Note: The templates contain Amazon Machine Image (AMI) mappings for us-east-1, us-west-2, eu-west-1, and ap-southeast-2 only.

On the Select Template page, choose Next.

Picture17

On the Specify Details page, in Stack name, type a name for the stack. In the Parameters section, do the following:

  1. In AppName, you can use the default, or you can type a name of your choice. The name must be between 2 and 15 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but it must start with an alphanumeric character.
  2. In DeploymentGroupName, you can use the default, or you can type a name of your choice. The name must be between 2 and 25 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but it must start with an alphanumeric character.

Picture18

  3. In InstanceType, choose the instance type that best fits the requirements of your application.
  4. In InstanceCount, type the number of EC2 instances (up to three) that will be part of the deployment group.
  5. For Operating System, choose Linux or Windows.
  6. Leave TagKey and TagValue at their defaults. AWS CodeDeploy will use this tag key and value to locate the instances during deployments. For information about Amazon EC2 instance tags, see Working with Tags Using the Console.

Picture19

  7. In S3Bucket and S3Key, type the bucket name and S3 key where the application is located. The default points to a sample application that will be deployed to instances in the deployment group. Based on what you selected in the OperatingSystem field, use the following values.
    Linux:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Linux.zip
    Windows:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Windows.zip
  8. In KeyName, choose the name of your Amazon EC2 key pair.
  9. In SSHLocation, enter the IP address range that can be used to connect through SSH/RDP to the EC2 instance.

Picture20

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to a single IP address or, if you are representing a larger IP address space, use the correct CIDR block notation.

Follow the prompts, and then choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. Review the other settings, and then choose Create.

Picture21

It will take several minutes for CloudFormation to create all of the resources on your behalf. The nested stacks will be launched sequentially. You can view progress messages on the Events tab in the AWS CloudFormation console.

Picture22

You can see the newly created application and deployment groups in the AWS CodeDeploy console.

Picture23

To verify that your application was deployed successfully, navigate to the DNS address of one of the instances.

Picture24

Picture25

Now that we have successfully created a deployment pipeline, let’s integrate it with AWS CodePipeline.

3. Building a delivery pipeline using AWS CodePipeline

We will now create a delivery pipeline in AWS CodePipeline with the TeamCity AWS CodePipeline plugin.

  1. Build a new pipeline with Source and Deploy stages using AWS CodePipeline.
  2. Create a custom action for the TeamCity Build stage.
  3. Create an AWS CodePipeline action trigger in TeamCity.
  4. Create a Build stage in the delivery pipeline for TeamCity.
  5. Publish the build artifact for deployment.

Step 1: Build a new pipeline with Source and Deploy stages using AWS CodePipeline.

In this step, we will create an Amazon S3 bucket to use as the artifact store for this pipeline.

  1. Install and configure the AWS CLI.
  2. Create an Amazon S3 bucket that will host the build artifact. Replace account-number with your AWS account number in the following steps.
    $ aws s3 mb s3://demo-app-build-account-number
  3. Enable bucket versioning.
    $ aws s3api put-bucket-versioning --bucket demo-app-build-account-number --versioning-configuration Status=Enabled
  4. Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.
  • OSX/Linux:
    $ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip | aws s3 cp - s3://demo-app-build-account-number/Sample_Linux_App.zip
  • Windows:
    $ wget -q https://s3.amazonaws.com/teamcity-demo-app/Sample_Windows_App.zip
    $ aws s3 cp ./Sample_Windows_App.zip s3://demo-app-build-account-number

Note: You can use AWS CloudFormation to perform these steps in an automated way. When you choose launch-stack, this template will be used. Use the following commands to extract the Amazon S3 bucket name, enable versioning on the bucket, and copy over the sample artifact.

$ export BUCKET_NAME="$(aws cloudformation describe-stacks --stack-name "S3BucketStack" --output text --query 'Stacks[0].Outputs[?OutputKey==`S3BucketName`].OutputValue')"
$ aws s3api put-bucket-versioning --bucket $BUCKET_NAME --versioning-configuration Status=Enabled && wget https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip && aws s3 cp ./Sample_Linux_App.zip s3://$BUCKET_NAME

You can create a pipeline by using a CloudFormation stack or the AWS CodePipeline console.

Option 1: Use AWS CloudFormation to create a pipeline

We’re going to create a two-stage pipeline that uses a versioned Amazon S3 bucket and AWS CodeDeploy to release a sample application. (You can use an AWS CodeCommit repository or a GitHub repository as the source provider instead of Amazon S3.)

Choose the launch-stack button to launch an AWS CloudFormation stack that sets up a new delivery pipeline using the application and deployment group created in an earlier step. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

Choose the US East (N. Virginia) region, and then choose Next.

Leave the default options, and then choose Next.

Picture26

On the Options page, choose Next.

Picture27

Select the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. This will create the delivery pipeline in AWS CodePipeline.

Option 2: Use the AWS CodePipeline console to create a pipeline

On the Create pipeline page, in Pipeline name, type a name for your pipeline, and then choose Next step.
Picture28

Depending on where your source code is located, you can choose Amazon S3, AWS CodeCommit, or GitHub as your Source provider. The pipeline will be triggered automatically upon every check-in to your GitHub or AWS CodeCommit repository or when an artifact is published into the S3 bucket. In this example, we will be accessing the product binaries from an Amazon S3 bucket.

For Amazon S3 location, enter the location of the artifact you uploaded earlier, and then choose Next step.

Picture29

s3://demo-app-build-account-number/Sample_Linux_App.zip (or) s3://demo-app-build-account-number/Sample_Windows_App.zip

Note: AWS CodePipeline requires a versioned S3 bucket for source artifacts. Enable versioning for the S3 bucket where the source artifacts will be located.

On the Build page, choose No Build. We will update the build provider information later on.

Picture31

For Deployment provider, choose CodeDeploy. For Application name and Deployment group, choose the application and deployment group we created in the deployment pipeline step, and then choose Next step.

Picture32

An IAM role provides the permissions required for AWS CodePipeline to perform the build actions and service calls. If you already have a role you want to use with the pipeline, choose it on the AWS Service Role page. Otherwise, type a name for your role, and then choose Create role. Review the predefined permissions, choose Allow, and then choose Next step.

For information about AWS CodePipeline access permissions, see the AWS CodePipeline Access Permissions Reference.

Picture34

Review your pipeline, and then choose Create pipeline.

Picture35

This will trigger AWS CodePipeline to execute the Source and Beta stages. The source artifact will be deployed to the AWS CodeDeploy deployment groups.

Picture36

Now you can access the same DNS address of one of the AWS CodeDeploy instances to see the updated deployment. You will see that the background color has changed to green and the page text has been updated.

Picture37

We have now successfully created a delivery pipeline with two stages and integrated the deployment with AWS CodeDeploy. Now let’s integrate the Build stage with TeamCity.

Step 2: Create a custom action for the TeamCity Build stage

AWS CodePipeline includes a number of actions that help you configure build, test, and deployment resources for your automated release process. TeamCity is not included in the default actions, so we will create a custom action and then include it in our delivery pipeline.

TeamCity’s custom action type (Build/Test categories) can be integrated with AWS CodePipeline, much like the Jenkins and Solano CI custom actions. The TeamCity AWS CodePipeline plugin also creates a job worker that polls AWS CodePipeline for job requests for this custom action, executes the job, and returns the status result to AWS CodePipeline.

The TeamCity AWS CodePipeline plugin is already installed on the TeamCity server we set up earlier. To learn more about installing TeamCity plugins, see the TeamCity documentation on installing additional plugins. We will now create a custom action to integrate TeamCity with AWS CodePipeline using a custom-action JSON file.

Download this file locally: https://github.com/JetBrains/teamcity-aws-codepipeline-plugin/blob/master/custom-action.json

Open a terminal session (Linux, OS X, Unix) or command prompt (Windows) on a computer where you have installed the AWS CLI. For information about setting up the AWS CLI, see here.

Use the AWS CLI to run the aws codepipeline create-custom-action-type command, specifying the name of the JSON file you just downloaded.

For example, to create a build custom action:

$ aws codepipeline create-custom-action-type --cli-input-json file://custom-action.json

This should result in an output similar to this:

{
    "actionType": {
        "inputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "actionConfigurationProperties": [
            {
                "description": "The expected URL format is http[s]://host[:port]",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "TeamCityServerURL"
            },
            {
                "description": "Corresponding TeamCity build configuration external ID",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "BuildConfigurationID"
            },
            {
                "description": "Must be unique, match the corresponding field in the TeamCity build trigger settings, satisfy regular expression pattern: [a-zA-Z0-9_-]+] and have length <= 20",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": true,
                "name": "ActionID"
            }
        ],
        "outputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "id": {
            "category": "Build",
            "owner": "Custom",
            "version": "1",
            "provider": "TeamCity"
        },
        "settings": {
            "entityUrlTemplate": "{Config:TeamCityServerURL}/viewType.html?buildTypeId={Config:BuildConfigurationID}",
            "executionUrlTemplate": "{Config:TeamCityServerURL}/viewLog.html?buildId={ExternalExecutionId}&tab=buildResultsDiv"
        }
    }
}

Before you add the custom action to your delivery pipeline, make the following changes to the TeamCity build server. You can access the server by opening the Public IP of the TeamCityServer instance from the EC2 Instance page.

Picture38

In TeamCity, choose Projects. Under Build Configuration Settings, choose Version Control Settings. You need to remove the version control trigger here so that the TeamCity build is triggered by AWS CodePipeline during the Source stage instead of by repository changes. Choose Detach.

Picture39

Step 3: Create a new AWS CodePipeline action trigger in TeamCity

Now add a new AWS CodePipeline trigger in your build configuration. Choose Triggers, and then choose Add new trigger.

Picture40

From the drop-down menu, choose AWS CodePipeline Action.

Picture41

In the AWS CodePipeline Action trigger settings, choose the region in which you created your delivery pipeline. Enter your AWS access key credentials, and for Action ID, type a unique name. You will need this ID when you add a TeamCity Build stage to the pipeline.

Picture42

Step 4: Create a new Build stage in the delivery pipeline for TeamCity

Add a stage to the pipeline and name it Build.

Picture43

From the drop-down menu, choose Build. In Action name, type a name for the action. In Build provider, choose TeamCity, and then choose Add action.


Picture44

For TeamCity Action Configuration, use the following:

TeamCityServerURL:  http://<Public DNS address of the TeamCity build server>[:port]

Picture45

BuildConfigurationID: In your TeamCity project, choose Build. You’ll find this ID (AwsDemoPhpSimpleApp_Build) under Build Configuration Settings.

Picture46

ActionID: In your TeamCity project, choose Build. You’ll find this ID under Build Configuration Settings. Choose Triggers, and then choose AWS CodePipeline Action.

Picture47

Next, choose input and output artifacts for the Build stage, and then choose Add action.

Picture48
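After the action has been added, you can confirm its configuration from the AWS CLI. This is a sketch that assumes the pipeline is named delivery-pipeline and the stage is named Build, as in this walkthrough; the values returned should match what you entered in the console.

$ aws codepipeline get-pipeline --name delivery-pipeline \
    --query 'pipeline.stages[?name==`Build`] | [0].actions[0].configuration'

# Expected shape of the output (your values will differ):
# {
#     "TeamCityServerURL": "http://<public-dns-of-the-teamcity-server>",
#     "BuildConfigurationID": "AwsDemoPhpSimpleApp_Build",
#     "ActionID": "<the Action ID you typed in the TeamCity trigger>"
# }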

We will now publish a new artifact to the Amazon S3 artifact bucket we created earlier, so we can see the deployment of a new app and its progress through the delivery pipeline. The demo app used in this artifact can be found here for Linux or here for Windows.

Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.

OSX/Linux:

$ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/PhpArtifact.zip | aws s3 cp - s3://demo-app-build-account-number/PhpArtifact.zip

Windows:

$ wget -q https://s3.amazonaws.com/teamcity-demo-app/WindowsArtifact.zip
$ aws s3 cp ./WindowsArtifact.zip s3://demo-app-build-account-number

From the AWS CodePipeline dashboard, under delivery-pipeline, choose Edit.

Picture49

Edit the Source stage by choosing the edit icon on the right.

For Amazon S3 location, point to the new artifact:

Linux: s3://demo-app-build-account-number/PhpArtifact.zip

Windows: s3://demo-app-build-account-number/WindowsArtifact.zip

Under Output artifacts, make sure MyApp is displayed for Output artifact #1. This will be the input artifact for the Build stage.

Picture50

The output artifact of the Build stage should be the input artifact of the Beta deployment stage (in this case, MyAppBuild).

Picture51

Choose Update, and then choose Save pipeline changes. On the next page, choose Save and continue.

Step 5: Publish the build artifact for deployment

Picture52

Step (a)

In TeamCity, on the Build Steps page, for Runner type, choose Command Line, and then add the following custom script to copy the source artifact to the TeamCity build checkout directory.

Note: This step is required only if your AWS CodePipeline source provider is either AWS CodeCommit or Amazon S3. If your source provider is GitHub, this step is redundant, because the artifact is copied over automatically by the TeamCity AWS CodePipeline plugin.

In Step name, enter a name for the Command Line runner to easily distinguish the context of the step.

Syntax:

$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %teamcity.build.checkoutDir%
$ unzip *.zip -d %teamcity.build.checkoutDir%
$ rm -rf %teamcity.build.checkoutDir%/*.zip

For Custom script, use the following commands:

cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %teamcity.build.checkoutDir%
unzip *.zip -d %teamcity.build.checkoutDir%
rm -rf %teamcity.build.checkoutDir%/*.zip

Picture53

Step (b):

For Runner type, choose Command Line, and then add the following custom script to copy the build artifact to the output folder.

For Step name, enter a name for the Command Line runner.

Syntax:

$ mkdir -p %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/
$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/

For Custom script, use the following commands:

mkdir -p %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/
cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/

CopyToOutputFolder

In Build Steps, choose Reorder build steps to ensure that the step that copies the source artifact runs before the PHP – PHPUnit step.

Picture54

Drag and drop Copy Source Artifact To Build Checkout Directory to make it the first build step, and then choose Apply.

Picture55

Navigate to the AWS CodePipeline console. Choose the delivery pipeline, and then choose Release change. When prompted, choose Release.

Picture56

The most recent change will run through the pipeline again. It might take a few moments before the status of the run is displayed in the pipeline view.

Here is what you’d see after AWS CodePipeline runs through all of the stages in the pipeline:

Picture57

To see the new application deployment, access one of the instances listed on the EC2 Instances page.

Picture58

If your base operating system is Windows, accessing the public DNS address of one of the AWS CodeDeploy instances will result in the following page.

Windows: http://public-dns/

Picture59

If your base operating system is Linux, accessing the public DNS address of one of the AWS CodeDeploy instances will display the following test page, which is the sample application.

Linux: http://public-dns/www/index.php

Picture60

Congratulations! You’ve created an end-to-end deployment and delivery pipeline, from source code to build to deployment, in a fully automated way.

Summary:

In this post, you learned how to build an end-to-end delivery and deployment pipeline on AWS: a fully automated continuous integration and continuous delivery pipeline for your application, at scale, using AWS deployment and management services. You also learned how AWS CodePipeline can be easily extended through custom actions to integrate other tools like TeamCity.

If you have questions or suggestions, please leave a comment below.

Fleet Management Made Easy with Auto Scaling

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/fleet-management-made-easy-with-auto-scaling/

If your application runs on Amazon EC2 instances, then you have what’s referred to as a ‘fleet’. This is true even if your fleet is just a single instance. Automating how your fleet is managed can have big pay-offs, both for operational efficiency and for maintaining the availability of the application that it serves. You can automate the management of your fleet with Auto Scaling, and the best part is how easy it is to set up!

There are three main functions that Auto Scaling performs to automate fleet management for EC2 instances:

  • Monitoring the health of running instances
  • Automatically replacing impaired instances
  • Balancing capacity across Availability Zones

In this post, we describe how Auto Scaling performs each of these functions, provide an example of how easy it is to get started, and outline how to learn more about Auto Scaling.

Monitoring the health of running instances

Auto Scaling monitors the health of all instances that are placed within an Auto Scaling group. Auto Scaling performs EC2 health checks at regular intervals, and if the instance is connected to an Elastic Load Balancing load balancer, it can also perform ELB health checks. Auto Scaling ensures that your application is able to receive traffic and that the instances themselves are working properly. When Auto Scaling detects a failed health check, it can replace the instance automatically.
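For example, if your group sits behind a load balancer, you can switch it from EC2 status checks to ELB health checks from the AWS CLI. This is a minimal sketch; the group name my-asg is a placeholder.

$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --health-check-type ELB \
    --health-check-grace-period 300   # seconds to wait before the first health check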

Automatically replacing impaired instances

When an impaired instance fails a health check, Auto Scaling automatically terminates it and replaces it with a new one. If you’re using an Elastic Load Balancing load balancer, Auto Scaling gracefully detaches the impaired instance from the load balancer before provisioning a new one and attaches it back to the load balancer. This is all done automatically, so you don’t need to respond manually when an instance needs replacing.

Balancing capacity across Availability Zones

Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet. What’s more, Auto Scaling only launches into Availability Zones in which there is available capacity for the requested instance type.

Getting started is easy!

The easiest way to get started with Auto Scaling is to build a fleet from existing instances. The AWS Management Console provides a simple workflow to do this: right-click on a running instance and choose Instance Settings, Attach to Auto Scaling Group.

You can then opt to attach the instance to a new Auto Scaling group. Your instance is now being automatically monitored for health and will be replaced if it becomes impaired. If you configure additional zones and add more instances, they will be spread evenly across Availability Zones to make your fleet more resilient to unexpected failures.
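The CLI equivalent of this workflow is to create a group directly from a running instance, which copies the instance’s configuration into the new group. A sketch, with a placeholder instance ID and group name:

$ aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --instance-id i-0123456789abcdef0 \
    --min-size 1 --max-size 1 --desired-capacity 1

# Or attach an existing instance to a group you already have:
$ aws autoscaling attach-instances \
    --instance-ids i-0123456789abcdef0 \
    --auto-scaling-group-name my-asg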

Diving deeper

While this example is a good starting point, you may want to dive deeper into how Auto Scaling can automate the management of your EC2 instances.

The first thing to explore is how to automate software deployments. AWS Elastic Beanstalk is a popular and easy-to-use solution that works well for web applications. AWS CodeDeploy is a good solution for fine-grained control over the deployment process. If your application is based on containers, then Amazon EC2 Container Service (Amazon ECS) is something to consider. You may also want to look into AWS Partner solutions such as Ansible and Puppet. One common strategy for deploying software across a production fleet without incurring downtime is blue/green deployments, to which Auto Scaling is particularly well-suited.

These solutions are all enhanced by the core fleet management capabilities in Auto Scaling. You can also use the API or CLI to roll your own automation solution based on Auto Scaling. The following learning path will help you to explore the service in more detail.

  • Launch configurations
  • Lifecycle hooks
  • Fleet size
  • Automatic process control
  • Scheduled scaling

Launch configurations

Launch configurations are the key to how Auto Scaling launches instances. Whenever an Auto Scaling group launches a new instance, it uses the currently associated launch configuration as a template for the launch. In the example above, Auto Scaling automatically created a launch configuration by deriving it from the attached instance. In many cases, however, you create your own launch configuration. For example, if your software environment is baked into an Amazon Machine Image (AMI), then your launch configuration points to the version that you want Auto Scaling to deploy onto new instances.
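For example, after baking a new AMI you might register a new launch configuration and point the group at it. A sketch, with placeholder names and IDs:

$ aws autoscaling create-launch-configuration \
    --launch-configuration-name my-app-v2 \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key-pair

# Future launches in the group will use the new launch configuration:
$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-app-v2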

Lifecycle hooks

Lifecycle hooks let you take action before an instance goes into service or before it gets terminated. This can be especially useful if you are not baking your software environment into an AMI. For example, launch hooks can perform software configuration on an instance to ensure that it’s fully prepared to handle traffic before Auto Scaling proceeds to connect it to your load balancer. One way to do this is by connecting the launch hook to an AWS Lambda function that invokes RunCommand on the instance.

Terminate hooks can be useful for collecting important data from an instance before it goes away. For example, you could use a terminate hook to preserve your fleet’s log files by copying them to an Amazon S3 bucket when instances go out of service.
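A sketch of a launch hook from the CLI, with placeholder names: it holds new instances in the Pending:Wait state for up to ten minutes so configuration can finish, and whatever performs the configuration signals completion when it is done.

$ aws autoscaling put-lifecycle-hook \
    --lifecycle-hook-name configure-before-traffic \
    --auto-scaling-group-name my-asg \
    --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
    --heartbeat-timeout 600 \
    --default-result ABANDON

# When the instance is ready, let the launch proceed:
$ aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name configure-before-traffic \
    --auto-scaling-group-name my-asg \
    --instance-id i-0123456789abcdef0 \
    --lifecycle-action-result CONTINUE

# A terminate hook uses the same commands with
# --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING.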

Fleet size

You control the size of your fleet using the minimum, desired, and maximum capacity attributes of an Auto Scaling group. Auto Scaling automatically launches or terminates instances to keep the group at the desired capacity. As mentioned before, Auto Scaling uses the launch configuration as a template for launching new instances in order to meet the desired capacity, doing so such that they are balanced across configured Availability Zones.
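For example, to keep the fleet between two and six instances with a target of three (a sketch; my-asg is a placeholder):

$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --min-size 2 --max-size 6 --desired-capacity 3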

Automatic process control

You can control the behavior of Auto Scaling’s automatic processes such as health checks, launches, and terminations. You may find the AZRebalance process of particular interest. By default, Auto Scaling automatically terminates instances from one zone and re-launches them into another if the instances in the fleet are not spread out in a balanced manner. You may want to disable this behavior under certain conditions. For example, if you’re attaching existing instances to an Auto Scaling group, you may not want them terminated and re-launched right away if that is required to re-balance your zones. Note that Auto Scaling always replaces impaired instances with launches that are balanced across zones, regardless of this setting. You can also control how Auto Scaling performs health checks, launches, terminations, and more.
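For example, you could suspend only the AZRebalance process while you attach existing instances, and resume it afterward. A sketch, with a placeholder group name:

$ aws autoscaling suspend-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes AZRebalance

# Re-enable rebalancing when you are done:
$ aws autoscaling resume-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes AZRebalance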

Scheduled scaling

Scheduled scaling is a simple tool for adjusting the size of your fleet on a schedule. For example, you can add more or fewer instances to your fleet at different times of the day to handle changing customer traffic patterns. A more advanced tool is dynamic scaling, which adjusts the size of your fleet based on Amazon CloudWatch metrics.
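A sketch of two scheduled actions that grow the fleet on weekday mornings and shrink it at night (the group name and capacities are placeholders; recurrence uses cron syntax in UTC):

$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-asg \
    --scheduled-action-name scale-up-weekday-mornings \
    --recurrence "0 13 * * 1-5" \
    --desired-capacity 6

$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-asg \
    --scheduled-action-name scale-down-nights \
    --recurrence "0 1 * * *" \
    --desired-capacity 2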

Summary

Auto Scaling brings important benefits to cloud applications by automating the management of fleets of EC2 instances. Auto Scaling makes it easy to monitor instance health, automatically replace impaired instances, and spread capacity across multiple Availability Zones.

If you already have a fleet of EC2 instances, then it’s easy to get started with Auto Scaling in just a few clicks. After your first Auto Scaling group is working to safeguard your existing fleet, you can follow the suggested learning path in this post. Over time, you can explore more features of Auto Scaling and further automate your software deployments and application scaling.

If you have questions or suggestions, please comment below.

AWS Developer Tool Recap – Recent Enhancements to CodeCommit, CodePipeline, and CodeDeploy

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-developer-tool-recap-recent-enhancements-to-codecommit-codepipeline-and-codedeploy/

The AWS Developer Tools help you to put modern DevOps practices to work! Here’s a quick overview (read New AWS Tools for Code Management and Deployment for an in-depth look):

AWS CodeCommit is a fully-managed source code control service. You can use it to host secure and highly scalable private Git repositories while continuing to use your existing Git tools and workflows (watch the Introduction to AWS CodeCommit video to learn more).

AWS CodeDeploy automates code deployment to Amazon Elastic Compute Cloud (EC2) instances and on-premises servers. You can update your application at a rapid clip, while avoiding downtime during deployment (watch the Introduction to AWS CodeDeploy video to learn more).

AWS CodePipeline is a continuous delivery service that you can use to streamline and automate your release process. Checkins to your repo (CodeCommit or Git) will initiate build, test, and deployment actions (watch Introducing AWS CodePipeline for an introduction). The build can be deployed to your EC2 instances or on-premises servers via CodeDeploy, AWS Elastic Beanstalk, or AWS OpsWorks.

You can combine these services with your existing build and testing tools to create an end-to-end software release pipeline, all orchestrated by CodePipeline.

We have made a lot of enhancements to the Code* products this year and today seems like a good time to recap all of them for you! Many of these enhancements allow you to connect the developer tools to other parts of AWS so that you can continue to fine-tune your development process.

CodeCommit Enhancements
Here’s what’s new with CodeCommit:

  • Repository Triggers
  • Code Browsing
  • Commit History
  • Commit Visualization
  • Elastic Beanstalk Integration

Repository Triggers – You can create Repository Triggers that Send Notification or Run Code whenever a change occurs in a CodeCommit repository (these are sometimes called webhooks — user-defined HTTP callbacks). These hooks will allow you to customize and automate your development workflow. Notifications can be delivered to an Amazon Simple Notification Service (SNS) topic or can invoke a Lambda function.
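A sketch of creating such a trigger from the CLI, assuming a hypothetical repository and SNS topic (the names and ARN are placeholders):

$ aws codecommit put-repository-triggers \
    --repository-name MyDemoRepo \
    --triggers '[{"name": "notify-on-push",
                  "destinationArn": "arn:aws:sns:us-east-1:123456789012:my-topic",
                  "branches": ["master"],
                  "events": ["all"]}]'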

Code Browsing – You can Browse Your Code in the Console. This includes navigation through the source code tree and the code:

Commit History – You can View the Commit History for your repositories (mine is kind of quiet, hence the 2015-era dates):

Commit Visualization – You can View a Graphical Representation of the Commit History for your repositories:

Elastic Beanstalk Integration – You can Use CodeCommit Repositories with Elastic Beanstalk to store your project code for deployment to an Elastic Beanstalk environment.

CodeDeploy Enhancements
Here’s what’s new with CodeDeploy:

  • CloudWatch Events Integration
  • CloudWatch Alarms and Automatic Deployment Rollback
  • Push Notifications
  • New Partner Integrations

CloudWatch Events Integration – You can Monitor and React to Deployment Changes with Amazon CloudWatch Events by configuring CloudWatch Events to stream changes in the state of your instances or deployments to an AWS Lambda function, an Amazon Kinesis stream, an Amazon Simple Queue Service (SQS) queue, or an SNS topic. You can build workflows and processes that are triggered by your changes. You could automatically terminate EC2 instances when a deployment fails or you could invoke a Lambda function that posts a message to a Slack channel.
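A sketch of wiring failed deployments to an SNS topic; the rule name and topic ARN are placeholders, and the exact event pattern fields are assumptions, so check the CodeDeploy event format in the CloudWatch Events documentation before relying on them.

$ aws events put-rule \
    --name codedeploy-deployment-failures \
    --event-pattern '{"source": ["aws.codedeploy"],
                      "detail-type": ["CodeDeploy Deployment State-change Notification"],
                      "detail": {"state": ["FAILURE"]}}'

$ aws events put-targets \
    --rule codedeploy-deployment-failures \
    --targets Id=1,Arn=arn:aws:sns:us-east-1:123456789012:deployment-alerts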

CloudWatch Alarms and Automatic Deployment Rollback – CloudWatch Alarms give you another type of Monitoring for your Deployments. You can monitor metrics for the instances or Auto Scaling Groups managed by CodeDeploy and take action if they cross a threshold for a defined period of time, stop a deployment, or change the state of an instance by rebooting, terminating, or recovering it. You can also automatically rollback a deployment in response to a deployment failure or a CloudWatch Alarm.
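A sketch of turning both features on for an existing deployment group (the application, group, and alarm names are placeholders):

$ aws deploy update-deployment-group \
    --application-name DemoApplication \
    --current-deployment-group-name DemoFleet \
    --auto-rollback-configuration '{"enabled": true,
                                    "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"]}' \
    --alarm-configuration '{"enabled": true, "alarms": [{"name": "HighCpuAlarm"}]}'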

Push Notifications – You can Receive Push Notifications via Amazon SNS for events related to your deployments and use them to track the state and progress of your deployment.

New Partner Integrations – Our CodeDeploy Partners have been hard at work, connecting their products to ours. Here are some of the most recent offerings:

CodePipeline Enhancements
And here’s what’s new with CodePipeline:

  • AWS OpsWorks Integration
  • Triggering of Lambda Functions
  • Manual Approval Actions
  • Information about Committed Changes
  • New Partner Integrations

AWS OpsWorks Integration – You can Choose AWS OpsWorks as a Deployment Provider in the software release pipelines that you model in CodePipeline:

You can also configure CodePipeline to use OpsWorks to deploy your code using recipes contained in custom Chef cookbooks.

Triggering of Lambda Functions – You can now Trigger a Lambda Function as one of the actions in a stage of your software release pipeline. Because Lambda allows you to write functions to perform almost any task, you can customize the way your pipeline works:

Manual Approval Actions – You can now add Manual Approval Actions to your software release pipeline. Execution pauses until the code change is approved or rejected by someone with the required IAM permission:
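A sketch of adding an approval action from the CLI; the pipeline name, stage placement, and SNS topic are placeholders, and the JSON fragment in the comments shows the general shape of a Manual approval action inside a pipeline definition.

$ aws codepipeline get-pipeline --name delivery-pipeline > pipeline.json

# Edit pipeline.json and add an action like this to the stage that needs a gate:
# {
#     "name": "ApproveBeforeProduction",
#     "actionTypeId": {"category": "Approval", "owner": "AWS",
#                      "provider": "Manual", "version": "1"},
#     "configuration": {"NotificationArn": "arn:aws:sns:us-east-1:123456789012:approvals"},
#     "runOrder": 1
# }

# Remove the "metadata" block from pipeline.json, then push the change:
$ aws codepipeline update-pipeline --cli-input-json file://pipeline.json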

Information about Committed Changes – You can now View Information About Committed Changes to the code flowing through your software release pipeline:

New Partner Integrations – Our CodePipeline Partners have been hard at work, connecting their products to ours. Here are some of the most recent offerings:

New Online Content
In order to help you and your colleagues to understand the newest development methodologies, we have created some new introductory material:

Thanks for Reading!
I hope that you have enjoyed this quick look at some of the most recent additions to our development tools.

To help you get some hands-on experience with continuous delivery, my colleagues have created a new Pipeline Starter Kit. The kit includes an AWS CloudFormation template that will create a VPC with two EC2 instances inside, a pair of applications (one for each EC2 instance, both deployed via CodeDeploy), and a pipeline that builds and then deploys the sample application, along with all of the necessary IAM service and instance roles.


Jeff;

Now Open – AWS US East (Ohio) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-us-east-ohio-region/

As part of our ongoing plan to expand the AWS footprint, I am happy to announce that our new US East (Ohio) Region is now available. In conjunction with the existing US East (Northern Virginia) Region, AWS customers in the Eastern part of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).

Well Connected
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):

  • 12 ms to IAD (home of the US East (Northern Virginia) Region).
  • 20 ms to JFK (home to an Internet exchange point).
  • 29 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and another exchange point).
  • 91 ms to SFO (home of the US West (Northern California) Region).

With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.
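For example, Cross-Region Replication between the two Regions can be set up with a single CLI call once versioning is enabled on both buckets and a replication role exists. This is a sketch; the bucket names, role ARN, and rule ID are placeholders.

$ aws s3api put-bucket-replication \
    --bucket my-ohio-bucket \
    --replication-configuration '{
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{"ID": "replicate-all",
                   "Prefix": "",
                   "Status": "Enabled",
                   "Destination": {"Bucket": "arn:aws:s3:::my-virginia-bucket"}}]
    }'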

Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.

14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.

Since there’s been some industry-wide confusion about the difference between Regions and Availability Zones of late, I think it is important to understand the differences between these two terms. Each Region is a physical location where we have one or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!


Jeff;

AWS Week in Review – September 19, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-19-2016/

Eighteen (18) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

  • Monday, September 19
  • Tuesday, September 20
  • Wednesday, September 21
  • Thursday, September 22
  • Friday, September 23
  • Saturday, September 24
  • Sunday, September 25

New & Notable Open Source

  • ecs-refarch-cloudformation is reference architecture for deploying Microservices with Amazon ECS, AWS CloudFormation (YAML), and an Application Load Balancer.
  • rclone syncs files and directories to and from S3 and many other cloud storage providers.
  • Syncany is an open source cloud storage and filesharing application.
  • chalice-transmogrify is an AWS Lambda Python Microservice that transforms arbitrary XML/RSS to JSON.
  • amp-validator is a serverless AMP HTML Validator Microservice for AWS Lambda.
  • ecs-pilot is a simple tool for managing AWS ECS.
  • vman is an object version manager for AWS S3 buckets.
  • aws-codedeploy-linux is a demo of how to use CodeDeploy and CodePipeline with AWS.
  • autospotting is a tool for automatically replacing EC2 instances in AWS AutoScaling groups with compatible instances requested on the EC2 Spot Market.
  • shep is a framework for building APIs using AWS API Gateway and Lambda.

New SlideShare Presentations

New Customer Success Stories

  • NetSeer significantly reduces costs, improves the reliability of its real-time ad-bidding cluster, and delivers 100-millisecond response times using AWS. The company offers online solutions that help advertisers and publishers match search queries and web content to relevant ads. NetSeer runs its bidding cluster on AWS, taking advantage of Amazon EC2 Spot Fleet Instances.
  • New York Public Library revamped its fractured IT environment—which had older technology and legacy computing—to a modernized platform on AWS. The New York Public Library has been a provider of free books, information, ideas, and education for more than 17 million patrons a year. Using Amazon EC2, Elastic Load Balancing, Amazon RDS, and Auto Scaling, NYPL is able to build scalable, repeatable systems quickly at a fraction of the cost.
  • MakerBot uses AWS to understand what its customers need, and to go to market faster with new and innovative products. MakerBot is a desktop 3-D printing company with more than 100 thousand customers using its 3-D printers. MakerBot uses Matillion ETL for Amazon Redshift to process data from a variety of sources in a fast and cost-effective way.
  • University of Maryland, College Park uses the AWS cloud to create a stable, secure and modern technical environment for its students and staff while ensuring compliance. The University of Maryland is a public research university located in the city of College Park, Maryland, and is the flagship institution of the University System of Maryland. The university uses AWS to migrate all of their datacenters to the cloud, as well as Amazon WorkSpaces to give students access to software anytime, anywhere and with any device.

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.